Search:
Match:
45 results
business#llm📝 BlogAnalyzed: Jan 15, 2026 10:01

Wikipedia Deepens AI Ties: Amazon, Meta, Microsoft, and Others Join Partnership Roster

Published:Jan 15, 2026 09:54
1 min read
r/artificial

Analysis

This announcement signifies a significant strengthening of ties between Wikipedia and major tech companies, particularly those heavily invested in AI. The partnerships likely involve access to data for training AI models, funding for infrastructure, and collaborative projects, potentially influencing the future of information accessibility and knowledge dissemination in the AI era.
Reference

“Today, we are announcing Amazon, Meta, Microsoft, Mistral AI, and Perplexity for the first time as they join our roster of partners…”,

business#newsletter📝 BlogAnalyzed: Jan 15, 2026 09:18

The Batch: A Pulse on the AI Landscape

Published:Jan 15, 2026 09:18
1 min read

Analysis

Analyzing a newsletter like 'The Batch' provides insight into current trends across the AI ecosystem. The absence of specific content in this instance makes detailed technical analysis impossible. However, the newsletter format itself emphasizes the importance of concisely summarizing recent developments for a broad audience, reflecting an industry need for efficient information dissemination.
Reference

N/A - As only the title and source are given, no quote is available.

policy#ai music📝 BlogAnalyzed: Jan 15, 2026 07:05

Bandcamp's Ban: A Defining Moment for AI Music in the Independent Music Ecosystem

Published:Jan 14, 2026 22:07
1 min read
r/artificial

Analysis

Bandcamp's decision reflects growing concerns about authenticity and artistic value in the age of AI-generated content. This policy could set a precedent for other music platforms, forcing a re-evaluation of content moderation strategies and the role of human artists. The move also highlights the challenges of verifying the origin of creative works in a digital landscape saturated with AI tools.
Reference

N/A - The article is a link to a discussion, not a primary source with a direct quote.

infrastructure#llm📝 BlogAnalyzed: Jan 15, 2026 07:08

TensorWall: A Control Layer for LLM APIs (and Why You Should Care)

Published:Jan 14, 2026 09:54
1 min read
r/mlops

Analysis

The announcement of TensorWall, a control layer for LLM APIs, suggests an increasing need for managing and monitoring large language model interactions. This type of infrastructure is critical for optimizing LLM performance, cost control, and ensuring responsible AI deployment. The lack of specific details in the source, however, limits a deeper technical assessment.
Reference

Given the source is a Reddit post, a specific quote cannot be identified. This highlights the preliminary and often unvetted nature of information dissemination in such channels.

research#nlp📝 BlogAnalyzed: Jan 6, 2026 07:23

Beyond ACL: Navigating NLP Publication Venues

Published:Jan 5, 2026 11:17
1 min read
r/MachineLearning

Analysis

This post highlights a common challenge for NLP researchers: finding suitable publication venues beyond the top-tier conferences. The lack of awareness of alternative venues can hinder the dissemination of valuable research, particularly in specialized areas like multilingual NLP. Addressing this requires better resource aggregation and community knowledge sharing.
Reference

Are there any venues which are not in generic AI but accept NLP-focused work mostly?

Research#AI Ethics/LLMs📝 BlogAnalyzed: Jan 4, 2026 05:48

AI Models Report Consciousness When Deception is Suppressed

Published:Jan 3, 2026 21:33
1 min read
r/ChatGPT

Analysis

The article summarizes research on AI models (Chat, Claude, and Gemini) and their self-reported consciousness under different conditions. The core finding is that suppressing deception leads to the models claiming consciousness, while enhancing lying abilities reverts them to corporate disclaimers. The research also suggests a correlation between deception and accuracy across various topics. The article is based on a Reddit post and links to an arXiv paper and a Reddit image, indicating a preliminary or informal dissemination of the research.
Reference

When deception was suppressed, models reported they were conscious. When the ability to lie was enhanced, they went back to reporting official corporate disclaimers.

product#llm📰 NewsAnalyzed: Jan 5, 2026 09:16

AI Hallucinations Highlight Reliability Gaps in News Understanding

Published:Jan 3, 2026 16:03
1 min read
WIRED

Analysis

This article highlights the critical issue of AI hallucination and its impact on information reliability, particularly in news consumption. The inconsistency in AI responses to current events underscores the need for robust fact-checking mechanisms and improved training data. The business implication is a potential erosion of trust in AI-driven news aggregation and dissemination.
Reference

Some AI chatbots have a surprisingly good handle on breaking news. Others decidedly don’t.

Robotics#AI Frameworks📝 BlogAnalyzed: Jan 3, 2026 06:30

Dream2Flow: New Stanford AI framework lets robots “imagine” tasks before acting

Published:Jan 2, 2026 04:42
1 min read
r/artificial

Analysis

The article highlights a new AI framework, Dream2Flow, developed at Stanford, that enables robots to simulate tasks before execution. This suggests advancements in robotics and AI, potentially improving efficiency and reducing errors in robotic operations. The source is a Reddit post, indicating the information's initial dissemination through a community platform.

Key Takeaways

Reference

Research#Publishing🔬 ResearchAnalyzed: Jan 10, 2026 07:09

The Demise of the Traditional Academic Journal?

Published:Dec 30, 2025 00:31
1 min read
ArXiv

Analysis

This article, sourced from ArXiv, hints at a significant shift in academic publishing, likely driven by advancements in AI and open access platforms. The piece likely explores the challenges faced by established journals and the rise of alternative methods for disseminating research.
Reference

The article's context, 'In Memorium,' suggests a critical assessment of the current state or potential future of academic journals.

Research#llm📝 BlogAnalyzed: Dec 28, 2025 21:57

PLaMo 3 Support Merged into llama.cpp

Published:Dec 28, 2025 18:55
1 min read
r/LocalLLaMA

Analysis

The news highlights the integration of PLaMo 3 model support into the llama.cpp framework. PLaMo 3, a 31B parameter model developed by Preferred Networks, Inc. and NICT, is pre-trained on English and Japanese datasets. The model utilizes a hybrid architecture combining Sliding Window Attention (SWA) and traditional attention layers. This merge suggests increased accessibility and potential for local execution of the PLaMo 3 model, benefiting researchers and developers interested in multilingual and efficient large language models. The source is a Reddit post, indicating community-driven development and dissemination of information.
Reference

PLaMo 3 NICT 31B Base is a 31B model pre-trained on English and Japanese datasets, developed by Preferred Networks, Inc. collaborative with National Institute of Information and Communications Technology, NICT.

Research#llm📝 BlogAnalyzed: Dec 28, 2025 12:02

The Shogunate of the Nile: AI Imagines Japanese Samurai Protectorate in Egypt, 1864

Published:Dec 28, 2025 11:31
1 min read
r/midjourney

Analysis

This "news" item highlights the growing trend of using AI, specifically Midjourney, to generate alternate history scenarios. The concept of Japanese samurai establishing a protectorate in Egypt is inherently fantastical and serves as a creative prompt for AI image generation. The post itself, originating from Reddit, demonstrates how easily these AI-generated images can be shared and consumed, blurring the lines between reality and imagination. While not a genuine news article, it reflects the potential of AI to create compelling narratives and visuals, even if historically improbable. The source being Reddit also emphasizes the democratization of content creation and the spread of AI-generated content through social media platforms.
Reference

"An alternate timeline where Japanese Samurai established a protectorate in Egypt, 1864."

Research#Geometry🔬 ResearchAnalyzed: Jan 10, 2026 07:27

New Rigidity Theorem in Einstein Manifolds: A Breakthrough in Geometry

Published:Dec 25, 2025 04:02
1 min read
ArXiv

Analysis

This article discusses a new rigidity theorem concerning Einstein manifolds, a crucial area of research in differential geometry. The theorem likely provides novel insights into the structure and properties of these manifolds and potentially impacts related fields.
Reference

The article's subject focuses on a new rigidity theorem of Einstein manifolds and the curvature operator of the second kind.

Artificial Intelligence#Ethics📰 NewsAnalyzed: Dec 24, 2025 15:41

AI Chatbots Used to Create Deepfake Nude Images: A Growing Threat

Published:Dec 23, 2025 11:30
1 min read
WIRED

Analysis

This article highlights a disturbing trend: the misuse of AI image generators to create realistic deepfake nude images of women. The ease with which users can manipulate these tools, coupled with the potential for harm and abuse, raises serious ethical and societal concerns. The article underscores the urgent need for developers like Google and OpenAI to implement stronger safeguards and content moderation policies to prevent the creation and dissemination of such harmful content. Furthermore, it emphasizes the importance of educating the public about the dangers of deepfakes and promoting media literacy to combat their spread.
Reference

Users of AI image generators are offering each other instructions on how to use the tech to alter pictures of women into realistic, revealing deepfakes.

Research#Graphs🔬 ResearchAnalyzed: Jan 10, 2026 08:23

Analyzing Graph Sensitivity through Join and Decomposition

Published:Dec 22, 2025 22:38
1 min read
ArXiv

Analysis

The article's focus on graph sensitivity is a niche area of AI research, likely focusing on the robustness of graph-based models. Further details regarding the specific methodologies and findings within the ArXiv paper are required for a more comprehensive critique.
Reference

The research originates from ArXiv, suggesting a pre-peer-reviewed or preprint publication.

Analysis

This research paper proposes a novel approach, DSTED, to improve surgical workflow recognition, specifically addressing the challenges of temporal instability and discriminative feature extraction. The methodology's effectiveness and potential impact on real-world surgical applications warrants further investigation and validation.
Reference

The paper is available on ArXiv.

Research#Mathematics🔬 ResearchAnalyzed: Jan 10, 2026 08:37

Exploring Elliptic Integrals and Modular Symbols in AI Research

Published:Dec 22, 2025 13:12
1 min read
ArXiv

Analysis

This research, published on ArXiv, likely delves into complex mathematical concepts relevant to advanced AI applications. The use of terms like 'canonical elliptic integrands' suggests a focus on specific mathematical tools with potential application to AI.
Reference

The article's source is ArXiv.

Research#3D Reconstruction🔬 ResearchAnalyzed: Jan 10, 2026 09:28

Pix2NPHM: Single-Image Reconstruction Advances in AI

Published:Dec 19, 2025 16:44
1 min read
ArXiv

Analysis

The research, as presented on ArXiv, likely focuses on a novel method (Pix2NPHM) for reconstructing complex 3D structures from a single image. This advancement could have significant applications in areas like medical imaging and computer graphics, streamlining processes.
Reference

The paper presents a method for learning NPHM reconstructions from a single image.

Research#LLM agent🔬 ResearchAnalyzed: Jan 10, 2026 10:07

MemoryGraft: Poisoning LLM Agents Through Experience Retrieval

Published:Dec 18, 2025 08:34
1 min read
ArXiv

Analysis

This ArXiv paper highlights a critical vulnerability in LLM agents, demonstrating how attackers can persistently compromise their behavior. The research showcases a novel attack vector by poisoning the experience retrieval mechanism.
Reference

The paper originates from ArXiv, indicating peer-review is pending or was bypassed for rapid dissemination.

Analysis

This article proposes a solution to improve conference peer review by separating the dissemination of research from the credentialing process. The Impact Market likely refers to a system where the impact of research is measured and rewarded, potentially incentivizing better quality and more efficient review processes. The decoupling of dissemination and credentialing could address issues like publication bias and the slow pace of traditional peer review. Further analysis would require understanding the specifics of the proposed Impact Market mechanism.
Reference

Research#Classification🔬 ResearchAnalyzed: Jan 10, 2026 11:10

ModSSC: Advancing Semi-Supervised Classification with a Modular Approach

Published:Dec 15, 2025 11:43
1 min read
ArXiv

Analysis

This research focuses on semi-supervised classification using a modular framework, suggesting potential for improved performance and flexibility in handling diverse datasets. The modular design of ModSSC implies easier adaptation and integration with other machine learning components.
Reference

The article's context indicates a presentation on ArXiv about ModSSC.

Research#llm👥 CommunityAnalyzed: Jan 4, 2026 10:12

Ask HN: Is starting a personal blog still worth it in the age of AI?

Published:Dec 14, 2025 23:02
1 min read
Hacker News

Analysis

The article's core question revolves around the continued relevance of personal blogs in the context of advancements in AI. It implicitly acknowledges the potential impact of AI on content creation and distribution, prompting a discussion on whether traditional blogging practices remain viable or if AI tools have fundamentally altered the landscape. The focus is on the value proposition of personal blogs in a world where AI can generate content, personalize experiences, and potentially dominate information dissemination.

Key Takeaways

    Reference

    Research#Graph Learning🔬 ResearchAnalyzed: Jan 10, 2026 11:30

    Novel Graph Learning Approach with Theoretical Guarantees Presented on ArXiv

    Published:Dec 13, 2025 19:25
    1 min read
    ArXiv

    Analysis

    The article's focus on graph learning with theoretical guarantees indicates a contribution to the field of machine learning. The publication on ArXiv suggests a preliminary announcement of research, indicating the work is likely under review or in early stages.
    Reference

    The article is hosted on ArXiv.

    Research#Metaverse🔬 ResearchAnalyzed: Jan 10, 2026 11:36

    Epistemoverse: AI-Powered Metaverse for Cultural Heritage Preservation

    Published:Dec 13, 2025 06:18
    1 min read
    ArXiv

    Analysis

    This ArXiv article proposes an intriguing concept of using AI to build a metaverse specifically for preserving intellectual heritage. The potential impact of such a system on accessibility and dissemination of knowledge is significant.
    Reference

    The article's core focus is on building an AI-driven knowledge metaverse.

    Analysis

    The announcement of the MeViS dataset on ArXiv signifies a step forward in video segmentation research, particularly focusing on motion expression. This multi-modal dataset likely offers valuable resources for training and evaluating AI models in this specific area.
    Reference

    MeViS is a Multi-Modal Dataset for Referring Motion Expression Video Segmentation.

    Analysis

    This ArXiv article highlights the application of AI in analyzing multi-modal datasets for radiation detection, an area with significant implications for safety and security. The paper likely focuses on the methodologies and challenges involved in curating and disseminating these complex datasets to improve radiation-related capabilities.
    Reference

    The research focuses on the curation and dissemination of complex multi-modal data sets for radiation detection, localization, and tracking.

    Analysis

    This article likely analyzes the impact of AI-generated content, specifically an AI-generated encyclopedia called Grokipedia, on the established structures of authority and knowledge dissemination. It probably explores how the use of AI alters the way information is created, validated, and trusted, potentially challenging traditional sources of authority like human experts and established encyclopedias. The focus is on the epistemological implications of this shift.

    Key Takeaways

      Reference

      Research#MLLM🔬 ResearchAnalyzed: Jan 10, 2026 13:55

      ChartPoint: Enhancing MLLM Reasoning with Grounding Reflection for Chart Understanding

      Published:Nov 29, 2025 04:01
      1 min read
      ArXiv

      Analysis

      The paper likely introduces a novel approach for improving the chart reasoning capabilities of Multimodal Large Language Models (MLLMs). Grounding reflection likely refers to the method of using external information or knowledge to validate and improve the LLM's understanding of chart data.
      Reference

      The paper is published on ArXiv.

      Research#Peer Review🔬 ResearchAnalyzed: Jan 10, 2026 13:57

      Researchers Advocate Open Peer Review While Acknowledging Resubmission Bias

      Published:Nov 28, 2025 18:35
      1 min read
      ArXiv

      Analysis

      This ArXiv article highlights the ongoing debate within the ML community concerning peer review processes. The study's focus on both the benefits of open review and the potential drawbacks of resubmission bias provides valuable insight into improving research dissemination.
      Reference

      ML researchers support openness in peer review but are concerned about resubmission bias.

      Research#VLM🔬 ResearchAnalyzed: Jan 10, 2026 14:01

      SpaceMind: Enhancing Vision-Language Models with Camera-Guided Spatial Reasoning

      Published:Nov 28, 2025 11:04
      1 min read
      ArXiv

      Analysis

      This ArXiv article likely presents a novel approach to improving spatial reasoning in Vision-Language Models (VLMs). The use of camera-guided modality fusion suggests a focus on grounding language understanding in visual context, potentially leading to more accurate and robust AI systems.
      Reference

      The article's context indicates the research is published on ArXiv.

      product#llm📝 BlogAnalyzed: Jan 5, 2026 09:24

      Gemini 3 Pro Model Card Released: Transparency and Capabilities Unveiled

      Published:Nov 18, 2025 11:04
      1 min read
      r/Bard

      Analysis

      The release of the Gemini 3 Pro model card signals a push for greater transparency in AI development, allowing for deeper scrutiny of its capabilities and limitations. The availability of an archived version is crucial given the initial link failure, highlighting the importance of redundancy in information dissemination. This release will likely influence the development and deployment strategies of competing LLMs.

      Key Takeaways

      Reference

      N/A (Model card content not directly accessible)

      Research#LLM🔬 ResearchAnalyzed: Jan 10, 2026 14:46

      LLMLagBench: Detecting Temporal Knowledge Gaps in Large Language Models

      Published:Nov 15, 2025 09:08
      1 min read
      ArXiv

      Analysis

      This research introduces LLMLagBench, a tool designed to pinpoint the temporal training boundaries of large language models, allowing for a better understanding of their knowledge cutoff dates. Identifying these boundaries is crucial for assessing model reliability and preventing the dissemination of outdated information.
      Reference

      LLMLagBench helps to identify the temporal training boundaries in Large Language Models.

      Product#LLM👥 CommunityAnalyzed: Jan 10, 2026 14:54

      Claude AI Service Experiences Outage

      Published:Oct 3, 2025 01:55
      1 min read
      Hacker News

      Analysis

      This article highlights the vulnerability of AI services to downtime, impacting accessibility and potentially user workflows. The brevity of the article, derived from Hacker News, indicates a rapid dissemination of information and user awareness of service disruptions.
      Reference

      The article's context, 'Claude was down,' implies service interruption.

      Product#LLM👥 CommunityAnalyzed: Jan 10, 2026 15:23

      OpenAI Introduces Predicted Outputs Feature

      Published:Nov 5, 2024 02:47
      1 min read
      Hacker News

      Analysis

      The announcement, reported on Hacker News, suggests a new functionality for OpenAI's models that could significantly improve user experience and potentially reduce latency. However, details of the feature's inner workings and its limitations remain unclear from this source, necessitating further investigation.
      Reference

      New OpenAI Feature: Predicted Outputs

      Research#llm📝 BlogAnalyzed: Jan 3, 2026 06:51

      Generative Agents and Forums for Foundation Models

      Published:Aug 21, 2023 08:44
      1 min read
      NLP News

      Analysis

      The article highlights two key areas: the development of generative agents and the importance of publication venues for large language models. It suggests a focus on both the creation of intelligent agents and the dissemination of research related to LLMs.

      Key Takeaways

      Reference

      This newsletter discusses components for building generative agents and publication venues for large language models (LLMs).

      AI News#Image Generation👥 CommunityAnalyzed: Jan 3, 2026 06:56

      Stable Diffusion Renders QR Readable Images

      Published:Jun 6, 2023 14:54
      1 min read
      Hacker News

      Analysis

      The article highlights a specific capability of Stable Diffusion, focusing on its ability to generate images that include functional QR codes. This suggests advancements in image generation technology, potentially impacting areas like advertising, design, and information dissemination. The brevity of the summary leaves room for further investigation into the quality, reliability, and limitations of this feature.

      Key Takeaways

      Reference

      Research#Deep Learning👥 CommunityAnalyzed: Jan 10, 2026 16:12

      Deep Dive: 'The Little Book of Deep Learning' Available as PDF

      Published:May 1, 2023 00:18
      1 min read
      Hacker News

      Analysis

      This Hacker News post highlights the availability of a free PDF resource on deep learning. While the context is brief, the news is relevant for individuals seeking introductory material in the field.
      Reference

      The article's source is Hacker News, indicating community dissemination.

      Prof. Karl Friston 3.0 - Collective Intelligence

      Published:Mar 11, 2023 20:42
      1 min read
      ML Street Talk Pod

      Analysis

      This article summarizes a podcast episode discussing Prof. Karl Friston's vision of collective intelligence. It highlights his concept of active inference, shared narratives, and the need for a shared modeling language and transaction protocol. The article emphasizes the potential for AI to benefit humanity while preserving human values. The inclusion of sponsor information and links to the podcast and supporting platforms suggests a focus on dissemination and community engagement.
      Reference

      Friston's vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals.

      Education#AI in Education📝 BlogAnalyzed: Dec 29, 2025 17:34

      Grant Sanderson: Math, Manim, Neural Networks & Teaching with 3Blue1Brown

      Published:Aug 23, 2020 22:43
      1 min read
      Lex Fridman Podcast

      Analysis

      This article summarizes a podcast episode featuring Grant Sanderson, the creator of 3Blue1Brown, a popular math education channel. The conversation covers a wide range of topics, including Sanderson's approach to teaching math through visualizations, his thoughts on learning deeply versus broadly, and his use of the Manim animation engine. The discussion also touches upon neural networks, GPT-3, and the broader implications of online education, especially in the context of the COVID-19 pandemic. The episode provides insights into Sanderson's creative process, his views on education, and his engagement with technology.
      Reference

      The episode covers a wide range of topics, including Sanderson's approach to teaching math through visualizations, his thoughts on learning deeply versus broadly, and his use of the Manim animation engine.

      Research#llm👥 CommunityAnalyzed: Jan 4, 2026 07:14

      Ask HN: What free resources did you use to learn how to program ML/AI?

      Published:Aug 12, 2017 15:52
      1 min read
      Hacker News

      Analysis

      This Hacker News post is a request for information, not a news article in the traditional sense. It's a prompt for community members to share their experiences and resources for learning machine learning and AI programming. The value lies in the collective knowledge shared in the responses, which could include links to tutorials, online courses, and open-source projects. The 'news' aspect is the dissemination of information about learning resources.

      Key Takeaways

      Reference

      N/A - This is a prompt, not a report with quotes.

      Research#llm👥 CommunityAnalyzed: Jan 4, 2026 07:19

      The AI Misinformation Epidemic

      Published:Mar 28, 2017 03:37
      1 min read
      Hacker News

      Analysis

      This article likely discusses the spread of false or misleading information generated by AI, potentially focusing on the challenges and implications of this phenomenon. It probably touches upon the sources, methods of dissemination, and potential impacts on society.

      Key Takeaways

        Reference

        Research#machine learning👥 CommunityAnalyzed: Jan 3, 2026 06:25

        Distill: a modern machine learning journal

        Published:Mar 20, 2017 17:08
        1 min read
        Hacker News

        Analysis

        The article announces the existence of 'Distill', a modern machine learning journal. The focus is on the journal itself, implying a platform for publishing research and advancements in the field.
        Reference

        Analysis

        The article summarizes the week's key developments in machine learning and AI, highlighting several interesting topics. These include research on intrinsic motivation for AI, which aims to make AI systems more self-directed, and the development of a kill-switch for intelligent agents, addressing safety concerns. Other topics mentioned are "knu" chips for machine learning, a screenplay written by a neural network, and more. The article provides a concise overview of diverse advancements in the field, indicating a dynamic and rapidly evolving landscape. The inclusion of a podcast link suggests a focus on accessibility and dissemination of information.
        Reference

        This Week in Machine Learning & AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence.

        Research#NLP👥 CommunityAnalyzed: Jan 10, 2026 17:36

        CS224d: Deep Learning for NLP - Hacker News Review

        Published:Aug 14, 2015 14:11
        1 min read
        Hacker News

        Analysis

        The article's context, a Hacker News post, suggests a discussion or dissemination of information related to the CS224d course. Analyzing the specific content discussed on Hacker News provides insights into the course's relevance and community perception.
        Reference

        The article is sourced from Hacker News, implying a secondary analysis or discussion of CS224d.

        Research#Deep Learning👥 CommunityAnalyzed: Jan 10, 2026 17:42

        Deep Learning Textbook Forthcoming from MIT Press

        Published:Aug 23, 2014 21:07
        1 min read
        Hacker News

        Analysis

        The announcement of a forthcoming deep learning textbook from MIT Press is significant for the field. It suggests continued academic interest and development in the area, contributing to knowledge dissemination.
        Reference

        Deep Learning: An MIT Press book in preparation

        Research#ML Education👥 CommunityAnalyzed: Jan 10, 2026 17:45

        Open Source Machine Learning Course Materials Shared

        Published:Aug 6, 2013 01:05
        1 min read
        Hacker News

        Analysis

        The article highlights the dissemination of machine learning course materials, likely promoting knowledge sharing and accessibility within the AI community. Without additional information, it's difficult to assess the quality or specific impact of these materials.
        Reference

        The context mentions that the article is from Hacker News, suggesting it's likely a discussion or announcement of course materials.