38 results
business#gpu 📝 Blog · Analyzed: Jan 15, 2026 08:46

TSMC Q4 Profit Surges 35% on AI Chip Demand, Signaling Continued Supply Constraints

Published: Jan 15, 2026 08:32
1 min read
钛媒体

Analysis

TSMC's record-breaking profit reflects the insatiable demand for advanced AI chips, driven by the rapid growth of AI applications. The warning of continued supply shortages for two more years highlights the critical need for increased investment in semiconductor manufacturing capacity and the potential impact on AI innovation.
Reference

The article states: "Chip supply shortages will continue for another two years."

product#llm 📝 Blog · Analyzed: Jan 14, 2026 20:15

Preventing Context Loss in Claude Code: A Proactive Alert System

Published: Jan 14, 2026 17:29
1 min read
Zenn AI

Analysis

This article addresses a practical issue of context window management in Claude Code, a critical aspect for developers using large language models. The proposed solution of a proactive alert system using hooks and status lines is a smart approach to mitigating the performance degradation caused by automatic compacting, offering a significant usability improvement for complex coding tasks.
Reference

Claude Code is a valuable tool, but its automatic compacting can disrupt workflows. The article aims to solve this by warning users before the context window exceeds the threshold.
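
In the spirit of the article's approach (hooks plus a status line), here is a minimal sketch of a status-line script that estimates context usage and warns before compaction kicks in. The stdin JSON contract, the `transcript_path` field, and the 160k-token threshold are assumptions for illustration; check Claude Code's documentation for the actual interface.

```python
#!/usr/bin/env python3
"""Hypothetical status-line script: warn before automatic compacting.

Assumes the host tool pipes session info as JSON on stdin, including a
`transcript_path` field -- an assumption for this sketch; check the
actual Claude Code status-line documentation.
"""
import json
import os
import sys

WARN_TOKENS = 160_000  # assumed threshold, set below the compaction limit


def main() -> None:
    info = json.load(sys.stdin)
    transcript = info.get("transcript_path", "")
    size = os.path.getsize(transcript) if os.path.exists(transcript) else 0
    est_tokens = size // 4  # crude heuristic: ~4 bytes per token
    if est_tokens >= WARN_TOKENS:
        print(f"WARNING: ~{est_tokens:,} tokens, compaction imminent")
    else:
        print(f"context ~{est_tokens:,} tokens")


if __name__ == "__main__":
    main()
```

Wired in as the status-line command, a script like this would surface the warning on every turn rather than after compaction has already disrupted the session.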

ethics#ai ethics 📝 Blog · Analyzed: Jan 13, 2026 18:45

AI Over-Reliance: A Checklist for Identifying Dependence and Blind Faith in the Workplace

Published: Jan 13, 2026 18:39
1 min read
Qiita AI

Analysis

This checklist highlights a crucial, yet often overlooked, aspect of AI integration: the potential for over-reliance and the erosion of critical thinking. The article's focus on identifying behavioral indicators of AI dependence within a workplace setting is a practical step towards mitigating risks associated with the uncritical adoption of AI outputs.
Reference

"AI is saying it, so it's correct."

business#code generation 📝 Blog · Analyzed: Jan 12, 2026 09:30

Netflix Engineer's Call for Vigilance: Navigating AI-Assisted Software Development

Published: Jan 12, 2026 09:26
1 min read
Qiita AI

Analysis

This article highlights a crucial concern: the potential for reduced code comprehension among engineers due to AI-driven code generation. While AI accelerates development, it risks creating 'black boxes' of code, hindering debugging, optimization, and long-term maintainability. This emphasizes the need for robust design principles and rigorous code review processes.
Reference

The article's key takeaway is its warning that engineers risk losing their understanding of how their own AI-generated code actually works.

policy#llm 📝 Blog · Analyzed: Jan 6, 2026 07:18

X Japan Warns Against Illegal Content Generation with Grok AI, Threatens Legal Action

Published: Jan 6, 2026 06:42
1 min read
ITmedia AI+

Analysis

This announcement highlights the growing concern over AI-generated content and the legal liabilities of platforms hosting such tools. X's proactive stance suggests a preemptive measure to mitigate potential legal repercussions and maintain platform integrity. The effectiveness of these measures will depend on the robustness of their content moderation and enforcement mechanisms.
Reference

X Corp. Japan, the Japanese subsidiary of the US company X, warned users not to create illegal content with Grok, the generative AI available on X.

Analysis

This paper addresses a critical climate change hazard (GLOFs) by proposing an automated deep learning pipeline for monitoring Himalayan glacial lakes using time-series SAR data. The use of SAR overcomes the limitations of optical imagery due to cloud cover. The 'temporal-first' training strategy and the high IoU achieved demonstrate the effectiveness of the approach. The proposed operational architecture, including a Dockerized pipeline and RESTful endpoint, is a significant step towards a scalable and automated early warning system.
Reference

The model achieves an IoU of 0.9130, validating the success and efficacy of the "temporal-first" strategy.
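
For readers unfamiliar with the metric, intersection-over-union for binary segmentation masks is the quantity behind that 0.9130 figure; a minimal sketch (illustrative code, not the paper's implementation):

```python
import numpy as np


def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union of two boolean segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union


# Toy example: two overlapping lake masks on a 4x4 grid.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(round(iou(a, b), 4))  # 4 shared pixels / 6 total -> 0.6667
```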

Research#llm 📝 Blog · Analyzed: Dec 29, 2025 08:32

"AI Godfather" Warns: Artificial Intelligence Will Replace More Jobs in 2026

Published: Dec 29, 2025 08:08
1 min read
cnBeta

Analysis

This article reports on Geoffrey Hinton's warning about AI's potential to displace numerous jobs by 2026. While Hinton's expertise lends credibility to the claim, the article lacks specifics regarding the types of jobs at risk and the reasoning behind the 2026 timeline. The article is brief and relies heavily on a single quote, leaving readers with a general sense of concern but without a deeper understanding of the underlying factors. Further context, such as the specific AI advancements driving this prediction and potential mitigation strategies, would enhance the article's value. The source, cnBeta, is a technology news website, but further investigation into Hinton's full interview is warranted for a more comprehensive perspective.

Reference

AI will "be able to replace many, many jobs" in 2026.

Analysis

This paper introduces a new metric, eigen microstate entropy ($S_{EM}$), to detect and interpret phase transitions, particularly in non-equilibrium systems. The key contribution is the demonstration that $S_{EM}$ can provide early warning signals for phase transitions, as shown in both biological and climate systems. This has significant implications for understanding and predicting complex phenomena.
Reference

A significant increase in $S_{EM}$ precedes major phase transitions, observed before biomolecular condensate formation and El Niño events.
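
For orientation, the eigen-microstate framework typically defines $S_{EM}$ as the Shannon entropy of normalized singular-value weights of the ensemble matrix; the reconstruction below follows that standard construction and is an assumption, not an excerpt from the paper.

```latex
% Ensemble matrix A (N units x M snapshots), decomposed by SVD into
% eigen microstates U_I, V_I with singular values \sigma_I:
\[
  A \;=\; \sum_{I} \sigma_I \, U_I V_I^{\top},
  \qquad
  w_I \;=\; \frac{\sigma_I^{2}}{\sum_{J} \sigma_J^{2}},
  \qquad
  S_{EM} \;=\; -\sum_{I} w_I \ln w_I .
\]
```

A sharp rise in $S_{EM}$ then corresponds to the weight spreading across many eigen microstates, consistent with the early-warning behavior described in the reference.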

User Frustration with AI Censorship on Offensive Language

Published: Dec 28, 2025 18:04
1 min read
r/ChatGPT

Analysis

The Reddit post expresses user frustration with the level of censorship implemented by an AI, specifically ChatGPT. The user feels the AI's responses are overly cautious and parental, even when using relatively mild offensive language. The user's primary complaint is the AI's tendency to preface or refuse to engage with prompts containing curse words, which the user finds annoying and counterproductive. This suggests a desire for more flexibility and less rigid content moderation from the AI, highlighting a common tension between safety and user experience in AI interactions.
Reference

I don't remember it being censored to this snowflake god awful level. Even when using phrases such as "fucking shorten your answers" the next message has to contain some subtle heads up or straight up "i won't condone/engage to this language"

Research#image generation 📝 Blog · Analyzed: Dec 29, 2025 02:08

Learning Face Illustrations with a Pixel Space Flow Matching Model

Published: Dec 28, 2025 07:42
1 min read
Zenn DL

Analysis

The article describes the training of a 90M parameter JiT model capable of generating 256x256 face illustrations. The author highlights the selection of high-quality outputs and provides examples. The article also links to a more detailed explanation of the JiT model and the code repository used. The author cautions about potential breaking changes in the main branch of the code repository. This suggests a focus on practical experimentation and iterative development in the field of generative AI, specifically for image generation.
Reference

Cherry-picked output examples: 16 images at 256x256, generated from different prompts and manually selected.
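
As background on the technique named in the title, here is a minimal flow-matching training step in PyTorch. This is a generic sketch of flow matching under a linear noise-to-data interpolant, with a toy network and tiny images, not the author's JiT code:

```python
import torch
import torch.nn as nn

# Toy velocity-field network; a real pixel-space model would be far larger.
model = nn.Sequential(nn.Linear(3 * 8 * 8 + 1, 256), nn.ReLU(),
                      nn.Linear(256, 3 * 8 * 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)


def training_step(x1: torch.Tensor) -> torch.Tensor:
    """One flow-matching step: regress the velocity along the noise->data path."""
    b = x1.shape[0]
    x1 = x1.flatten(1)                       # data (here: flattened 8x8 RGB)
    x0 = torch.randn_like(x1)                # noise sample
    t = torch.rand(b, 1)                     # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1               # linear interpolant
    v_target = x1 - x0                       # constant target velocity
    v_pred = model(torch.cat([xt, t], dim=1))
    loss = ((v_pred - v_target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss


loss = training_step(torch.randn(16, 3, 8, 8))  # fake batch of images
print(float(loss))
```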

Research#llm 📝 Blog · Analyzed: Dec 27, 2025 20:31

Waymo Updates Vehicles for Power Outages, Still Faces Criticism

Published: Dec 27, 2025 19:34
1 min read
Slashdot

Analysis

This article highlights Waymo's efforts to improve its self-driving cars' performance during power outages, specifically addressing the issues encountered during a recent outage in San Francisco. While Waymo is proactively implementing updates to handle dark traffic signals and navigate more decisively, the article also points out the ongoing criticism and regulatory questions surrounding the deployment of autonomous vehicles. The pause in service due to flash flood warnings further underscores the challenges Waymo faces in ensuring safety and reliability in diverse and unpredictable conditions. The quote from Jeffrey Tumlin raises important questions about the appropriate number and management of autonomous vehicles on city streets.
Reference

"I think we need to be asking 'what is a reasonable number of [autonomous vehicles] to have on city streets, by time of day, by geography and weather?'"

Politics#Social Media Regulation 📝 Blog · Analyzed: Dec 28, 2025 21:58

New York State to Mandate Warning Labels on Social Media Platforms

Published: Dec 26, 2025 21:03
1 min read
Engadget

Analysis

This article reports on New York State's new law requiring social media platforms to display warning labels, similar to those on cigarette packages. The law targets features like infinite scrolling and algorithmic feeds, aiming to protect young users' mental health. Governor Hochul emphasized the importance of safeguarding children from the potential harms of excessive social media use. The legislation reflects growing concerns about the impact of social media on young people and follows similar initiatives in other regions, including proposed legislation in California and bans in Australia and Denmark. This move signifies a broader trend of governmental intervention in regulating social media's influence.
Reference

"Keeping New Yorkers safe has been my top priority since taking office, and that includes protecting our kids from the potential harms of social media features that encourage excessive use," Gov. Hochul said in a statement.

Analysis

This paper presents a novel approach to geomagnetic storm prediction by incorporating cosmic-ray flux modulation as a precursor signal within a physics-informed LSTM model. The use of cosmic-ray data, which can provide early warnings, is a significant contribution. The study demonstrates improved forecast skill, particularly for longer prediction horizons, highlighting the value of integrating physics knowledge with deep learning for space-weather forecasting. The results are promising for improving the accuracy and lead time of geomagnetic storm predictions, which is crucial for protecting technological infrastructure.
Reference

Incorporating cosmic-ray information further improves 48-hour forecast skill by up to 25.84% (from 0.178 to 0.224).
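
The quoted 25.84% is the relative gain in the skill score, which checks out against the reported numbers:

```latex
\[
  \frac{0.224 - 0.178}{0.178} \;\approx\; 0.2584 \;=\; 25.84\,\% .
\]
```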

Research#MLOps 📝 Blog · Analyzed: Dec 28, 2025 21:57

Feature Stores: Why the MVP Always Works and That's the Trap (6 Years of Lessons)

Published: Dec 26, 2025 07:24
1 min read
r/mlops

Analysis

This article from r/mlops provides a critical analysis of the challenges encountered when building and scaling feature stores. It highlights the common pitfalls that arise as feature stores evolve from simple MVP implementations to complex, multi-faceted systems. The author emphasizes the deceptive simplicity of the initial MVP, which often masks the complexities of handling timestamps, data drift, and operational overhead. The article serves as a cautionary tale, warning against the common traps that lead to offline-online drift, point-in-time leakage, and implementation inconsistencies.
Reference

Somewhere between step 1 and now, you've acquired a platform team by accident.
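
To make the leakage pitfall concrete, here is a minimal point-in-time join in pandas: each training label may only see the latest feature value observed at or before the label's own timestamp. The column names and values are illustrative, not from the article:

```python
import pandas as pd

# Feature values as they became available over time.
features = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-01", "2025-01-05", "2025-01-09"]),
    "user_id": [1, 1, 1],
    "avg_spend": [10.0, 12.5, 40.0],
})

# Labels with their own event timestamps.
labels = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-06"]),
    "user_id": [1],
    "churned": [0],
})

# Point-in-time correct: only features with ts <= label ts are visible,
# so the label dated Jan 6 sees avg_spend=12.5, not the future 40.0.
train = pd.merge_asof(
    labels.sort_values("ts"), features.sort_values("ts"),
    on="ts", by="user_id", direction="backward",
)
print(train)
```

A naive join on `user_id` alone would happily attach the Jan 9 value to the Jan 6 label, which is exactly the point-in-time leakage the post warns about.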

Research#Solar Flare 🔬 Research · Analyzed: Jan 10, 2026 07:17

Early Warning: Ca II K Brightenings Predict Solar Flare Onset

Published: Dec 26, 2025 05:23
1 min read
ArXiv

Analysis

This pilot study presents a significant step towards improved solar flare prediction by identifying a precursory signal. The research leverages advanced observational techniques to enhance our understanding of solar activity.
Reference

Compact Ca II K brightenings precede solar flares.

Analysis

This paper highlights a critical vulnerability in current language models: they fail to learn from negative examples presented in a warning-framed context. The study demonstrates that models exposed to warnings about harmful content are just as likely to reproduce that content as models directly exposed to it. This has significant implications for the safety and reliability of AI systems, particularly those trained on data containing warnings or disclaimers. The paper's analysis, using sparse autoencoders, provides insights into the underlying mechanisms, pointing to a failure of orthogonalization and the dominance of statistical co-occurrence over pragmatic understanding. The findings suggest that current architectures prioritize the association of content with its context rather than the meaning or intent behind it.
Reference

Models exposed to such warnings reproduced the flagged content at rates statistically indistinguishable from models given the content directly (76.7% vs. 83.3%).
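
As a rough check on "statistically indistinguishable", a two-proportion z-test; the per-condition sample size of 30 is an assumption chosen so that 76.7% and 83.3% correspond to 23/30 and 25/30, and is not stated in the excerpt:

```python
from math import sqrt
from statistics import NormalDist

# Assumed counts: 23/30 (warning-framed) vs 25/30 (direct exposure).
x1, n1 = 23, 30
x2, n2 = 25, 30

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                 # pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided

print(f"p1={p1:.3f} p2={p2:.3f} z={z:.2f} p={p_value:.2f}")
# A p-value well above 0.05 is consistent with "indistinguishable".
```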

Research#llm 📝 Blog · Analyzed: Dec 25, 2025 05:55

Cost Warning from BQ Police! Before Using 'Natural Language Queries' with BigQuery Remote MCP Server

Published: Dec 25, 2025 02:30
1 min read
Zenn Gemini

Analysis

This article serves as a cautionary tale regarding the potential cost implications of using natural language queries with BigQuery's remote MCP server. It highlights the risk of unintentionally triggering large-scale scans, leading to a surge in BigQuery usage fees. The author emphasizes that the cost extends beyond BigQuery, as increased interactions with the LLM also contribute to higher expenses. The article advocates for proactive measures to mitigate these financial risks before they escalate. It's a practical guide for developers and data professionals looking to leverage natural language processing with BigQuery while remaining mindful of cost optimization.
Reference

Once an LLM can "casually query BigQuery in natural language," large scans can be triggered unintentionally, with the risk of ballooning BigQuery usage fees.
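
One common guardrail in this spirit (not necessarily the article's own recipe) is to dry-run each generated query to see how many bytes it would scan, and to hard-cap billable bytes per query; both options exist in the official google-cloud-bigquery Python client:

```python
from google.cloud import bigquery

client = bigquery.Client()
sql = "SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013`"

# 1) Dry run: estimate the scan without executing (and without cost).
dry = client.query(
    sql, job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
)
print(f"would scan {dry.total_bytes_processed / 1e9:.2f} GB")

# 2) Hard cap: the job fails instead of billing past the limit.
capped = bigquery.QueryJobConfig(maximum_bytes_billed=10**9)  # 1 GB cap
rows = client.query(sql, job_config=capped).result()
```

Running the dry run first and refusing to execute anything above a byte budget gives the LLM-facing tool a cheap, deterministic cost gate.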

Research#llm 🔬 Research · Analyzed: Jan 4, 2026 07:20

Early warning signals for loss of control

Published: Dec 24, 2025 00:59
1 min read
ArXiv

Analysis

This article likely discusses research on identifying indicators that predict when a system, possibly an LLM, might exhibit undesirable or uncontrolled behavior. The focus is on proactive detection rather than reactive measures. The source, ArXiv, suggests this is a scientific or technical paper.

Business#Regulation 📝 Blog · Analyzed: Dec 28, 2025 21:58

KSA Fines LeoVegas for Duty of Care Failure and Warns Vbet

Published: Dec 23, 2025 16:57
1 min read
ReadWrite

Analysis

The news article reports on the Dutch Gaming Authority (KSA) imposing a fine on LeoVegas for failing to meet its duty of care, along with a warning issued to Vbet. The piece reads as a short regulatory announcement, and the lack of detail about LeoVegas's specific failures or the nature of the warning to Vbet limits the depth of the analysis. Assessing the context and implications of these actions would require further information, such as the specific regulations violated and the potential impact on the companies involved.

Reference

The Gaming Authority in the Netherlands (KSA) has imposed a half-million euro fine on LeoVegas, on the same day it…

Safety#Forecasting 🔬 Research · Analyzed: Jan 10, 2026 08:26

AI Enhances Tsunami Forecasting Accuracy with Bayesian Methods

Published: Dec 22, 2025 19:01
1 min read
ArXiv

Analysis

This research utilizes Reduced Order Modeling and Bayesian Hierarchical Pooling to improve tsunami forecasting, a crucial area for public safety. The application of these advanced AI techniques promises more accurate and timely warnings, ultimately saving lives.
Reference

The study focuses on Reduced Order Modeling for Tsunami Forecasting.
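
For context on the first ingredient, here is a minimal proper orthogonal decomposition (POD) reduced-order model in numpy: simulation snapshots are compressed onto a few dominant SVD modes, and states are reconstructed from low-dimensional coefficients. This is a generic ROM illustration on invented data, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake snapshot matrix: 500 spatial points x 40 simulated wave states
# (built with rank 5 so a 5-mode basis captures it exactly).
snapshots = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 40))

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                      # retained modes (the reduced dimension)
basis = U[:, :r]

# Reduce and reconstruct one state; error is ~0 since the data has rank 5.
state = snapshots[:, 7]
coeffs = basis.T @ state           # 500-dim state -> 5 coefficients
approx = basis @ coeffs
print(np.linalg.norm(state - approx) / np.linalg.norm(state))
```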

Research#Modeling 🔬 Research · Analyzed: Jan 10, 2026 08:29

Markov Chain Modeling for Public Health Risk Prediction

Published: Dec 22, 2025 18:10
1 min read
ArXiv

Analysis

This research utilizes Markov Chain Modeling to predict spatial clusters in public health, offering potential for improved early warning systems. The ArXiv source suggests that this is a preliminary study, requiring further validation and real-world application to assess its efficacy.
Reference

The study focuses on predicting relative risks of spatial clusters in public health.
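
As a generic illustration of the modeling family (the excerpt does not describe the paper's actual states or transition structure), here is a three-state Markov chain stepped forward to forecast a cluster's risk distribution; all numbers are hypothetical:

```python
import numpy as np

# Hypothetical risk states for a spatial cluster: low, medium, high.
P = np.array([
    [0.85, 0.10, 0.05],   # transitions from "low"
    [0.20, 0.60, 0.20],   # from "medium"
    [0.05, 0.25, 0.70],   # from "high"
])

dist = np.array([1.0, 0.0, 0.0])   # cluster starts in the "low" state
for week in range(1, 5):
    dist = dist @ P                # propagate the distribution one step
    print(f"week {week}: low={dist[0]:.2f} med={dist[1]:.2f} high={dist[2]:.2f}")
```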

Research#Solar Flare 🔬 Research · Analyzed: Jan 10, 2026 09:00

Solar Magnetic Field Dip Predicts Major Eruption

Published: Dec 21, 2025 11:02
1 min read
ArXiv

Analysis

This research provides valuable insight into the precursors of solar flares, potentially improving space weather forecasting. The study's focus on photospheric horizontal magnetic fields contributes to our understanding of solar dynamics.
Reference

The study analyzes the decrease of photospheric horizontal magnetic field preceding a major solar eruption.

Research#Market Crash 🔬 Research · Analyzed: Jan 10, 2026 09:47

AI Framework: Early Market Crash Prediction via Multi-Layer Graphs

Published: Dec 19, 2025 03:00
1 min read
ArXiv

Analysis

This research explores a novel application of AI in financial risk management by leveraging multi-layer graphs for early warning signals of market crashes. The study's focus on systemic risk within a graph framework offers a promising approach to enhance financial stability.
Reference

The article is sourced from ArXiv, indicating a pre-print research paper.

Research#AI/Healthcare 🔬 Research · Analyzed: Jan 10, 2026 10:39

AI-Powered Early Warning System for Hospital Patient Deterioration

Published: Dec 16, 2025 18:47
1 min read
ArXiv

Analysis

This ArXiv article likely presents a novel application of AI in healthcare, focusing on proactive patient monitoring. The research's success hinges on the accuracy and generalizability of its predictive model, which needs to be carefully evaluated.
Reference

The article likely details an early warning system.

Analysis

This ArXiv article explores the use of AI in predicting student success, focusing on the influence of static features within temporal prediction models. The research likely contributes to a better understanding of which student characteristics are most predictive of future academic outcomes.
Reference

The article likely investigates the dominance of static features.

Research#AI Adoption 🔬 Research · Analyzed: Jan 10, 2026 13:30

AI Adoption and Early Warning of Corporate Distress: Evidence from China

Published: Dec 2, 2025 08:09
1 min read
ArXiv

Analysis

This research investigates the relationship between AI adoption and the ability to predict corporate financial distress, a crucial area of study. Focusing on Chinese non-financial firms provides a specific and relevant context for understanding the impact of AI in financial risk management.
Reference

Evidence from Chinese Non-Financial Firms

Research#llm 📝 Blog · Analyzed: Dec 25, 2025 21:20

[Paper Analysis] On the Theoretical Limitations of Embedding-Based Retrieval (Warning: Rant)

Published: Oct 11, 2025 16:07
1 min read
Two Minute Papers

Analysis

This article, likely a summary of a research paper, delves into the theoretical limitations of using embedding-based retrieval methods. It suggests that these methods, while popular, may have inherent constraints that limit their effectiveness in certain scenarios. The "Warning: Rant" tag suggests the author has strong opinions or frustrations regarding these limitations. The analysis likely explores the mathematical or computational reasons behind these limitations, potentially discussing issues like information loss during embedding, the curse of dimensionality, or the inability to capture complex relationships between data points. It probably questions the over-reliance on embedding-based retrieval without considering its fundamental drawbacks.
Reference

N/A
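
For readers new to the setup being critiqued, here is a minimal embedding-based retriever in numpy: documents and a query are mapped to fixed-dimension vectors and ranked by cosine similarity. The code shows only the mechanism; the paper's point, per the summary above, concerns provable limits on which top-k result sets fixed-dimension vectors can ever realize.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 64                                   # embedding dimension
docs = rng.standard_normal((1000, d))    # stand-in document embeddings
query = rng.standard_normal(d)           # stand-in query embedding

# Cosine similarity = dot product of L2-normalized vectors.
docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
query_n = query / np.linalg.norm(query)
scores = docs_n @ query_n

top_k = np.argsort(-scores)[:5]          # indices of the 5 best matches
print(top_k, scores[top_k].round(3))
```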

Research#llm 📝 Blog · Analyzed: Dec 26, 2025 15:56

AI Research: A Max-Performance Domain Where Singular Excellence Trumps All

Published: May 30, 2025 06:27
1 min read
Jason Wei

Analysis

This article presents an interesting perspective on AI research, framing it as a "max-performance domain." The core argument is that exceptional ability in one key area can outweigh deficiencies in others. While this resonates with the observation that some impactful researchers lack well-rounded skills, it's crucial to consider the potential downsides. Over-reliance on this model could lead to neglecting essential skills like communication and collaboration, which are increasingly important in complex AI projects. The warning against blindly following role models is particularly insightful, highlighting the context-dependent nature of success. However, the article could benefit from exploring strategies for mitigating the risks associated with this specialized approach.
Reference

Exceptional ability at a single thing outweighs incompetence at other parts of the job.

Research#AI Regulation 🏛️ Official · Analyzed: Jan 3, 2026 10:05

A Primer on the EU AI Act: Implications for AI Providers and Deployers

Published: Jul 30, 2024 00:00
1 min read
OpenAI News

Analysis

This article from OpenAI provides a preliminary overview of the EU AI Act, focusing on prohibited and high-risk use cases. The article's value lies in its early warning about upcoming deadlines and requirements, crucial for AI providers and deployers operating within the EU. The focus on prohibited and high-risk applications suggests a proactive approach to compliance. However, the article's preliminary nature implies a lack of detailed analysis, and the absence of specific examples might limit its practical utility. Further elaboration on the implications for different AI models and applications would enhance its value.

Reference

The article focuses on prohibited and high-risk use cases.

Sustainability#AI Applications 📝 Blog · Analyzed: Dec 29, 2025 07:25

Accelerating Sustainability with AI: An Interview with Andres Ravinet

Published: Jun 18, 2024 15:49
1 min read
Practical AI

Analysis

This article from Practical AI highlights the intersection of Artificial Intelligence and sustainability. It features an interview with Andres Ravinet from Microsoft, focusing on real-world applications of AI in addressing environmental and societal issues. The discussion covers diverse areas, including early warning systems, food waste reduction, and rainforest conservation. The article also touches upon the challenges of sustainability compliance and the motivations behind businesses adopting sustainable practices. Finally, it explores the potential of LLMs and generative AI in tackling sustainability challenges. The focus is on practical applications and the role of AI in driving positive environmental impact.

Reference

We explore real-world use cases where AI-driven solutions are leveraged to help tackle environmental and societal challenges...

AI-Powered Flood Forecasting Expands Globally

Published: Mar 20, 2024 16:06
1 min read
Google Research

Analysis

This article from Google Research highlights their efforts to improve global flood forecasting using AI. The focus is on addressing the increasing frequency and impact of floods, particularly in regions with limited data. The article emphasizes the development of machine learning models capable of predicting extreme floods in ungauged watersheds, a significant advancement for areas lacking traditional monitoring systems. The use of Google's platforms (Search, Maps, Android) for disseminating alerts is a key component of their strategy. The publication in Nature lends credibility to their research and underscores the potential of AI to mitigate the devastating effects of floods worldwide. The article could benefit from more specifics on the AI techniques used and the performance metrics achieved.
Reference

Upgrading early warning systems to make accurate and timely information accessible to these populations can save thousands of lives per year.

Research#llm 👥 Community · Analyzed: Jan 4, 2026 08:35

Building an early warning system for LLM-aided biological threat creation

Published: Jan 31, 2024 18:15
1 min read
Hacker News

Analysis

The article discusses the development of a system to detect the potential misuse of Large Language Models (LLMs) in creating biological threats. This is a critical area of research, given the increasing capabilities of LLMs and the potential for malicious actors to leverage them. The focus on early warning is crucial for mitigating risks.

Research#llm 🏛️ Official · Analyzed: Jan 3, 2026 15:24

OpenAI Develops Blueprint to Assess LLM-Aided Biological Threat Creation

Published: Jan 31, 2024 08:00
1 min read
OpenAI News

Analysis

This article from OpenAI highlights their efforts to assess the potential risks associated with large language models (LLMs) assisting in the creation of biological threats. The core of their work involves developing a framework for evaluating this risk. Initial findings, based on evaluations with biology experts and students using GPT-4, suggest a limited impact on accuracy in threat creation. The article emphasizes that this is a preliminary finding and a starting point for further research and discussion within the community. This proactive approach by OpenAI is commendable, as it addresses potential misuse of AI technology.
Reference

We found that GPT-4 provides at most a mild uplift in biological threat creation accuracy.

Business#Leadership 👥 Community · Analyzed: Jan 10, 2026 15:50

OpenAI Leadership's Warning Preceded Sam Altman's Ouster

Published: Dec 8, 2023 20:10
1 min read
Hacker News

Analysis

This article, sourced from Hacker News, suggests internal conflicts within OpenAI led to Sam Altman's removal, highlighting leadership disagreements. The headline's simplicity directly conveys the core conflict and its significant implications.
Reference

The article's context indicates that warnings from OpenAI leaders played a role in Sam Altman's ouster.

Analysis

The article highlights a potential power struggle within OpenAI, suggesting that the board's decision to oust the CEO might have been influenced by concerns about the direction of AI development and the researchers' warnings. The focus is on the internal dynamics and the implications of a significant AI breakthrough.

Research#ai safety 📝 Blog · Analyzed: Dec 29, 2025 17:07

Eliezer Yudkowsky on the Dangers of AI and the End of Human Civilization

Published: Mar 30, 2023 15:14
1 min read
Lex Fridman Podcast

Analysis

This podcast episode features Eliezer Yudkowsky discussing the potential existential risks posed by advanced AI. The conversation covers topics such as the definition of Artificial General Intelligence (AGI), the challenges of aligning AGI with human values, and scenarios where AGI could lead to human extinction. Yudkowsky's perspective is critical of current AI development practices, particularly the open-sourcing of powerful models like GPT-4, due to the perceived dangers of uncontrolled AI. The episode also touches on related philosophical concepts like consciousness and evolution, providing a broad context for understanding the AI risk discussion.
Reference

The episode doesn't contain a specific quote, but the core argument revolves around the potential for AGI to pose an existential threat to humanity.

Ethics#GPT-4 👥 Community · Analyzed: Jan 10, 2026 16:18

OpenAI CEO Highlights Potential Misuse of GPT-4

Published: Mar 20, 2023 16:06
1 min read
Hacker News

Analysis

This brief article highlights a critical concern regarding the ethical implications of advanced AI models. The CEO's warning underscores the need for proactive measures to mitigate the potential for GPT-4 to be used maliciously.
Reference

OpenAI CEO warns that GPT-4 could be misused for nefarious purposes

Research#AI in Healthcare 📝 Blog · Analyzed: Dec 29, 2025 08:05

How AI Predicted the Coronavirus Outbreak with Kamran Khan - #350

Published: Feb 19, 2020 18:31
1 min read
Practical AI

Analysis

This article discusses how BlueDot, led by Kamran Khan, used AI to predict the coronavirus outbreak. The focus is on the company's algorithms and data processing techniques. The article highlights BlueDot's early warning and aims to explain the technology's functionality, limitations, and the underlying motivations. It suggests an exploration of the technical aspects of AI in public health and the impact of early warnings. The interview likely delves into the specifics of the AI model and its data sources.
Reference

The article doesn't contain a specific quote, but the content suggests Kamran Khan will explain how the technology works.