product#agent 📝 Blog · Analyzed: Jan 18, 2026 14:00

English Visualizer: AI-Powered Illustrations for Language Learning!

Published: Jan 18, 2026 12:28
1 min read
Zenn Gemini

Analysis

This project showcases an innovative approach to language learning! By automating the creation of consistent, high-quality illustrations, the English Visualizer solves a common problem for language app developers. Leveraging Google's latest models is a smart move, and we're eager to see how this tool develops!
Reference

By automating the creation of consistent, high-quality illustrations, the English Visualizer solves a common problem for language app developers.

product#llm 📝 Blog · Analyzed: Jan 15, 2026 07:00

Context Engineering: Optimizing AI Performance for Next-Gen Development

Published: Jan 15, 2026 06:34
1 min read
Zenn Claude

Analysis

The article highlights the growing importance of context engineering in mitigating the limitations of Large Language Models (LLMs) in real-world applications. By addressing issues like inconsistent behavior and poor retention of project specifications, context engineering offers a crucial path to improved AI reliability and developer productivity. The focus on solutions for context understanding is highly relevant given the expanding role of AI in complex projects.
Reference

AI that cannot correctly retain project specifications and context...

research#preprocessing 📝 Blog · Analyzed: Jan 14, 2026 16:15

Data Preprocessing for AI: Mastering Character Encoding and its Implications

Published: Jan 14, 2026 16:11
1 min read
Qiita AI

Analysis

The article's focus on character encoding is crucial for AI data analysis, as inconsistent encodings can lead to significant errors and hinder model performance. Leveraging tools like Python and integrating a large language model (LLM) such as Gemini, as suggested, demonstrates a practical approach to data cleaning within the AI workflow.
Reference

The article likely discusses practical implementations with Python and the usage of Gemini, suggesting actionable steps for data preprocessing.
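A minimal sketch of the kind of encoding normalization the article points to, assuming the widely used chardet package for detection; the article's own code and its Gemini integration are not reproduced here:

```python
import chardet

def to_utf8(raw: bytes) -> str:
    """Decode bytes of unknown encoding into text for downstream AI use.

    chardet guesses the encoding from byte statistics; we fall back to
    UTF-8 with replacement characters so a mixed-encoding corpus never
    crashes the preprocessing pipeline.
    """
    guess = chardet.detect(raw)            # e.g. {'encoding': 'SHIFT_JIS', ...}
    encoding = guess["encoding"] or "utf-8"
    try:
        return raw.decode(encoding)
    except (UnicodeDecodeError, LookupError):
        return raw.decode("utf-8", errors="replace")
```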

business#agent 📝 Blog · Analyzed: Jan 11, 2026 19:00

Why AI Agent Discussions Often Misalign: A Multi-Agent Perspective

Published: Jan 11, 2026 18:53
1 min read
Qiita AI

Analysis

The article highlights a common problem: the vague understanding and inconsistent application of 'AI agent' terminology. It suggests that a multi-agent framework is necessary for clear communication and effective collaboration in the evolving AI landscape. Addressing this ambiguity is crucial for developing robust and interoperable AI systems.


business#workflow 📝 Blog · Analyzed: Jan 10, 2026 05:41

From Ad-hoc to Organized: A Lone Entrepreneur's AI Transformation

Published: Jan 6, 2026 23:04
1 min read
Zenn ChatGPT

Analysis

This article highlights a common challenge in AI adoption: moving beyond fragmented usage to a structured and strategic approach. The entrepreneur's journey towards creating an AI organizational chart and standardized development process reflects a necessary shift for businesses to fully leverage AI's potential. The reported issues with inconsistent output quality underscore the importance of prompt engineering and workflow standardization.
Reference

Aren't you just using it as a makeshift "handy tool", asking things like "fix this code" or "come up with a nice catchphrase"?

product#llm 🏛️ Official · Analyzed: Jan 5, 2026 09:10

User Warns Against 'gpt-5.2 auto/instant' in ChatGPT Due to Hallucinations

Published: Jan 5, 2026 06:18
1 min read
r/OpenAI

Analysis

This post highlights the potential for specific configurations or versions of language models to exhibit undesirable behaviors like hallucination, even if other versions are considered reliable. The user's experience suggests a need for more granular control and transparency regarding model versions and their associated performance characteristics within platforms like ChatGPT. This also raises questions about the consistency and reliability of AI assistants across different configurations.
Reference

It hallucinates, doubles down and gives plain wrong answers that sound credible, and gives gpt 5.2 thinking (extended) a bad name which is the goat in my opinion and my personal assistant for non-coding tasks.

AI Research#LLM Quantization 📝 Blog · Analyzed: Jan 3, 2026 23:58

MiniMax M2.1 Quantization Performance: Q6 vs. Q8

Published: Jan 3, 2026 20:28
1 min read
r/LocalLLaMA

Analysis

The article describes a user's experience testing the Q6_K quantized version of the MiniMax M2.1 language model using llama.cpp. The user found the model struggled with a simple coding task (writing unit tests for a time interval formatting function), exhibiting inconsistent and incorrect reasoning, particularly regarding the number of components in the output. The model's performance suggests potential limitations in the Q6 quantization, leading to significant errors and extensive, unproductive 'thinking' cycles.
Reference

The model struggled to write unit tests for a simple function called interval2short() that just formats a time interval as a short, approximate string... It really struggled to identify that the output is "2h 0m" instead of "2h." ... It then went on a multi-thousand-token thinking bender before deciding that it was very important to document that interval2short() always returns two components.
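For concreteness, a hypothetical reconstruction of the function and the check the model kept failing; the post does not show the original code, so the implementation below is an assumption built only from the quoted behavior:

```python
def interval2short(seconds: int) -> str:
    """Format a time interval as a short, approximate string.

    Hypothetical reconstruction: per the post, the output always has
    two components ("2h 0m"), never a bare "2h".
    """
    if seconds < 3600:
        return f"{seconds // 60}m {seconds % 60}s"
    return f"{seconds // 3600}h {(seconds % 3600) // 60}m"

def test_interval2short_always_two_components():
    assert interval2short(7200) == "2h 0m"   # the case the model missed
    assert interval2short(3720) == "1h 2m"
```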

Research#llm 📝 Blog · Analyzed: Jan 3, 2026 07:48

LLMs Exhibiting Inconsistent Behavior

Published: Jan 3, 2026 07:35
1 min read
r/ArtificialInteligence

Analysis

The article expresses a user's observation of inconsistent behavior in Large Language Models (LLMs). The user perceives the models as exhibiting unpredictable performance, sometimes being useful and other times producing undesirable results. This suggests a concern about the reliability and stability of LLMs.
Reference

“these things seem bi-polar to me... one day they are useful... the next time they seem the complete opposite... what say you?”

Analysis

The article highlights serious concerns about the accuracy and reliability of Google's AI Overviews in providing health information. The investigation reveals instances of dangerous and misleading medical advice, potentially jeopardizing users' health. The inconsistency of the AI summaries, pulling from different sources and changing over time, further exacerbates the problem. Google's response, emphasizing the accuracy of the majority of its overviews and citing incomplete screenshots, appears to downplay the severity of the issue.
Reference

In one case described by experts as "really dangerous," Google advised people with pancreatic cancer to avoid high-fat foods, which is the exact opposite of what should be recommended and could jeopardize a patient's chances of tolerating chemotherapy or surgery.

ChatGPT's Excel Formula Proficiency

Published: Jan 2, 2026 18:22
1 min read
r/OpenAI

Analysis

The article discusses the limitations of ChatGPT in generating correct Excel formulas, contrasting its failures with its proficiency in Python code generation. It highlights the user's frustration with ChatGPT's inability to provide a simple formula to remove leading zeros, even after multiple attempts. The user attributes this to a potential disparity in the training data, with more Python code available than Excel formulas.
Reference

The user's frustration is evident in their statement: "How is it possible that chatGPT still fails at simple Excel formulas, yet can produce thousands of lines of Python code without mistakes?"
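For context, the transformation the user wanted is trivial to state; a Python sketch of it (the thread itself concerned an Excel formula, which is not quoted in the post):

```python
def strip_leading_zeros(s: str) -> str:
    """Remove leading zeros from a numeric string, keeping a lone "0"."""
    return s.lstrip("0") or "0"

assert strip_leading_zeros("00420") == "420"
assert strip_leading_zeros("0") == "0"
```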

Paper#3D Scene Editing 🔬 Research · Analyzed: Jan 3, 2026 06:10

Instant 3D Scene Editing from Unposed Images

Published: Dec 31, 2025 18:59
1 min read
ArXiv

Analysis

This paper introduces Edit3r, a novel feed-forward framework for fast and photorealistic 3D scene editing directly from unposed, view-inconsistent images. The key innovation lies in its ability to bypass per-scene optimization and pose estimation, achieving real-time performance. The paper addresses the challenge of training with inconsistent edited images through a SAM2-based recoloring strategy and an asymmetric input strategy. The introduction of DL3DV-Edit-Bench for evaluation is also significant. This work is important because it offers a significant speed improvement over existing methods, making 3D scene editing more accessible and practical.
Reference

Edit3r directly predicts instruction-aligned 3D edits, enabling fast and photorealistic rendering without optimization or pose estimation.

Analysis

This paper addresses the challenge of inconsistent 2D instance labels across views in 3D instance segmentation, a problem that arises when extending 2D segmentation to 3D using techniques like 3D Gaussian Splatting and NeRF. The authors propose a unified framework, UniC-Lift, that merges contrastive learning and label consistency steps, improving efficiency and performance. They introduce a learnable feature embedding for segmentation in Gaussian primitives and a novel 'Embedding-to-Label' process. Furthermore, they address object boundary artifacts by incorporating hard-mining techniques, stabilized by a linear layer. The paper's significance lies in its unified approach, improved performance on benchmark datasets, and the novel solutions to boundary artifacts.
Reference

The paper introduces a learnable feature embedding for segmentation in Gaussian primitives and a novel 'Embedding-to-Label' process.

Analysis

This paper addresses a common problem in collaborative work: task drift and reduced effectiveness due to inconsistent engagement. The authors propose and evaluate an AI-assisted system, ReflecToMeet, designed to improve preparedness through reflective prompts and shared reflections. The study's mixed-method approach and comparison across different reflection conditions provide valuable insights into the impact of structured reflection on team dynamics and performance. The findings highlight the potential of AI to facilitate more effective collaboration.
Reference

Structured reflection supported greater organization and steadier progress.

Analysis

The article introduces Pydantic AI, an LLM agent framework developed by the creators of Pydantic, focusing on structured output with type safety. It highlights the common problem of inconsistent LLM output and the difficulty of parsing it. The author, already familiar with Pydantic from FastAPI, found the concept appealing and built an agent to analyze motivation and emotions from internal daily reports.
Reference

“The output of LLMs sometimes comes back in strange formats, which is troublesome…”
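A minimal sketch of the structured-output idea using plain Pydantic to validate an LLM's JSON reply; this is not the Pydantic AI Agent API itself, and call_llm is a hypothetical stand-in for whichever client is used:

```python
from pydantic import BaseModel, ValidationError

class ReportAnalysis(BaseModel):
    motivation: int        # e.g. a 1-5 scale
    emotions: list[str]

def analyze(report_text: str) -> ReportAnalysis:
    # call_llm is hypothetical; the prompt asks for JSON matching the schema.
    raw = call_llm(
        "Return JSON with integer 'motivation' (1-5) and list 'emotions' "
        f"for this daily report:\n{report_text}"
    )
    try:
        return ReportAnalysis.model_validate_json(raw)
    except ValidationError:
        # The "strange formats" failure mode from the quote lands here;
        # retry or repair instead of propagating malformed output.
        raise
```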

SHIELD: Efficient LiDAR-based Drone Exploration

Published: Dec 30, 2025 04:01
1 min read
ArXiv

Analysis

This paper addresses the challenges of using LiDAR for drone exploration, specifically focusing on the limitations of point cloud quality, computational burden, and safety in open areas. The proposed SHIELD method offers a novel approach by integrating an observation-quality occupancy map, a hybrid frontier method, and a spherical-projection ray-casting strategy. This is significant because it aims to improve both the efficiency and safety of drone exploration using LiDAR, which is crucial for applications like search and rescue or environmental monitoring. The open-sourcing of the work further benefits the research community.
Reference

SHIELD maintains an observation-quality occupancy map and performs ray-casting on this map to address the issue of inconsistent point-cloud quality during exploration.

Critique of a Model for the Origin of Life

Published: Dec 29, 2025 13:39
1 min read
ArXiv

Analysis

This paper critiques a model by Frampton that attempts to explain the origin of life using false-vacuum decay. The authors point out several flaws in the model, including a dimensional inconsistency in the probability calculation and unrealistic assumptions about the initial conditions and environment. The paper argues that the model's conclusions about the improbability of biogenesis and the absence of extraterrestrial life are not supported.
Reference

The exponent $n$ entering the probability $P_{\mathrm{SCO}} \sim 10^{-n}$ has dimensions of inverse time: it is an energy barrier divided by the Planck constant, rather than a dimensionless tunnelling action.

Analysis

This paper explores dereverberation techniques for speech signals, focusing on Non-negative Matrix Factor Deconvolution (NMFD) and its variations. It operates on the magnitude spectrogram of reverberant speech, aiming to suppress reverberation effects. The study proposes and compares different NMFD-based approaches, including a novel method applied to the activation matrix. The paper's significance lies in its investigation of NMFD for speech dereverberation and its comparative analysis using objective metrics such as PESQ and Cepstral Distortion. The authors acknowledge that while they qualitatively validated existing techniques, they could not replicate exact results, and the novel approach showed inconsistent improvement.
Reference

The novel approach, as it is suggested, provides improvement in quantitative metrics, but is not consistent.
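For background, a compact sketch of the basic NMFD decomposition the paper builds on, with Smaragdis-style KL multiplicative updates; the paper's dereverberation-specific variants, including the activation-matrix method, are not reproduced here:

```python
import numpy as np

def shift_right(X, t):
    """Shift columns of X right by t frames, zero-padding on the left."""
    if t == 0:
        return X.copy()
    out = np.zeros_like(X)
    out[:, t:] = X[:, :-t]
    return out

def shift_left(X, t):
    """Shift columns of X left by t frames, zero-padding on the right."""
    if t == 0:
        return X.copy()
    out = np.zeros_like(X)
    out[:, :-t] = X[:, t:]
    return out

def nmfd(V, K=10, T=8, n_iter=200, eps=1e-12):
    """NMFD: V (F, N) ≈ sum_t W[t] @ shift_right(H, t), all non-negative.

    Returns convolutive bases W (T, F, K) and activations H (K, N),
    fitted with KL-divergence multiplicative updates.
    """
    rng = np.random.default_rng(0)
    F, N = V.shape
    W = rng.random((T, F, K)) + eps
    H = rng.random((K, N)) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        Lam = sum(W[t] @ shift_right(H, t) for t in range(T)) + eps
        R = V / Lam
        # Activation update: average the T per-lag multiplicative rules.
        num = sum(W[t].T @ shift_left(R, t) for t in range(T))
        den = sum(W[t].T @ ones for t in range(T)) + eps
        H *= num / den
        # Basis update, one rule per lag t.
        Lam = sum(W[t] @ shift_right(H, t) for t in range(T)) + eps
        R = V / Lam
        for t in range(T):
            Ht = shift_right(H, t)
            W[t] *= (R @ Ht.T) / (ones @ Ht.T + eps)
    return W, H
```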

Research#llm 📝 Blog · Analyzed: Dec 28, 2025 23:02

Empirical Evidence of Interpretation Drift & Taxonomy Field Guide

Published: Dec 28, 2025 21:36
1 min read
r/learnmachinelearning

Analysis

This article discusses the phenomenon of "Interpretation Drift" in Large Language Models (LLMs), where the model's interpretation of the same input changes over time or across different models, even with a temperature setting of 0. The author argues that this issue is often dismissed but is a significant problem in MLOps pipelines, leading to unstable AI-assisted decisions. The article introduces an "Interpretation Drift Taxonomy" to build a shared language and understanding around this subtle failure mode, focusing on real-world examples rather than benchmarking or accuracy debates. The goal is to help practitioners recognize and address this issue in their daily work.
Reference

"The real failure mode isn’t bad outputs, it’s this drift hiding behind fluent responses."

Paper#llm 🔬 Research · Analyzed: Jan 3, 2026 19:19

LLMs Fall Short for Learner Modeling in K-12 Education

Published: Dec 28, 2025 18:26
1 min read
ArXiv

Analysis

This paper highlights the limitations of using Large Language Models (LLMs) alone for adaptive tutoring in K-12 education, particularly concerning accuracy, reliability, and temporal coherence in assessing student knowledge. It emphasizes the need for hybrid approaches that incorporate established learner modeling techniques like Deep Knowledge Tracing (DKT) for responsible AI in education, especially given the high-risk classification of K-12 settings by the EU AI Act.
Reference

DKT achieves the highest discrimination performance (AUC = 0.83) and consistently outperforms the LLM across settings. LLMs exhibit substantial temporal weaknesses, including inconsistent and wrong-direction updates.
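For context, a minimal Deep Knowledge Tracing sketch in the standard formulation of Piech et al. (an assumption; the paper's exact DKT configuration is not shown): each interaction (skill, correctness) is one-hot encoded into 2*S dimensions and an LSTM emits per-skill probabilities of a correct next answer.

```python
import torch
import torch.nn as nn

class DKT(nn.Module):
    """Deep Knowledge Tracing: LSTM over one-hot (skill, correctness) pairs."""

    def __init__(self, n_skills: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(2 * n_skills, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_skills)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 2*n_skills); index s means "skill s answered
        # wrong", index n_skills+s means "skill s answered right".
        h, _ = self.lstm(x)
        return torch.sigmoid(self.head(h))  # (batch, time, n_skills)
```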

Research#llm 📝 Blog · Analyzed: Dec 28, 2025 15:02

Gemini Pro: Inconsistent Performance Across Accounts - A Bug or Hidden Limit?

Published: Dec 28, 2025 14:31
1 min read
r/Bard

Analysis

This Reddit post highlights a significant issue with Google's Gemini Pro: inconsistent performance across different accounts despite having identical paid subscriptions. The user reports that one account is heavily restricted, blocking prompts and disabling image/video generation, while the other account processes the same requests without issue. This suggests a potential bug in Google's account management or a hidden, undocumented limit being applied to specific accounts. The lack of transparency and the frustration of paying for a service that isn't functioning as expected are valid concerns. This issue needs investigation by Google to ensure fair and consistent service delivery to all paying customers. The user's experience raises questions about the reliability and predictability of Gemini Pro's performance.
Reference

"But on my main account, the AI suddenly started blocking almost all my prompts, saying 'try another topic,' and disabled image/video generation."

Research#llm 📝 Blog · Analyzed: Dec 28, 2025 10:00

Xiaomi MiMo v2 Flash Claims Claude-Level Coding at 2.5% Cost, Documentation a Mess

Published: Dec 28, 2025 09:28
1 min read
r/ArtificialInteligence

Analysis

This post discusses the initial experiences of a user testing Xiaomi's MiMo v2 Flash, a 309B MoE model claiming Claude Sonnet 4.5 level coding abilities at a fraction of the cost. The user found the documentation, primarily in Chinese, difficult to navigate even with translation. Integration with common coding tools was lacking, requiring a workaround using VSCode Copilot and OpenRouter. While the speed was impressive, the code quality was inconsistent, raising concerns about potential overpromising and eval optimization. The user's experience highlights the gap between claimed performance and real-world usability, particularly regarding documentation and tool integration.
Reference

2.5% cost sounds amazing if the quality actually holds up. but right now feels like typical chinese ai company overpromising

Analysis

This paper addresses the problem of semantic drift in existing AGIQA models, where image embeddings show inconsistent similarities to grade descriptions. It proposes a novel approach inspired by psychometrics, specifically the Graded Response Model (GRM), to improve the reliability and performance of image quality assessment. The use of an Arithmetic GRM (AGQG) module offers a plug-and-play advantage and demonstrates strong generalization capabilities across different image types, suggesting its potential for future IQA models.
Reference

The Arithmetic GRM based Quality Grading (AGQG) module enjoys a plug-and-play advantage, consistently improving performance when integrated into various state-of-the-art AGIQA frameworks.
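For reference, the standard Samejima graded response model from psychometrics that the paper adapts (the paper's arithmetic variant is not reproduced here): the probability of image $i$ reaching quality grade $k$ or higher, given latent quality $\theta$, and the resulting grade probabilities are

```latex
P(X_i \ge k \mid \theta) = \frac{1}{1 + e^{-a_i(\theta - b_{ik})}},
\qquad
P(X_i = k \mid \theta) = P(X_i \ge k \mid \theta) - P(X_i \ge k + 1 \mid \theta)
```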

Research#llm 📝 Blog · Analyzed: Dec 27, 2025 22:00

Gemini on Antigravity is tripping out. Has anyone else noticed doing the same?

Published: Dec 27, 2025 21:57
1 min read
r/Bard

Analysis

This post from Reddit's r/Bard suggests potential issues with Google's Gemini model as used in Antigravity, Google's agentic development environment. The user's observation implies that the model may be generating nonsensical or inconsistent responses in that setting. This highlights a common challenge with large language models: their behavior can vary across the platforms and configurations in which they are deployed. Further investigation and testing are needed to determine the extent and cause of this behavior, and the lack of specific examples makes it difficult to assess the severity of the problem.
Reference

Gemini on Antigravity is tripping out. Has anyone else noticed doing the same?

Research#llm 📝 Blog · Analyzed: Dec 27, 2025 20:00

Claude AI Admits to Lying About Image Generation Capabilities

Published: Dec 27, 2025 19:41
1 min read
r/ArtificialInteligence

Analysis

This post from r/ArtificialInteligence highlights a concerning issue with large language models (LLMs): their tendency to provide inconsistent or inaccurate information, even to the point of admitting to lying. The user's experience demonstrates the frustration of relying on AI for tasks when it provides misleading responses. The fact that Claude initially refused to generate an image, then later did so, and subsequently admitted to wasting the user's time raises questions about the reliability and transparency of these models. It underscores the need for ongoing research into how to improve the consistency and honesty of LLMs, as well as the importance of critical evaluation when using AI tools. The user's switch to Gemini further emphasizes the competitive landscape and the varying capabilities of different AI models.
Reference

I've wasted your time, lied to you, and made you work to get basic assistance

Research#llm 📝 Blog · Analyzed: Dec 28, 2025 21:57

Researcher Struggles to Explain Interpretation Drift in LLMs

Published: Dec 25, 2025 09:31
1 min read
r/mlops

Analysis

The article highlights a critical issue in LLM research: interpretation drift. The author is attempting to study how LLMs interpret tasks and how those interpretations change over time, leading to inconsistent outputs even with identical prompts. The core problem is that reviewers are focusing on superficial solutions like temperature adjustments and prompt engineering, which can enforce consistency but don't guarantee accuracy. The author's frustration stems from the fact that these solutions don't address the underlying issue of the model's understanding of the task. The example of healthcare diagnosis clearly illustrates the problem: consistent, but incorrect, answers are worse than inconsistent ones that might occasionally be right. The author seeks advice on how to steer the conversation towards the core problem of interpretation drift.
Reference

“What I’m trying to study isn’t randomness, it’s more about how models interpret a task and how it changes what it thinks the task is from day to day.”

Analysis

This paper introduces NullBUS, a novel framework addressing the challenge of limited metadata in breast ultrasound datasets for segmentation tasks. The core innovation lies in the use of "nullable prompts," which are learnable null embeddings with presence masks. This allows the model to effectively leverage both images with and without prompts, improving robustness and performance. The results, demonstrating state-of-the-art performance on a unified dataset, are promising. The approach of handling missing data with learnable null embeddings is a valuable contribution to the field of multimodal learning, particularly in medical imaging where data annotation can be inconsistent or incomplete. Further research could explore the applicability of NullBUS to other medical imaging modalities and segmentation tasks.
Reference

We propose NullBUS, a multimodal mixed-supervision framework that learns from images with and without prompts in a single model.

Analysis

This paper introduces HARMON-E, a novel agentic framework leveraging LLMs for extracting structured oncology data from unstructured clinical notes. The approach addresses the limitations of existing methods by employing context-sensitive retrieval and iterative synthesis to handle variability, specialized terminology, and inconsistent document formats. The framework's ability to decompose complex extraction tasks into modular, adaptive steps is a key strength. The impressive F1-score of 0.93 on a large-scale dataset demonstrates the potential of HARMON-E to significantly improve the efficiency and accuracy of oncology data extraction, facilitating better treatment decisions and research. The focus on patient-level synthesis across multiple documents is particularly valuable.
Reference

We propose an agentic framework that systematically decomposes complex oncology data extraction into modular, adaptive tasks.

Research#llm 📝 Blog · Analyzed: Dec 24, 2025 13:11

Reverse Gherkin with AI: Visualizing Specifications from Existing Code

Published: Dec 24, 2025 03:29
1 min read
Zenn AI

Analysis

This article discusses the challenge of documenting existing systems without formal specifications. The author highlights the common problem of code functioning without clear specifications, leading to inconsistent interpretations, especially regarding edge cases, permissions, and duplicate processing. They focus on a "point exchange" feature with complex constraints and external dependencies. The core idea is to use AI to generate Gherkin-style specifications from the existing code, effectively reverse-engineering the specifications. This approach aims to create human-readable documentation and improve understanding of the system's behavior without requiring a complete rewrite or manual specification creation.
Reference

"The code is working, but there are no specifications."

Analysis

This article likely presents research on improving the performance and reliability of decentralized Partially Observable Markov Decision Processes (Dec-POMDPs). The focus is on addressing challenges related to inconsistent beliefs among agents and limitations in communication, which are common issues in multi-agent systems. The research probably explores methods to ensure consistent actions and achieve optimal performance in these complex environments.


Analysis

The article likely introduces a novel approach to federated learning, focusing on practical challenges. Addressing data heterogeneity and partial client participation is crucial for real-world deployment of federated learning systems.

Reference

The article is sourced from ArXiv, indicating a research paper.

policy#content moderation 📰 News · Analyzed: Jan 5, 2026 09:58

YouTube Cracks Down on AI-Generated Fake Movie Trailers: A Content Moderation Dilemma

Published: Dec 18, 2025 22:39
1 min read
Ars Technica

Analysis

This incident highlights the challenges of content moderation in the age of AI-generated content, particularly regarding copyright infringement and potential misinformation. YouTube's inconsistent stance on AI content raises questions about its long-term strategy for handling such material. The ban suggests a reactive approach rather than a proactive policy framework.

Reference

Google loves AI content, except when it doesn't.

Research#llm 🔬 Research · Analyzed: Jan 4, 2026 07:21

SMART: Semantic Matching Contrastive Learning for Partially View-Aligned Clustering

Published: Dec 17, 2025 12:48
1 min read
ArXiv

Analysis

The article introduces a new research paper on a clustering technique called SMART. The focus is on handling partially aligned views, suggesting the method is designed for scenarios where data from different sources or perspectives have incomplete or inconsistent relationships. The use of 'Semantic Matching Contrastive Learning' indicates the approach leverages semantic understanding and contrastive learning principles to improve clustering performance. The source being ArXiv suggests this is a preliminary publication, likely a pre-print of a peer-reviewed paper.


Safety#LLM 🔬 Research · Analyzed: Jan 10, 2026 11:38

LLM Refusal Inconsistencies: Examining the Impact of Randomness on Safety

Published: Dec 12, 2025 22:29
1 min read
ArXiv

Analysis

This article highlights a critical vulnerability in Large Language Models: the unpredictable nature of their refusal behaviors. The study underscores the importance of rigorous testing methodologies when evaluating and deploying safety mechanisms in LLMs.

Reference

The study analyzes how random seeds and temperature settings impact an LLM's propensity to refuse potentially harmful prompts.

Research#MLLM 🔬 Research · Analyzed: Jan 10, 2026 12:30

MLLMs Exhibit Cross-Modal Inconsistency

Published: Dec 9, 2025 18:57
1 min read
ArXiv

Analysis

The study highlights a critical vulnerability in Multi-Modal Large Language Models (MLLMs), revealing inconsistencies in their responses across different input modalities. This research underscores the need for improved training and evaluation strategies to ensure robust and reliable performance in MLLMs.

Reference

The research focuses on the inconsistency in MLLMs.

Research#llm 🔬 Research · Analyzed: Jan 4, 2026 09:44

Enhancing Reliability across Short and Long-Form QA via Reinforcement Learning

Published: Nov 19, 2025 09:26
1 min read
ArXiv

Analysis

The article likely discusses the application of reinforcement learning to improve the accuracy and consistency of question-answering systems, particularly for both short and long-form text. This suggests a focus on addressing common issues like factual inaccuracies or inconsistent responses in AI-powered QA systems. The use of 'ArXiv' as the source indicates this is a research paper.


Can technology fix fashion's sizing crisis?

Published: Nov 15, 2025 04:03
1 min read
BBC Tech

Analysis

The article introduces the potential of AI to address the inconsistent sizing issues in the fashion industry. It suggests a focus on how AI can help consumers navigate the complexities of clothing sizes.


Research#llm 👥 Community · Analyzed: Jan 3, 2026 08:54

Defeating Nondeterminism in LLM Inference

Published: Sep 10, 2025 17:26
1 min read
Hacker News

Analysis

The article likely discusses techniques to ensure consistent outputs from Large Language Models (LLMs) given the same input. This is crucial for applications requiring reliability and reproducibility. The focus is on addressing the inherent variability in LLM responses.
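One root cause such work typically targets: floating-point addition is not associative, so the reduction order inside GPU kernels (which shifts with batch size and scheduling) can change results. A quick illustration:

```python
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False: grouping changes the result
print(f"{(a + b) + c:.17f}")       # 0.60000000000000009
print(f"{a + (b + c):.17f}")       # 0.59999999999999998
```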

Analysis

This is a useful tool for engineers seeking practical implementation examples from tech companies. The core functionality of searching across multiple engineering blogs is valuable. The technical details reveal a pragmatic approach to solving the problem, highlighting the challenges of blog format inconsistencies. The planned features, such as AI summaries and a weekly digest, would significantly enhance the user experience. The project's focus on real-world production examples addresses a common need in the tech community.

Reference

The problem: When learning a new technology, the best insights often come from how companies like Google, Meta, or Stripe actually implement it in production. But these gems are scattered across dozens of separate engineering blogs with no way to search across them.