
Show HN: I made the slowest, most expensive GPT

Published:Dec 13, 2024 15:05
1 min read
Hacker News

Analysis

The article describes a project that poses the same question to multiple LLMs (ChatGPT, Perplexity, Gemini, Claude) and cross-references their answers, aiming for a more comprehensive and accurate response than any single model provides. The author highlights the limitations of current LLMs in handling fast-changing information and complex queries, particularly in areas like online search where consensus is difficult to establish. The project centers on an iterative process of querying different models and evaluating their outputs against one another, rather than relying on a single model or a simple RAG approach. The author acknowledges that single-shot responses work well for tasks like math and coding, but emphasizes the challenges in areas requiring nuanced understanding and up-to-date information.
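The cross-referencing idea could be sketched roughly as follows. This is a minimal illustration, not the author's actual implementation: the `consensus_answer` helper, the token-overlap scoring, and the stub model callables are all hypothetical stand-ins for real API calls to the models named above.

```python
def consensus_answer(question, models):
    """Ask every model the same question, then pick the answer that
    agrees most with the others, using a crude token-overlap score."""
    answers = {name: ask(question) for name, ask in models.items()}

    def overlap(a, b):
        # Jaccard similarity over lowercased word sets.
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(len(ta | tb), 1)

    # Score each answer by its total similarity to all other answers.
    scores = {
        name: sum(overlap(ans, other)
                  for other_name, other in answers.items()
                  if other_name != name)
        for name, ans in answers.items()
    }
    best = max(scores, key=scores.get)
    return answers[best], scores

# Stub callables standing in for ChatGPT, Perplexity, Gemini, Claude;
# a real version would make API calls here.
models = {
    "chatgpt":    lambda q: "Vail and Aspen top most lists",
    "perplexity": lambda q: "Skiers recommend Alta and Vail",
    "gemini":     lambda q: "Vail and Aspen top most lists",
    "claude":     lambda q: "Best resorts include Vail and Alta",
}
answer, scores = consensus_answer("best ski resorts in the US", models)
```

A real pipeline would presumably replace the overlap heuristic with an LLM-as-judge step and iterate, re-querying models where answers diverge, which is where the "slowest, most expensive" framing comes from.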

Reference

An example is a query like "best ski resorts in the US", which gets a different response from every model, yet most of their rankings won't reflect actual skiers' consensus.