Research · #llm · 📝 Blog · Analyzed: Dec 28, 2025 21:57

Dataflow Computing for AI Inference with Kunle Olukotun - #751

Published:Oct 14, 2025 19:39
1 min read
Practical AI

Analysis

This article discusses a podcast episode featuring Kunle Olukotun, a professor at Stanford University and co-founder of SambaNova Systems. The core topic is reconfigurable dataflow architectures for AI inference, a departure from traditional CPU and GPU approaches. The discussion centers on how this architecture addresses memory bandwidth limitations, improves performance, and enables efficient multi-model serving and agentic workflows, particularly for LLM inference. The episode also touches on future research into dynamically reconfigurable architectures and the use of AI agents in hardware compiler development. The article highlights a shift toward specialized hardware for AI tasks.
Reference

Kunle explains the core idea of building computers that are dynamically configured to match the dataflow graph of an AI model, moving beyond the traditional instruction-fetch paradigm of CPUs and GPUs.
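To make the dataflow idea concrete, here is a minimal, purely illustrative sketch (all names are hypothetical and this is not SambaNova's actual programming model): an AI model is expressed as a graph of operations, and a dataflow machine would lay the graph's nodes out spatially as concurrently executing pipeline stages, rather than fetching and dispatching one instruction at a time.

```python
# Toy dataflow graph for a tiny model fragment. Nodes are operations,
# edges are data dependencies; a spatial architecture would map each
# node to a hardware unit and stream data along the edges.

class Node:
    def __init__(self, name, op, inputs=()):
        self.name, self.op, self.inputs = name, op, tuple(inputs)

def topo_order(outputs):
    """Return nodes in dependency order. A sequential machine executes
    this list one op at a time; a dataflow machine instead configures
    all of them at once as pipeline stages."""
    ordered, seen = [], set()
    def visit(n):
        if n.name in seen:
            return
        for parent in n.inputs:
            visit(parent)
        seen.add(n.name)
        ordered.append(n)
    for n in outputs:
        visit(n)
    return ordered

x      = Node("x", "input")
w1     = Node("w1", "weights")
matmul = Node("matmul1", "matmul", [x, w1])
relu   = Node("relu1", "relu", [matmul])

stages = topo_order([relu])
print([n.name for n in stages])  # prints ['x', 'w1', 'matmul1', 'relu1']
```

The contrast the episode draws is that CPUs and GPUs would walk a list like `stages` through an instruction-fetch pipeline, while a reconfigurable dataflow architecture configures the whole graph onto the chip so data flows through it directly.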

Technology · #AI in Finance · 📝 Blog · Analyzed: Dec 29, 2025 07:43

Scaling BERT and GPT for Financial Services with Jennifer Glore - #561

Published:Feb 28, 2022 16:55
1 min read
Practical AI

Analysis

This podcast episode from Practical AI features Jennifer Glore, VP of customer engineering at SambaNova Systems. The discussion centers on SambaNova's development of a GPT language model tailored for the financial services industry. The conversation covers the progress of financial institutions in adopting transformer models, highlighting successes and challenges. The episode also delves into SambaNova's experience replicating the GPT-3 paper, addressing issues like predictability, controllability, and governance. The focus is on the practical application of large language models (LLMs) in a specific industry and the hardware infrastructure that supports them.
Reference

Jennifer shares her thoughts on the progress that banking, finance, and other traditional organizations have made in adopting transformers and other models: where they have begun to see success, and some of the hidden challenges that impede their progress.

Research · #AI Hardware · 📝 Blog · Analyzed: Dec 29, 2025 08:19

Designing Computer Systems for Software with Kunle Olukotun - TWiML Talk #211

Published:Dec 18, 2018 00:38
1 min read
Practical AI

Analysis

This article summarizes a podcast episode featuring Kunle Olukotun, a professor at Stanford University and Chief Technologist at SambaNova Systems. The discussion centers on designing hardware systems for machine learning and deep learning, focusing on the challenges and opportunities presented by Software 2.0. The conversation covers multicore processor design, domain-specific languages, and graph-based hardware. The article highlights the importance of specialized hardware for accelerating AI workloads and the ongoing research in this field, suggesting the podcast offers valuable insight into the future of AI hardware.
Reference

The article doesn't contain a direct quote, but it discusses the topic of designing computer systems for the Software 2.0 era.