5 results
Research #llm · 📝 Blog · Analyzed: Dec 28, 2025 21:57

Jonathan Frankle: Neural Network Pruning and Training

Published: Apr 10, 2023 21:47
1 min read
Weights & Biases

Analysis

This article summarizes a discussion between Jonathan Frankle and Lukas Biewald on the Gradient Dissent podcast, centered on neural network pruning and training and the "Lottery Ticket Hypothesis." It covers the techniques and challenges of reducing the size of neural networks (pruning) while maintaining or improving performance, along with methods for training these pruned networks effectively. The Lottery Ticket Hypothesis holds that a large, randomly initialized neural network contains a subnetwork (a "winning ticket") that, when trained in isolation, can match the performance of the full network. The discussion likely also covers practical applications and recent research advances in this field.
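As an illustration only (not the paper's exact procedure, and every name and value below is hypothetical), the recipe behind the hypothesis, iterative magnitude pruning followed by rewinding surviving weights to their initial values, can be sketched in a few lines of NumPy:

```python
import numpy as np

def magnitude_prune_mask(weights, fraction):
    """Binary mask that zeroes the `fraction` smallest-|w| entries."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest |w|
    return (np.abs(weights) > threshold).astype(weights.dtype)

def lottery_ticket_round(initial_w, trained_w, fraction=0.2):
    """One prune-and-rewind round: prune by trained magnitude,
    then reset the surviving weights to their initial values."""
    mask = magnitude_prune_mask(trained_w, fraction)
    return mask * initial_w, mask  # candidate "winning ticket"

rng = np.random.default_rng(0)
w0 = rng.normal(size=(4, 4))       # random initialization
wT = w0 + rng.normal(size=(4, 4))  # stand-in for trained weights
ticket, mask = lottery_ticket_round(w0, wT, fraction=0.25)
```

In the actual procedure this round is repeated: the rewound subnetwork is retrained, pruned again, and rewound again, gradually sparsifying while preserving accuracy.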
Reference

The article doesn't contain a direct quote, but the discussion likely revolves around pruning techniques, training methodologies, and the Lottery Ticket Hypothesis.

Social Science #Social Media · 📝 Blog · Analyzed: Dec 29, 2025 17:16

Jonathan Haidt: The Case Against Social Media - Analysis of Lex Fridman Podcast

Published: Jun 4, 2022 17:18
1 min read
Lex Fridman Podcast

Analysis

This article summarizes a Lex Fridman Podcast episode featuring Jonathan Haidt, a social psychologist, discussing the negative impacts of social media. The episode covers Haidt's arguments, likely drawing from his research and books such as "The Coddling of the American Mind." The provided links offer access to the podcast episode, related articles, and Haidt's resources. The article also includes links to sponsors and ways to support the podcast. The outline section provides timestamps for the episode, allowing listeners to navigate specific topics discussed.
Reference

The article doesn't contain a direct quote, but it focuses on the discussion of Jonathan Haidt's views on social media.

Research #audio processing · 📝 Blog · Analyzed: Dec 29, 2025 07:44

Solving the Cocktail Party Problem with Machine Learning, w/ Jonathan Le Roux - #555

Published: Jan 24, 2022 17:14
1 min read
Practical AI

Analysis

This article discusses the application of machine learning to the "cocktail party problem," specifically focusing on separating speech from noise and other speech. It highlights Jonathan Le Roux's research at Mitsubishi Electric Research Laboratories (MERL), particularly his paper on separating complex acoustic scenes into speech, music, and sound effects. The article explores the challenges of working with noisy data, the model architecture used, the role of ML/DL, and future research directions. The focus is on audio separation and enhancement using machine learning techniques, offering insights into the complexities of real-world soundscapes.
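The generic recipe behind this kind of separation, estimating a per-source mask over a mixture spectrogram and applying it, can be sketched as follows. This is a minimal assumed illustration with oracle masks and toy data, not MERL's model:

```python
import numpy as np

def separate(mix_spec, source_specs):
    """Soft-mask separation: each source's share of the mixture energy
    becomes its mask, and masks multiply the mixture spectrogram."""
    total = sum(source_specs) + 1e-8           # avoid divide-by-zero
    masks = [s / total for s in source_specs]  # masks sum to ~1 per bin
    return [m * mix_spec for m in masks]

# toy 2x2 "spectrograms" for the three stems: speech, music, effects
speech = np.array([[3.0, 0.0], [1.0, 0.0]])
music  = np.array([[1.0, 2.0], [0.0, 2.0]])
sfx    = np.array([[0.0, 2.0], [1.0, 0.0]])
mixture = speech + music + sfx
estimates = separate(mixture, [speech, music, sfx])
```

In a learned system, a neural network predicts the masks from the mixture alone; here the oracle masks simply demonstrate that masking recovers each stem and that the estimates sum back to the mixture.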
Reference

The article focuses on Jonathan Le Roux's paper "The Cocktail Fork Problem: Three-Stem Audio Separation for Real-World Soundtracks."

Software Engineering #TensorFlow · 📝 Blog · Analyzed: Dec 29, 2025 08:09

Scaling TensorFlow at LinkedIn with Jonathan Hung - #314

Published: Nov 4, 2019 19:46
1 min read
Practical AI

Analysis

This article summarizes a podcast episode from Practical AI featuring Jonathan Hung, a Senior Software Engineer at LinkedIn. The discussion centers on LinkedIn's use of TensorFlow, specifically how they scaled it within their existing infrastructure. Key topics include the motivation for running TensorFlow on Hadoop clusters, the TonY (TensorFlow on YARN) framework, its integration with LinkedIn's Pro-ML AI platform, and their exploration of Kubernetes for research workloads. The episode likely offers valuable insight into the practical challenges of deploying and scaling deep learning models in a large-scale production environment.
Reference

The article doesn't contain a direct quote, but it discusses the topics presented by Jonathan Hung at TensorFlow World.

Research #llm · 📝 Blog · Analyzed: Dec 29, 2025 08:38

Symbolic and Sub-Symbolic Natural Language Processing with Jonathan Mugan - TWiML Talk #49

Published: Sep 25, 2017 20:56
1 min read
Practical AI

Analysis

This article summarizes a podcast interview with Jonathan Mugan, CEO of Deep Grammar, focusing on natural language processing (NLP). The interview explores both sub-symbolic and symbolic approaches to NLP, contrasting them with the previous week's interview. It highlights the use of deep learning in grammar checking and discusses attention mechanisms in sequence-to-sequence models as well as ontological approaches (WordNet, synsets, FrameNet, SUMO). The article serves as a brief overview of the interview, providing context and the key topics covered.
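The attention mechanism mentioned above can be made concrete with a short NumPy sketch of scaled dot-product attention, the standard formulation in which each decoder query softly weights the encoder states. All shapes and values here are illustrative assumptions, not anything from the interview:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """output = softmax(Q K^T / sqrt(d)) V; also returns the weights."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)     # similarity of queries to keys
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(1)
Q = rng.normal(size=(2, 4))  # 2 decoder positions, dimension 4
K = rng.normal(size=(5, 4))  # 5 encoder positions
V = rng.normal(size=(5, 4))
out, weights = attention(Q, K, V)
```

Each row of `weights` sums to 1, so every output vector is a convex combination of the encoder value vectors, which is what lets the decoder "attend" to different input positions.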
Reference

This interview is a great complement to my conversation with Bruno, and we cover a variety of topics from both the sub-symbolic and symbolic schools of NLP...