Nemotron-3-Super-120b: Unleashing a Powerful, Uncensored LLM
research · llm · Blog · Analyzed: Mar 14, 2026 08:02
Published: Mar 14, 2026 04:13 · 1 min read
Source: r/LocalLLaMA
A new uncensored Large Language Model (LLM), Nemotron-3-Super-120b, has been released, posting impressive reported scores on benchmarks such as HarmBench and HumanEval. The model's architecture combines LatentMoE with Mamba attention, pushing the boundaries of what's possible in generative AI.
Key Takeaways
- Nemotron-3-Super-120b posts impressive scores on both HarmBench and HumanEval.
- The model uses a combination of LatentMoE and Mamba attention.
- Custom files are provided so users can run the model with MLX.
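For readers who want to try the MLX route, the usual path is the `mlx-lm` Python package. The snippet below is a minimal sketch, not the release's own instructions: it assumes an Apple-silicon Mac with `mlx-lm` installed (`pip install mlx-lm`), and the model path is a hypothetical placeholder for wherever the provided MLX files end up on disk.

```python
# Minimal sketch of running an MLX-converted model with mlx-lm.
# Assumptions: Apple-silicon hardware, `pip install mlx-lm`, and MLX
# weights at the (hypothetical) local path below.
from mlx_lm import load, generate

# Hypothetical path to the model's custom MLX files.
model, tokenizer = load("./nemotron-3-super-120b-mlx")

prompt = "Write a Python function that reverses a string."
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```

A 120B-parameter model is large even when quantized, so in practice a 4-bit MLX conversion and a high-memory machine are the realistic setup.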
Reference / Citation
View Original: "HarmBench: 97% · HumanEval: 94%"