Nemotron-3-Super-120b: Unleashing a Powerful, Uncensored LLM
Blog | r/LocalLLaMA Analysis | research #llm
Published: Mar 14, 2026 04:13 • Analyzed: Mar 14, 2026 08:02 • 1 min read
A new uncensored Large Language Model (LLM), Nemotron-3-Super-120b, has been released, with reported scores of 97% on HarmBench and 94% on HumanEval. The model's architecture combines LatentMoE with Mamba attention, pushing the boundaries of what's possible in generative AI.
Key Takeaways
- Nemotron-3-Super-120b posts strong reported scores on both HarmBench and HumanEval.
- The model combines LatentMoE with Mamba attention.
- Custom files are provided so users can run the model with MLX.
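The post does not detail how the MLX files are meant to be used; a plausible route is the mlx-lm toolkit, which can run MLX-format models on Apple-silicon Macs. A minimal sketch, assuming mlx-lm is installed and a converted upload exists (the repo id below is a hypothetical placeholder, not a confirmed upload):

```shell
# Install the MLX LM toolkit (requires an Apple-silicon Mac)
pip install mlx-lm

# Generate text from an MLX conversion of the model.
# NOTE: the model id is a hypothetical placeholder for illustration.
mlx_lm.generate \
  --model mlx-community/Nemotron-3-Super-120b-4bit \
  --prompt "Hello" \
  --max-tokens 64
```

A 120B model is large even when quantized, so a 4-bit conversion and a high-memory machine would likely be needed for local use.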
Reference / Citation
"HarmBench: 97% HumanEval: 94%"