OpenAI o1 System Card

Research | #llm | 🏛️ Official | Analyzed: Jan 3, 2026 18:05
Published: Dec 5, 2024 10:00
1 min read
OpenAI News

Analysis

The article briefly announces the safety work completed before the release of OpenAI's o1 and o1-mini models. It highlights external red teaming and frontier risk evaluations conducted under the company's Preparedness Framework, emphasizing safety and responsible AI development.
Reference / Citation
View Original
"This report outlines the safety work carried out prior to releasing OpenAI o1 and o1-mini, including external red teaming and frontier risk evaluations according to our Preparedness Framework."
* Cited for critical analysis under Article 32.