Running gpt-oss-20b on RTX 4080 with LM Studio

Technology · LLM (Large Language Models) · 📝 Blog | Analyzed: Jan 3, 2026 06:14
Published: Jan 2, 2026 09:38
1 min read
Qiita LLM

Analysis

The article introduces using LM Studio to run a local LLM (gpt-oss-20b) on an RTX 4080. The author, a regular ChatGPT user who wants to move to the side of creating AI, recently built their own LLM (nanoGPT), found the experience eye-opening, and is now exploring pre-trained local LLMs for the first time, using LM Studio as the runtime.
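For context on what "running gpt-oss-20b in LM Studio" looks like in practice: LM Studio serves loaded models over an OpenAI-compatible local HTTP server (by default at `http://localhost:1234/v1`). Below is a minimal sketch of querying such a server from Python, assuming gpt-oss-20b is already loaded; the model identifier string is an assumption and should be replaced with whatever LM Studio actually reports.

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions protocol.
# Default address; configurable in LM Studio's server settings.
BASE_URL = "http://localhost:1234/v1"


def build_chat_request(model: str, prompt: str, max_tokens: int = 256):
    """Build the URL and JSON body for a chat-completion request."""
    url = f"{BASE_URL}/chat/completions"
    payload = {
        "model": model,  # name as shown by LM Studio (assumed here)
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return url, payload


if __name__ == "__main__":
    url, payload = build_chat_request("openai/gpt-oss-20b", "Hello from a local LLM!")
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the server is OpenAI-compatible, any OpenAI client library pointed at the local base URL works the same way; no API key is required for a local instance.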

Reference / Citation
"I always use ChatGPT, but I want to be on the side of creating AI. Recently, I made my own LLM (nanoGPT) and I understood various things and felt infinite possibilities. Actually, I have never touched a local LLM other than my own. I use LM Studio for local LLMs..."
Qiita LLM, Jan 2, 2026 09:38
* Cited for critical analysis under Article 32.