

Your local AI toolkit.
Download and run Llama, DeepSeek, Qwen, and Gemma on your computer.
LM Studio is free for home and work use • terms


Easy to start, much to explore
Discover and download open source models, use them in chats, or run a local server.
Easily run LLMs like Llama and DeepSeek on your computer. No expertise required.


Cross-platform local AI SDK
LM Studio SDK: Build local AI apps without dealing with dependencies
pip install lmstudio

import lmstudio as lms

llm = lms.llm()  # Get any loaded LLM
prediction = llm.respond_stream("What is a Capybara?")
for token in prediction:
    print(token, end="", flush=True)
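Beyond the SDK, LM Studio can serve loaded models over a local OpenAI-compatible HTTP API. A minimal sketch of a chat request, assuming the default address (http://localhost:1234) and an OpenAI-style /v1/chat/completions route; check the app's Developer tab for your actual server settings:

```python
import json
import urllib.request

def build_chat_request(prompt, model="local-model"):
    # OpenAI-style chat completion payload; "local-model" is a
    # placeholder name, LM Studio serves whatever model is loaded.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("What is a Capybara?")
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        body = json.loads(resp.read())
        print(body["choices"][0]["message"]["content"])
except OSError:
    print("Local server not running; start it from the app first.")
```

Because the endpoint follows the OpenAI wire format, existing OpenAI client libraries can usually be pointed at the local server by overriding their base URL.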
Frequently Asked Questions
Does LM Studio collect any data?
TLDR: The app does not collect data or monitor your actions. Your data stays local on your machine.
No. One of the main reasons for using a local LLM is privacy, and LM Studio is designed for that. Your data remains private and local to your machine. Visit the Offline Operation page for more.
Can I use LM Studio at work?
Yes! LM Studio is free for internal business use. You and your team can use it at your company or organization. No need to get special permission, just go ahead!
If you require additional controls, private Hub organizations, and/or SSO, please head to the LM Studio Enterprise page.
Are you hiring?
Yes! See our careers page for open positions.
What are the hardware requirements?
LM Studio works on M1/M2/M3/M4 Macs, as well as Windows (x86 or ARM) and Linux PCs (x86) with a processor that supports AVX2. Visit the System Requirements page for the most up-to-date information.
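On Linux x86 machines, you can check the AVX2 requirement from /proc/cpuinfo. A quick sketch (Linux-only; on macOS or Windows, use the vendor's CPU tools instead):

```python
def cpu_has_avx2(cpuinfo_path="/proc/cpuinfo"):
    # Scan the CPU flags line for the "avx2" feature bit.
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx2" in line.split()
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False

print("AVX2 supported:", cpu_has_avx2())
```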
What models can I run?
You can run any compatible Large Language Model (LLM) from Hugging Face, both in GGUF (llama.cpp) format and in the MLX format (Mac only). You can also run GGUF text embedding models. Some models might not be supported, while others might be too large to run on your machine. Image generation models are not yet supported. See the Model Catalog for featured models.
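GGUF files begin with the 4-byte magic "GGUF", which gives a quick sanity check that a downloaded file is actually a GGUF model before trying to load it. A sketch, not a full GGUF parser:

```python
def looks_like_gguf(path):
    # Read the first 4 bytes and compare against the GGUF magic.
    try:
        with open(path, "rb") as f:
            return f.read(4) == b"GGUF"
    except OSError:
        return False  # missing or unreadable file
```

This catches the common case of an interrupted or mislabeled download without needing to parse the file's metadata.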
Is LM Studio open source?
The LM Studio GUI app is not open source. However, LM Studio's CLI (lms), Core SDK, and our MLX inferencing engine are all MIT-licensed and open source. Moreover, LM Studio makes it easy to use leading open source libraries such as llama.cpp without needing the know-how to compile or integrate them yourself.
What is llama.cpp?
llama.cpp is a fantastic open source library that provides a powerful and efficient way to run LLMs on edge devices. It was created and is led by Georgi Gerganov. LM Studio leverages llama.cpp to run LLMs on Windows, Linux, and Macs.
What is MLX?
MLX is a new machine learning framework from Apple. MLX is efficient and blazing fast on M1/M2/M3/M4 Macs. LM Studio leverages MLX to run LLMs on Apple silicon, utilizing the full power of the Mac's Unified Memory, CPU, and GPU. LM Studio's MLX engine (mlx-engine) is open source and available on GitHub (MIT). We would appreciate a star! We are also looking for community contributors.