How to Run an LLM (AI Chatbot) Locally on Your Computer
February 14, 2024
Here's a simple, quick tutorial to install and run an open-source LLM AI chatbot locally on your computer.
Note: these instructions are for macOS and Linux (there was no Windows version at the time of writing)
First-time instructions:
- Go to https://ollama.com/
- Click Download
- Open the Ollama .zip file
- Open the Ollama app and follow the setup instructions
- Copy the terminal command shown at the end of the setup dialogue
- Paste the command into your terminal and run it
- Let the model download and install
- Once it's done, you officially have an LLM on your computer's hard drive, and you can type anything into the terminal session to chat with it (see the example below)
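
For reference, here's roughly what that first run looks like (llama2 was the default model at the time of writing; the chat exchange shown is just illustrative):

```
# Download the model and start a chat session.
# The first run downloads the weights (~3.8GB for llama2), so it takes a while.
ollama run llama2

# An interactive prompt appears; type a message and hit Enter:
# >>> What is the capital of France?
# The capital of France is Paris.
# >>> /bye    (type /bye to exit the session)
```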
To install other LLMs after completing the above steps:
- Go to https://ollama.com/library and pick your next LLM
- Copy the command shown on the model's page
- Paste the command into your terminal to install it (see the example below)
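
For example, installing Mistral looks like this. `ollama pull` only downloads the model, while `ollama run` downloads it if needed and then starts a chat:

```
# Download the Mistral 7B model (~4.1GB) without starting a chat
ollama pull mistral

# Or download (if needed) and start chatting in one step
ollama run mistral
```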
To run your LLM:
- Open Terminal
- Enter `ollama run llama2`
- Or, if running a model other than llama2, enter that model's name instead
- Hit Enter and start your chat conversation with the LLM
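
A few other standard Ollama commands come in handy once you have more than one model installed:

```
# List every model you've downloaded, along with sizes
ollama list

# Delete a model you no longer need, to free up disk space
ollama rm llama2
```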
What are the benefits of doing this compared to using an LLM in my browser?
- Once a model is downloaded, it doesn't need an internet connection, since everything runs from your computer's hard drive
- Given point 1, there's no network latency (though generation speed depends on your hardware)
- Given point 1, it's more private
- Given point 1, you have more control over your data
- No limits on number of prompts
- More flexibility and customization, e.g. you can define your own model variants (see the Modelfile sketch below)
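
On the customization point: Ollama lets you build your own model variant from a short Modelfile that sets a base model, sampling parameters, and a system prompt. A minimal sketch (the name `my-assistant` and the settings here are just illustrative):

```
# Write a Modelfile that customizes llama2
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER temperature 0.3
SYSTEM """You are a concise assistant that answers in plain English."""
EOF

# Build the custom model, then chat with it like any other
ollama create my-assistant -f Modelfile
ollama run my-assistant
```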
Here are some of the open-source LLMs available to use with Ollama, along with the terminal commands to download and run them. As a rough guide, you'll want at least 8GB of RAM for the 7B models and 16GB for the 13B model:
| Model | Parameters | Size | Download |
| --- | --- | --- | --- |
| Llama 2 | 7B | 3.8GB | `ollama run llama2` |
| Mistral | 7B | 4.1GB | `ollama run mistral` |
| Dolphin Phi | 2.7B | 1.6GB | `ollama run dolphin-phi` |
| Phi-2 | 2.7B | 1.7GB | `ollama run phi` |
| Neural Chat | 7B | 4.1GB | `ollama run neural-chat` |
| Starling | 7B | 4.1GB | `ollama run starling-lm` |
| Code Llama | 7B | 3.8GB | `ollama run codellama` |
| Llama 2 Uncensored | 7B | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B | 13B | 7.3GB | `ollama run llama2:13b` |
| Llama 2 70B | 70B | 39GB | `ollama run llama2:70b` |
| Orca Mini | 3B | 1.9GB | `ollama run orca-mini` |
| Vicuna | 7B | 3.8GB | `ollama run vicuna` |
| LLaVA | 7B | 4.5GB | `ollama run llava` |
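
Once a model from the table is installed, you can also pass a prompt directly on the command line to get a single answer without opening an interactive session:

```
ollama run mistral "Explain in one sentence what a local LLM is."
```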
That's it. It's super easy and only takes a few minutes to get up and running with a local LLM.
Cheers!