ollama
cd ~
curl -fsSL https://ollama.com/install.sh | sh
# >>> The Ollama API is now available at 127.0.0.1:11434.
curl localhost:11434
# Ollama is running
ollama run llama3.2:1b     # pull the model (if needed) and start an interactive chat
ollama show llama3.2:1b    # show model details (parameters, template, license)
ollama list                # list downloaded models
ollama ps                  # list models currently loaded in memory
ollama serve               # start the server manually (if not already running as a service)
ollama stop llama3.2:1b    # unload the model from memory
ollama rm llama3.2:1b      # delete the model from disk
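Besides the CLI, the server on 127.0.0.1:11434 exposes a REST API. A minimal sketch, assuming the server is running and llama3.2:1b has already been pulled (prompt text is just an example):

```shell
# Query the local Ollama REST API directly; "stream": false returns
# one JSON object instead of a stream of partial responses.
curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "llama3.2:1b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

The same endpoint style works for `/api/chat` with a `messages` array.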
Setup proxy
sudo nano /etc/systemd/system/ollama.service

[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/home/vagrant/.dotnet/tools:/home/vagrant/dotnetcore9:/usr/local/bin:/usr/bin:/bin:/home/vagrant/jdk-17.0.7+7/bin/:/home/vagrant/gradle-8.1/bin/"
Environment="HTTPS_PROXY=http://192.168.0.123:3128/"
Environment="HTTP_PROXY=http://192.168.0.123:3128/"

[Install]
WantedBy=default.target

sudo systemctl daemon-reload     # reload unit files after editing
sudo systemctl restart ollama    # restart the service with the new environment
systemctl show ollama            # verify the Environment= settings took effect
ollama run llama3.2:1b           # model pulls now go through the proxy
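If other processes on the host should reach the proxied service without being routed through the proxy themselves, a NO_PROXY line can be added next to the other Environment= settings in the [Service] section (the addresses below are assumptions; adjust to the environment):

```
Environment="NO_PROXY=localhost,127.0.0.1"
```

NO_PROXY is the conventional companion to HTTP_PROXY/HTTPS_PROXY and lists hosts that bypass the proxy.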
Chatbots - LLMs accessible via browser
Gemini - https://gemini.google.com/
Copilot - https://copilot.microsoft.com/
Grok - https://grok.com/
ChatGPT - https://chatgpt.com/