Last updated: 2024-04-10 (Wed) 17:00:28
Ollama
Run Llama 2, Code Llama, and other models. Customize and create your own.
https://github.com/ollama/ollama
Windows
- Runs resident in the task tray
How to run
- ollama run llama2
Commands
GPU
help
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
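The subcommands above can also be driven from a script. A minimal sketch (assuming `ollama` is on PATH with its server running, and that `ollama list` prints a whitespace-separated table whose header row starts with NAME; the exact columns may vary by version):

```python
import subprocess

def installed_models(list_output: str) -> list[str]:
    """Parse the text printed by `ollama list` into model names.

    Assumes the first line is a header row and the first
    column of each following line is the model name.
    """
    lines = list_output.strip().splitlines()
    return [line.split()[0] for line in lines[1:] if line.strip()]

def ollama_list() -> list[str]:
    # Invokes the CLI; requires a running Ollama installation.
    out = subprocess.run(
        ["ollama", "list"], capture_output=True, text=True, check=True
    )
    return installed_models(out.stdout)
```

The parsing is split out of the subprocess call so it can be tested without Ollama installed.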
Ollama/Models
Model | Parameters | Size | Download |
Llama 2 | 7B | 3.8GB | ollama run llama2 |
Mistral | 7B | 4.1GB | ollama run mistral |
Dolphin Phi | 2.7B | 1.6GB | ollama run dolphin-phi |
Phi-2 | 2.7B | 1.7GB | ollama run phi |
Neural Chat | 7B | 4.1GB | ollama run neural-chat |
Starling | 7B | 4.1GB | ollama run starling-lm |
Code Llama | 7B | 3.8GB | ollama run codellama |
Llama 2 Uncensored | 7B | 3.8GB | ollama run llama2-uncensored |
Llama 2 13B | 13B | 7.3GB | ollama run llama2:13b |
Llama 2 70B | 70B | 39GB | ollama run llama2:70b |
Orca Mini | 3B | 1.9GB | ollama run orca-mini |
Vicuna | 7B | 3.8GB | ollama run vicuna |
LLaVA | 7B | 4.5GB | ollama run llava |
Gemma | 2B | 1.4GB | ollama run gemma:2b |
Gemma | 7B | 4.8GB | ollama run gemma:7b |
API server
http://localhost:11434
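The server at that address exposes an HTTP API. A minimal sketch of a non-streaming completion call using only the standard library (the `/api/generate` endpoint with `model`, `prompt`, and `stream` fields is from the Ollama API; the model and prompt here are just examples):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    # stream=False asks the server for one complete JSON
    # response instead of a stream of partial chunks.
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    # Requires a running Ollama server on localhost:11434.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage would look like `generate("llama2", "Why is the sky blue?")`, assuming `llama2` has already been pulled.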