Last updated: 2025-01-24 (Fri) 11:06:11 (14d)
ollama run
$ ollama run --help
Run a model

Usage:
  ollama run MODEL [PROMPT] [flags]

Flags:
      --format string      Response format (e.g. json)
  -h, --help               help for run
      --insecure           Use an insecure registry
      --keepalive string   Duration to keep a model loaded (e.g. 5m)
      --nowordwrap         Don't wrap words to the next line automatically
      --verbose            Show timings for response

Environment Variables:
      OLLAMA_HOST        IP Address for the ollama server (default 127.0.0.1:11434)
      OLLAMA_NOHISTORY   Do not preserve readline history
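The OLLAMA_HOST variable above controls which server address the client talks to. A minimal Python sketch of that lookup, assuming the documented default of 127.0.0.1:11434 when the variable is unset (resolve_host is a hypothetical helper for illustration, not part of the ollama CLI):

```python
import os

def resolve_host(env=None):
    # Hypothetical helper: pick the server address the way the docs
    # describe, falling back to the default 127.0.0.1:11434.
    env = os.environ if env is None else env
    host = env.get("OLLAMA_HOST", "127.0.0.1:11434")
    # The HTTP API is served from this base address.
    return f"http://{host}"

print(resolve_host({}))                                  # -> http://127.0.0.1:11434
print(resolve_host({"OLLAMA_HOST": "10.0.0.5:11434"}))   # -> http://10.0.0.5:11434
```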
You can now run *any* GGUF on the Hugging Face Hub directly with Ollama
https://x.com/reach_vb/status/1846545312548360319
ollama run hf.co/{username}/{reponame}:latest
ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q8_0
https://huggingface.co/docs/hub/en/ollama
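The model reference follows a simple pattern: hf.co/{username}/{reponame}, optionally suffixed with a quantization tag such as Q8_0 (defaulting to latest). A small sketch that composes such a reference string (hf_model_ref is a hypothetical helper, not an ollama API):

```python
def hf_model_ref(username, reponame, tag="latest"):
    # Hypothetical helper: build the hf.co/{username}/{reponame}:{tag}
    # reference that `ollama run` accepts for Hugging Face GGUF repos.
    return f"hf.co/{username}/{reponame}:{tag}"

print(hf_model_ref("bartowski", "Llama-3.2-1B-Instruct-GGUF", "Q8_0"))
# -> hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q8_0
print(hf_model_ref("user", "repo"))
# -> hf.co/user/repo:latest
```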