
ollama run

--help

  • Run a model
    
    Usage:
      ollama run MODEL [PROMPT] [flags]
    
    Flags:
          --format string      Response format (e.g. json)
      -h, --help               help for run
          --insecure           Use an insecure registry
          --keepalive string   Duration to keep a model loaded (e.g. 5m)
          --nowordwrap         Don't wrap words to the next line automatically
          --verbose            Show timings for response
    
    Environment Variables:
          OLLAMA_HOST                IP Address for the ollama server (default 127.0.0.1:11434)
          OLLAMA_NOHISTORY           Do not preserve readline history
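    
    Flags can be combined in a single invocation. A minimal sketch using the flags listed above (the model name llama3.2 and the prompt are illustrative):
    
        ollama run llama3.2 "List three primary colors as JSON" --format json --keepalive 5m --verbose
    
    To talk to an ollama server on another machine, set OLLAMA_HOST for the command (the address below is illustrative):
    
        OLLAMA_HOST=192.168.1.10:11434 ollama run llama3.2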

You can now run *any* GGUF on the Hugging Face Hub directly with Ollama
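A minimal sketch of that invocation, using the hf.co/ repository path form documented by Hugging Face (the username and repository are placeholders):

    ollama run hf.co/{username}/{repository}

A specific quantization can be selected by appending it as a tag, e.g. hf.co/{username}/{repository}:Q8_0.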

Related