
lms get

LM Studio/Models

  • lms get (with no model argument, the following staff-picked models are listed):
      [Staff Pick] DeepSeek R1 Distill (Qwen 7B)
      [Staff Pick] DeepSeek R1 Distill (Llama 8B)
      [Staff Pick] phi-4
      [Staff Pick] Granite 3.1 8B
      [Staff Pick] Hermes 3 Llama 3.2 3B
      [Staff Pick] Llama 3.3 70B Instruct
      [Staff Pick] Qwen2.5 Coder 14B
      [Staff Pick] Qwen2.5 Coder 32B
      [Staff Pick] Qwen2.5 Coder 3B
      [Staff Pick] Llama 3.2 3B Instruct 4bit
      [Staff Pick] Llama 3.2 1B
      [Staff Pick] Llama 3.2 3B
      [Staff Pick] Qwen2.5 Coder 7B
      [Staff Pick] Qwen2.5 14B
      [Staff Pick] Yi Coder 9B
      [Staff Pick] Hermes 3 Llama 3.1 8B
      [Staff Pick] InternLM 2.5 20B
      [Staff Pick] LLaVA v1.5
      [Staff Pick] Llama 3.1 8B Instruct 4bit
      [Staff Pick] Meta Llama 3.1 8B
      [Staff Pick] Mistral Nemo 2407
      [Staff Pick] Mistral Nemo Instruct 2407 4bit
      [Staff Pick] Gemma 2 2B
      [Staff Pick] Mathstral 7B
      [Staff Pick] Gemma 2 9B
      [Staff Pick] DeepSeek Coder V2 Lite Instruct 4bit mlx
      [Staff Pick] SmolLM 360M v0.2
      [Staff Pick] Phi 3 mini 4k instruct 4bit
      [Staff Pick] Gemma 2 27B
      [Staff Pick] Codestral 22B

help

  • lms get
    > Search for and download a model online.
    
    ARGUMENTS:
      [str] - The model to download. If not provided, staff-picked models will be shown. For models that have multiple quantizations, specify one by appending it after "@". For example, "llama-3.1-8b@q4_k_m" downloads the llama-3.1-8b model with the q4_k_m quantization. [optional]
    
    FLAGS:
      --mlx                              - Include MLX models in the search results. If either the "--mlx" or "--gguf" flag is specified, only models matching the specified flags are shown; otherwise, only models supported by your installed LM Runtimes are shown.
      --gguf                             - Include GGUF models in the search results. If either the "--mlx" or "--gguf" flag is specified, only models matching the specified flags are shown; otherwise, only models supported by your installed LM Runtimes are shown.
      --always-show-all-results          - By default, an exact model match to the query is automatically selected. If this flag is specified, you're prompted to choose from the model results, even when there's an exact match.
      --always-show-download-options, -a - By default, if there is an exact match for your query, the system will automatically select a quantization based on your hardware. Specifying this flag will always prompt you to choose a download option.
      --verbose                          - Enable verbose logging.
      --quiet                            - Suppress all logging.
      --yes, -y                          - Suppress all confirmations and warnings. Useful for scripting. If there are multiple models matching the search term, the first one will be used. If there are multiple download options, the recommended one based on your hardware will be chosen. Fails if you have specified a quantization via the "@" syntax and the quantization does not exist in the options.
      --help, -h                         - show help
    
    OPTIONS:
      --limit, -n <number> - Limit the number of model options. [optional]
      --log-level <value>  - The level of logging to use. If not provided, the default level is "info". [optional]
      --host <str>         - If you wish to connect to a remote LM Studio instance, specify the host here. Note that, in this case, lms will connect using client identifier "lms-cli-remote-<random chars>", which will not be a privileged client, and will restrict usage of functionalities such as "lms push". [optional]
      --port <number>      - The port where LM Studio can be reached. If not provided and the host is set to "127.0.0.1" (default), the last used port will be used; otherwise, 1234 will be used. [optional]
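
examples

  A few untested example invocations assembled from the arguments and flags documented above. The model key "llama-3.1-8b" and the quantization "q4_k_m" are taken from the help text and may differ from what the current catalog offers.

    # download a specific model at a specific quantization (the "@" syntax)
    lms get llama-3.1-8b@q4_k_m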
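
  Restricting results to one format and always reviewing the download options before choosing (both flags are documented above):

    # show only GGUF results and always prompt for a download option
    lms get llama-3.1-8b --gguf --always-show-download-options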
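
  For scripting, confirmations can be suppressed with "--yes"; combined with the "@" syntax this fails if the requested quantization does not exist in the download options:

    # non-interactive download for scripts
    lms get llama-3.1-8b@q4_k_m --yes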
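
  Connecting to a remote LM Studio instance (the host address below is hypothetical, and "llama" is used as an illustrative search term); note that the remote client is unprivileged, as described under "--host":

    # browse up to 5 matches on a remote instance reachable at 192.168.1.10:1234
    lms get llama --host 192.168.1.10 --port 1234 --limit 5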