Package ollama: Information

    Source package: ollama
    Version: 0.6.0-alt1
    Build time: Mar 13, 2025, 03:30 AM (task #377797)
    Home page: https://ollama.com

    License: MIT
    Summary: Get up and running with large language models
    Description: 
    Get up and running with large language models.
    Run DeepSeek-R1, Gemma 3, Llama 3.3, Mistral, Phi-4, Qwen 2.5, and other
    models locally.
    
    This is a meta-package.
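
    Once installed, models are pulled and run through the ollama command-line
    tool. A minimal usage sketch (the model tags below are illustrative; see
    https://ollama.com for the current model library):

        # Download a model and start an interactive session with it
        ollama run llama3.3

        # Or only download a model for later use
        ollama pull gemma3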

    List of RPM packages built from this SRPM:
    ollama (x86_64, aarch64)
    ollama-cpu (x86_64, aarch64)
    ollama-cpu-debuginfo (x86_64, aarch64)
    ollama-cuda (x86_64)
    ollama-cuda-debuginfo (x86_64)
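
    On ALT Linux the meta-package is installed together with one of the
    runners. A sketch assuming the apt-rpm frontend, with package names taken
    from the list above:

        # CPU-only runner (available on x86_64 and aarch64)
        apt-get install ollama ollama-cpu

        # NVIDIA GPU runner (x86_64 only)
        apt-get install ollama ollama-cuda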

    Maintainer: Vitaly Chikunov

    List of contributors:
    Vitaly Chikunov

    ACL:
    Vitaly Chikunov
    @everybody

    List of build dependencies:
      1. cmake
      2. curl
      3. gcc-c++
      4. gcc12-c++
      5. golang
      6. look
      7. nvidia-cuda-devel-static
      8. patchelf
      9. rpm-macros-cmake
      10. rpm-macros-systemd
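
    The rpm-macros-systemd build dependency suggests the package ships a
    systemd unit for the ollama server. Assuming the unit is named
    ollama.service (an assumption, not confirmed by this page), it would be
    enabled like this:

        # Start the server now and on every boot (assumed unit name)
        systemctl enable --now ollama

        # By default the server answers on localhost:11434
        curl http://localhost:11434/api/version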

    Last changed

    March 12, 2025 Vitaly Chikunov 0.6.0-alt1
    - Update to v0.6.0 (2025-03-11).
    March 7, 2025 Vitaly Chikunov 0.5.13-alt1
    - Update to v0.5.13 (2025-03-03).
    - Enable NVIDIA GPU runner (ollama-cuda).
    February 15, 2025 Vitaly Chikunov 0.5.11-alt1
    - Update to v0.5.11 (2025-02-13).
    - Split the package into meta-package (ollama) and runner (ollama-cpu).