Package ollama: Information

    Source package: ollama
    Version: 0.20.2-alt1
    Build time: Apr 6, 2026, 02:09 PM in task #414224
    Home page: https://ollama.com

    License: MIT
    Summary: Get up and running with large language models
    Description: 
    Get up and running with large language models.
    Run OpenAI gpt-oss, DeepSeek-R1, Gemma 3, Llama 4, Mistral, Phi-4,
    Qwen 3, and other models, locally.
    
    This is a meta-package.
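Once the service is running, models are usually driven either through the `ollama` CLI or through the local REST API that the server exposes on port 11434 by default. A minimal sketch of calling that API from Python follows; the model name `gemma3` and the prompt are illustrative assumptions, not part of this package's metadata.

```python
import json
import urllib.request

# Default local endpoint of the Ollama server (assumption: stock configuration).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for a non-streaming /api/generate call."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running server and a pulled model, e.g. `ollama pull gemma3`.
    print(generate("gemma3", "Why is the sky blue?"))
```

With `stream` left at its default of true the server instead returns a sequence of JSON chunks; setting it to false, as above, yields a single JSON object with the full response.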

    List of RPM packages built from this SRPM:
    ollama (x86_64, aarch64)
    ollama-cpu (x86_64, aarch64)
    ollama-cpu-debuginfo (x86_64, aarch64)
    ollama-cuda (x86_64)
    ollama-cuda-debuginfo (x86_64)
    ollama-vulkan (x86_64, aarch64)
    ollama-vulkan-debuginfo (x86_64, aarch64)

    Maintainer: Vitaly Chikunov

    List of contributors:
    Vitaly Chikunov


    List of build dependencies:
      1. cmake
      2. curl
      3. gcc-c++
      4. gcc12-c++
      5. glslc
      6. golang
      7. libvulkan-devel
      8. look
      9. nvidia-cuda-devel-static
      10. patchelf
      11. rpm-macros-cmake
      12. rpm-macros-systemd

    Last changed:

    April 5, 2026 Vitaly Chikunov 0.20.2-alt1
    - Update to v0.20.2 (2026-04-03).
    March 10, 2026 Vitaly Chikunov 0.17.7-alt1
    - Update to v0.17.7 (2026-03-05).
    March 2, 2026 Vitaly Chikunov 0.17.5-alt1
    - Update to v0.17.5 (2026-03-01).