Package ollama: Information

    Source package: ollama
    Version: 0.12.11-alt1
    Build time: Nov 26, 2025, 03:47 PM, in task #400443
    Home page: https://ollama.com

    License: MIT
    Summary: Get up and running with large language models
    Description: 
    Get up and running with large language models.
    Run OpenAI gpt-oss, DeepSeek-R1, Gemma 3, Llama 4, Mistral, Phi-4,
    Qwen 3, and other models, locally.
    
    This is a meta-package.
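    As an illustration of typical use after installing one of the binary
    packages, a session might look like the sketch below. The model tag and
    the systemd unit name are assumptions for illustration, not confirmed by
    this page:

    ```shell
    # Start the server via systemd (unit name assumed from the packaging);
    # alternatively, run the server in the foreground with `ollama serve`.
    systemctl enable --now ollama

    # Download a model and chat with it (model tag is an example).
    ollama pull qwen3
    ollama run qwen3 "Hello"

    # The server also exposes an HTTP API, by default on localhost:11434.
    curl http://localhost:11434/api/generate \
         -d '{"model": "qwen3", "prompt": "Hello"}'
    ```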

    List of RPM packages built from this SRPM:
    ollama (x86_64, aarch64)
    ollama-cpu (x86_64, aarch64)
    ollama-cpu-debuginfo (x86_64, aarch64)
    ollama-cuda (x86_64)
    ollama-cuda-debuginfo (x86_64)
    ollama-vulkan (x86_64, aarch64)
    ollama-vulkan-debuginfo (x86_64, aarch64)

    Maintainer: Vitaly Chikunov

    List of contributors:
    Vitaly Chikunov

    Build requirements:
      1. cmake
      2. curl
      3. gcc-c++
      4. gcc12-c++
      5. glslc
      6. golang
      7. libvulkan-devel
      8. look
      9. nvidia-cuda-devel-static
      10. patchelf
      11. rpm-macros-cmake
      12. rpm-macros-systemd

    Last changed

    Nov. 15, 2025 Vitaly Chikunov 0.12.11-alt1
    - Update to v0.12.11 (2025-11-13).
    Nov. 9, 2025 Vitaly Chikunov 0.12.10-alt1
    - Update to v0.12.10 (2025-11-05).
    - Enable Vulkan GPU runner (ollama-vulkan).
    Nov. 2, 2025 Vitaly Chikunov 0.12.9-alt1
    - Update to v0.12.9 (2025-10-31).