Package ollama: Information
Source package: ollama
Version: 0.12.11-alt1
Build time: Nov 26, 2025, 03:47 PM (task #400443)
Category: Sciences/Computer science
Home page: https://ollama.com
License: MIT
Summary: Get up and running with large language models
Description:
Get up and running with large language models. Run OpenAI gpt-oss, DeepSeek-R1, Gemma 3, Llama 4, Mistral, Phi-4, Qwen 3, and other models, locally. This is a meta-package.
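Once the package is installed, the workflow the summary describes boils down to a few CLI commands. A minimal sketch follows; it assumes the ollama daemon is installed and running, and "gemma3" is used purely as an example model name:

```shell
# Quick-start sketch (assumes the ollama service is running; "gemma3" is an
# example model name, not a guaranteed default).
if command -v ollama >/dev/null 2>&1; then
    ollama pull gemma3   # download the model weights from the registry
    ollama run gemma3    # start an interactive chat session with the model
    ollama list          # show models available locally
else
    echo "ollama is not installed"
fi
```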
List of RPM packages built from this SRPM:
ollama (x86_64, aarch64)
ollama-cpu (x86_64, aarch64)
ollama-cpu-debuginfo (x86_64, aarch64)
ollama-cuda (x86_64)
ollama-cuda-debuginfo (x86_64)
ollama-vulkan (x86_64, aarch64)
ollama-vulkan-debuginfo (x86_64, aarch64)
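The subpackage list above splits the runners by compute backend (CPU, CUDA, Vulkan). A hypothetical install sketch on ALT Linux, which uses apt-rpm, might look like the following; the choice of ollama-vulkan is only an example, and the command is printed rather than executed so the sketch does not modify the system:

```shell
# Hypothetical ALT Linux (apt-rpm) install sketch; pick one backend
# subpackage from the list above (ollama-vulkan is just an example).
if command -v apt-get >/dev/null 2>&1; then
    echo "apt-get install ollama ollama-vulkan"
else
    echo "apt-get not found on this system"
fi
```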
Maintainer: Vitaly Chikunov
Last changed
Nov. 15, 2025 Vitaly Chikunov 0.12.11-alt1
- Update to v0.12.11 (2025-11-13).
Nov. 9, 2025 Vitaly Chikunov 0.12.10-alt1
- Update to v0.12.10 (2025-11-05).
- Enable Vulkan GPU runner (ollama-vulkan).
Nov. 2, 2025 Vitaly Chikunov 0.12.9-alt1
- Update to v0.12.9 (2025-10-31).