Package ollama: Information
Source package: ollama
Version: 0.6.0-alt1
Build time: Mar 13, 2025, 03:30 AM, in task #377797
Category: Sciences/Computer science
Home page: https://ollama.com
License: MIT
Summary: Get up and running with large language models
Description:
Get up and running with large language models. Run DeepSeek-R1, Gemma 3, Llama 3.3, Mistral, Phi-4, Qwen 2.5, and other models locally. This is a meta-package.
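After installing the meta-package (which pulls in a runner such as ollama-cpu or ollama-cuda), typical usage might look like the sketch below. The model name is only an example; availability depends on the Ollama model registry, and on systemd setups the server may already be running as a service.

```shell
# Start the Ollama server in the background (skip if it already runs as a service)
ollama serve &

# Download a model from the registry
ollama pull llama3.3

# Ask the model a one-off question
ollama run llama3.3 "Why is the sky blue?"

# Show models downloaded locally
ollama list
```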
List of RPM packages built from this SRPM:
ollama (x86_64, aarch64)
ollama-cpu (x86_64, aarch64)
ollama-cpu-debuginfo (x86_64, aarch64)
ollama-cuda (x86_64)
ollama-cuda-debuginfo (x86_64)
Maintainer: Vitaly Chikunov
Last changed
March 12, 2025 Vitaly Chikunov 0.6.0-alt1
- Update to v0.6.0 (2025-03-11).
March 7, 2025 Vitaly Chikunov 0.5.13-alt1
- Update to v0.5.13 (2025-03-03).
- Enable NVIDIA GPU runner (ollama-cuda).
Feb. 15, 2025 Vitaly Chikunov 0.5.11-alt1
- Update to v0.5.11 (2025-02-13).
- Split the package into a meta-package (ollama) and a runner (ollama-cpu).