AMD and Intel GPUs Get Ollama Support via Vulkan API
20 October 2025 | Author: Łukasz Grochal

Ollama 0.12.6-rc0 introduces experimental Vulkan API support, a significant milestone for AI enthusiasts running large language models locally. The change enables GPU acceleration on AMD and Intel hardware, expanding beyond the traditional NVIDIA CUDA ecosystem. The implementation builds on the Vulkan backend in llama.cpp and closes out a feature request that had been tracked for roughly a year and a half. For now the feature is available only to users who build Ollama from source; it is expected to reach binary releases after further testing.
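Since the release candidate is source-only, a build along these lines is required. This is a hedged sketch based on Ollama's general source-build workflow (CMake for the native llama.cpp backends, then the Go binary); the exact tag name and the `OLLAMA_VULKAN` opt-in variable are assumptions drawn from the pre-release nature of the feature, so check the repository's developer docs before relying on them.

```shell
# Sketch: building Ollama with the experimental Vulkan backend.
# Prerequisites (assumed): git, cmake, a C/C++ toolchain, Go, and the
# Vulkan SDK/headers installed for your distribution.

git clone https://github.com/ollama/ollama.git
cd ollama
git checkout v0.12.6-rc0   # assumed tag for the release candidate

# Build the native GPU backends, then the Go server binary.
cmake -B build
cmake --build build
go build .

# Run the server. Gating the backend behind an environment variable
# is an assumption; the rc treats Vulkan as an opt-in experiment.
OLLAMA_VULKAN=1 ./ollama serve
```

If the build succeeds, `ollama serve` should log the detected Vulkan device at startup, which is the quickest way to confirm the AMD or Intel GPU is actually being used.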

Vulkan support opens doors for hardware where ROCm or SYCL support remains limited, democratizing local AI deployment across diverse GPU platforms and giving developers an alternative to proprietary frameworks.