Ollama has introduced a native desktop application for macOS and Windows, improving the experience of running local AI models. The app lets users download and chat with models directly, removing the need for the command line, and supports drag-and-drop of files so users can work with text documents and PDFs.
Additionally, the app offers an adjustable context length for processing large documents and multimodal support, allowing image inputs for compatible models such as Google's Gemma 3. The update aims to provide a more user-friendly and efficient environment for interacting with local AI models.
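For those who still want programmatic access, the desktop app runs the same local Ollama server, which exposes an HTTP API on localhost:11434. The context window and image inputs described above roughly correspond to the `num_ctx` option and `images` field of the server's `/api/generate` endpoint. Below is a minimal sketch, assuming a locally pulled `gemma3` model tag and a placeholder image path (`chart.png`); both names are illustrative, not taken from the announcement.

```python
# Minimal sketch: calling the local Ollama server's HTTP API directly.
# Assumes a model tagged "gemma3" has already been pulled locally and
# that "chart.png" is a placeholder path to an image on disk.
import base64
import json
import urllib.request


def generate(prompt: str, image_path: str | None = None, num_ctx: int = 8192) -> str:
    payload = {
        "model": "gemma3",                 # assumed local model tag
        "prompt": prompt,
        "stream": False,                   # return one JSON object instead of a stream
        "options": {"num_ctx": num_ctx},   # context window size
    }
    if image_path:
        with open(image_path, "rb") as f:
            # Multimodal models accept base64-encoded images alongside the prompt.
            payload["images"] = [base64.b64encode(f.read()).decode()]

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


print(generate("Describe this image.", image_path="chart.png"))
```

Setting `stream` to false keeps the example simple by returning a single JSON response; in practice the API streams tokens by default.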