
Author: Siu-Ho Fung
February 28, 2025
Large Language Models (LLMs) have revolutionized natural language processing and AI applications. With advancements in open-source models, you no longer need cloud-based APIs to run powerful AI systems. You can now run models like Llama 3.3, DeepSeek-R1, Phi-4, Mistral, Gemma 2, and many others locally on your own machine. For the best performance, running these models on a fast GPU significantly accelerates processing, but if you don’t have one, they can still run on a CPU, albeit with slower response times.
Running large language models on your own system offers several advantages:

- Privacy: your prompts and data never leave your machine.
- Cost: no per-token API fees or cloud subscriptions.
- Control: you choose which models to run and can use them fully offline.
To easily set up and run LLMs on your local machine, you can use Ollama, a streamlined framework designed for efficient deployment of AI models. Follow these simple steps to get started:
The installation process differs slightly for Windows, macOS, and Linux. Follow the steps below for your system.
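Once Ollama is installed, you can confirm it works by running `ollama -v` in a terminal. The same check can be scripted; below is a minimal Python sketch (the `ollama_version` helper is illustrative, not part of Ollama):

```python
import shutil
import subprocess

def ollama_version():
    """Return the installed Ollama version string, or None when the binary is absent."""
    exe = shutil.which("ollama")
    if exe is None:
        return None
    result = subprocess.run([exe, "-v"], capture_output=True, text=True)
    return result.stdout.strip() or None

if __name__ == "__main__":
    version = ollama_version()
    print(version if version else "ollama is not on PATH")
```

If the script reports that `ollama` is not on your PATH, restart your terminal after installation or re-run the installer.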
ollama pull deepseek-r1:7b
ollama run deepseek-r1:7b

Ollama supports a variety of models beyond DeepSeek-R1:7B. You can install and run other models using similar commands. Some popular choices include Llama 3.3, Phi-4, Mistral, and Gemma 2.
To install and run any of these models, replace deepseek-r1:7b with the model name:
ollama pull mistral
ollama run mistral

Running LLMs locally has never been easier with tools like Ollama. Whether you're exploring AI for development, research, or personal projects, having models on your machine gives you full control over privacy, costs, and performance. Start experimenting today and unlock the potential of AI on your own hardware.
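Beyond the command line, Ollama also exposes a local REST API on port 11434 by default. A minimal non-streaming sketch in Python, assuming the Ollama server is running and the model has already been pulled:

```python
import json
import urllib.request

# Ollama's default local endpoint for single-turn generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Serialize a non-streaming generate request for Ollama's REST API."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the locally running Ollama server and return the reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("mistral", "Why is the sky blue?"))
```

With `"stream": False` the server returns one JSON object containing the full reply; omit it to receive the response token by token as a stream of JSON lines.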

A sample session on Windows, checking the installed version, listing downloaded models, and running a quick query:

C:\Users\Gebruiker> ollama -v
ollama version is 0.5.11

C:\Users\Gebruiker> ollama list
NAME              ID              SIZE      MODIFIED
deepseek-r1:7b    0a8c26691023    4.7 GB    7 days ago

C:\Users\Gebruiker> ollama run deepseek-r1:7b "Wat is de hoofdstad van Nederland?"
<think>
</think>
De hoofdstad van Nederland is Amsterdam.

(The Dutch prompt asks for the capital of the Netherlands; the model correctly answers Amsterdam.)

To make interacting with Ollama more user-friendly, we can use Page Assist, a browser extension that lets you chat with AI through the running Ollama model.
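The one-shot invocation shown in the transcript can also be driven from a script. A minimal sketch that wraps the same `ollama run model "prompt"` command (the `ask` and `run_prompt` helpers are illustrative names, and Ollama must be installed for `run_prompt` to succeed):

```python
import shutil
import subprocess

def ask(model: str, prompt: str) -> list:
    """Build the one-shot `ollama run` command, as used in the transcript above."""
    return ["ollama", "run", model, prompt]

def run_prompt(model: str, prompt: str) -> str:
    """Execute the command and return the model's reply."""
    if shutil.which("ollama") is None:
        raise RuntimeError("ollama is not installed or not on PATH")
    result = subprocess.run(ask(model, prompt), capture_output=True, text=True, check=True)
    return result.stdout.strip()
```

This is handy for batch jobs; for interactive use, a front end like Page Assist is more comfortable.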


