
Author: Siu-Ho Fung
May 12, 2025
Want a better coding experience with local models? Try Continue - a code assistant extension for Visual Studio Code. Combined with Ollama and advanced models like Devstral / DeepSeek Coder, it gives you code suggestions, edits, and chat - all without sending your source code online.
Personal note:
For those looking for a robust free alternative to Cursor, I personally use a combination of Continue, Ollama, and Devstral 24B / DeepSeek Coder 33B. I find that DeepSeek Coder 33B is the more capable of the two and gives better results. This setup works very well for me in my daily development workflow.
Everything runs locally, so your code stays private and secure - no cloud dependency.
It brings Cursor-like features to your existing VS Code setup.
Whether you're coding in Python, C++, JavaScript, or PHP, this setup provides smart completions, code edits, and chat-based AI assistance, all running locally and offline.
| Feature | Roo Code | Continue |
|---|---|---|
| VS Code Extension | ✅ Yes | ✅ Yes |
| Local Model Support | ✅ Yes (via Ollama or local APIs) | ✅ Yes (via Ollama) |
| Cloud Model Support | ✅ Yes (e.g. OpenAI, Anthropic, DeepSeek) | ✅ Yes (e.g. OpenAI, Anthropic) |
| Chat Interface | ✅ Yes | ✅ Yes |
| Autocomplete Integration | ✅ Yes | ✅ Yes (tight integration) |
| Code Edit Suggestions | ✅ Yes | ✅ Yes |
| Multi-model Configuration | ✅ Yes (custom prompt modes) | ✅ Yes (`continue.config.yaml`) |
| File System Access | ✅ Yes | ✅ Yes |
| Terminal Command Execution | ✅ Yes | ✅ Yes |
| Browser Automation | ✅ Yes | ❌ No |
| Offline Use | ✅ Fully supported | ✅ Fully supported |
| Open Source | ✅ Yes | ✅ Yes |
| Ease of Setup | ✅ Easy (VS Code Marketplace) | ✅ Easy (VS Code Marketplace) |
| Customization | ✅ High (Modes, commands, routing) | ✅ High (YAML config for models and roles) |
| Community Size | 🟡 Medium (growing) | 🟢 Large (mature VS Code ecosystem) |
💡 Recommendation: Use Continue if you want a focused, efficient AI coding assistant tightly integrated into VS Code with excellent local model support.
Choose Roo Code if you prefer an assistant-style developer tool that can automate browser tasks, run terminal commands, and adapt to diverse workflows.
**Step 1: Install the Continue Extension**

Open VS Code and go to the Extensions tab (Ctrl+Shift+X or Cmd+Shift+X). Search for Continue and install the one by Continue Dev. Or use the terminal:

```bash
code --install-extension Continue.continue
```

**Step 2: Install Ollama**

Ollama is a local model runner that works seamlessly with Continue.
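On Linux, the quickest route is Ollama's official install script (this is the installer published on ollama.com):

```bash
# Download and run the official Ollama install script (Linux)
curl -fsSL https://ollama.com/install.sh | sh
```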
Once installed, verify it works by running a test model:

```bash
ollama run llama3
```

On macOS, you can install Ollama via Homebrew:

```bash
brew install ollama
ollama run llama3
```

**Step 3: Pull a Coding Model**

Ollama supports many high-performance open-source models. To pull and run Devstral 24B:
```bash
ollama pull devstral:24b-small-2505-q4_K_M
ollama pull deepseek-coder:6.7b
```

Then, test it:
```bash
ollama run devstral:24b-small-2505-q4_K_M
```

⚠️ System Requirements: To run this model, a GPU with at least 24 GB of VRAM is recommended, such as an NVIDIA RTX 3090 or higher. If you use a model variant with fewer parameters or heavier quantization, 16 GB of VRAM may be sufficient in some cases.

This model is well-suited for chat, code editing, and longer prompts due to its extended context window.
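If you want to double-check what is installed, and how much memory a loaded model actually consumes, Ollama's built-in listing commands are useful (output formats may vary by Ollama version):

```bash
# List all models pulled to this machine, with their size on disk
ollama list

# Show models currently loaded in memory and whether they run on GPU or CPU
ollama ps
```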
**Step 4: Configure Continue with continue.config.yaml**

For deeper integration and multi-model control, you can create a custom config file. This allows Continue to:

- Route chat, edit, or autocomplete requests to different models
- Use nomic-embed-text for codebase embeddings

Save this in the root of your workspace or inside a .continue/ folder:
```yaml
name: Local Assistant
version: 1.0.0
schema: v1
models:
  - name: Devstral Small 24B (Q4_K_M)
    provider: ollama
    model: devstral:24b-small-2505-q4_K_M
    maxTokens: 131072 # explicitly set
    contextLength: 128000 # if supported by Continue
    roles:
      - chat
      - edit
      - apply
  - name: DeepSeek-Coder 6.7B
    provider: ollama
    model: deepseek-coder:6.7b
    maxTokens: 16384
    contextLength: 16384
    roles:
      - autocomplete
  - name: Nomic Embed
    provider: ollama
    model: nomic-embed-text:latest
    roles:
      - embed
context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: terminal
  - provider: problems
  - provider: folder
  - provider: codebase
```

💡 Tip: After editing the config, restart the Continue extension or reload VS Code.
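Note that the embedding model referenced in the `embed` role must also be pulled locally before Continue can use it:

```bash
# Pull the embedding model referenced in the config above
ollama pull nomic-embed-text
```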
✅ Local-first: No cloud dependency
✅ Multi-model support: Use specialized models for specific roles
✅ Fast: Optimized for real-time dev workflows
✅ Customizable: YAML-based configuration
✅ Offline: Ideal for secure environments
By combining Continue with Ollama and powerful open-source models like Devstral and Qwen3, you unlock a next-level AI coding experience - all within VS Code and on your own machine.
While Roo Code and Continue each have their own strengths, you can also use them together to create a powerful and flexible AI coding environment.
By installing both tools side by side in Visual Studio Code, you get the best of both: Continue's tight editor integration and Roo Code's workflow automation.
💡 Note: Make sure your configuration files (such as continue.config.yaml and Roo's settings) don't conflict. Both tools run independently and can function in parallel. You may also need to remap keyboard shortcuts if the two extensions register conflicting key combinations.
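To quickly confirm that both extensions are installed side by side, you can filter the extension list from the terminal (the grep pattern is an assumption; check your exact extension IDs):

```bash
# List installed VS Code extensions and filter for both assistants
code --list-extensions | grep -iE 'continue|roo'
```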
| Model Name | Ollama Model Tag | Notes |
|---|---|---|
| Devstral 7B | devstral:7b | |
| DeepSeek Coder 6.7B | deepseek-coder:6.7b | |
| DeepSeek Coder 33B Q4 | deepseek-coder:33b-instruct-q4_K_M | Personal favorite |
| CodeLlama 13B Instruct | codellama:13b-instruct | |
| CodeLlama 34B Instruct | codellama:34b-instruct | VRAM-intensive; feasible with quantization |
| Phind-CodeLlama-34B-v2 | phind-codellama:34b-v2 | |
| WizardCoder 34B | wizardcoder:34b | With Q4 or Q5 quantization |
| Mixtral 8x7B Instruct | mixtral:8x7b-instruct | |
| StarCoder 15B | starcoder:15b | Manual setup required |
| aiXcoder 7B | aixcoder:7b | Manual setup required |
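If you want to try several of these models, a small shell loop over the tags from the table saves some typing (pick tags that fit your VRAM; the selection below is just an example):

```bash
# Pull a few coding models from the table above
for model in deepseek-coder:6.7b deepseek-coder:33b-instruct-q4_K_M codellama:13b-instruct; do
  ollama pull "$model"
done
```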
Start building smarter, faster, and more securely, right from your editor.


Do you have questions or need help? We are happy to help.