# Aider + Gemma 4: The Open-Source AI Pair Programming Stack for 2026
If you have been shopping for an AI coding assistant in 2026, you have probably bumped into Codex CLI, Cursor, and Claude Code. They are all great — and they all ship your code to somebody else's server. If you want a fully open-source, locally-running, git-aware pair programmer instead, Aider is the most battle-tested choice on the market.
The catch: Aider's default configuration talks to GPT-4 or Claude, which means a monthly API bill and an endless stream of source code leaving your machine. The fix is to plug in a local Gemma 4 model through Ollama. Zero dollars, zero leakage, full terminal workflow.
This guide walks you from a blank shell to four real coding scenarios — adding a feature, doing a cross-file refactor, fixing a failing test, and generating unit tests — all powered by Gemma 4 running on your own hardware.
## What Aider Is (and How It Differs From Codex CLI, Cursor, Claude Code)
Aider is an open-source terminal-based AI coding tool with 30K+ GitHub stars, maintained by Paul Gauthier since 2023. Its core philosophy is AI pair programming: not just generating snippets, but reading your whole repo, editing files in place, and creating git commits with generated messages.
Here is how Aider stacks up against the usual suspects:
| Feature | Aider | Codex CLI | Cursor | Claude Code |
|---|---|---|---|---|
| License | Apache 2.0 open source | Partial | Closed | Closed |
| Multi-file edits | Native | Mostly single-file | Yes | Yes |
| Auto git commits | Built-in with message generation | No | No | No |
| Repo map | Automatic | No | Partial | Partial |
| Local model support | Native Ollama / LiteLLM | Env var config | Plugin required | Not supported |
| Cost | Model cost (local = free) | API usage | $20/mo subscription | API usage |
Aider's killer feature is the repo map — a tree-sitter-powered index of files, classes, functions, and dependencies that gets fed into every turn. Gemma 4 isn't guessing what your codebase looks like; it's actually seeing the relevant slices.
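To make that concrete: the repo map is a condensed outline of signatures, not full file contents. It looks roughly like this (an illustrative rendering, not Aider's exact output format):

```
app.py:
  def create_app():
  def register_user(email, password):
utils.py:
  class Cache:
    def get(self, key):
    def set(self, key, value):
```

Because only signatures are sent, even a large repo fits into the model's context budget.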
If you want a side-by-side with the OpenAI workflow, see the Gemma 4 + Codex CLI setup guide.
## Prerequisites
Before you start, confirm:
- Python 3.9+ (Aider is a Python project)
- Ollama installed and running (ollama.com)
- Gemma 4 26B or 31B pulled via Ollama
- Hardware: 16 GB RAM minimum for 26B, 24 GB for 31B
Not sure which variant to grab? See Best local AI models in 2026 for a sizing chart.
## Step 1: Install Aider
A single pip command does it:
```shell
pip install aider-chat
```

Verify:

```shell
aider --version
```

On macOS or Linux with a messy Python setup, use pipx for isolation:

```shell
pipx install aider-chat
```

## Step 2: Start Ollama and Confirm Gemma 4
Make sure the daemon is up:
```shell
ollama serve
```

In another terminal:

```shell
ollama list
```

You should see something like:

```
NAME            ID         SIZE   MODIFIED
gemma4:26b-a4b  abc123...  15 GB  2 hours ago
```

If Gemma 4 is missing, pull it:

```shell
ollama pull gemma4:26b-a4b
```

## Step 3: Wire Aider to Local Gemma 4
Aider speaks Ollama natively. From inside your repo:
```shell
cd /path/to/your/project
aider --model ollama/gemma4:26b-a4b
```

That is the whole setup. Aider auto-discovers http://localhost:11434 and routes every request to Gemma 4.
To avoid typing that flag every time, drop a .aider.conf.yml into your project root:
```yaml
model: ollama/gemma4:26b-a4b
```

Now plain `aider` will do the right thing.
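If you also want the weak model and commit behavior pinned per project, the other flags from this guide can live in the same file. A sketch (the key names mirror the CLI flags; confirm against `aider --help` for your version):

```yaml
# .aider.conf.yml — sketch; keys mirror the CLI flags used in this guide
model: ollama/gemma4:26b-a4b
weak-model: ollama/gemma4:e4b
auto-commits: false
```

Command-line flags still override anything set here.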
### Advanced flags
Ollama on a remote box or non-default port:
```shell
aider --model ollama/gemma4:26b-a4b --ollama-api-base http://192.168.1.100:11434
```

Use a heavier model for edits and a lighter one for commit messages to save cycles:
```shell
aider --model ollama/gemma4:26b-a4b --weak-model ollama/gemma4:e4b
```

## Step 4: Four Real Scenarios
### Scenario 1 — Add a new feature
You have a Flask app and want a registration endpoint. Type into Aider:
> In app.py add a /register endpoint that accepts email and password, validates them, and stores the user in SQLite.

Aider will:

- Read the repo map to understand the current structure
- Edit `app.py` (or create new files as needed)
- Show a diff for you to approve
- On approval, run `git commit -m "feat: add /register endpoint with email/password validation"`
You never copy-paste code. Everything happens in place.
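For reference, the registration logic a prompt like this tends to yield looks roughly like the following. This is a hand-written sketch in stdlib Python, not actual Aider output, and the Flask route wiring is omitted:

```python
import hashlib
import re
import sqlite3

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate(email: str, password: str) -> list[str]:
    """Return a list of validation errors (empty means valid)."""
    errors = []
    if not EMAIL_RE.match(email):
        errors.append("invalid email")
    if len(password) < 8:
        errors.append("password must be at least 8 characters")
    return errors

def register(conn: sqlite3.Connection, email: str, password: str) -> bool:
    """Validate and store the user; False on bad input or duplicate email."""
    if validate(email, password):
        return False
    # Demo-grade hashing; use a real KDF (bcrypt/argon2) in production.
    pw_hash = hashlib.sha256(password.encode()).hexdigest()
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (email TEXT PRIMARY KEY, pw_hash TEXT)"
    )
    try:
        conn.execute("INSERT INTO users VALUES (?, ?)", (email, pw_hash))
    except sqlite3.IntegrityError:  # email already registered
        return False
    return True
```

The point of the scenario is that Aider writes and commits something like this in place; you only review the diff.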
### Scenario 2 — Cross-file refactor
> Extract all database calls from utils.py into a new db.py module and update every import.

This is where Aider shines. It edits utils.py, creates db.py, and updates every file that imported the old path, all in a single commit. Codex CLI simply cannot coordinate that many files in one pass.
### Scenario 3 — Fix a failing test
> test_login_invalid_password in test_auth.py is failing with "AssertionError: 200 != 401". Fix it.

Aider reads both the test and the code under test, identifies the logic bug, patches it, and reruns the test to confirm green.
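The bug class behind an assertion like `200 != 401` is usually a missing early return on the failure path. A minimal hand-written illustration (the names mirror the hypothetical test above):

```python
def login(users: dict[str, str], email: str, password: str) -> int:
    """Return an HTTP-style status code for a login attempt."""
    stored = users.get(email)
    if stored is None or stored != password:
        return 401  # the buggy version fell through to 200 here
    return 200

def test_login_invalid_password():
    users = {"a@b.co": "correct-password"}
    assert login(users, "a@b.co", "wrong-password") == 401
```

Because the error message names the exact test and assertion, a local 26B model has enough signal to find this kind of fix reliably.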
### Scenario 4 — Generate unit tests
> Write pytest unit tests for every public function in db.py.

Aider creates test_db.py, fills in happy-path and edge-case tests for each function, and commits.
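The generated file typically pairs one happy-path and one edge-case test per function. A hand-written sketch, assuming db.py exposes a `get_user(conn, email)` helper (your function names will differ; the stand-in is defined inline so the sketch is self-contained):

```python
import sqlite3

# Stand-in for a function db.py might expose (an assumption for this sketch).
def get_user(conn: sqlite3.Connection, email: str):
    row = conn.execute(
        "SELECT email FROM users WHERE email = ?", (email,)
    ).fetchone()
    return row[0] if row else None

def make_conn() -> sqlite3.Connection:
    """Fresh in-memory database with one seeded user."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY)")
    conn.execute("INSERT INTO users VALUES ('a@b.co')")
    return conn

# The kind of tests Aider generates: plain functions pytest discovers.
def test_get_user_found():
    assert get_user(make_conn(), "a@b.co") == "a@b.co"

def test_get_user_missing():
    assert get_user(make_conn(), "nobody@x.co") is None
```

Run with `pytest test_db.py`; since the tests are plain functions with asserts, they also run standalone.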
## How Gemma 4 Actually Performs in Aider
Let's be honest: Gemma 4 26B is not going to beat GPT-4 Turbo on hard coding evals. But for day-to-day work it is more than good enough.
**Works well for:**
- Single-file generation and edits
- Small cross-file refactors (2–3 files)
- Bug fixes when the error message is specific
- Test generation
- Code explanation
**Struggles with:**
- Wide refactors touching 5+ files
- Framework-idiomatic tasks (DRF ViewSets, Rails concerns, etc.)
- Very long contexts — Gemma 4 26B advertises 128K but quality degrades past ~32K in practice
Recommended strategy: use Gemma 4 26B for daily work (free and local). For the rare hard problem, switch to a cloud model with aider --model gpt-4o — Aider keeps a clean git history either way.
## Troubleshooting
### "Model not found"
Check that Ollama is up (curl http://localhost:11434/v1/models) and that the model name matches ollama list exactly. Aider needs the ollama/ prefix, e.g. ollama/gemma4:26b-a4b.
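If you prefer scripting the check, here is a small stdlib helper. It assumes the endpoint returns the OpenAI-compatible `/v1/models` shape (`{"data": [{"id": ...}]}`); compare against what your curl actually returns:

```python
import json
from urllib.request import urlopen

def model_ids(payload: dict) -> list[str]:
    """Extract model ids from an OpenAI-compatible /v1/models payload."""
    return [m["id"] for m in payload.get("data", [])]

def check_model(name: str, base: str = "http://localhost:11434") -> bool:
    """True if `name` appears in the server's model list."""
    with urlopen(f"{base}/v1/models") as resp:
        return name in model_ids(json.load(resp))
```

Remember that the name you check here is the bare Ollama tag (`gemma4:26b-a4b`); the `ollama/` prefix is only for Aider's `--model` flag.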
### Slow responses
Gemma 4 26B runs about 20–40 tok/s on an M1 MacBook. For big generations that means 30–60 seconds. If it crawls:
- Confirm GPU use with `ollama ps`
- Try a more aggressive quantization
- Use E4B for quick tasks, 26B for hard ones
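The arithmetic behind those estimates, assuming a roughly 1,200-token reply (the reply size is an assumption for illustration):

```python
def seconds_for(tokens: int, tok_per_s: float) -> float:
    """Wall-clock time to generate `tokens` at a given throughput."""
    return tokens / tok_per_s

# 1,200 tokens at the 20-40 tok/s range quoted above
fast = seconds_for(1200, 40)  # 30.0 seconds
slow = seconds_for(1200, 20)  # 60.0 seconds
```

If your waits are much longer than this math predicts, the model is probably spilling out of GPU memory.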
### Unwanted auto-commits
```shell
aider --model ollama/gemma4:26b-a4b --no-auto-commits
```

Aider will still edit files but leave committing to you.
### Garbled output
Usually a context-window problem. Use /drop to remove files from the chat and /tokens to inspect context usage.
## FAQ
**Is Aider free?** Aider itself is fully open source (Apache 2.0). Your cost is whatever the backing model charges — local Gemma 4 is $0.

**Aider vs Cursor?** Cursor is a GUI editor built on VS Code. Aider is a pure terminal tool. Aider has auto git commits and a repo map; Cursor does not. They play nicely together on the same repo.

**Can I run Aider with Gemma 4 E2B (4B)?** Technically yes, practically no. 4B is too small to produce reliable Aider diffs. E4B (8B) works for trivial tasks; 26B is the real baseline.

**Does it run on Windows?** Yes. Python, Ollama, and Aider all work on Windows Terminal or PowerShell.

**Which languages does Aider support?** Pretty much everything. Tree-sitter powers the repo map across 50+ languages including Python, TypeScript, Go, Rust, Java, C/C++, and Ruby.

**Gemma 4 or Qwen 3 for Aider?** Community feedback in early 2026 favors Gemma 4 26B for instruction-following stability — Qwen 3 27B occasionally breaks Aider's diff format. See our Gemma 4 vs Qwen 3 comparison.
**Can I run the model remotely?** Absolutely. Run Ollama on a GPU box and point `--ollama-api-base` at the server's address, tunneling the port over SSH if it isn't on your LAN.
## Related Articles
- Gemma 4 + Codex CLI setup guide — The OpenAI-flavored alternative
- Gemma 4 vs Qwen 3 compared — Picking the right open model
- Best local AI models in 2026 — The full landscape overview


