Better Git Commits with a Local LLM
I’ve been spending more time coding with AI assistants, trying out Cursor and Claude Code on various projects. When I noticed my costs climbing, I started looking for use cases I could move to local models, which run for free. Summarizing changes into git commit messages seemed like a good candidate, so I wrote a script (llm-commit.sh) to offload that work onto a local model instead. The script (written with Claude) is simple: loop through the staged files, pipe each diff into a local LLM to get a quick summary, then compile those summaries into a final Conventional Commits-style message.
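The core loop looks roughly like the sketch below. It assumes the llm CLI with the llm-ollama plugin installed and the model already pulled in ollama; the prompts and variable names here are illustrative rather than the script’s exact ones.

```bash
#!/usr/bin/env bash
# Minimal sketch: summarize each staged file's diff, then compile the
# per-file summaries into a single commit message.
set -euo pipefail

MODEL="qwen2.5-coder:7b-instruct"
summaries=""

# Loop over staged files and get a one-line summary of each diff.
while IFS= read -r file; do
  summary=$(git diff --cached -- "$file" \
    | llm -m "$MODEL" "Summarize this diff in one line.")
  summaries+="${file}: ${summary}"$'\n'
done < <(git diff --cached --name-only)

# Ask the model to turn the per-file summaries into a formatted message.
printf '%s' "$summaries" \
  | llm -m "$MODEL" "Write a conventional commits style message for these changes."
```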
I landed on ollama and simonw/llm to run smaller models locally, since they were easy to get up and running and to integrate into a shell script. The biggest initial issue was that large diffs would cause the model to lose context and ignore the formatting rules I had specified in the prompt. My workaround was to first pass a (potentially truncated) diff for each file to the model to get a per-file summary, then ask the model to condense those summaries into a formatted commit message. This seems to work pretty well, and the qwen2.5-coder:7b-instruct model is quick enough that the inefficiency of chunking isn’t a big deal.
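The truncation part might look something like this. It is a sketch only: the 8,000-byte cap and the helper name are made up for illustration, not the script’s actual values.

```bash
# Cap how much of any single diff reaches the model so one huge file
# can't blow past the context window.
MODEL="qwen2.5-coder:7b-instruct"
MAX_DIFF_BYTES=8000  # illustrative limit, not the script's real value

file_summary() {
  local file="$1"
  git diff --cached -- "$file" \
    | head -c "$MAX_DIFF_BYTES" \
    | llm -m "$MODEL" "Summarize this (possibly truncated) diff in one line."
}
```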
I’ve found that I prefer staging and reviewing changes manually. With this script I no longer let Cursor or Claude commit changes on their own; instead I stop the agent from time to time and stage the changed files myself. The script then prints a proposed commit message and, after confirmation, commits the changes. This gives me good control over the commits and how they’re formatted, and it saves me from burning paid API requests on something as simple as generating a commit message.
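The confirmation step can be as simple as the sketch below; the function name and prompt text are illustrative, not lifted from the script.

```bash
# Show the proposed message and commit only after an explicit yes.
confirm_and_commit() {
  local msg="$1"
  printf 'Proposed commit message:\n------------------------\n%s\n------------------------\n' "$msg"
  read -r -p "Commit with this message? [y/N] " answer
  if [[ "$answer" =~ ^[Yy]$ ]]; then
    git commit -m "$msg"
  else
    echo "Aborted; nothing committed."
  fi
}
```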