Claude Code now supports hooks
As an aside, people say AI will eliminate coding jobs, but then who will configure these hooks? Or think about adding such a feature?
This kind of tooling and related work will still be around unless AI evolves to the point where it thinks of such features itself, announces them to all the other AI agents, and they all implement them properly, etc.
I jumped at this HN post, because Claude Code Opus 4 has this stupid habit of never terminating files with a return.

.claude/settings.local.json fragment:

    "hooks": {
      "PostToolUse": [
        {
          "matcher": "Edit|MultiEdit|Write",
          "hooks": [
            {
              "type": "command",
              "command": "jq -r '.tool_input.file_path' | xargs bin/save-hook.sh"
            }
          ]
        }
      ]
    }

To test a new hook one needs to restart Claude, so it's better to route the actual processing through a script one can keep editing within a single session. This script runs formatters on C files and shell scripts, and just fixes missing trailing newlines on other files.
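For reference, a rough sketch of what such a save-hook.sh could look like — the specific formatters (clang-format, shfmt) are assumptions here, not the commenter's actual script:

    #!/bin/sh
    # save-hook.sh -- invoked from the PostToolUse hook above with the
    # edited file's path as $1.
    f="$1"
    case "$f" in
      *.c|*.h) clang-format -i "$f" ;;   # format C sources in place
      *.sh)    shfmt -w "$f" ;;          # format shell scripts in place
      *)
        # For everything else, just fix a missing trailing newline:
        # tail -c1 prints the last byte; command substitution strips a
        # newline, so a non-empty result means the file doesn't end in one.
        [ -s "$f" ] && [ -n "$(tail -c 1 "$f")" ] && printf '\n' >> "$f"
        ;;
    esac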
As usual, Claude and other AIs are poor at breaking problems into small steps, and make up ways to do things. The above hook receives JSON on stdin, which I first saved to disk; then I extracted the file path and saved that to disk; then instead called save-hook.sh on that path. Now we're home, after a couple of edits to save-hook.sh.
This was all ten minutes. I wasted far more time letting it flail attempting bigger steps at once.
Really excited to see this implemented.
Hooks will be important for "context engineering" and runtime verification of an agent's performance. This extends to things such as enterprise compliance and oversight of agentic behavior.
Nice of Anthropic to have supported the idea of this feature from a github issue submission: https://github.com/anthropics/claude-code/issues/712
This is great, it means you can set up complex concrete rules about commands CC is allowed to run (and with what arguments), rather than trying to coax these via CLAUDE.md.

> Exit Code 2 Behavior: PreToolUse - Blocks the tool call, shows error to Claude

E.g. you can allow

    docker compose exec django python manage.py test

but prevent

    docker compose exec django python manage.py makemigrations

I've been playing with Claude Code the past few days. It is very energetic and maybe will help me get over the hump on some long-standing difficult problems, but it loses focus quickly. Despite explicit directions in CLAUDE.md to build with "make -j8" and run unit tests with "make -j8 check", I see it sometimes running make without -j or calling the test executable directly. I would like to limit it to doing certain essential aspects of the workflow with the commands I specify, just as a developer would normally do. Are "Hooks" the right answer?
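A minimal sketch of a PreToolUse guard covering both cases above — assuming, as with the PostToolUse example earlier, that the hook receives the tool input as JSON on stdin, with the shell command under .tool_input.command:

    #!/bin/sh
    # guard-hook.sh -- wire this to PreToolUse with a "Bash" matcher.
    # Per the quoted docs, exit code 2 blocks the tool call and shows
    # stderr to Claude; exit 0 lets the command through.
    cmd=$(jq -r '.tool_input.command')

    case "$cmd" in
      *"manage.py makemigrations"*)
        echo "makemigrations is not allowed here; run it yourself" >&2
        exit 2 ;;
    esac

    # The same trick answers the sibling question about build commands:
    # roughly match `make` invocations whose first argument isn't a flag,
    # pushing the agent toward the documented parallel invocation.
    case "$cmd" in
      make|make\ [!-]*)
        echo "use 'make -j8' or 'make -j8 check' (see CLAUDE.md)" >&2
        exit 2 ;;
    esac

    exit 0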
A couple of example hooks: https://cameronwestland.com/building-my-first-claude-code-ho...
I'm happy to see Claude Code reaching parity with Cursor for linting/type checking after edits.
I frequently have to remind Claude Code of the instructions in the CLAUDE.md file, as well as various general aspects of the code base.
Maybe this will enable a fix.
I'm excited to see these improvements, but none of them are enough to make up for the inconvenience of having to start a new conversation (/clear) after every task.
I've been using Gemini Code. The larger context window is big enough to work for a full session without having to /clear. It matters. Having to think so hard and conserve tokens with Claude is problematic.
adding a hook to have it push to prod every time baby
Would love to see this in Cursor. My workaround right now is using a bunch of rules that sort of work some of the time.
This closes a big feature gap. One thing that may not be obvious is that because of the way Claude Code generates commits, regular Git hooks won’t work. (At least, in most configurations.)
We’ve been using CLAUDE.md instructions to tell Claude to auto-format code with the Qlty CLI (https://github.com/qltysh/qlty), but Claude is a bit hit and miss in following them. The determinism here is a win.
It looks like the events that can be hooked are somewhat limited to start, and I wonder if they will make it easy to hook Git commit and Git push.
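A settings fragment in the same shape as the one upthread could make the formatting deterministic — a sketch, assuming the Qlty CLI exposes a fmt subcommand that accepts file paths:

    "hooks": {
      "PostToolUse": [
        {
          "matcher": "Edit|MultiEdit|Write",
          "hooks": [
            {
              "type": "command",
              "command": "jq -r '.tool_input.file_path' | xargs qlty fmt"
            }
          ]
        }
      ]
    }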
So, from my limited understanding: this doesn't take up context, it's something automatic that you configure per tool use, and not an MCP tool that Claude decides "when" to run?!
This needs a way to match directories for changes in monorepos. E.g. run this linter only if there were changes in this directory.
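Until something like that exists natively, one workaround is to filter by path inside the hook script itself — a sketch, with the directory and linter as placeholder choices:

    #!/bin/sh
    # Run the linter only when the edited file lives under frontend/.
    f=$(jq -r '.tool_input.file_path')
    case "$f" in
      */frontend/*|frontend/*) npx eslint --fix "$f" ;;
      *) ;;  # edits elsewhere in the monorepo: do nothing
    esac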
Find it a bit odd they didn't model this as an MCP server itself, making hooks just MCP tools with pre-agreed names.
Wouldn't it be nice to have the agent autodiscover the hooks, with their implementation details abstracted away behind the MCP server, which other agents could then reuse?
Amazing how there's whole companies dedicated to this and yet claude code keeps leading the way.
One thing that is getting clear is that the gains from model enhancement are getting saturated.
That's why we are starting to see a programming of AI, almost like programming with building blocks.
If there is a pathway for models to get smart enough to know when to trigger these hooks by themselves, from the system prompt or by default, then it wouldn't make sense to have these hooks.
Just yesterday I was searching for ways to lint live, rather than waiting for Claude to do it or catching it at pre-commit.
Wish it supported rollbacks...
Claude Code has basically grown to dominate my initial coding workflow.
I was using the API and passed $50 easily, so I upgraded to the $100 a month plan and have already reached $100 in usage.
I've been working on a large project, with 3 different repos (frontend, backend, legacy backend) and I just have all 3 of them in one directory now with claude code.
Wrote some quick instructions about how it was set up, and it's worked very well. If I am feeling brave I can have multiple Claude Code instances running in different terminals, each working on one piece, but Opus tends to do better working across all 3 repos with all of the required context.
Still have to audit every change, commit often, but it works great 90% of the time.
Opus-4 feels like what OAI was trying to hype up for the better part of 6 months before releasing 4.5
Just started using Claude (very late to the game), and I am truly blown away. Instead of struggling for hours trying to get the right syntax for a PowerShell script or to convert Python to Go, I simply ask Claude to make it happen. This helps me focus on content creation instead of the mind-bending experience of syntax across various languages. While some might call it laziness, I call it freedom as it helps me get my stuff done quicker.
I have been using it for other stuff (real estate, grilling recipes, troubleshooting electrical issues with my truck), and it seems to have a very large knowledge base. At this point, my goal is to get good at asking the right kinds of questions to get the best/most accurate answers.
> Before this you had to trust that Claude would follow your README instructions about running linters or tests. Hit and miss at best. Now it's deterministic: the pre hook blocks bad actions, the post hook validates results.

> Hooks let you build workflows where multiple agents can hand off work safely. One agent writes code, another reviews it, another deploys it. Each step is gated by verification hooks.
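A sketch of such a verification gate as a PostToolUse hook, reusing the "make -j8 check" convention mentioned upthread (the exit-code-2 behavior is from the docs quoted earlier):

    #!/bin/sh
    # verify-hook.sh -- rerun the test suite after every edit; on failure
    # exit 2 so the tail of the build output is fed back to the agent.
    out=$(make -j8 check 2>&1) || {
      printf '%s\n' "$out" | tail -n 20 >&2
      exit 2
    }
    exit 0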
Given that Anthropic's legal terms forbid competing with them, what are we actually allowed to do with this? It's confusing what is allowed.
No machine learning work? That would compete.
No writing stuff I would train AI on. Except I own the stuff it writes, but I can’t use it.
Can we build websites with it? What websites don’t compete with Anthropic?
Terminal games? No, Claude Code is a terminal game; if you make a terminal game, does it compete with Claude?
Can their “trust and safety team” humans read everyone’s stuff just to check if we’re competing with LLMs (funny joke) and steal business ideas and use them at Anthropic?
Feels like the dirty secret of AI services is, every possible use case violates the terms, and we just have to accept we’re using something their legal team told us not to use? How is that logically consistent? Any safety concerns? This doesn’t seem like a law Asimov would appreciate.
It would be cool if the set of allowed use cases wasn’t empty. That might make Anthropic seem more intelligent
This is nice but I really wish they’d just let me fork the damn thing already.
I tried to make an app in Claude Code, the kind of thing all the fanfare says it can do, and it failed. It was obvious it would fail: I wanted something that I think hadn't been done before, using the YouTube API. But it failed nonetheless.
I am tired of pretending that this can actually pull off any meaningful work beyond being a debugging companion or a slightly enhanced Google/Stack Overflow.
So many people yearn for LLMs to be like the Star Trek ship computer, which, when asked a question, unconditionally provides a relevant and correct response, needing no verification.
A better analogy is that LLMs are closer to the "universal translator", with the occasional interaction similar to [0]:
Black Knight: None shall pass.
King Arthur: What?
Black Knight: None shall pass!
King Arthur: I have no quarrel with you, good Sir Knight, but I must cross this bridge.
Black Knight: Then you shall die.
King Arthur: I command you, as King of the Britons, to stand aside!
Black Knight: I move for no man.
King Arthur: So be it! [they fight until Arthur cuts off the Black Knight's left arm]
King Arthur: Now, stand aside, worthy adversary.
Black Knight: 'Tis but a scratch.
King Arthur: A scratch? Your arm's off!
Black Knight: No, it isn't.
King Arthur: Well, what's that then?
Black Knight: I've had worse.

0 - https://en.wikiquote.org/wiki/Monty_Python_and_the_Holy_Grai...