Changes in the system prompt between Claude Opus 4.6 and 4.7
- embedding-shape - 50403 seconds ago
> The new <acting_vs_clarifying> section includes: When a request leaves minor details unspecified, the person typically wants Claude to make a reasonable attempt now, not to be interviewed first.
Ugh, I've tried things like this in my prompts, and the results are never good. I much prefer the agent to prompt me upfront to resolve the ambiguity before it "attempts" whatever it wants. Kind of surprised to see that they added that.
- walthamstow - 45967 seconds ago
The eating disorder section is kind of crazy. Are we going to incrementally add sections for every 'bad' human behaviour as time goes on?
- Havoc - 4383 seconds ago
> "If a user indicates they are ready to end the conversation, Claude does not request that the user stay in the interaction or try to elicit another turn and instead respects the user's request to stop."
Seems like a good idea. I don't think any of those follow-up suggestions from a chatbot have ever actually been useful to me.
- jwpapi - 11703 seconds ago
I feel like we're at the point where improvements in one area diminish functionality in others. I see some things better in 4.7 and some in 4.6. I assume they'll split into separate characters soon.
- ikari_pl - 12974 seconds ago
> Claude keeps its responses focused and concise so as to avoid potentially overwhelming the user with overly-long responses. Even if an answer has disclaimers or caveats, Claude discloses them briefly and keeps the majority of its response focused on its main answer.
I'm strongly against this. I use Claude in some low-level projects where these longer answers save me from doing really silly things, as well as serving as learning material along the way.
This should not be Anthropic's hardcoded choice to make. It should be an option, with the system prompt built modularly.
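For illustration, the modular option this comment asks for could look something like the sketch below. This is purely hypothetical: the module names and the `build_system_prompt` helper are invented for the example and do not correspond to anything in Anthropic's actual prompt or API.

```python
# Hypothetical sketch: each behavioral directive is a named module
# the user can toggle, instead of one hardcoded monolithic prompt.
BASE = "You are a helpful assistant."

MODULES = {
    "concise_answers": "Keep responses focused and concise.",
    "acting_vs_clarifying": "Make a reasonable attempt rather than asking first.",
    "brief_disclaimers": "Disclose caveats briefly.",
}


def build_system_prompt(enabled: set) -> str:
    """Assemble the prompt from the base plus only the enabled modules."""
    parts = [BASE] + [text for name, text in MODULES.items() if name in enabled]
    return "\n\n".join(parts)


# A user who dislikes forced brevity simply leaves that module off.
print(build_system_prompt({"acting_vs_clarifying"}))
```

A user who wants verbose, caveat-rich answers would just omit `concise_answers`, rather than fighting a baked-in directive.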
- jwpapi - 11589 seconds ago
For me, 4.7 always gave a lot of options, even when there was a clear winner, which just produces decision fatigue.
- sams99 - 17576 seconds ago
I did a follow-on analysis with GPT-5.4 and Opus 4.7: https://wasnotwas.com/writing/claude-opus-4-7-s-system-promp...
- cfcf14 - 51113 seconds ago
I'm curious as to why 4.7 seems obsessed with avoiding any actions that could help the user create or enhance malware. The system prompts seem similar on the matter, so I wonder if this is an early attempt by Anthropic to use steering vector injection?
The malware paranoia is so strong that my company has had to temporarily block use of 4.7 on our IDE of choice, as the model was behaving in a concerningly unaligned way, as well as spending large amounts of token budget contemplating whether any particular code or task was related to malware development (we are a relatively boring financial services entity - the jokes write themselves).
In one case I actually encountered a situation where I felt the model was deliberately failing to execute a particular task, and when queried, the tool output said it was trying to abide by directives about malware. I know that model introspection reporting is of poor quality and unreliable, but in this specific case I did not 'hint' it in any way. This feels qualitatively like Golden Gate Claude territory, hence my earlier speculation about steering vectors. I've seen many other people online complaining about the malware paranoia too, especially on Reddit, so I don't think it's just me!
- sigmoid10 - 47862 seconds ago
I knew these system prompts were getting big, but holy fuck. More than 60,000 words. With the 3/4 words-per-token rule of thumb, that's ~80k tokens. Even with a 1M context window, that's approaching 10%, and you haven't even had any user input yet. And it gets churned through on every single request they receive. No wonder their infra costs keep ballooning. And most of it seems to be stable between Claude version iterations too. Why wouldn't they try to bake this into the weights during training? Sure, it's cheaper from a dev standpoint, but it's neither more secure nor more efficient from a deployment perspective.
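The back-of-the-envelope math in that comment checks out. A quick sketch, using only the word count and the 3/4 words-per-token rule of thumb the comment itself states (both are rough estimates, not measured figures):

```python
# Rough token estimate for a 60,000-word system prompt.
WORDS = 60_000
WORDS_PER_TOKEN = 0.75          # the "3/4 words per token" rule of thumb
CONTEXT_WINDOW = 1_000_000      # 1M-token context window

tokens = WORDS / WORDS_PER_TOKEN      # 60,000 / 0.75 = 80,000 tokens
fraction = tokens / CONTEXT_WINDOW    # share of the context window used

print(f"~{tokens:,.0f} tokens, {fraction:.0%} of the context window")
```

So the prompt alone would consume roughly 8% of a 1M-token window before any user input arrives, consistent with the "approaching 10%" estimate above.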
- Grimblewald - 9446 seconds ago
I miss 4.5. It was gold.
- SoKamil - 48428 seconds ago
Does the new knowledge cutoff date mean this is a new foundation model?
- mwexler - 41914 seconds ago
Interesting that it's not a direct "you should" but an omniscient third-person "Claude should".
It's also full of "can" and "should" phrasing: it feels both passive and subjunctive, like wishes rather than strict commands (I guess these are better termed "modals", but I'm not an expert).
- dmk - 51241 seconds ago
The acting_vs_clarifying change is the one I notice most as a heavy user. Older Claude would ask 3 clarifying questions before doing anything. Now it just picks the most reasonable interpretation and goes. Way less friction in practice.
- ikidd - 43969 seconds ago
I had seen reports that it was clamping down on security research, and that things like web-scraping projects were getting caught up in that and couldn't use the model very easily anymore. But I don't see any changes in the prompt that seem likely to have caused that, which is where I would have expected such changes to be implemented.
- varispeed - 50240 seconds ago
Before Opus 4.7, 4.6 had become pretty much unusable, as it was flagging normal data-analysis scripts it had written itself as cybersecurity risks. It got several of my sessions blocked, and I was unable to finish my research with it, so I had to switch to GPT-5.4, which has its own problems but at least isn't eager to interfere with legitimate work.
Edit: to be fair, Anthropic should be giving money back for sessions terminated this way.
- mannanj - 44976 seconds ago
Personally, as someone who has been lucky enough to completely cure "incurable" diseases with diet, self-experimentation, and learning from experts who disagreed with the common societal beliefs at the time, I'm concerned that an AI model and an AI company are planting beliefs and limiting what people can and can't learn through their own will and agency.
My concern is that these models revert all medical, scientific, and personal inquiry to the norms and averages of what's socially acceptable. That's very anti-scientific in my opinion, and it feels dystopian.