@dickfickling beat me to it, but ultrathink is already explicitly called out in the public Anthropic documentation:
"Ask Claude to make a plan for how to approach a specific problem. We recommend using the word "think" to trigger extended thinking mode, which gives Claude additional computation time to evaluate alternatives more thoroughly. These specific phrases are mapped directly to increasing levels of thinking budget in the system: "think" < "think hard" < "think harder" < "ultrathink." Each level allocates progressively more thinking budget for Claude to use."
I don't know what the max allowable "budget_tokens" is for Claude 3.7 Thinking mode, but the SDK shows an example of 32k which matches up with the article's findings.
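If anyone wants to experiment, here's a minimal sketch of enabling extended thinking through the Anthropic Python SDK; the model name and token numbers are illustrative, not a confirmed maximum:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-7-sonnet-20250219",
        max_tokens=40000,  # must be larger than the thinking budget
        thinking={
            "type": "enabled",
            "budget_tokens": 32000,  # the 32k figure from the SDK example
        },
        messages=[{"role": "user", "content": "Plan an approach to this problem."}],
    )

    # The thinking trace and the final answer come back as separate content blocks
    for block in response.content:
        print(block.type)  # "thinking" or "text"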
Looks like that documentation is incorrect. It suggests there are four levels - "think" < "think hard" < "think harder" < "ultrathink." - but if you look in the code there are actually only three.
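For the curious, the shape of that de-obfuscated logic is roughly the following. This is a paraphrased Python sketch rather than the actual JavaScript, with the phrase lists and budget numbers approximated from the article, not verbatim:

    # Paraphrased sketch of Claude Code's client-side keyword mapping
    # (tiers and numbers approximate, reconstructed from the article)
    def thinking_budget(prompt: str) -> int:
        p = prompt.lower()
        if "ultrathink" in p:
            return 31999  # top tier, just under the 32k ceiling
        if "think harder" in p or "think hard" in p:
            return 10000  # middle tier; several "think ..." phrases land here
        if "think" in p:
            return 4000   # base tier
        return 0          # no thinking keyword, no extra budget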
Sincerely, I respect your reaction to how arbitrary this seems in its current form.
But... I'd like you to take a moment and think really hard about whether this is truly novel behavior for LLMs, or rather something that has always been part of the interplay between inter-agent communication and intra-agent thought :)
It would be cool if these "secret keywords" were more directly exposed in the UI somehow, perhaps as a toggleable developer/experimental mode? I would have a lot of fun tinkering with them.
It's for Claude Code, FWIW. Just leaving a sigil here for fellow API implementers who are confused: your general point stands (though I wonder what UI affordances other than text would look like, given it's a CLI tool).
I already assume that the models are shifting underneath me. It's very frustrating that most non-developers just assume you can ask an LLM a question and it will respond accurately every time. They are designed to produce creative output, and even if you dial down the temperature they can still hallucinate.
Why not be explicit about the thinking budget, instead of hiding the number behind a term like ultrathink?
It's a cute word, and fun to know it's managed on the client side, but isn't it adding more imprecision to tools that are already suffering from that? Something explicit, like the hypothetical sketch below, would at least be unambiguous.
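As a toy example of what "explicit" could look like, here's a hypothetical prompt syntax and parser; none of this exists in Claude Code, it's purely illustrative:

    import re

    # Hypothetical alternative: let the prompt carry an explicit number,
    # e.g. "think:12000 refactor this module" (NOT a real Claude Code feature)
    def explicit_budget(prompt: str, default: int = 4000) -> int:
        match = re.search(r"\bthink:(\d+)\b", prompt)
        return int(match.group(1)) if match else default

    assert explicit_budget("think:12000 refactor this module") == 12000
    assert explicit_budget("just answer quickly") == 4000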
I love that language. This is something that's constantly bothered me from the get-go. Maybe I'm just wearing a tinfoil hat, but I swear I've noticed variations in behavior and performance within models over time.
This has been super annoying to me.
I always use the chat interfaces (mostly Claude atm), so I guess that just puts me at the whim of whichever sub-version of the model Anthropic is serving that day.
Or again, maybe I'm just hallucinating.
I've had a frustrating time over the last couple of days with Gemini 2.5 Pro.
First I asked it to help me reverse the direction of text on a circle in Photoshop. It gave me very specific instructions that didn't work, then kept arguing that I was doing something wrong. I did my own research and found it's not actually possible to do this in Photoshop; the instructions it was giving me were for Illustrator. Thirty minutes of my time wasted.
This morning I asked it how to remove the axis lines from the orthographic view in Blender 4.3. I explained carefully that I knew how to remove them in perspective view but that this wasn't working for orthographic views. Over and over it told me how to remove them from perspective views, pointing me to nonexistent UI elements, even drawing ASCII diagrams of where to find the nonexistent icons. When I said they didn't exist, it would circle back to telling me how to turn them off in the perspective view.
It turns out, again, it's not possible to remove grid lines from orthographic views in Blender (at least without messing around with the theme settings, or turning off the grid entirely).
In both cases it was incredibly persistent in stating the wrong way to do things, even when I told it that its suggestions didn't work. I felt like it was gaslighting me, more so than with any previous model I've used.
I haven't yet used it for writing code but these two experiences don't make me feel hopeful. The worst part about dealing with AI is when they are confidently incorrect.
These are good examples of things that I wouldn't expect an LLM to get right, based purely on my own intuition.
I don't believe they have much training material on the UI for tools at the moment - it may well come in the future as these new "computer use" models get fed vast amounts of screen capture videos, but to date my hunch is that there hasn't been much focus on that, especially for tools like Photoshop and Blender (training them to use a web browser is a whole lot more useful for the moment).
I'd encourage you not to assume they suck at code just because they suck at answering questions about Photoshop and Blender. I wrote about that a while back: "Don’t fall into the trap of anthropomorphizing LLMs and assuming that failures which would discredit a human should discredit the machine in the same way." - https://simonwillison.net/2025/Mar/11/using-llms-for-code/#s...
Have you actually extensively tried using any LLMs for help with Blender/Photoshop/other apps, or are you just speaking based on your intuition?
I use Claude for help with Blender all the time and it's amazing, for the most part. It has in-depth knowledge of the UI and of many specific technical ways of doing things. The main thing it gets stuck on is UI changes between versions, and to be fair, I get stuck on that too.
I will try this same query on Claude tomorrow when I'm in my office. I suspect it'll get it wrong as well, but it's not so much the wrong answer I had an issue with as how persistent Gemini was in refusing to admit error and making it seem like I was the one at fault.
I've tried it for a few different GUI things (and "how do I do X on website Y" things) with very mixed results. I've not used it for Blender.
Really interesting to hear that Claude does well at this kind of problem! Maybe that's thanks to training related to their Claude Computer Use research last year. https://simonwillison.net/2024/Oct/22/computer-use/
Crazy that it's a keyword implemented in client-side code that expands the thinking budget, and that a light touch of reverse engineering was required to find it.
"Ask Claude to make a plan for how to approach a specific problem. We recommend using the word "think" to trigger extended thinking mode, which gives Claude additional computation time to evaluate alternatives more thoroughly. These specific phrases are mapped directly to increasing levels of thinking budget in the system: "think" < "think hard" < "think harder" < "ultrathink." Each level allocates progressively more thinking budget for Claude to use."
https://www.anthropic.com/engineering/claude-code-best-pract...
I don't know what the max allowable "budget_tokens" is for Claude 3.7 Thinking mode, but the SDK shows an example of 32k which matches up with the article's findings.
I included that de-obfuscated code in my post: https://simonwillison.net/2025/Apr/19/claude-code-best-pract...
Perhaps they should switch to the metric thinking system.
Gigathinking and terathinking should be on the menu as well.