Hi Zachary!
Your article is truly fantastic. Although I'm Chinese and had to use translation tools to read the whole page, I still found it incredibly engaging—especially the way you present agents like characters in a video game! It’s vivid and memorable.
I have a small question about Golden Rule #2: Don’t Burden Your Warrior with a Junk Drawer (Keep it Concise).
Does the recommended number of tools assigned to an agent/model depend on the scale of the local model being used (for example, Qwen-3B vs. Qwen-8B vs. Qwen-32B)?
In other words, do larger models handle a bigger arsenal of tools more effectively and accurately?
In my real-world scenario, there are many different APIs and functions that need to be integrated. So my “weapon arsenal” might easily exceed 10 tools. Would a larger model help manage this complexity better? Or is it always recommended to keep the toolset as concise as possible, regardless of model size?
Thanks again for your great work!
Thank you!
For the number of tools, it also depends heavily on how complex they are. But a larger model definitely helps a lot with the complexity. I would recommend just testing it out and printing the agent's reasoning process to see if it gets confused.
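To make the "print the reasoning" advice concrete, here is a minimal sketch of logging an agent's tool-selection trace. The `call_model` function is a hypothetical placeholder, not a real API; in practice you would swap in your actual client (e.g. an OpenAI-compatible endpoint serving a local Qwen model) and inspect the printed trace as you grow the toolset.

```python
# Minimal sketch: trace which tool the agent picks and why, so you can
# spot confusion as the toolset grows. `call_model` is a stand-in stub.

def call_model(messages, tools):
    # Placeholder: a real implementation would send `messages` and `tools`
    # to the model and return its reasoning plus the chosen tool.
    return {
        "reasoning": "User asked about weather, so get_weather fits best.",
        "tool_call": "get_weather",
    }

def run_with_trace(user_msg, tools):
    """Run one agent step and print the model's reasoning for inspection."""
    messages = [{"role": "user", "content": user_msg}]
    response = call_model(messages, tools)
    print(f"[tools offered: {len(tools)}]")
    print(f"reasoning: {response['reasoning']}")
    print(f"picked tool: {response['tool_call']}")
    return response

tools = ["get_weather", "search_web", "send_email"]
result = run_with_trace("What's the weather in Shanghai?", tools)
```

Rerunning this with 5, 10, or 15 tools and comparing the printed reasoning is a quick way to find the point where a given model size starts mis-selecting tools.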
Congrats, Zachary! Great article, great explanation, as always. Let me share some of my experience: I never used Cursor for AI coding; I used to go with CLINE/RooCode. But two weeks ago I switched to Claude Code, since their plan is the most affordable and their agent is incredible (once, the agent researched an issue on the web for five minutes and got a correct response). Also, Claude costs only $17 and is kind of unlimited (limited unlimited: after around two full context windows you have to wait a couple of hours to use it again). I will also try the new coding agent "Open Code"; everyone says it is great and also works with Claude Pro. And I'm willing to try a new tool from the dagger.io team/community called "container use": https://github.com/dagger/container-use
I hope this can be useful.
Best Regards
1. You chose a good metaphor.
2. You might be playing a lot of video games.