Background
Large language models (LLMs) read and generate text as tokens: frequent character sequences within text and code. Under the hood, Bolt.new is powered primarily by Anthropic's Claude 3.5 Sonnet model, so using Bolt consumes tokens that we must purchase from Anthropic.
Our goal is for Bolt to use as few tokens as possible to accomplish each task, and here's why: 1) AI model tokens are one of our largest expenses, and if fewer are used, we save money; 2) users can get more done with Bolt and become fans/advocates; and 3) ultimately we can attract more users and continue investing in improving the platform!
When users interact with Bolt, tokens are consumed in three primary ways: chat messages between the user and the LLM, the LLM writing code, and the LLM reading the existing code to capture any changes made by the user.
There are numerous product changes we are working on to increase token usage efficiency, and in the meantime there are many tips and tricks you can implement in your workflow to be more token-efficient.
Upcoming Improvements
Optimizing token usage is a high priority for our team, and we are actively exploring several R&D initiatives aimed at improving token usage efficiency automatically behind the scenes. In the meantime, we will be shipping multiple features that improve the user experience in the near term, including file locking and targeting to control which files the AI can modify (shipped) and an improved automated debugging feature (shipped). These improvements, paired with the tips below, should help you manage your tokens more efficiently. Subscribe to this issue to be notified when those new features land.
While we work on these improvements, here are some strategies you can use to maximize token usage efficiency today:
Avoid Repeated Automated Error "Fix" Attempts
Repeatedly clicking the automatic "fix" button can lead to unnecessary token consumption. After each attempt, review the result and refine your next request if needed. Some programming challenges cannot be solved automatically by the AI, so if the automatic fix fails, it is a good idea to do some research and intervene manually.
Add Error Handling To Your Project
If you find yourself stuck in an error loop, a useful strategy is to prompt the AI to enhance error handling and implement detailed logging throughout the problematic area. The AI excels at inserting robust error logs, even at a granular level, such as between functions or key steps. When the error occurs again, these logs give the AI more precise feedback about the root cause, so it can make more accurate adjustments to fix the issue. Credit to @Frankg40 for this one!
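To illustrate, the kind of granular, step-level logging described above might look something like this. This is a hypothetical sketch: the `runStep` helper and the step names are illustrative, not part of Bolt or any real project.

```javascript
// Hypothetical helper: wrap each step of a flow in granular logging so a
// failure pinpoints the exact step that broke, giving the AI precise feedback.
function runStep(name, fn) {
  console.log(`[start] ${name}`);
  try {
    const result = fn();
    console.log(`[ok] ${name}`);
    return result;
  } catch (err) {
    // Log the failing step and the error before re-throwing, so the log
    // shows exactly where the pipeline failed rather than just a stack trace.
    console.error(`[fail] ${name}: ${err.message}`);
    throw err;
  }
}

// Usage: each stage is logged individually (illustrative step names).
const parsed = runStep('parse-config', () => JSON.parse('{"theme":"dark"}'));
const theme = runStep('read-theme', () => parsed.theme);
console.log(theme); // prints "dark"
```

With logging at this granularity, the next time the error occurs the chat transcript contains a `[fail]` line naming the exact step, which is far more useful to the AI than a generic error message.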
Leverage the Rollback Functionality
Use the rollback feature to revert your project to a previous state without consuming tokens. This is essentially an undo button that can take you back to any prior state of your project, which can save time and tokens if something goes wrong. Keep in mind that there is no "redo" function, so be sure you want to revert before using this feature because it is final: all changes made after the rollback point will be permanently removed.
Crawl, Walk, Run
Make sure the basics of your app are scaffolded before describing the details of more advanced functionality for your site.
Use Specific and Focused Prompts
When prompting the AI, be clear and specific. Direct the model to focus on certain files or functions rather than the entire codebase, which can improve token usage efficiency. This approach is not a magic fix, but anecdotally we've seen evidence that it helps. Some specific prompting strategies that other users have reported as helpful are below, and a ton more can be found in the comment thread below:
If you have specific technologies you want to use (e.g., Astro, Tailwind, ShadCN), say so in your initial prompt.
Mention Specific Code Segments or Classes: When possible, refer to specific divs, classes, or functions to guide Bolt to the exact place where you want the changes made. You can do this manually in chat or by highlighting the relevant code in your project and using the "Ask Bolt" functionality.
Use the Prompt Enhancer function: The better the prompt, the higher the quality of the output. Bolt.new can help you improve your prompts automatically with the prompt enhancement feature!
Be Specific About What Should Remain Unchanged: Mention explicitly that no modifications should occur to other parts of the site.
Batch multiple simple-to-explain instructions into one message. For example, you can ask Bolt to change the color scheme, add mobile responsiveness, and safely restart the dev server, all in one message.
Understand Project Size Impact
As your project grows, more tokens are required to keep the AI in sync with your code. Larger projects (and longer chat conversations) demand more resources for the AI to stay aware of the context, so it's important to be mindful of how project size impacts token usage.
Advanced Strategy: Reset the AI Context Window
If the AI seems stuck or unresponsive to commands, consider refreshing the Bolt.new chat page in your browser. This resets the LLM’s context window, clears out prior chat messages, and reloads your code in a fresh chat session. This will clear the chat, so you will need to remind the AI of any context not already captured in the code, but it can help the AI regain focus when it is overwhelmed due to the context window being full.
We appreciate your patience during this beta period and look forward to updating this thread as we ship new functionality and improvements to increase token usage efficiency!