- Training the Tokenizer - 2025-06-03
- Commented in [huggingface/smolagents] on 2025-06-04.
AI Summary: @Vidit-Ostwal has suggested triggering summarization once a set threshold is crossed, specifically after 75% of the token budget has been used, rather than waiting until the full capacity is consumed. Summarizing before the context window fills up is intended to make the process more efficient and responsive. The comment invites feedback on the effectiveness and practicality of this approach.
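A minimal sketch of the suggested threshold behavior, assuming hypothetical `count_tokens` and `summarize_messages` helpers (neither is a smolagents API): once 75% of the budget is used, older turns are collapsed into a single summary message while the system prompt and the latest turns stay verbatim.

```python
# Illustrative sketch of a 75% trigger: compress history before the window is full.
# `count_tokens` and `summarize_messages` are hypothetical helpers, not library APIs.

SUMMARIZATION_THRESHOLD = 0.75  # start summarizing once 75% of the budget is used

def maybe_compress_history(messages, max_context_tokens, count_tokens, summarize_messages):
    used = sum(count_tokens(m["content"]) for m in messages)
    if used < SUMMARIZATION_THRESHOLD * max_context_tokens or len(messages) <= 5:
        return messages  # plenty of room left, keep the full history
    # Keep the system prompt and the most recent turns verbatim,
    # replace everything in between with a single summary message.
    head, recent = messages[:1], messages[-4:]
    summary = summarize_messages(messages[1:-4])
    return head + [{"role": "user", "content": f"Summary of earlier turns: {summary}"}] + recent
```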
- Commented in [crewAIInc/crewAI] on 2025-06-04.
AI Summary: @Vidit-Ostwal has noted using version 0.121.1 of the library while working in a Jupyter Notebook. The comment walks through a content creation setup with Content Planner, Content Writer, and Editor roles, each agent given its own goal and backstory, and tasks that prioritize trends, craft a blog post, and proofread it. Running the workflow from .py files rather than the notebook is recommended for inference.
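For reference, a rough sketch of the kind of Planner/Writer/Editor setup the comment describes, assuming crewai's `Agent`/`Task`/`Crew` interface (around 0.121.x); the role text, goals, and topic are illustrative, and per the comment this would live in a .py script rather than a notebook:

```python
# Hedged sketch of a three-agent content creation crew; field values are illustrative.
from crewai import Agent, Task, Crew

planner = Agent(
    role="Content Planner",
    goal="Plan engaging and factually accurate content on {topic}",
    backstory="You research current trends and outline articles for the writer.",
)
writer = Agent(
    role="Content Writer",
    goal="Write an insightful blog post on {topic} from the planner's outline",
    backstory="You turn outlines into clear, well-structured prose.",
)
editor = Agent(
    role="Editor",
    goal="Proofread the blog post for tone, grammar, and accuracy",
    backstory="You review drafts before publication.",
)

plan = Task(
    description="Prioritize the latest trends on {topic} and produce an outline.",
    expected_output="A content plan with key points and keywords.",
    agent=planner,
)
write = Task(
    description="Craft a blog post on {topic} following the content plan.",
    expected_output="A publish-ready blog post in markdown.",
    agent=writer,
)
edit = Task(
    description="Proofread the blog post and fix any remaining issues.",
    expected_output="A polished final blog post.",
    agent=editor,
)

crew = Crew(agents=[planner, writer, editor], tasks=[plan, write, edit])
result = crew.kickoff(inputs={"topic": "AI agents"})
print(result)
```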
- Commented in [huggingface/smolagents] on 2025-06-04.
AI Summary: @Vidit-Ostwal has expressed appreciation for the smolagents codebase as a valuable learning resource for AI agents. The issue at hand: when smolagents is used to build a knowledge bot connected to various company tools, an excess of outdated messages in the context window triggers irrelevant API calls, driving up costs and degrading performance. The proposed solution is to let the agent optimize its memory by removing or compressing old messages, in particular by summarizing them once the context window limit is reached.
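One way such memory pruning could look, as a hedged sketch: the `agent.memory.steps` attribute, `InferenceClientModel`, and the `reset=False` flag reflect my reading of recent smolagents releases and should be treated as assumptions to check against the installed version.

```python
# Hedged sketch: prune old steps from a smolagents agent's memory so stale
# messages stop inflating the context window. Attribute and class names here
# are assumptions about recent smolagents versions, not guaranteed API.
from smolagents import CodeAgent, InferenceClientModel

MAX_KEPT_STEPS = 10  # keep only the most recent steps verbatim

def prune_memory(agent):
    steps = agent.memory.steps
    if len(steps) > MAX_KEPT_STEPS:
        # Drop the oldest steps; a summarization model could instead compress
        # them into a single synthetic step before discarding the originals.
        agent.memory.steps = steps[-MAX_KEPT_STEPS:]

agent = CodeAgent(tools=[], model=InferenceClientModel())
agent.run("Which internal tool handles deployment requests?")
prune_memory(agent)
# Follow-up queries reuse the (now pruned) memory instead of starting fresh.
agent.run("And who owns that tool?", reset=False)
```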
- Commented in [crewAIInc/crewAI] on 2025-06-04.
AI Summary: @Vidit-Ostwal has reported a recurring issue in which the last task sentence repeats indefinitely, ending in a "RecursionError: maximum recursion depth exceeded." It appears to occur when the connection to the CrewAI telemetry service fails with an SSL certificate verification error. The proposed workaround is to upgrade the CrewAI version in use or, alternatively, to run inference from .py files to sidestep the problem.
- Commented in [crewAIInc/crewAI] on 2025-06-04.
AI Summary: @Vidit-Ostwal has pointed out that in crewai, tools are not sent to litellm as separate tool definitions; instead, they are folded into the description that is sent to litellm, so the entire description is transmitted rather than individual tools. He also noted that the generated descriptions end up being treated as keys, which could be improved through prompt engineering, and proposes either dropping the description or changing how argument details are presented, since the arguments are not sent in the current format.
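To make the distinction concrete, a hedged sketch of the "tools described in the prompt" approach using litellm directly; the tool metadata, model name, and prompt wording are illustrative, and this is not crewai's internal code:

```python
# Hedged sketch: render tool names, arguments, and descriptions into the prompt
# text sent to litellm, instead of passing structured tool definitions.
import litellm

tools = [
    {"name": "search_docs", "description": "Search the internal docs.",
     "args": {"query": "str"}},
]

def render_tool_descriptions(tools):
    lines = []
    for t in tools:
        args = ", ".join(f"{k}: {v}" for k, v in t["args"].items())
        lines.append(f"- {t['name']}({args}): {t['description']}")
    return "\n".join(lines)

system_prompt = (
    "You can call the following tools by name with the listed arguments:\n"
    + render_tool_descriptions(tools)
)

response = litellm.completion(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "system", "content": system_prompt},
              {"role": "user", "content": "Find the deployment guide."}],
)
print(response.choices[0].message.content)
```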
No issues raised recently.
- Opened a PR in [ariG23498/gemma3-object-detection]: Added get_tokenizer_with_new_tokens_func (2025-06-02).
AI Summary: @Vidit-Ostwal has proposed a function, configured directly from a configuration file, for adding new tokens to the tokenizer. Driving the token list from the config makes the tokenizer easier to customize and keeps the setup flexible, since the configuration can be changed without touching the code.
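A hedged sketch of the idea behind the PR's `get_tokenizer_with_new_tokens_func`: read the base model and new tokens from a config file and extend the tokenizer accordingly. The config keys, example tokens, and JSON format are assumptions for illustration, not the PR's actual implementation.

```python
# Hedged sketch: extend a tokenizer with tokens listed in a config file.
# Config keys and example tokens are illustrative, not the PR's actual code.
import json
from transformers import AutoTokenizer

def get_tokenizer_with_new_tokens(config_path: str):
    with open(config_path) as f:
        # e.g. {"base_model": "google/gemma-3-4b-it", "new_tokens": ["<loc_0>", "<loc_1>"]}
        cfg = json.load(f)
    tokenizer = AutoTokenizer.from_pretrained(cfg["base_model"])
    num_added = tokenizer.add_tokens(cfg["new_tokens"], special_tokens=True)
    return tokenizer, num_added

# After adding tokens, resize the model's embeddings to match the new vocabulary:
#   model.resize_token_embeddings(len(tokenizer))
```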
- Starred tokenbender/avataRL on 2025-06-01.
No repositories forked recently.