
Commit 408b52a

merveenoyan (Merve Noyan), aymeric-roucher, and albertvillanova authored
Add VLM support (#220)
* vlm initial commit
* transformers integration for vlms
* Add webbrowser example and make it work 🥳🥳
* Refactor image support
* Allow modifying agent attributes in callback
* Improve vlm browser example
* time.sleep(0.5) before screenshot to let js animations happen
* test to validate internal workflow for passing images
* Update test_agents.py
* Improve error logging
* Switch to OpenAIServerModel
* Improve the example
* Format
* add docs about steps, callbacks & co
* Add precisions in doc
* Improve browser
* Tiny prompting update
* Fix style
* fix/add test
* refactor
* Fix write_inner_memory_from_logs for OpenAI format
* Add back summary mode
* Make it work with TransformersModel
* Fix test
* Fix loop
* Fix quality
* Fix mutable default argument
* Rename tool_response_message to error_message and append it
* Working browser with firefox
* Use flatten_messages_as_text passed to TransformersModel
* Fix quality
* Document flatten_messages_as_text in docstring
* Working ctrl + f in browser
* Make style
* Fix summary_mode type hint and add to docstring
* Move image functions to tools
* Update docstrings
* Fix type hint
* Fix typo
* Fix type hints
* Make callback call compatible with old single-argument functions
* Revert update_metrics to have a single arg
* Pass keyword args instead of args to callback
* Update webbrowser
* fix for single message case where final message list is empty
* forgot debugger lol
* accommodate VLM-like chat template and fix tests
* Improve example wording
* Style fixes
* clarify naming and fix tests
* test fix
* Fix style
* Add bm25 to fix one of the doc tests
* fix mocking in VL test
* fix bug in fallback
* add transformers model
* remove chrome dir from helium
* Update Transformers example with flatten_messages_as_text
* Add doc for flatten_messages_as_text
* Fix merge error

---------

Co-authored-by: Merve Noyan <[email protected]>
Co-authored-by: Aymeric <[email protected]>
Co-authored-by: Albert Villanova del Moral <[email protected]>
1 parent de7b0ee commit 408b52a

File tree

11 files changed (+613, -121 lines)


docs/source/en/conceptual_guides/react.md

+33-7
@@ -19,10 +19,33 @@ The ReAct framework ([Yao et al., 2022](https://huggingface.co/papers/2210.03629

The name is based on the concatenation of two words, "Reason" and "Act." Indeed, agents following this architecture will solve their task in as many steps as needed, each step consisting of a Reasoning step, then an Action step where it formulates tool calls that will bring it closer to solving the task at hand.

-React process involves keeping a memory of past steps.
+All agents in `smolagents` are based on a single `MultiStepAgent` class, which is an abstraction of the ReAct framework.

-> [!TIP]
-> Read [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) blog post to learn more about multi-step agents.
+At a basic level, this class performs actions in a cycle of the following steps, where existing variables and knowledge are incorporated into the agent logs as described below:
+
+Initialization: the system prompt is stored in a `SystemPromptStep`, and the user query is logged into a `TaskStep`.
+
+While loop (ReAct loop):
+
+- Use `agent.write_inner_memory_from_logs()` to write the agent logs into a list of LLM-readable [chat messages](https://huggingface.co/docs/transformers/en/chat_templating).
+- Send these messages to a `Model` object to get its completion. Parse the completion to get the action (a JSON blob for `ToolCallingAgent`, a code snippet for `CodeAgent`).
+- Execute the action and log the result into memory (an `ActionStep`).
+- At the end of each step, we run all the callback functions defined in `agent.step_callbacks`.
+
+Optionally, when planning is activated, the plan can be periodically revised and stored in a `PlanningStep`. This includes feeding facts about the task at hand to the memory.
+
+For a `CodeAgent`, it looks like the figure below.
+
+<div class="flex justify-center">
+    <img
+        class="block dark:hidden"
+        src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/codeagent_docs.png"
+    />
+    <img
+        class="hidden dark:block"
+        src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/codeagent_docs.png"
+    />
+</div>
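
To make the control flow concrete, here is a stripped-down sketch of that loop. It is not the actual `MultiStepAgent` implementation: the model, action parsing, and execution are stubbed out, and only the overall flow (write memory to messages, call the model, act, log, run callbacks) follows the description above.

```py
class SketchAgent:
    def __init__(self, model, step_callbacks=None, max_steps=6):
        self.model = model                       # callable: chat messages -> completion text
        self.step_callbacks = step_callbacks or []
        self.max_steps = max_steps
        self.logs = []                           # holds TaskStep / ActionStep equivalents

    def write_inner_memory_from_logs(self):
        # Turn the stored steps into LLM-readable chat messages
        return [{"role": "user", "content": str(entry)} for entry in self.logs]

    def run(self, task):
        self.logs.append({"task": task})                    # TaskStep
        for step_number in range(self.max_steps):           # the ReAct while-loop
            messages = self.write_inner_memory_from_logs()
            completion = self.model(messages)               # get the model completion
            # In smolagents the completion is parsed into an action (JSON blob or
            # code snippet) and executed; here we just fake an observation:
            observation = f"executed: {completion}"
            step = {"step_number": step_number, "observations": observation}
            self.logs.append(step)                          # ActionStep
            for callback in self.step_callbacks:            # e.g. a screenshot callback
                callback(step, agent=self)
            if "final_answer" in completion:
                return observation
        return "Reached max steps"


# Tiny fake model so the sketch runs end to end:
def dummy_model(messages):
    return "final_answer('done')" if len(messages) > 2 else "web_search('smolagents')"


print(SketchAgent(dummy_model).run("What is smolagents?"))
```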

Here is a video overview of how that works:

@@ -39,9 +62,12 @@ Here is a video overview of how that works:

![Framework of a React Agent](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-source-llms-as-agents/ReAct.png)

-We implement two versions of ToolCallingAgent:
-- [`ToolCallingAgent`] generates tool calls as a JSON in its output.
-- [`CodeAgent`] is a new type of ToolCallingAgent that generates its tool calls as blobs of code, which works really well for LLMs that have strong coding performance.
+We implement two versions of agents:
+- [`CodeAgent`] is the preferred type of agent: it generates its tool calls as blobs of code.
+- [`ToolCallingAgent`] generates tool calls as JSON in its output, as is commonly done in agentic frameworks. We keep this option because it can be useful in the narrow cases where a single tool call per step is enough: for instance, in web browsing, you need to wait after each action on the page to monitor how the page changes.
+
+> [!TIP]
+> We also provide an option to run agents in one-shot: just pass `single_step=True` when launching the agent, like `agent.run(your_task, single_step=True)`.

> [!TIP]
-> We also provide an option to run agents in one-shot: just pass `single_step=True` when launching the agent, like `agent.run(your_task, single_step=True)`
+> Read the [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) blog post to learn more about multi-step agents.
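
For instance, switching between the two agent types (or between looping and one-shot mode) only changes the class you instantiate. A minimal usage sketch; the default model and the search tool used here are illustrative and not part of this diff:

```py
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel, ToolCallingAgent

model = HfApiModel()  # any Model subclass works: TransformersModel, LiteLLMModel, OpenAIServerModel, ...

code_agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)
json_agent = ToolCallingAgent(tools=[DuckDuckGoSearchTool()], model=model)

code_agent.run("How long would a leopard at full speed take to cross Pont des Arts?")
# One-shot mode: a single step, no ReAct loop
code_agent.run("Translate 'hello' to French.", single_step=True)
```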

examples/vlm_web_browser.py

+222
@@ -0,0 +1,222 @@
import os
from io import BytesIO
from time import sleep

import helium
from dotenv import load_dotenv
from PIL import Image
from selenium import webdriver
from selenium.common.exceptions import ElementNotInteractableException, TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

from smolagents import CodeAgent, LiteLLMModel, OpenAIServerModel, TransformersModel, tool  # noqa: F401
from smolagents.agents import ActionStep


load_dotenv()


# Let's use Qwen2-VL-72B via an inference provider like Fireworks AI
model = OpenAIServerModel(
    api_key=os.getenv("FIREWORKS_API_KEY"),
    api_base="https://api.fireworks.ai/inference/v1",
    model_id="accounts/fireworks/models/qwen2-vl-72b-instruct",
)

# You can also use a closed-source model
# model = LiteLLMModel(
#     model_id="gpt-4o",
#     api_key=os.getenv("OPENAI_API_KEY"),
# )

# Locally, a good candidate is Qwen2-VL-7B-Instruct
# (flatten_messages_as_text=False keeps images as structured chat content instead of flattening everything to plain text)
# model = TransformersModel(
#     model_id="Qwen/Qwen2-VL-7B-Instruct",
#     device_map="auto",
#     flatten_messages_as_text=False,
# )


# Prepare callback
def save_screenshot(step_log: ActionStep, agent: CodeAgent) -> None:
    sleep(1.0)  # Let JavaScript animations happen before taking the screenshot
    driver = helium.get_driver()
    current_step = step_log.step_number
    if driver is not None:
        # Remove screenshots from older steps to keep memory processing lean
        for previous_step in agent.logs:
            if isinstance(previous_step, ActionStep) and previous_step.step_number <= current_step - 2:
                previous_step.observations_images = None
        png_bytes = driver.get_screenshot_as_png()
        image = Image.open(BytesIO(png_bytes))
        print(f"Captured a browser screenshot: {image.size} pixels")
        step_log.observations_images = [image.copy()]  # Create a copy to ensure it persists, important!

    # Update observations with the current URL
    url_info = f"Current url: {driver.current_url}"
    step_log.observations = url_info if step_log.observations is None else step_log.observations + "\n" + url_info
    return


# Initialize driver and agent
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--force-device-scale-factor=1")
chrome_options.add_argument("--window-size=1000,1300")
chrome_options.add_argument("--disable-pdf-viewer")

driver = helium.start_chrome(headless=False, options=chrome_options)

# Initialize tools


@tool
def search_item_ctrl_f(text: str, nth_result: int = 1) -> str:
    """
    Searches for text on the current page via Ctrl + F and jumps to the nth occurrence.
    Args:
        text: The text to search for
        nth_result: Which occurrence to jump to (default: 1)
    """
    elements = driver.find_elements(By.XPATH, f"//*[contains(text(), '{text}')]")
    if nth_result > len(elements):
        raise Exception(f"Match n°{nth_result} not found (only {len(elements)} matches found)")
    result = f"Found {len(elements)} matches for '{text}'. "
    elem = elements[nth_result - 1]
    driver.execute_script("arguments[0].scrollIntoView(true);", elem)
    result += f"Focused on element {nth_result} of {len(elements)}"
    return result


@tool
def go_back() -> None:
    """Goes back to the previous page."""
    driver.back()


@tool
def close_popups() -> str:
    """
    Closes any visible modal or pop-up on the page. Use this to dismiss pop-up windows! This does not work on cookie consent banners.
    """
    # Common selectors for modal close buttons and overlay elements
    modal_selectors = [
        "button[class*='close']",
        "[class*='modal']",
        "[class*='modal'] button",
        "[class*='CloseButton']",
        "[aria-label*='close']",
        ".modal-close",
        ".close-modal",
        ".modal .close",
        ".modal-backdrop",
        ".modal-overlay",
        "[class*='overlay']",
    ]

    wait = WebDriverWait(driver, timeout=0.5)

    for selector in modal_selectors:
        try:
            elements = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, selector)))

            for element in elements:
                if element.is_displayed():
                    try:
                        # Try clicking with JavaScript as it's more reliable
                        driver.execute_script("arguments[0].click();", element)
                    except ElementNotInteractableException:
                        # If the JavaScript click fails, try a regular click
                        element.click()

        except TimeoutException:
            continue
        except Exception as e:
            print(f"Error handling selector {selector}: {str(e)}")
            continue
    return "Modals closed"


agent = CodeAgent(
    tools=[go_back, close_popups, search_item_ctrl_f],
    model=model,
    additional_authorized_imports=["helium"],
    step_callbacks=[save_screenshot],
    max_steps=20,
    verbosity_level=2,
)

helium_instructions = """
You can use helium to access websites. Don't bother about the helium driver, it's already managed.
First you need to import everything from helium, then you can do other actions!
Code:
```py
from helium import *
go_to('github.com/trending')
```<end_code>

You can directly click clickable elements by inputting the text that appears on them.
Code:
```py
click("Top products")
```<end_code>

If it's a link:
Code:
```py
click(Link("Top products"))
```<end_code>

If you try to interact with an element and it's not found, you'll get a LookupError.
In general, stop your action after each button click to see what happens on your screenshot.
Never try to log in to a page.

To scroll up or down, use scroll_down or scroll_up with the number of pixels to scroll as an argument.
Code:
```py
scroll_down(num_pixels=1200)  # This will scroll one viewport down
```<end_code>

When you have pop-ups with a cross icon to close, don't try to click the close icon by finding its element or targeting an 'X' element (this most often fails).
Just use your built-in tool `close_popups` to close them:
Code:
```py
close_popups()
```<end_code>

You can use .exists() to check for the existence of an element. For example:
Code:
```py
if Text('Accept cookies?').exists():
    click('I accept')
```<end_code>

Proceed in several steps rather than trying to solve the task in one shot.
And at the end, only when you have your answer, return your final answer.
Code:
```py
final_answer("YOUR_ANSWER_HERE")
```<end_code>

If pages seem stuck on loading, you might have to wait, for instance `import time` and run `time.sleep(5.0)`. But don't overuse this!
To list elements on a page, DO NOT try code-based element searches like 'contributors = find_all(S("ol > li"))': just look at the latest screenshot you have and read it visually, or use your tool search_item_ctrl_f.
Of course, you can act on buttons like a user would do when navigating.
After each code blob you write, you will be automatically provided with an updated screenshot of the browser and the current browser url.
But beware that the screenshot will only be taken at the end of the whole action; it won't see intermediate states.
Don't kill the browser.
"""

# Run the agent!

github_request = """
I'm trying to find how hard I have to work to get a repo in github.com/trending.
Can you navigate to the profile for the top author of the top trending repo, and give me their total number of commits over the last year?
"""  # The agent is able to achieve this request only when powered by GPT-4o or Claude-3.5-sonnet.

search_request = """
Please navigate to https://en.wikipedia.org/wiki/Chicago and give me a sentence containing the word "1992" that mentions a construction accident.
"""

agent.run(search_request + helium_instructions)
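
The step callback above receives both the step log and the agent; per the commit notes ("Make callback call compatible with old single-argument functions", "Pass keyword args instead of args to callback"), older single-argument callbacks keep working too. A minimal sketch of both shapes; only the signatures are taken from this commit, the dispatch details are an assumption:

```py
from smolagents.agents import ActionStep


# Old-style callback: only the step log
def print_step(step_log: ActionStep) -> None:
    print(f"Finished step {step_log.step_number}")


# New-style callback: also receives the agent, so it can modify agent state,
# as save_screenshot does above when it prunes old observations_images
def prune_screenshots(step_log: ActionStep, agent) -> None:
    for previous_step in agent.logs:
        if isinstance(previous_step, ActionStep) and previous_step.step_number <= step_log.step_number - 2:
            previous_step.observations_images = None


# Both can be registered together, e.g.:
# agent = CodeAgent(..., step_callbacks=[print_step, prune_screenshots])
```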

pyproject.toml

+2-1
@@ -62,8 +62,9 @@ all = [
test = [
    "ipython>=8.31.0", # for interactive environment tests
    "pytest>=8.1.0",
-    "python-dotenv>=1.0.1", # For test_all_docs
+    "python-dotenv>=1.0.1",  # For test_all_docs
    "smolagents[all]",
+    "rank-bm25", # For test_all_docs
]
dev = [
    "smolagents[quality,test]",

0 commit comments
