diff --git a/src/oss/python/integrations/providers/all_providers.mdx b/src/oss/python/integrations/providers/all_providers.mdx
index e5a06c6116..c564f98d00 100644
--- a/src/oss/python/integrations/providers/all_providers.mdx
+++ b/src/oss/python/integrations/providers/all_providers.mdx
@@ -2432,6 +2432,14 @@ Browse the complete collection of integrations available for Python. LangChain P
Web scraping API and proxy service.
+
+ Web scraping and proxy services.
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+Configure credentials by setting the following environment variable:
+
+* `SCRAPINGBEE_API_KEY`
+
+You can get your API key and 1,000 free credits by signing up [here](https://app.scrapingbee.com/account/register).
+
+## Tools
+
+The ScrapingBee integration provides access to the following tools:
+
+* [ScrapeUrlTool](/oss/integrations/tools/scrapingbee_scrapeurl): Scrape the contents of any public website. You can also use this to extract data, capture screenshots, interact with the page before scraping, and capture the internal requests sent by the webpage.
+* [GoogleSearchTool](/oss/integrations/tools/scrapingbee_googlesearch): Search Google to obtain the following types of information: regular search (classic), news, maps, and images.
+* [CheckUsageTool](/oss/integrations/tools/scrapingbee_checkusage): Monitor your ScrapingBee credit or concurrency usage using this tool.
+* [AmazonSearchTool](/oss/integrations/tools/scrapingbee_amazonsearch): Perform a product search on Amazon with options for localization, pagination, and advanced filtering.
+* [AmazonProductTool](/oss/integrations/tools/scrapingbee_amazonproduct): Retrieve detailed information, including reviews, for a specific product on Amazon using its ASIN.
+* [WalmartSearchTool](/oss/integrations/tools/scrapingbee_walmartsearch): Search for products on Walmart with parameters for sorting and price filtering.
+* [WalmartProductTool](/oss/integrations/tools/scrapingbee_walmartproduct): Get specific details and reviews for a Walmart product by its ID.
+* [ChatGPTTool](/oss/integrations/tools/scrapingbee_chatgpt): Send your prompt to ChatGPT with an option to enhance its responses with live web search results.
+* [YouTubeMetadataTool](/oss/integrations/tools/scrapingbee_youtubemetadata): Retrieve comprehensive metadata for a YouTube video including title, description, view count, likes, channel info, publish date, duration, thumbnails, and tags.
+* [YouTubeSearchTool](/oss/integrations/tools/scrapingbee_youtubesearch): Search YouTube with extensive filtering options for video quality (HD, 4K, HDR), duration, upload date, content type (video, channel, playlist), live streams, and more.
+* [YouTubeTrainabilityTool](/oss/integrations/tools/scrapingbee_youtubetrainability): Check whether a YouTube video's content can be used for AI/ML training purposes based on the video's settings and permissions.
+* [YouTubeTranscriptTool](/oss/integrations/tools/scrapingbee_youtubetranscript): Retrieve transcripts/captions for a YouTube video with support for multiple languages and choice between auto-generated or uploader-provided transcripts.
+
+## Tool Options
+
+Most ScrapingBee tools support the following options that control how results are handled:
+
+* `return_content` (boolean, default: `False`): Controls whether the actual content is returned in the response. When set to `False`, only file information is returned to conserve AI tokens. Set to `True` when the agent needs to read and analyze the contents.
+* `results_folder` (string, default: `"scraping_results"`): Base folder path where results are saved. A timestamped subfolder is automatically created for each request.
+
+Example usage:
+
+```python
+# Returns only file information (saves tokens)
+tool.invoke({"query": "example search"})
+
+# Returns the actual content for analysis
+tool.invoke({"query": "example search", "return_content": True})
+
+# Saves results to a custom folder
+tool.invoke({"query": "example search", "results_folder": "my_results", "return_content": True})
+```
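+
+The timestamped-subfolder behavior described above can be sketched as follows (illustrative only; the helper name and timestamp format are assumptions, not the package's actual implementation):
+
+```python
+import os
+from datetime import datetime
+
+
+def make_results_dir(results_folder: str = "scraping_results") -> str:
+    """Create a timestamped subfolder under the base results folder."""
+    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+    path = os.path.join(results_folder, stamp)
+    os.makedirs(path, exist_ok=True)
+    return path
+```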
diff --git a/src/oss/python/integrations/tools/index.mdx b/src/oss/python/integrations/tools/index.mdx
index d4acc4193e..3814b267c3 100644
--- a/src/oss/python/integrations/tools/index.mdx
+++ b/src/oss/python/integrations/tools/index.mdx
@@ -213,6 +213,18 @@ The following platforms provide access to multiple tools and services through a
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/src/oss/python/integrations/tools/scrapingbee_amazonproduct.mdx b/src/oss/python/integrations/tools/scrapingbee_amazonproduct.mdx
new file mode 100644
index 0000000000..7ff4589297
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_amazonproduct.mdx
@@ -0,0 +1,106 @@
+---
+title: ScrapingBee AmazonProductTool
+---
+
+Use this tool to retrieve detailed information for a specific Amazon product using its ASIN (Amazon Standard Identification Number).
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `AmazonProductTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+Configure credentials by setting the following environment variable:
+
+* `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+All ScrapingBee tools require only the API key at instantiation. If it is not set as an environment variable, you can provide it directly here.
+
+Here is how to instantiate the tool:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import AmazonProductTool
+
+if not os.environ.get("SCRAPINGBEE_API_KEY"):
+    os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+amazon_product_tool = AmazonProductTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+### Invoke directly with args
+
+This tool accepts `query` (string, the product ASIN) and `params` (dictionary) as arguments. The `query` argument is required, and the `params` argument is optional. You can use the `params` argument to customize the request. For example, to get the HTML along with the response, you can use the following as `params`:
+
+```python
+{'add_html': True}
+```
+
+For a complete list of acceptable parameters, please visit the [Amazon Product API documentation](https://www.scrapingbee.com/documentation/amazon/#amazon-product-api).
+
+```python
+amazon_product_tool.invoke({"query": "B0DPDRNSXV"})
+
+amazon_product_tool.invoke(
+ {
+ "query": "B0DPDRNSXV",
+ "params": {"add_html": True},
+ }
+)
+```
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import AmazonProductTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+amazon_product_tool = AmazonProductTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [amazon_product_tool])
+
+user_input = "Get the product details for Amazon product B0DPDRNSXV and tell me the product name, price, rating, and number of reviews"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+[Amazon Product API](https://www.scrapingbee.com/documentation/amazon/#amazon-product-api)
diff --git a/src/oss/python/integrations/tools/scrapingbee_amazonsearch.mdx b/src/oss/python/integrations/tools/scrapingbee_amazonsearch.mdx
new file mode 100644
index 0000000000..3da8b1db16
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_amazonsearch.mdx
@@ -0,0 +1,106 @@
+---
+title: ScrapingBee AmazonSearchTool
+---
+
+Use this tool to perform product searches on Amazon with options for localization, pagination, and advanced filtering.
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `AmazonSearchTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+Configure credentials by setting the following environment variable:
+
+* `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+All ScrapingBee tools require only the API key at instantiation. If it is not set as an environment variable, you can provide it directly here.
+
+Here is how to instantiate the tool:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import AmazonSearchTool
+
+if not os.environ.get("SCRAPINGBEE_API_KEY"):
+    os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+amazon_search_tool = AmazonSearchTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+### Invoke directly with args
+
+This tool accepts `query` (string, the search term) and `params` (dictionary) as arguments. The `query` argument is required, and the `params` argument is optional. You can use the `params` argument to customize the request. For example, to search on Amazon's UK site, you can use the following as `params`:
+
+```python
+{'domain': 'co.uk'}
+```
+
+For a complete list of acceptable parameters, please visit the [Amazon Search API documentation](https://www.scrapingbee.com/documentation/amazon/#amazon-search-api).
+
+```python
+amazon_search_tool.invoke({"query": "iphone 16"})
+
+amazon_search_tool.invoke(
+ {
+ "query": "laptop",
+ "params": {"domain": "co.uk", "country": "gb"},
+ }
+)
+```
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import AmazonSearchTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+amazon_search_tool = AmazonSearchTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [amazon_search_tool])
+
+user_input = "Search for the top 5 wireless headphones on Amazon and provide me with the product names and prices"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+[Amazon Search API](https://www.scrapingbee.com/documentation/amazon/#amazon-search-api)
diff --git a/src/oss/python/integrations/tools/scrapingbee_chatgpt.mdx b/src/oss/python/integrations/tools/scrapingbee_chatgpt.mdx
new file mode 100644
index 0000000000..fab9278791
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_chatgpt.mdx
@@ -0,0 +1,106 @@
+---
+title: ScrapingBee ChatGPTTool
+---
+
+Use this tool to send prompts to ChatGPT and receive AI-generated responses, with an optional web search capability to enhance responses with up-to-date information.
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `ChatGPTTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+Configure credentials by setting the following environment variable:
+
+* `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+All ScrapingBee tools require only the API key at instantiation. If it is not set as an environment variable, you can provide it directly here.
+
+Here is how to instantiate the tool:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import ChatGPTTool
+
+if not os.environ.get("SCRAPINGBEE_API_KEY"):
+    os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+chatgpt_tool = ChatGPTTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+### Invoke directly with args
+
+This tool accepts `prompt` (string) and `params` (dictionary) as arguments. The `prompt` argument is required, and the `params` argument is optional. You can use the `params` argument to customize the request. For example, to enable web search to enhance the response with up-to-date information, you can use the following as `params`:
+
+```python
+{'search': True}
+```
+
+For a complete list of acceptable parameters, please visit the [ChatGPT API documentation](https://www.scrapingbee.com/documentation/chatgpt/).
+
+```python
+chatgpt_tool.invoke({"prompt": "Explain the benefits of renewable energy in 100 words"})
+
+chatgpt_tool.invoke(
+ {
+ "prompt": "What are the latest developments in AI?",
+ "params": {"search": True},
+ }
+)
+```
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import ChatGPTTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+chatgpt_tool = ChatGPTTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [chatgpt_tool])
+
+user_input = "Use ChatGPT to explain what quantum computing is and its potential applications in 150 words"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+[ChatGPT API](https://www.scrapingbee.com/documentation/chatgpt/)
diff --git a/src/oss/python/integrations/tools/scrapingbee_checkusage.mdx b/src/oss/python/integrations/tools/scrapingbee_checkusage.mdx
new file mode 100644
index 0000000000..a8b5561a20
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_checkusage.mdx
@@ -0,0 +1,97 @@
+---
+title: ScrapingBee CheckUsageTool
+---
+
+Use this tool to keep track of your credit and concurrency usage while scraping the web.
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `CheckUsageTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+Configure credentials by setting the following environment variable:
+
+* `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+The `CheckUsageTool` requires only the API key at instantiation. If it is not set as an environment variable, you can provide it directly here.
+
+Here is how to instantiate the `CheckUsageTool`:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import CheckUsageTool
+
+if not os.environ.get("SCRAPINGBEE_API_KEY"):
+    os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+usage_tool = CheckUsageTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+This tool doesn't require any arguments. Invoking this tool will check your ScrapingBee API usage data and return the following information:
+
+ * `max_api_credit`
+ * `used_api_credit`
+ * `max_concurrency`
+ * `current_concurrency`
+ * `renewal_subscription_date`
+
+```python
+usage_tool.invoke({})
+```
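+
+For instance, the remaining credit balance can be derived from the returned fields (a sketch that assumes the payload is a dict with the keys listed above; the helper and sample values are illustrative):
+
+```python
+def remaining_credits(usage: dict) -> int:
+    """Compute unused API credits from a CheckUsageTool-style payload."""
+    return usage["max_api_credit"] - usage["used_api_credit"]
+
+
+# Sample payload shaped like the fields listed above (values are illustrative)
+usage = {
+    "max_api_credit": 1000,
+    "used_api_credit": 250,
+    "max_concurrency": 5,
+    "current_concurrency": 1,
+    "renewal_subscription_date": "2025-01-01",
+}
+print(remaining_credits(usage))  # 750
+```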
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import CheckUsageTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+usage_tool = CheckUsageTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [usage_tool])
+
+user_input = "How many API credits do I have available in my account?"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+For more details on our `usage` endpoint, please check out this [link](https://www.scrapingbee.com/documentation/#usage-endpoint).
diff --git a/src/oss/python/integrations/tools/scrapingbee_googlesearch.mdx b/src/oss/python/integrations/tools/scrapingbee_googlesearch.mdx
new file mode 100644
index 0000000000..d1b0695399
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_googlesearch.mdx
@@ -0,0 +1,106 @@
+---
+title: ScrapingBee GoogleSearchTool
+---
+
+Use this tool to search on Google for normal search results as well as news, maps, and images.
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `GoogleSearchTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+Configure credentials by setting the following environment variable:
+
+* `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+All ScrapingBee tools require only the API key at instantiation. If it is not set as an environment variable, you can provide it directly here.
+
+Here is how to instantiate the tool:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import GoogleSearchTool
+
+if not os.environ.get("SCRAPINGBEE_API_KEY"):
+    os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+search_tool = GoogleSearchTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+### Invoke directly with args
+
+This tool accepts `search` (string) and `params` (dictionary) as arguments. The `search` argument is required, and the `params` argument is optional. You can use the `params` argument to customize the request. For example, to get news results, you can use the following as `params`:
+
+```python
+{'search_type': 'news'}
+```
+
+For a complete list of acceptable parameters, please visit the [Google Search API documentation](https://www.scrapingbee.com/documentation/google/).
+
+```python
+search_tool.invoke({"search": "LangChain"})
+
+search_tool.invoke(
+    {
+        "search": "LangChain",
+        "params": {"search_type": "news"},
+    }
+)
+```
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import GoogleSearchTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+search_tool = GoogleSearchTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [search_tool])
+
+user_input = "Fetch 5 news about Tesla that includes: headline in 5 words or less, summary in 30 words or less, link, and date"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+[Google Search API](https://www.scrapingbee.com/documentation/google/)
diff --git a/src/oss/python/integrations/tools/scrapingbee_scrapeurl.mdx b/src/oss/python/integrations/tools/scrapingbee_scrapeurl.mdx
new file mode 100644
index 0000000000..94050ee17d
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_scrapeurl.mdx
@@ -0,0 +1,110 @@
+---
+title: ScrapingBee ScrapeUrlTool
+---
+
+This versatile tool can fetch web pages or files, with features such as JavaScript scenarios, data extraction, screenshots, AI queries, and AI extraction.
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `ScrapeUrlTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+Configure credentials by setting the following environment variable:
+
+* `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+All ScrapingBee tools require only the API key at instantiation. If it is not set as an environment variable, you can provide it directly here.
+
+Here is how to instantiate the tool:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import ScrapeUrlTool
+
+if not os.environ.get("SCRAPINGBEE_API_KEY"):
+    os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+scrape_tool = ScrapeUrlTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+### Invoke directly with args
+
+This tool accepts `url` (string) and `params` (dictionary) as arguments. The `url` argument is required, and the `params` argument is optional. You can use the `params` argument to customize the request. For example, to disable JavaScript rendering, you can use the following as `params`:
+
+```python
+{'render_js': False}
+```
+
+For a complete list of acceptable parameters, please visit the [HTML API documentation](https://www.scrapingbee.com/documentation/).
+
+```python
+scrape_tool.invoke({"url": "http://httpbin.org/html"})
+
+scrape_tool.invoke(
+ {
+ "url": "https://treaties.un.org/doc/publication/ctc/uncharter.pdf",
+ "params": {"render_js": False},
+ }
+)
+```
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import ScrapeUrlTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+scrape_tool = ScrapeUrlTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [scrape_tool])
+
+user_input = "Capture the full page screenshot of https://www.langchain.com/"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+For detailed documentation of ScrapingBee's HTML API features and configurations head to the API reference:
+
+ * [HTML API](https://www.scrapingbee.com/documentation/)
+ * [Data Extraction](https://www.scrapingbee.com/documentation/data-extraction/)
+ * [JavaScript Scenario](https://www.scrapingbee.com/documentation/js-scenario/)
diff --git a/src/oss/python/integrations/tools/scrapingbee_walmartproduct.mdx b/src/oss/python/integrations/tools/scrapingbee_walmartproduct.mdx
new file mode 100644
index 0000000000..4736fb2ada
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_walmartproduct.mdx
@@ -0,0 +1,106 @@
+---
+title: ScrapingBee WalmartProductTool
+---
+
+Use this tool to retrieve detailed information for a specific Walmart product using its product ID.
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `WalmartProductTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+Configure credentials by setting the following environment variable:
+
+* `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+All ScrapingBee tools require only the API key at instantiation. If it is not set as an environment variable, you can provide it directly here.
+
+Here is how to instantiate the tool:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import WalmartProductTool
+
+if not os.environ.get("SCRAPINGBEE_API_KEY"):
+    os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+walmart_product_tool = WalmartProductTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+### Invoke directly with args
+
+This tool accepts `product_id` (string, the Walmart product ID) and `params` (dictionary) as arguments. The `product_id` argument is required, and the `params` argument is optional. You can use the `params` argument to customize the request. For example, to check for delivery options in a specific ZIP code, you can use the following as `params`:
+
+```python
+{'delivery_zip': '90210'}
+```
+
+For a complete list of acceptable parameters, please visit the [Walmart Product API documentation](https://www.scrapingbee.com/documentation/walmart/#walmart-product-api).
+
+```python
+walmart_product_tool.invoke({"product_id": "454408250"})
+
+walmart_product_tool.invoke(
+ {
+ "product_id": "454408250",
+ "params": {"add_html": True},
+ }
+)
+```
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import WalmartProductTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+walmart_product_tool = WalmartProductTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [walmart_product_tool])
+
+user_input = "Get the product details for Walmart product 454408250 and tell me the product name, price, rating, and availability"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+[Walmart Product API](https://www.scrapingbee.com/documentation/walmart/#walmart-product-api)
diff --git a/src/oss/python/integrations/tools/scrapingbee_walmartsearch.mdx b/src/oss/python/integrations/tools/scrapingbee_walmartsearch.mdx
new file mode 100644
index 0000000000..6d5d1ef87b
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_walmartsearch.mdx
@@ -0,0 +1,106 @@
+---
+title: ScrapingBee WalmartSearchTool
+---
+
+Use this tool to search for products on Walmart with parameters for sorting, price filtering, and delivery options.
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `WalmartSearchTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+Configure credentials by setting the following environment variable:
+
+* `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+All ScrapingBee tools require only the API key at instantiation. If it is not set as an environment variable, you can provide it directly here.
+
+Here is how to instantiate the tool:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import WalmartSearchTool
+
+if not os.environ.get("SCRAPINGBEE_API_KEY"):
+    os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+walmart_search_tool = WalmartSearchTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+### Invoke directly with args
+
+This tool accepts `query` (string, the search term) and `params` (dictionary) as arguments. The `query` argument is required, and the `params` argument is optional. You can use the `params` argument to customize the request. For example, to sort results by the lowest price, you can use the following as `params`:
+
+```python
+{'sort_by': 'price_low'}
+```
+
+For a complete list of acceptable parameters, please visit the [Walmart Search API documentation](https://www.scrapingbee.com/documentation/walmart/#walmart-search-api).
+
+```python
+walmart_search_tool.invoke({"query": "iphone"})
+
+walmart_search_tool.invoke(
+ {
+ "query": "coffee maker",
+ "params": {"sort_by": "price_low", "min_price": 20, "max_price": 100},
+ }
+)
+```
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import WalmartSearchTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+walmart_search_tool = WalmartSearchTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [walmart_search_tool])
+
+user_input = "Search for the top 5 best-selling TVs on Walmart and provide me with the product names and prices"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+[Walmart Search API](https://www.scrapingbee.com/documentation/walmart/#walmart-search-api)
diff --git a/src/oss/python/integrations/tools/scrapingbee_youtubemetadata.mdx b/src/oss/python/integrations/tools/scrapingbee_youtubemetadata.mdx
new file mode 100644
index 0000000000..344274d70a
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_youtubemetadata.mdx
@@ -0,0 +1,99 @@
+---
+title: ScrapingBee YouTubeMetadataTool
+---
+
+Use this tool to retrieve comprehensive metadata for a YouTube video including title, description, view count, likes, channel info, publish date, duration, thumbnails, and tags.
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `YouTubeMetadataTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+You should configure credentials by setting the following environment variable:
+
+ * `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+All ScrapingBee tools require only the API key at instantiation. If it is not set as an environment variable, you can pass it directly.
+
+Here is how to instantiate the tool:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import YouTubeMetadataTool
+
+# if not os.environ.get("SCRAPINGBEE_API_KEY"):
+# os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+youtube_metadata_tool = YouTubeMetadataTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+### Invoke directly with args
+
+This tool accepts `video_id` (string) as the required argument. The `video_id` is the unique YouTube video identifier found in the URL of any YouTube video (the part after `v=`). For example, for the URL `https://www.youtube.com/watch?v=dQw4w9WgXcQ`, the `video_id` would be `dQw4w9WgXcQ`.
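+
+If you start from a full watch URL, the ID can be pulled out with the Python standard library. This small helper is a hypothetical convenience shown for illustration; it is not part of `langchain-scrapingbee`:
+
+```python
+from urllib.parse import parse_qs, urlparse
+
+
+def extract_video_id(url: str) -> str:
+    """Return the `v` query parameter from a YouTube watch URL."""
+    return parse_qs(urlparse(url).query)["v"][0]
+
+
+video_id = extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
+# video_id == "dQw4w9WgXcQ"
+```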
+
+```python
+youtube_metadata_tool.invoke({"video_id": "dQw4w9WgXcQ"})
+```
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import YouTubeMetadataTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+youtube_metadata_tool = YouTubeMetadataTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [youtube_metadata_tool])
+
+user_input = "Get the metadata for YouTube video dQw4w9WgXcQ and tell me the title, channel name, view count, and publish date"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+[YouTube Metadata API](https://www.scrapingbee.com/documentation/youtube/#youtube-metadata-api)
diff --git a/src/oss/python/integrations/tools/scrapingbee_youtubesearch.mdx b/src/oss/python/integrations/tools/scrapingbee_youtubesearch.mdx
new file mode 100644
index 0000000000..4c01472041
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_youtubesearch.mdx
@@ -0,0 +1,106 @@
+---
+title: ScrapingBee YouTubeSearchTool
+---
+
+Use this tool to search YouTube and retrieve video, channel, or playlist results with extensive filtering options for video quality, duration, upload date, and more.
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `YouTubeSearchTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+You should configure credentials by setting the following environment variable:
+
+ * `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+All ScrapingBee tools require only the API key at instantiation. If it is not set as an environment variable, you can pass it directly.
+
+Here is how to instantiate the tool:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import YouTubeSearchTool
+
+# if not os.environ.get("SCRAPINGBEE_API_KEY"):
+# os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+youtube_search_tool = YouTubeSearchTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+### Invoke directly with args
+
+This tool accepts `search` (string) and `params` (dictionary) as arguments. The `search` argument is required, and the `params` argument is optional. You can use the `params` argument to customize the search with various filters. For example, to search for HD videos sorted by view count, you can use the following as `params`:
+
+```python
+{'hd': True, 'sort_by': 'view_count'}
+```
+
+For a complete list of acceptable parameters, please visit the [YouTube API documentation](https://www.scrapingbee.com/documentation/youtube/).
+
+```python
+youtube_search_tool.invoke({"search": "python programming tutorial"})
+
+youtube_search_tool.invoke(
+ {
+ "search": "machine learning",
+ "params": {"hd": True, "sort_by": "view_count", "upload_date": "this_month"},
+ }
+)
+```
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import YouTubeSearchTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+youtube_search_tool = YouTubeSearchTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [youtube_search_tool])
+
+user_input = "Search for the top 5 most viewed Python programming tutorials on YouTube uploaded this year and provide me with the video titles and channel names"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+[YouTube Search API](https://www.scrapingbee.com/documentation/youtube/#youtube-search-api)
diff --git a/src/oss/python/integrations/tools/scrapingbee_youtubetrainability.mdx b/src/oss/python/integrations/tools/scrapingbee_youtubetrainability.mdx
new file mode 100644
index 0000000000..3d938f6012
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_youtubetrainability.mdx
@@ -0,0 +1,103 @@
+---
+title: ScrapingBee YouTubeTrainabilityTool
+---
+
+Use this tool to check whether a YouTube video's content can be used for AI/ML training purposes based on the video's settings and permissions.
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `YouTubeTrainabilityTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+You should configure credentials by setting the following environment variable:
+
+ * `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+All ScrapingBee tools require only the API key at instantiation. If it is not set as an environment variable, you can pass it directly.
+
+Here is how to instantiate the tool:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import YouTubeTrainabilityTool
+
+# if not os.environ.get("SCRAPINGBEE_API_KEY"):
+# os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+youtube_trainability_tool = YouTubeTrainabilityTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+### Invoke directly with args
+
+This tool accepts `video_id` (string) as the required argument. The `video_id` is the unique YouTube video identifier found in the URL of any YouTube video (the part after `v=`). For example, for the URL `https://www.youtube.com/watch?v=dQw4w9WgXcQ`, the `video_id` would be `dQw4w9WgXcQ`.
+
+```python
+youtube_trainability_tool.invoke({"video_id": "dQw4w9WgXcQ"})
+```
+
+### Permission values
+
+| Value | Description |
+| :--- | :--- |
+| `["all"]` | Training permitted for all parties |
+| `["none"]` or `["None"]` | No training permitted for any party |
+| `["party1", "party2", ...]` | Training permitted only for specific listed parties |
+
+The `etag` value serves as a version identifier for the training status.
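+
+As a sketch of how downstream code might interpret the permission list above (this is pure logic derived from the table; consult the API documentation for the authoritative response schema):
+
+```python
+def training_permitted(permissions: list[str], party: str) -> bool:
+    """Interpret a trainability permission list for a given party."""
+    normalized = [p.lower() for p in permissions]
+    if "all" in normalized:
+        return True  # training permitted for all parties
+    if "none" in normalized:
+        return False  # no training permitted for any party
+    return party.lower() in normalized  # only specific listed parties
+
+
+# training_permitted(["all"], "party1") -> True
+# training_permitted(["None"], "party1") -> False
+```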
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import YouTubeTrainabilityTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+youtube_trainability_tool = YouTubeTrainabilityTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [youtube_trainability_tool])
+
+user_input = "Check if the YouTube video with ID dQw4w9WgXcQ can be used for AI training purposes"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+[YouTube Trainability API](https://www.scrapingbee.com/documentation/youtube/#youtube-trainability-api)
diff --git a/src/oss/python/integrations/tools/scrapingbee_youtubetranscript.mdx b/src/oss/python/integrations/tools/scrapingbee_youtubetranscript.mdx
new file mode 100644
index 0000000000..772fbb5aa7
--- /dev/null
+++ b/src/oss/python/integrations/tools/scrapingbee_youtubetranscript.mdx
@@ -0,0 +1,106 @@
+---
+title: ScrapingBee YouTubeTranscriptTool
+---
+
+Use this tool to retrieve transcripts (captions/subtitles) for a YouTube video, with support for multiple languages and a choice between auto-generated and uploader-provided transcripts.
+
+## Overview
+
+### Integration details
+
+| Class | Package | Serializable | JS support | Package latest |
+| :--- | :--- | :---: | :---: | :---: |
+| `YouTubeTranscriptTool` | [langchain-scrapingbee](https://pypi.org/project/langchain-scrapingbee/) | ✅ | ❌ |  |
+
+
+## Setup
+
+
+```bash pip
+pip install -U langchain-scrapingbee
+```
+
+```bash uv
+uv add langchain-scrapingbee
+```
+
+
+### Credentials
+
+You should configure credentials by setting the following environment variable:
+
+ * `SCRAPINGBEE_API_KEY`
+
+## Instantiation
+
+All ScrapingBee tools require only the API key at instantiation. If it is not set as an environment variable, you can pass it directly.
+
+Here is how to instantiate the tool:
+
+```python
+import getpass
+import os
+from langchain_scrapingbee import YouTubeTranscriptTool
+
+# if not os.environ.get("SCRAPINGBEE_API_KEY"):
+# os.environ["SCRAPINGBEE_API_KEY"] = getpass.getpass("SCRAPINGBEE API key:\n")
+
+youtube_transcript_tool = YouTubeTranscriptTool(api_key=os.environ.get("SCRAPINGBEE_API_KEY"))
+```
+
+## Invocation
+
+### Invoke directly with args
+
+This tool accepts `video_id` (string) and `params` (dictionary) as arguments. The `video_id` argument is required, and the `params` argument is optional. The `video_id` is the unique YouTube video identifier found in the URL of any YouTube video (the part after `v=`). You can use the `params` argument to specify the transcript language and source. For example, to get a Spanish transcript from uploader-provided captions:
+
+```python
+{'language': 'es', 'transcript_origin': 'uploader_provided'}
+```
+
+For a complete list of acceptable parameters, please visit the [YouTube API documentation](https://www.scrapingbee.com/documentation/youtube/).
+
+```python
+youtube_transcript_tool.invoke({"video_id": "ybfyLfI5Ml0"})
+
+youtube_transcript_tool.invoke(
+ {
+ "video_id": "ybfyLfI5Ml0",
+ "params": {"language": "es"}
+ }
+)
+```
+
+## Use within an agent
+
+```python
+import os
+from langchain_scrapingbee import YouTubeTranscriptTool
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langgraph.prebuilt import create_react_agent
+
+if not os.environ.get("GOOGLE_API_KEY") or not os.environ.get("SCRAPINGBEE_API_KEY"):
+ raise ValueError(
+ "Google and ScrapingBee API keys must be set in environment variables."
+ )
+
+llm = ChatGoogleGenerativeAI(temperature=0, model="gemini-2.5-flash")
+scrapingbee_api_key = os.environ.get("SCRAPINGBEE_API_KEY")
+
+youtube_transcript_tool = YouTubeTranscriptTool(api_key=scrapingbee_api_key)
+
+agent = create_react_agent(llm, [youtube_transcript_tool])
+
+user_input = "Get the English transcript for YouTube video dQw4w9WgXcQ and summarize the main points"
+
+# Stream the agent's output step-by-step
+for step in agent.stream(
+ {"messages": user_input},
+ stream_mode="values",
+):
+ step["messages"][-1].pretty_print()
+```
+
+## API reference
+
+[YouTube Transcript API](https://www.scrapingbee.com/documentation/youtube/#youtube-transcript-api)