Renamed HfApiModel to InferenceClientModel #295

Open
wants to merge 1 commit into base: main
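This PR is a one-for-one rename across the cookbook notebooks: every `HfApiModel` import and instantiation becomes `InferenceClientModel`, with constructor arguments unchanged. A minimal before/after sketch of the pattern the diff applies (the model id is taken from the notebooks below):

```python
# Before: the old smolagents class name
# from smolagents import CodeAgent, HfApiModel
# model = HfApiModel("meta-llama/Llama-3.1-70B-Instruct")

# After: the renamed class, same constructor arguments
from smolagents import CodeAgent, InferenceClientModel

model = InferenceClientModel("meta-llama/Llama-3.1-70B-Instruct")
agent = CodeAgent(tools=[], model=model)
```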
6 changes: 3 additions & 3 deletions notebooks/en/agent_data_analyst.ipynb
@@ -42,17 +42,17 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from smolagents import HfApiModel, CodeAgent\n",
"from smolagents import InferenceClientModel, CodeAgent\n",
"from huggingface_hub import login\n",
"import os\n",
"\n",
"login(os.getenv(\"HUGGINGFACEHUB_API_TOKEN\"))\n",
"\n",
"model = HfApiModel(\"meta-llama/Llama-3.1-70B-Instruct\")\n",
"model = InferenceClientModel(\"meta-llama/Llama-3.1-70B-Instruct\")\n",
"\n",
"agent = CodeAgent(\n",
" tools=[],\n",
10 changes: 5 additions & 5 deletions notebooks/en/agent_rag.ipynb
@@ -219,7 +219,7 @@
"- *`tools`*: a list of tools that the agent will be able to call.\n",
"- *`model`*: the LLM that powers the agent.\n",
"\n",
"Our `model` must be a callable that takes as input a list of [messages](https://huggingface.co/docs/transformers/main/chat_templating) and returns text. It also needs to accept a `stop_sequences` argument that indicates when to stop its generation. For convenience, we directly use the `HfApiModel` class provided in the package to get a LLM engine that calls our [Inference API](https://huggingface.co/docs/api-inference/en/index).\n",
"Our `model` must be a callable that takes as input a list of [messages](https://huggingface.co/docs/transformers/main/chat_templating) and returns text. It also needs to accept a `stop_sequences` argument that indicates when to stop its generation. For convenience, we directly use the `InferenceClientModel` class provided in the package to get a LLM engine that calls our [Inference API](https://huggingface.co/docs/api-inference/en/index).\n",
"\n",
"And we use [meta-llama/Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct), served for free on Hugging Face's Inference API!\n",
"\n",
@@ -232,9 +232,9 @@
"metadata": {},
"outputs": [],
"source": [
"from smolagents import HfApiModel, ToolCallingAgent\n",
"from smolagents import InferenceClientModel, ToolCallingAgent\n",
"\n",
"model = HfApiModel(\"meta-llama/Llama-3.1-70B-Instruct\")\n",
"model = InferenceClientModel(\"meta-llama/Llama-3.1-70B-Instruct\")\n",
"\n",
"retriever_tool = RetrieverTool(vectordb)\n",
"agent = ToolCallingAgent(\n",
@@ -263,15 +263,15 @@
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span> <span style=\"font-weight: bold\">How can I push a model to the Hub?</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">╰─ HfApiModel - meta-llama/Llama-3.1-70B-Instruct ────────────────────────────────────────────────────────────────╯</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">╰─ InferenceClientModel - meta-llama/Llama-3.1-70B-Instruct ────────────────────────────────────────────────────────────────╯</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[38;2;212;183;2m╭─\u001b[0m\u001b[38;2;212;183;2m───────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m \u001b[0m\u001b[1;38;2;212;183;2mNew run\u001b[0m\u001b[38;2;212;183;2m \u001b[0m\u001b[38;2;212;183;2m───────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m─╮\u001b[0m\n",
"\u001b[38;2;212;183;2m│\u001b[0m \u001b[38;2;212;183;2m│\u001b[0m\n",
"\u001b[38;2;212;183;2m│\u001b[0m \u001b[1mHow can I push a model to the Hub?\u001b[0m \u001b[38;2;212;183;2m│\u001b[0m\n",
"\u001b[38;2;212;183;2m│\u001b[0m \u001b[38;2;212;183;2m│\u001b[0m\n",
"\u001b[38;2;212;183;2m╰─\u001b[0m\u001b[38;2;212;183;2m HfApiModel - meta-llama/Llama-3.1-70B-Instruct \u001b[0m\u001b[38;2;212;183;2m───────────────────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m─╯\u001b[0m\n"
"\u001b[38;2;212;183;2m╰─\u001b[0m\u001b[38;2;212;183;2m InferenceClientModel - meta-llama/Llama-3.1-70B-Instruct \u001b[0m\u001b[38;2;212;183;2m───────────────────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m─╯\u001b[0m\n"
]
},
"metadata": {},
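The agent_rag notebook text above spells out the contract that `InferenceClientModel` satisfies: a callable that takes a list of chat messages plus a `stop_sequences` argument and returns text. A minimal sketch of a hand-rolled equivalent, assuming `huggingface_hub.InferenceClient.chat_completion`; the function name and `max_tokens` value are illustrative, not part of this PR:

```python
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Llama-3.1-70B-Instruct")

def custom_model(messages, stop_sequences=None):
    # Forward the chat messages and stop sequences to the Inference API
    response = client.chat_completion(
        messages, stop=stop_sequences, max_tokens=1024
    )
    # Return only the generated text, as the agent expects
    return response.choices[0].message.content
```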
16 changes: 8 additions & 8 deletions notebooks/en/agent_text_to_sql.ipynb
@@ -160,7 +160,7 @@
"\n",
"We use the `CodeAgent`, which is `transformers.agents`' main agent class: an agent that writes actions in code and can iterate on previous output according to the ReAct framework.\n",
"\n",
"The `llm_engine` is the LLM that powers the agent system. `HfApiModel` allows you to call LLMs using Hugging Face's Inference API, either via Serverless or Dedicated endpoint, but you could also use any proprietary API: check out [this other cookbook](agent_change_llm) to learn how to adapt it."
"The `llm_engine` is the LLM that powers the agent system. `InferenceClientModel` allows you to call LLMs using Hugging Face's Inference API, either via Serverless or Dedicated endpoint, but you could also use any proprietary API: check out [this other cookbook](agent_change_llm) to learn how to adapt it."
]
},
{
@@ -169,11 +169,11 @@
"metadata": {},
"outputs": [],
"source": [
"from smolagents import CodeAgent, HfApiModel\n",
"from smolagents import CodeAgent, InferenceClientModel\n",
"\n",
"agent = CodeAgent(\n",
" tools=[sql_engine],\n",
" model=HfApiModel(\"meta-llama/Meta-Llama-3-8B-Instruct\"),\n",
" model=InferenceClientModel(\"meta-llama/Meta-Llama-3-8B-Instruct\"),\n",
")"
]
},
@@ -189,15 +189,15 @@
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span> <span style=\"font-weight: bold\">Can you give me the name of the client who got the most expensive receipt?</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">╰─ HfApiModel - meta-llama/Meta-Llama-3-8B-Instruct ──────────────────────────────────────────────────────────────╯</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">╰─ InferenceClientModel - meta-llama/Meta-Llama-3-8B-Instruct ──────────────────────────────────────────────────────────────╯</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[38;2;212;183;2m╭─\u001b[0m\u001b[38;2;212;183;2m───────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m \u001b[0m\u001b[1;38;2;212;183;2mNew run\u001b[0m\u001b[38;2;212;183;2m \u001b[0m\u001b[38;2;212;183;2m───────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m─╮\u001b[0m\n",
"\u001b[38;2;212;183;2m│\u001b[0m \u001b[38;2;212;183;2m│\u001b[0m\n",
"\u001b[38;2;212;183;2m│\u001b[0m \u001b[1mCan you give me the name of the client who got the most expensive receipt?\u001b[0m \u001b[38;2;212;183;2m│\u001b[0m\n",
"\u001b[38;2;212;183;2m│\u001b[0m \u001b[38;2;212;183;2m│\u001b[0m\n",
"\u001b[38;2;212;183;2m╰─\u001b[0m\u001b[38;2;212;183;2m HfApiModel - meta-llama/Meta-Llama-3-8B-Instruct \u001b[0m\u001b[38;2;212;183;2m─────────────────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m─╯\u001b[0m\n"
"\u001b[38;2;212;183;2m╰─\u001b[0m\u001b[38;2;212;183;2m InferenceClientModel - meta-llama/Meta-Llama-3-8B-Instruct \u001b[0m\u001b[38;2;212;183;2m─────────────────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m─╯\u001b[0m\n"
]
},
"metadata": {},
@@ -396,15 +396,15 @@
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span> <span style=\"font-weight: bold\">Which waiter got more total money from tips?</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">│</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">╰─ HfApiModel - Qwen/Qwen2.5-72B-Instruct ────────────────────────────────────────────────────────────────────────╯</span>\n",
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">╰─ InferenceClientModel - Qwen/Qwen2.5-72B-Instruct ────────────────────────────────────────────────────────────────────────╯</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[38;2;212;183;2m╭─\u001b[0m\u001b[38;2;212;183;2m───────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m \u001b[0m\u001b[1;38;2;212;183;2mNew run\u001b[0m\u001b[38;2;212;183;2m \u001b[0m\u001b[38;2;212;183;2m───────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m─╮\u001b[0m\n",
"\u001b[38;2;212;183;2m│\u001b[0m \u001b[38;2;212;183;2m│\u001b[0m\n",
"\u001b[38;2;212;183;2m│\u001b[0m \u001b[1mWhich waiter got more total money from tips?\u001b[0m \u001b[38;2;212;183;2m│\u001b[0m\n",
"\u001b[38;2;212;183;2m│\u001b[0m \u001b[38;2;212;183;2m│\u001b[0m\n",
"\u001b[38;2;212;183;2m╰─\u001b[0m\u001b[38;2;212;183;2m HfApiModel - Qwen/Qwen2.5-72B-Instruct \u001b[0m\u001b[38;2;212;183;2m───────────────────────────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m─╯\u001b[0m\n"
"\u001b[38;2;212;183;2m╰─\u001b[0m\u001b[38;2;212;183;2m InferenceClientModel - Qwen/Qwen2.5-72B-Instruct \u001b[0m\u001b[38;2;212;183;2m───────────────────────────────────────────────────────────────────────\u001b[0m\u001b[38;2;212;183;2m─╯\u001b[0m\n"
]
},
"metadata": {},
@@ -740,7 +740,7 @@
"\n",
"agent = CodeAgent(\n",
" tools=[sql_engine],\n",
" model=HfApiModel(\"Qwen/Qwen2.5-72B-Instruct\"),\n",
" model=InferenceClientModel(\"Qwen/Qwen2.5-72B-Instruct\"),\n",
")\n",
"\n",
"agent.run(\"Which waiter got more total money from tips?\")"
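The agent_text_to_sql notebook also notes that any proprietary API can stand in for the Inference API. A hedged sketch of that swap, assuming smolagents' `LiteLLMModel` wrapper; the `gpt-4o` model id is an illustrative assumption, and `sql_engine` is the tool defined earlier in that notebook:

```python
from smolagents import CodeAgent, LiteLLMModel

# Any LiteLLM-supported provider can replace InferenceClientModel here;
# the model id below is an assumption for illustration.
model = LiteLLMModel(model_id="gpt-4o")

agent = CodeAgent(
    tools=[sql_engine],  # the SQL tool defined earlier in the notebook
    model=model,
)
agent.run("Which waiter got more total money from tips?")
```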