|
| 1 | +{ |
| 2 | + "cells": [ |
| 3 | + { |
| 4 | + "cell_type": "markdown", |
| 5 | + "metadata": {}, |
| 6 | + "source": [ |
| 7 | + "# Using Opik with Agent Spec\n", |
| 8 | + "\n", |
| 9 | + "[Agent Spec](https://oracle.github.io/agent-spec/development/agentspec/index.html) is a portable configuration language for defining agentic systems (agents, tools, and structured workflows).\n", |
| 10 | + "\n", |
| 11 | +    "In this notebook, we will build a simple Agent Spec agent and use Opik's `AgentSpecInstrumentor` to capture traces of the agent's tool calls and LLM generations."
| 12 | + ] |
| 13 | + }, |
| 14 | + { |
| 15 | + "cell_type": "markdown", |
| 16 | + "metadata": {}, |
| 17 | + "source": [ |
| 18 | + "## Creating an account on Comet.com\n", |
| 19 | + "\n", |
| 20 | +    "[Comet](https://www.comet.com/site?from=llm&utm_source=opik&utm_medium=colab&utm_content=agentspec&utm_campaign=opik) provides a hosted version of the Opik platform. [Simply create an account](https://www.comet.com/signup?from=llm&utm_source=opik&utm_medium=colab&utm_content=agentspec&utm_campaign=opik) and grab your API key.\n",
| 21 | + "\n", |
| 22 | + "> You can also run the Opik platform locally, see the [installation guide](https://www.comet.com/docs/opik/self-host/overview/?from=llm&utm_source=opik&utm_medium=colab&utm_content=agentspec&utm_campaign=opik) for more information." |
| 23 | + ] |
| 24 | + }, |
| 25 | + { |
| 26 | + "cell_type": "code", |
| 27 | + "execution_count": null, |
| 28 | + "metadata": {}, |
| 29 | + "outputs": [], |
| 30 | + "source": "%pip install --upgrade opik \"pyagentspec[langgraph]\" opentelemetry-sdk opentelemetry-instrumentation" |
| 31 | + }, |
| 32 | + { |
| 33 | + "cell_type": "code", |
| 34 | + "execution_count": null, |
| 35 | + "metadata": {}, |
| 36 | + "outputs": [], |
| 37 | + "source": [ |
| 38 | + "import opik\n", |
| 39 | + "\n", |
| 40 | + "opik.configure(use_local=False)" |
| 41 | + ] |
| 42 | + }, |
| 43 | + { |
| 44 | + "cell_type": "markdown", |
| 45 | + "metadata": {}, |
| 46 | + "source": [ |
| 47 | + "## Preparing our environment\n", |
| 48 | + "\n", |
| 49 | + "This demo uses OpenAI as the LLM provider. Set your OpenAI API key as an environment variable:" |
| 50 | + ] |
| 51 | + }, |
| 52 | + { |
| 53 | + "cell_type": "code", |
| 54 | + "execution_count": null, |
| 55 | + "metadata": {}, |
| 56 | + "outputs": [], |
| 57 | + "source": [ |
| 58 | + "import os\n", |
| 59 | + "import getpass\n", |
| 60 | + "\n", |
| 61 | + "if \"OPENAI_API_KEY\" not in os.environ:\n", |
| 62 | + " os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API key: \")" |
| 63 | + ] |
| 64 | + }, |
| 65 | + { |
| 66 | + "cell_type": "markdown", |
| 67 | + "metadata": {}, |
| 68 | + "source": [ |
| 69 | + "## Define an Agent Spec agent\n", |
| 70 | + "\n", |
| 71 | + "We'll define a small calculator agent with a couple of tools:" |
| 72 | + ] |
| 73 | + }, |
| 74 | + { |
| 75 | + "cell_type": "code", |
| 76 | + "execution_count": null, |
| 77 | + "metadata": {}, |
| 78 | + "outputs": [], |
| 79 | + "source": [ |
| 80 | + "from pyagentspec.agent import Agent\n", |
| 81 | + "from pyagentspec.llms import OpenAiConfig\n", |
| 82 | + "from pyagentspec.property import FloatProperty\n", |
| 83 | + "from pyagentspec.tools import ServerTool\n", |
| 84 | + "\n", |
| 85 | + "\n", |
| 86 | + "def build_agentspec_agent() -> Agent:\n", |
| 87 | + " tools = [\n", |
| 88 | + " ServerTool(\n", |
| 89 | + " name=\"sum\",\n", |
| 90 | + " description=\"Sum two numbers\",\n", |
| 91 | + " inputs=[FloatProperty(title=\"a\"), FloatProperty(title=\"b\")],\n", |
| 92 | + " outputs=[FloatProperty(title=\"result\")],\n", |
| 93 | + " ),\n", |
| 94 | + " ServerTool(\n", |
| 95 | + " name=\"subtract\",\n", |
| 96 | + " description=\"Subtract two numbers\",\n", |
| 97 | + " inputs=[FloatProperty(title=\"a\"), FloatProperty(title=\"b\")],\n", |
| 98 | + " outputs=[FloatProperty(title=\"result\")],\n", |
| 99 | + " ),\n", |
| 100 | + " ]\n", |
| 101 | + "\n", |
| 102 | + " return Agent(\n", |
| 103 | + " name=\"calculator_agent\",\n", |
| 104 | + " description=\"An agent that provides assistance with tool use.\",\n", |
| 105 | + " llm_config=OpenAiConfig(name=\"openai-gpt-5-mini\", model_id=\"gpt-5-mini\"),\n", |
| 106 | + " system_prompt=(\n", |
| 107 | + " \"You are a helpful calculator agent.\\n\"\n", |
| 108 | + " \"Your duty is to compute the result of the given operation using tools, \"\n", |
| 109 | + " \"and to output the result.\\n\"\n", |
| 110 | + " \"It's important that you reply with the result only.\\n\"\n", |
| 111 | + " ),\n", |
| 112 | + " tools=tools,\n", |
| 113 | + " )" |
| 114 | + ] |
| 115 | + }, |
| 116 | + { |
| 117 | + "cell_type": "markdown", |
| 118 | + "metadata": {}, |
| 119 | + "source": [ |
| 120 | + "## Run the agent with Opik tracing enabled\n", |
| 121 | + "\n", |
| 122 | + "Wrap the agent execution in `AgentSpecInstrumentor().instrument_context(...)` to capture traces in Opik.\n", |
| 123 | + "\n", |
| 124 | +    "> Agent traces can include prompts, tool inputs/outputs, and messages. If you need to avoid logging sensitive information, pass `mask_sensitive_information=True` instead of `False` in the cell below."
| 125 | + ] |
| 126 | + }, |
| 127 | + { |
| 128 | + "cell_type": "code", |
| 129 | + "execution_count": null, |
| 130 | + "metadata": {}, |
| 131 | + "outputs": [], |
| 132 | + "source": [ |
| 133 | + "from opik.integrations.agentspec import AgentSpecInstrumentor\n", |
| 134 | + "from pyagentspec.adapters.langgraph import AgentSpecLoader\n", |
| 135 | + "\n", |
| 136 | + "agent = build_agentspec_agent()\n", |
| 137 | + "\n", |
| 138 | + "tool_registry = {\n", |
| 139 | + " \"sum\": lambda a, b: a + b,\n", |
| 140 | + " \"subtract\": lambda a, b: a - b,\n", |
| 141 | + "}\n", |
| 142 | + "\n", |
| 143 | + "langgraph_agent = AgentSpecLoader(tool_registry=tool_registry).load_component(agent)\n", |
| 144 | + "\n", |
| 145 | + "with AgentSpecInstrumentor().instrument_context(\n", |
| 146 | + " project_name=\"agentspec-demo\",\n", |
| 147 | + " mask_sensitive_information=False,\n", |
| 148 | + "):\n", |
| 149 | + " messages = []\n", |
| 150 | + "\n", |
| 151 | + " messages.append({\"role\": \"user\", \"content\": \"Compute 13.5 + 2.25 using the sum tool.\"})\n", |
| 152 | + " response = langgraph_agent.invoke(\n", |
| 153 | + " input={\"messages\": messages},\n", |
| 154 | + " config={\"configurable\": {\"thread_id\": \"1\"}},\n", |
| 155 | + " )\n", |
| 156 | + " agent_answer = response[\"messages\"][-1].content.strip()\n", |
| 157 | + " print(\"AGENT >>>\", agent_answer)\n", |
| 158 | + " messages.append({\"role\": \"assistant\", \"content\": agent_answer})\n", |
| 159 | + "\n", |
| 160 | + " messages.append({\"role\": \"user\", \"content\": \"Now compute 10 - 3.5 using the subtract tool.\"})\n", |
| 161 | + " response = langgraph_agent.invoke(\n", |
| 162 | + " input={\"messages\": messages},\n", |
| 163 | + " config={\"configurable\": {\"thread_id\": \"1\"}},\n", |
| 164 | + " )\n", |
| 165 | + " agent_answer = response[\"messages\"][-1].content.strip()\n", |
| 166 | + " print(\"AGENT >>>\", agent_answer)" |
| 167 | + ] |
| 168 | + }, |
| 169 | + { |
| 170 | + "cell_type": "markdown", |
| 171 | + "metadata": {}, |
| 172 | + "source": [ |
| 173 | + "After running the cell above, open Opik and navigate to the `agentspec-demo` project to inspect the trace tree and debug tool usage and LLM generations." |
| 174 | + ] |
| 175 | + } |
| 176 | + ], |
| 177 | + "metadata": { |
| 178 | + "kernelspec": { |
| 179 | + "display_name": "py312_llm_eval", |
| 180 | + "language": "python", |
| 181 | + "name": "python3" |
| 182 | + }, |
| 183 | + "language_info": { |
| 184 | + "codemirror_mode": { |
| 185 | + "name": "ipython", |
| 186 | + "version": 3 |
| 187 | + }, |
| 188 | + "file_extension": ".py", |
| 189 | + "mimetype": "text/x-python", |
| 190 | + "name": "python", |
| 191 | + "nbconvert_exporter": "python", |
| 192 | + "pygments_lexer": "ipython3", |
| 193 | + "version": "3.12.4" |
| 194 | + } |
| 195 | + }, |
| 196 | + "nbformat": 4, |
| 197 | + "nbformat_minor": 4 |
| 198 | +} |