@@ -0,0 +1,3 @@
PROJECT_ENDPOINT=
MODEL=
LEGALFLY_API_CONNECTION_NAME=
@@ -0,0 +1,36 @@
# LEGALFLY

## Description

Legal insights grounded in trusted sources from your jurisdiction.

## Prerequisites

[Obtain an API key](https://www.legalfly.com/ai-foundry-agents) by filling in the request form. You'll receive the API key by e-mail within 1-2 working days.

## Setup

1. Go to [Azure AI Foundry portal](https://ai.azure.com/) and select your AI Project. Select **Management Center**.

1. Select **+ New connection** on the settings page.
1. Select **Custom keys** under **Other resource types**.

1. Enter the following information to create a connection that stores your LEGALFLY key:
   1. Set **Custom keys** to "x-api-key", with the value being your LEGALFLY API key.
   1. Make sure **is secret** is checked.
   1. Give the connection a name of your choice. You'll use this connection name in your sample code or in the Foundry portal later; the sketch below shows how the sample looks it up.
   1. For the **Access** setting, choose either _this project only_ or _shared to all projects_. Just make sure the project whose endpoint you use in your code has access to this connection.
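
The connection name you choose in the last step is how the code sample finds your stored key at run time. Here is a minimal sketch of that lookup, assuming the endpoint-based `AIProjectClient` constructor and the environment variables described in the next section; the full, runnable version is in [legalfly.py](./legalfly.py):

```python
import os

from azure.ai.agents.models import OpenApiConnectionAuthDetails, OpenApiConnectionSecurityScheme
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# Resolve the custom-keys connection by the name you gave it in the portal.
connection = project_client.connections.get(name=os.environ["LEGALFLY_API_CONNECTION_NAME"])

# The connection ID is what the OpenAPI tool's auth object needs.
auth = OpenApiConnectionAuthDetails(
    security_scheme=OpenApiConnectionSecurityScheme(connection_id=connection.id)
)
```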

## Use LEGALFLY through a code-first experience

1. You can follow the [code sample](./legalfly.py) to use LEGALFLY through the Agents SDK.
1. Before running the sample:
   1. `pip install azure-ai-projects azure-ai-agents azure-identity python-dotenv jsonref`
   1. Set these environment variables with your own values (see the example `.env` below):
      1. PROJECT_ENDPOINT - the Azure AI Foundry project endpoint.
      1. MODEL - the deployment name of the AI model, as found under the "Name" column in the "Models + endpoints" tab of your Azure AI Foundry project.
      1. LEGALFLY_API_CONNECTION_NAME - the name of the connection for the LEGALFLY API.
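
For reference, a filled-in `.env` might look like the following (the values here are placeholders, and `legalfly-api-key` is just an example connection name):

```
PROJECT_ENDPOINT=https://<your-ai-services-resource-name>.services.ai.azure.com/api/projects/<your-project-name>
MODEL=gpt-4o
LEGALFLY_API_CONNECTION_NAME=legalfly-api-key
```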

## Customer Support Contact

[email protected]
@@ -0,0 +1,144 @@
{
"openapi": "3.0.3",
"servers": [
{
"url": "https://public-api.legalfly.com"
}
],
"info": {
"title": "LEGALFLY API documentation",
"description": "Documentation for the public LEGALFLY API",
"version": "0.0.1",
"contact": {
"name": "LEGALFLY",
"url": "https://legalfly.com"
}
},
"tags": [
{
"name": "Legal Counsel",
"description": "Legal insights grounded by trusted sources from your jurisdiction."
}
],
"components": {
"securitySchemes": {
"apiKeyAuth": {
"type": "apiKey",
"in": "header",
"name": "x-api-key"
}
},
"schemas": {
"legal-counsel.body": {
"type": "object",
"properties": {
"query": {
"maxLength": 256,
"type": "string"
},
"jurisdiction": {
"enum": ["US", "UK", "BE", "NL", "FR"]
}
},
"required": ["query"]
},
"legal-counsel.response": {
"type": "object",
"properties": {
"message": {
"type": "string"
},
"sources": {
"type": "array",
"items": {
"type": "object",
"properties": {
"title": {
"type": "string"
},
"url": {
"type": "string"
},
"score": {
"type": "number"
}
},
"required": ["title", "url", "score"]
}
}
},
"required": ["message", "sources"]
}
}
},
"security": [
{
"apiKeyAuth": []
}
],
"paths": {
"/api/v1/legal-counsel": {
"post": {
"parameters": [],
"responses": {
"200": {
"description": "Successful request.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/legal-counsel.response"
}
}
}
},
"400": {
"description": "Bad Request - Body validation failed."
},
"401": {
"description": "Unauthorized - Missing or invalid API key."
},
"429": {
"description": "Too Many Requests - Rate limit is hit."
},
"500": {
"description": "Internal Server Error - Unexpected issue on server."
}
},
"operationId": "getLegalCounsel",
"summary": "Legal Counsel",
"tags": ["Legal Counsel"],
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/legal-counsel.body"
}
},
"multipart/form-data": {
"schema": {
"$ref": "#/components/schemas/legal-counsel.body"
}
},
"text/plain": {
"schema": {
"$ref": "#/components/schemas/legal-counsel.body"
}
}
}
}
}
},
"/health": {
"get": {
"operationId": "getHealth",
"summary": "Health Check",
"responses": {
"200": {
"description": "Successful request."
}
}
}
}
}
}
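
For a quick sanity check of your API key against the spec above, independent of any agent, you can call the endpoint directly. This is a minimal sketch using the `requests` package; the `LEGALFLY_API_KEY` environment variable is assumed only for this standalone example and is not used by the agent sample below, which authenticates through the Foundry connection instead.

```python
import os

import requests

# POST /api/v1/legal-counsel, per the OpenAPI spec above:
# the body takes a required "query" (max 256 chars) and an optional "jurisdiction".
response = requests.post(
    "https://public-api.legalfly.com/api/v1/legal-counsel",
    headers={"x-api-key": os.environ["LEGALFLY_API_KEY"]},  # assumed to hold your raw LEGALFLY API key
    json={"query": "What do I need to start a company in California?", "jurisdiction": "US"},
    timeout=60,
)
response.raise_for_status()

# The response carries a "message" plus the "sources" it is grounded in.
data = response.json()
print(data["message"])
for source in data["sources"]:
    print(f'- {source["title"]} ({source["url"]}), score: {source["score"]}')
```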
@@ -0,0 +1,146 @@
# pylint: disable=line-too-long,useless-suppression
# ------------------------------------
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
# ------------------------------------

"""
DESCRIPTION:
This sample demonstrates how to use agent operations with the
OpenAPI tool from the Azure Agents service using a synchronous client.
To learn more about OpenAPI specs, visit https://learn.microsoft.com/openapi

USAGE:
python legalfly.py

Before running the sample:

pip install azure-ai-projects azure-ai-agents azure-identity python-dotenv jsonref

Set these environment variables with your own values:
1) PROJECT_ENDPOINT - the Azure AI Agents endpoint.
2) MODEL - The deployment name of the AI model, as found under the "Name" column in
the "Models + endpoints" tab in your Azure AI Foundry project.
3) LEGALFLY_API_CONNECTION_NAME - The name of the connection for the LegalFly API.
"""
# <initialization>
# Import necessary libraries
import os
import jsonref
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from azure.ai.agents.models import OpenApiTool, OpenApiConnectionAuthDetails, OpenApiConnectionSecurityScheme
from dotenv import load_dotenv

load_dotenv()

# endpoint should be in the format "https://<your-ai-services-resource-name>.services.ai.azure.com/api/projects/<your-project-name>"
endpoint = os.environ["PROJECT_ENDPOINT"]
model = os.environ.get("MODEL", "gpt-4o")
connection_name = os.environ["LEGALFLY_API_CONNECTION_NAME"]


# Initialize the project client using the project endpoint and default credentials
with AIProjectClient(
    endpoint=endpoint,
    credential=DefaultAzureCredential(exclude_interactive_browser_credential=False),
) as project_client:
    # </initialization>

    # Load the OpenAPI specification for the service from a local JSON file using jsonref to handle references
    with open("./legalfly.json", "r") as f:
        openapi_spec = jsonref.loads(f.read())

    # Resolve the custom-keys connection created in the portal to its connection ID
    conn_id = project_client.connections.get(name=connection_name).id
    # Create Auth object for the OpenApiTool (note that connection or managed identity auth requires additional setup in Azure)
    auth = OpenApiConnectionAuthDetails(security_scheme=OpenApiConnectionSecurityScheme(connection_id=conn_id))

    # Initialize the main OpenAPI tool definition for LEGALFLY legal counsel
    openapi_tool = OpenApiTool(
        name="getLegalCounsel",
        spec=openapi_spec,
        description="LegalFly legal counsel API",
        auth=auth,
    )

    # <agent_creation>
    # --- Agent Creation ---
    # Create an agent configured with the OpenAPI tool definitions
    agent = project_client.agents.create_agent(
        model=model,  # Specify the model deployment
        name="my-agent",  # Give the agent a name
        instructions="You are a helpful AI legal assistant. Act like a friendly person who possesses a lot of legal knowledge.",
        tools=openapi_tool.definitions,  # Provide the list of tool definitions
    )
    print(f"Created agent, ID: {agent.id}")
    # </agent_creation>

    # <thread_management>
    # --- Thread Management ---
    # Create a new conversation thread for the interaction
    thread = project_client.agents.threads.create()
    print(f"Created thread, ID: {thread.id}")

    # Create the initial user message in the thread
    message = project_client.agents.messages.create(
        thread_id=thread.id,
        role="user",
        # Give an example of a user message that the agent can respond to
        content="What do I need to start a company in California?",
    )
    print(f"Created message, ID: {message.id}")
    # </thread_management>

    # <message_processing>
    # --- Message Processing (Run Creation and Auto-processing) ---
    # Create and automatically process the run, handling tool calls internally
    # Note: This differs from the function_tool example where tool calls are handled manually
    run = project_client.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
    print(f"Run finished with status: {run.status}")
    # </message_processing>

    # <tool_execution_loop>  # Note: This section processes completed steps, as create_and_process handles execution
    # --- Post-Run Step Analysis ---
    if run.status == "failed":
        print(f"Run failed: {run.last_error}")

    # Retrieve the steps taken during the run for analysis (list() returns a pageable iterator)
    run_steps = project_client.agents.run_steps.list(thread_id=thread.id, run_id=run.id)

    # Loop through each step to display information
    for step in run_steps:
        print(f"Step {step['id']} status: {step['status']}")

        # Check if there are tool calls recorded in the step details
        step_details = step.get("step_details", {})
        tool_calls = step_details.get("tool_calls", [])

        if tool_calls:
            print("  Tool calls:")
            for call in tool_calls:
                print(f"    Tool Call ID: {call.get('id')}")
                print(f"    Type: {call.get('type')}")

                function_details = call.get("function", {})
                if function_details:
                    print(f"    Function name: {function_details.get('name')}")
        print()  # Add an extra newline between steps for readability
    # </tool_execution_loop>

    # <cleanup>
    # --- Cleanup ---
    # Delete the agent resource to clean up
    project_client.agents.delete_agent(agent.id)
    print("Deleted agent")

    # Fetch and log all messages exchanged during the conversation thread
    messages = project_client.agents.messages.list(thread_id=thread.id)
    for m in messages:
        content = m.get("content", [])
        if content and content[0].get("type") == "text":
            text_value = content[0].get("text", {}).get("value", "")
            print(f"Text: {text_value}")
    # </cleanup>