RAN Agent: An Open-Source Multi-Agent System for Radio Access Networks

The RAN Agent is a multi-agent system for advanced data analysis and anomaly detection in Radio Access Networks. This project demonstrates how to build and orchestrate specialized agents to handle different stages of a data pipeline, from data retrieval to advanced analytics and machine learning. The system is designed to interact with data sources like BigQuery, perform complex data manipulations, generate data visualizations, and execute machine learning tasks.


Overview

The RAN Agent provides a sophisticated, conversational interface for Radio Access Network (RAN) data analysis and anomaly detection. Its core strength lies in its multi-agent architecture, which efficiently breaks down complex user requests into smaller, manageable tasks handled by specialized sub-agents.

The agent is powered by Gemini 2.5, leveraging the model's deep, up-to-date world knowledge in networking alongside the user-configured telemetry database. This dual-source approach allows the agent to dynamically provide powerful and specific network insights, such as accurate KPI generation and anomaly detection, tailored to the state of that network.

The RAN Agent's core purpose is to elevate data analysis from simple querying to sophisticated, domain-aware troubleshooting through a natural, conversational flow.
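
As a rough, hedged illustration of this orchestration pattern (not the contents of the project's actual agent.py), a root agent built with the Google Agent Development Kit can delegate to specialized sub-agents along the lines below; the model string and the instructions are placeholders, and only the sub-agent names are taken from the folder structure later in this README:

# Hypothetical sketch of the delegation pattern; model and instructions are
# placeholders, not the project's actual agent definitions.
from google.adk.agents import Agent

insights_agent = Agent(
    model="gemini-2.5-pro",
    name="insights_agent",
    instruction="Generate KPI summaries and visualizations from RAN telemetry.",
)

anomaly_detection_agent = Agent(
    model="gemini-2.5-pro",
    name="anomaly_detection_agent",
    instruction="Flag anomalous cells and time windows in the telemetry data.",
)

root_agent = Agent(
    model="gemini-2.5-pro",
    name="ran_agent",
    instruction="Route RAN analysis requests to the appropriate sub-agent.",
    sub_agents=[insights_agent, anomaly_detection_agent],
)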


Agent Details

The key features of the RAN Agent include:

Feature             Description
------------------  ------------------------------------------
Interaction Type    Conversational
Complexity          Advanced
Agent Type          Multi-Agent
Components          Tools, AgentTools, Session Memory, Gemini
Vertical            Networking

Setup and Installation

Folder Structure

.
├── deployment
│   ├── deploy.py
│   └── deploy_test.py
├── docs
│   ├── CODE_OF_CONDUCT.md
│   └── CONTRIBUTING.md
├── LICENSE
├── pyproject.toml
├── ran_agent
│   ├── agent.py
│   ├── __init__.py
│   ├── profiles
│   │   └── variables.json
│   ├── prompt.py
│   ├── shared_libraries
│   │   ├── constants.py
│   │   └── types.py
│   ├── sub_agents
│   │   ├── insights_agent
│   │   │   ├── agent.py
│   │   │   └── prompt.py
│   │   ├── anomaly_detection_agent
│   │   │   ├── agent.py
│   │   │   └── prompt.py
│   │   └── usecase_agent
│   │       ├── agent.py
│   │       └── prompt.py
│   └── tools
│       ├── bq_utils.py
│       └── memory.py
└── README.md

Prerequisites

  • Python 3.11+
  • Google Cloud Project: You need a Google Cloud account with the Vertex AI API, BigQuery API, and Cloud Storage API enabled (example gcloud commands for enabling these APIs follow this list).
  • Poetry: A dependency management tool for Python. You can install it by following the instructions on the official Poetry website: https://python-poetry.org/docs/.
  • Google Agent Development Kit (ADK) 1.0+
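
If any of these APIs are not yet enabled in your project, they can be turned on with gcloud; the service names below are the standard ones for Vertex AI, BigQuery, and Cloud Storage:

gcloud services enable aiplatform.googleapis.com
gcloud services enable bigquery.googleapis.com
gcloud services enable storage.googleapis.com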

Installation

  1. Clone the Repository:

    Start by cloning the project repository to your local machine:

    git clone https://github.com/GoogleCloudPlatform/ran-agent.git
    cd ran-agent
  2. Install Dependencies with Poetry:

    This command reads the pyproject.toml file and installs all the necessary dependencies into a virtual environment managed by Poetry.

    poetry install

    Note for Linux users: If you get an error related to keyring during the installation, you can disable it by running the following command:

    poetry config keyring.enabled false
  3. Set up Environment Variables:

    Rename the file .env-example to .env and fill in the required values. This file will store your configuration settings. (A short sketch after this list illustrates how these values might be read at runtime.)

    # Choose Model Backend: 0 -> ML Dev, 1 -> Vertex
    GOOGLE_GENAI_USE_VERTEXAI=1
    
    # Vertex backend config
    GOOGLE_CLOUD_PROJECT='YOUR_CLOUD_PROJECT_ID'
    GOOGLE_CLOUD_LOCATION='us-central1'
    
    # BigQuery config
    BQ_PROJECT_ID='YOUR_BIGQUERY_PROJECT_ID'
    BQ_DATASET_ID='YOUR_BIGQUERY_DATASET_ID'
    
    # Code Interpreter extension name (optional)
    CODE_INTERPRETER_EXTENSION_NAME=''
    • GOOGLE_CLOUD_PROJECT: Your Google Cloud project ID.
    • GOOGLE_CLOUD_LOCATION: The region where your Vertex AI resources are located.
    • BQ_PROJECT_ID: The project ID for your BigQuery instance.
    • BQ_DATASET_ID: The ID of the BigQuery dataset you wish to connect to.
    • CODE_INTERPRETER_EXTENSION_NAME: (Optional) The full resource name of a pre-existing Code Interpreter extension in Vertex AI.
  4. Authenticate your GCloud account:

    gcloud auth application-default login
  5. Activate the Poetry Shell:

    This command activates the project's virtual environment so you can run subsequent commands inside it. (Note: in Poetry 2.x, poetry env activate only prints the activation command; evaluate its output, for example eval "$(poetry env activate)", to apply it.)

    poetry env activate
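
Returning to the environment variables from step 3, the sketch below shows one plausible way those values are consumed at runtime. It is an assumption about the pattern rather than the project's actual code (the real loading logic presumably lives in ran_agent/shared_libraries/constants.py), and it assumes python-dotenv is available:

# Hypothetical illustration of reading the .env configuration from step 3.
import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv()  # reads the .env file from the current working directory

GOOGLE_CLOUD_PROJECT = os.getenv("GOOGLE_CLOUD_PROJECT")
GOOGLE_CLOUD_LOCATION = os.getenv("GOOGLE_CLOUD_LOCATION", "us-central1")
BQ_PROJECT_ID = os.getenv("BQ_PROJECT_ID")
BQ_DATASET_ID = os.getenv("BQ_DATASET_ID")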

Running the Agent

You can interact with the agent using the ADK command-line interface or a local web UI.

Using the ADK CLI

From the project's working directory, you can run the agent in your terminal:

adk run ran_agent

Using the ADK Web UI

To use the web interface, run the following command:

adk web

This will start a local web server on your machine (by default at http://localhost:8000). Open that URL in a browser and, from the dropdown menu, select ran_agent to start a conversational session with the agent.


Deployment on Vertex AI Agent Engine

This section explains how to deploy your agent to a production environment using Vertex AI Agent Engine.

1. Set up Permissions

You must grant the necessary permissions to the Reasoning Engine Service Agent to allow it to access your BigQuery and Vertex AI resources. Run the following commands, replacing GOOGLE_CLOUD_PROJECT with your project ID and GOOGLE_CLOUD_PROJECT_NUMBER with your project number.

export RE_SA="service-${GOOGLE_CLOUD_PROJECT_NUMBER}@gcp-sa-aiplatform-re.iam.gserviceaccount.com"
gcloud projects add-iam-policy-binding ${GOOGLE_CLOUD_PROJECT} \
    --member="serviceAccount:${RE_SA}" \
    --role="roles/bigquery.user"
gcloud projects add-iam-policy-binding ${GOOGLE_CLOUD_PROJECT} \
    --member="serviceAccount:${RE_SA}" \
    --role="roles/bigquery.dataViewer"
gcloud projects add-iam-policy-binding ${GOOGLE_CLOUD_PROJECT} \
    --member="serviceAccount:${RE_SA}" \
    --role="roles/aiplatform.user"

2. Build the Agent Package

To deploy the agent, you first need to create a .whl file (a Python wheel package). From the project root directory, run the following command:

poetry build --format=wheel --output=deployment

This will create a file named ran_agent-0.1-py3-none-any.whl in the deployment directory.

3. Deploy the Agent

Now, run the deployment script. This will create a staging bucket in your GCP project and deploy the agent to Vertex AI Agent Engine.

cd deployment/
python3 deploy.py --create

Upon successful deployment, the command will print the Agent Engine resource ID, which you will need for testing and management.
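
For reference, the general pattern that deploy.py is likely to follow with the Vertex AI SDK's Agent Engine support looks roughly like the sketch below. This is a hedged illustration, not the script's actual contents; the bucket name, display name, and requirements are placeholders:

# Rough sketch of the usual ADK-on-Agent-Engine deployment flow (not the actual deploy.py).
import vertexai
from vertexai import agent_engines
from vertexai.preview.reasoning_engines import AdkApp

from ran_agent.agent import root_agent  # assumes the package exposes a root_agent

vertexai.init(
    project="YOUR_CLOUD_PROJECT_ID",
    location="us-central1",
    staging_bucket="gs://YOUR_STAGING_BUCKET",  # placeholder; deploy.py creates its own staging bucket
)

app = AdkApp(agent=root_agent, enable_tracing=True)

remote_agent = agent_engines.create(
    agent_engine=app,
    display_name="ran_agent",
    requirements=["google-cloud-aiplatform[adk,agent_engines]"],
    extra_packages=["./ran_agent-0.1-py3-none-any.whl"],  # the wheel built in step 2
)
print(remote_agent.resource_name)  # the Agent Engine resource ID referenced below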

4. Test the Deployed Agent

Once deployed, you can interact with your agent using the provided testing script. Store the agent's resource ID in an environment variable and run the following command:

export RESOURCE_ID=...
export USER_ID=<any string>
python test_deployment.py --resource_id=$RESOURCE_ID --user_id=$USER_ID

This will initiate a conversational session with your deployed agent.
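
Under the hood, querying a deployed Agent Engine instance typically looks like the sketch below; this is an assumption about the pattern rather than the repository's test script, and the prompt is only an example:

# Hypothetical sketch of talking to the deployed agent (not the provided test script).
import os

import vertexai
from vertexai import agent_engines

vertexai.init(
    project=os.environ["GOOGLE_CLOUD_PROJECT"],
    location=os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1"),
)

remote_agent = agent_engines.get(os.environ["RESOURCE_ID"])
session = remote_agent.create_session(user_id=os.environ["USER_ID"])

for event in remote_agent.stream_query(
    user_id=os.environ["USER_ID"],
    session_id=session["id"],
    message="Summarize yesterday's KPI anomalies for my worst-performing cells.",
):
    print(event)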

5. Delete the Deployed Agent

To delete the agent from Vertex AI, use the same deployment script with the delete command, providing the resource ID:

python3 deployment/deploy.py --delete --resource_id=RESOURCE_ID
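
Programmatically, the teardown that the --delete path presumably wraps would look something like this hedged sketch:

# Hedged sketch of deleting the Agent Engine instance via the Vertex AI SDK;
# assumes vertexai.init(...) has been configured as in the deployment sketch above.
from vertexai import agent_engines

agent_engines.get("RESOURCE_ID").delete(force=True)  # force also removes child resources such as sessions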

Troubleshooting

  • Internal Server Errors (500): If you encounter this error while running the agent, simply re-run your last command.
  • SQL Generation Issues: For errors in generated SQL queries, consider including clear descriptions for your tables and columns (a short example follows this list). For large databases, setting up a RAG (Retrieval-Augmented Generation) pipeline for schema linking, by storing your table schema details in a vector store, can significantly improve performance.
  • Code Interpreter Issues: Review the logs for specific errors. If you're interacting with a code interpreter extension directly, ensure you're using base-64 encoding for files and images.
  • Pydantic or Malformed Function Calls: If the agent provides an invalid or malformed response, simply prompt it to "try again." The agent can often self-correct. Similarly, if it seems stuck, a simple prompt like "what's next?" can help.
  • "Wrong Tool" Errors: If the agent attempts to use an incorrect tool, inform it by saying, "that's the wrong tool, try again," and it will usually correct its behavior.

Contributing

Contributions are welcome and highly appreciated! See our Contribution Guide (docs/CONTRIBUTING.md) to get started.


Disclaimer

This agent sample is provided for illustrative purposes only and is not intended for production use. It serves as a basic example and a foundational starting point for developing your own agents. The sample has not been rigorously tested, may contain bugs, and does not include features or optimizations typically required for a production environment (e.g., robust error handling, security measures, scalability, performance considerations, comprehensive logging, or advanced configuration options).

Token Limitation and Feature Advisory: This agent leverages the Gemini model for anomaly detection and, as such, is subject to the model's inherent token limitations, which may be encountered with large inputs (e.g., exceeding 1 million tokens). A more advanced and robust anomaly detection feature is planned for release soon.

Users are solely responsible for any further development, testing, security hardening, and deployment of agents based on this sample. We recommend a thorough review, testing, and the implementation of appropriate safeguards before using any derived agent in a live or critical system.


License

This project, the RAN Agent, is licensed under the Apache License, Version 2.0. A copy of the license can be found in the accompanying LICENSE file.

