feat: Add Bitcoin agent and related configurations #373

Open · wants to merge 12 commits into base: main
19 changes: 17 additions & 2 deletions Makefile
@@ -108,6 +108,17 @@ chat-powerpoint-agent: .venv
chat-arxiv-agent: .venv
@ docker compose run abi bash -c 'poetry install && poetry run python -m src.core.apps.terminal_agent.main generic_run_agent ArXivAssistant'

# Update Bitcoin Agent command to use the standard module loading approach
chat-bitcoin-agent: .venv
@ docker compose run abi bash -c 'poetry install && poetry run python -m src.core.apps.terminal_agent.main generic_run_agent BitcoinAssistant'

# Bitcoin agent test commands
test-bitcoin-agent: .venv
@ docker compose run abi python -m src.custom.modules.bitcoin.tests.run_price_validation

test-bitcoin-consensus: .venv
@ docker compose run abi bash -c 'poetry install && poetry run python -m src.custom.modules.bitcoin.tests.test_price_providers'

.DEFAULT_GOAL := chat-supervisor-agent

.PHONY: all test unit-tests integration-tests lint clean
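The diff does not include the provider code behind `test-bitcoin-consensus`, but a consensus check across several price sources typically takes the median quote and discards outliers. A stdlib-only sketch of that idea — the provider names, the 2% tolerance, and the majority quorum rule are all assumptions for illustration, not code from this PR:

```python
from statistics import median

def consensus_price(prices: dict, tolerance: float = 0.02) -> float:
    """Median price across providers, ignoring quotes that deviate from
    the overall median by more than `tolerance` (fractional)."""
    if not prices:
        raise ValueError("no provider quotes supplied")
    mid = median(prices.values())
    agreeing = [p for p in prices.values() if abs(p - mid) / mid <= tolerance]
    # Require a simple majority of providers to agree (quorum rule is an assumption).
    if len(agreeing) < max(2, len(prices) // 2 + 1):
        raise RuntimeError("providers disagree beyond tolerance")
    return median(agreeing)

# Hypothetical quotes; "broken_feed" is an outlier and gets discarded.
quotes = {"coingecko": 67150.0, "binance": 67190.0, "broken_feed": 12.0}
print(consensus_price(quotes))  # → 67170.0
```

A real `test_price_providers` would replace the hard-coded quotes with live provider calls and assert that a consensus value exists.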
@@ -137,7 +148,7 @@ fix-lint:
black src tests
isort src tests

.PHONY: test chat-supervisor-agent chat-support-agent chat-content-agent chat-finance-agent chat-growth-agent chat-opendata-agent chat-operations-agent chat-sales-agent api sh lock add abi-add
.PHONY: test chat-supervisor-agent chat-support-agent chat-content-agent chat-finance-agent chat-growth-agent chat-opendata-agent chat-operations-agent chat-sales-agent chat-bitcoin-agent api sh lock add abi-add

# Build module - copies components to the module directory
build-module:
@@ -165,7 +176,7 @@ build-module:
echo "Creating module README..." && \
echo "# $$MODULE_NAME Module\n\nDescription of your module and its purpose.\n\n## Components\n\n- Integrations\n- Workflows\n- Pipelines\n- Ontologies\n- Assistants\n\n## Usage\n\nHow to use this module.\n" > src/custom/modules/$$MODULE_NAME/README.md && \
echo "Creating example assistant..." && \
echo "from langchain_openai import ChatOpenAI\nfrom abi.services.agent.Agent import Agent, AgentConfiguration, AgentSharedState\nfrom src import secret, services\n\nNAME = \"$$MODULE_NAME_PASCAL Assistant\"\nDESCRIPTION = \"A brief description of what your assistant does.\"\nMODEL = \"o3-mini\" # Or another appropriate model\nTEMPERATURE = 1\nAVATAR_URL = \"https://example.com/avatar.png\"\nSYSTEM_PROMPT = \"\"\"You are the $$MODULE_NAME_PASCAL Assistant. Your role is to help users with tasks related to $$MODULE_NAME.\n\nYou can perform the following tasks:\n- Task 1\n- Task 2\n- Task 3\n\nAlways be helpful, concise, and focus on solving the user's problem.\"\"\"\n\ndef create_agent(shared_state: AgentSharedState = None) -> Agent:\n \"\"\"Creates a new instance of the $$MODULE_NAME_PASCAL Assistant.\"\"\"\n # Configure the underlying chat model\n llm = ChatOpenAI(\n model=MODEL,\n temperature=TEMPERATURE,\n api_key=secret.get_openai_api_key()\n )\n \n # Configure the agent\n config = AgentConfiguration(\n name=NAME,\n description=DESCRIPTION,\n model=MODEL,\n temperature=TEMPERATURE,\n system_prompt=SYSTEM_PROMPT,\n avatar_url=AVATAR_URL,\n shared_state=shared_state or AgentSharedState(),\n )\n \n # Create and return the agent\n agent = Agent(llm=llm, config=config)\n \n # Add tools to the agent (uncomment and modify as needed)\n # workflow = YourWorkflow(YourWorkflowConfiguration())\n # agent.add_tools(workflow.as_tools())\n \n return agent\n\n# For testing purposes\nif __name__ == \"__main__\":\n agent = create_agent()\n agent.run(\"Hello, I need help with $$MODULE_NAME\")\n" > src/custom/modules/$$MODULE_NAME/assistants/$${MODULE_NAME_PASCAL}Assistant.py && \
echo "from langchain_openai import ChatOpenAI\nfrom abi.services.agent.Agent import Agent, AgentConfiguration, AgentSharedState, MemorySaver\nfrom src import secret, services\n\nNAME = \"$$MODULE_NAME_PASCAL Assistant\"\nDESCRIPTION = \"A brief description of what your assistant does.\"\nMODEL = \"o3-mini\" # Or another appropriate model\nTEMPERATURE = 1\nAVATAR_URL = \"https://example.com/avatar.png\"\nSYSTEM_PROMPT = \"\"\"You are the $$MODULE_NAME_PASCAL Assistant. Your role is to help users with tasks related to $$MODULE_NAME.\n\nYou can perform the following tasks:\n- Task 1\n- Task 2\n- Task 3\n\nAlways be helpful, concise, and focus on solving the user's problem.\"\"\"\n\ndef create_agent(shared_state: AgentSharedState = None) -> Agent:\n \"\"\"Creates a new instance of the $$MODULE_NAME_PASCAL Assistant.\"\"\"\n # Configure the underlying chat model\n chat_model = ChatOpenAI(\n model=MODEL,\n temperature=TEMPERATURE,\n api_key=secret.get('OPENAI_API_KEY')\n )\n \n # Create minimal configuration\n config = AgentConfiguration(\n system_prompt=SYSTEM_PROMPT,\n avatar_url=AVATAR_URL\n )\n \n # Create shared state if not provided\n if shared_state is None:\n shared_state = AgentSharedState()\n \n # Create and return the agent\n agent = Agent(\n name=NAME,\n description=DESCRIPTION,\n chat_model=chat_model, # Using chat_model instead of llm\n tools=[], # Empty tools list\n agents=[], # No sub-agents\n state=shared_state,\n configuration=config,\n memory=MemorySaver()\n )\n \n return agent\n\n# For testing purposes\nif __name__ == \"__main__\":\n agent = create_agent()\n agent.run(\"Hello, I need help with $$MODULE_NAME\")\n" > src/custom/modules/$$MODULE_NAME/assistants/$${MODULE_NAME_PASCAL}Assistant.py && \
echo "Creating example workflow..." && \
echo "from pydantic import BaseModel, Field\nfrom typing import Optional, List, Dict, Any\nfrom fastapi import APIRouter\nfrom langchain_core.tools import StructuredTool\n\nclass $${MODULE_NAME_PASCAL}WorkflowConfiguration(BaseModel):\n \"\"\"Configuration for the $$MODULE_NAME_PASCAL Workflow.\"\"\"\n # Add configuration parameters here\n api_key: Optional[str] = Field(None, description=\"API key for external service\")\n\nclass $${MODULE_NAME_PASCAL}WorkflowParameters(BaseModel):\n \"\"\"Parameters for running the $$MODULE_NAME_PASCAL Workflow.\"\"\"\n # Add input parameters here\n query: str = Field(..., description=\"Query to process\")\n max_results: int = Field(10, description=\"Maximum number of results to return\")\n\nclass $${MODULE_NAME_PASCAL}WorkflowResult(BaseModel):\n \"\"\"Result of the $$MODULE_NAME_PASCAL Workflow.\"\"\"\n # Define the structure of the workflow results\n results: List[Dict[str, Any]] = Field(default_factory=list, description=\"List of results\")\n count: int = Field(0, description=\"Number of results found\")\n\nclass $${MODULE_NAME_PASCAL}Workflow:\n \"\"\"A workflow for $$MODULE_NAME operations.\"\"\"\n \n def __init__(self, configuration: $${MODULE_NAME_PASCAL}WorkflowConfiguration):\n self.__configuration = configuration\n \n def as_tools(self) -> list[StructuredTool]:\n \"\"\"Returns a list of LangChain tools for this workflow.\"\"\"\n return [StructuredTool(\n name=\"$${MODULE_NAME}_workflow\",\n description=\"Runs the $$MODULE_NAME_PASCAL workflow with the given parameters\",\n func=lambda **kwargs: self.run($${MODULE_NAME_PASCAL}WorkflowParameters(**kwargs)),\n args_schema=$${MODULE_NAME_PASCAL}WorkflowParameters\n )]\n \n def as_api(self, router: APIRouter) -> None:\n \"\"\"Adds API endpoints for this workflow to the given router.\"\"\"\n @router.post(\"/$${MODULE_NAME_PASCAL}Workflow\")\n def run(parameters: $${MODULE_NAME_PASCAL}WorkflowParameters):\n return self.run(parameters)\n \n def run(self, parameters: $${MODULE_NAME_PASCAL}WorkflowParameters) -> $${MODULE_NAME_PASCAL}WorkflowResult:\n \"\"\"Runs the workflow with the given parameters.\"\"\"\n # Implement your workflow logic here\n # This is a placeholder implementation\n \n # Example placeholder implementation\n results = [\n {\"id\": 1, \"name\": \"Result 1\", \"value\": \"Sample data 1\"},\n {\"id\": 2, \"name\": \"Result 2\", \"value\": \"Sample data 2\"},\n ]\n \n # Take only as many results as requested\n results = results[:parameters.max_results]\n \n return $${MODULE_NAME_PASCAL}WorkflowResult(\n results=results,\n count=len(results)\n )\n\n# For testing purposes\nif __name__ == \"__main__\":\n config = $${MODULE_NAME_PASCAL}WorkflowConfiguration()\n workflow = $${MODULE_NAME_PASCAL}Workflow(config)\n result = workflow.run($${MODULE_NAME_PASCAL}WorkflowParameters(query=\"test query\"))\n print(result)\n" > src/custom/modules/$$MODULE_NAME/workflows/$${MODULE_NAME_PASCAL}Workflow.py && \
echo "Creating example pipeline..." && \
@@ -181,3 +192,7 @@ build-module:
echo "Creating sample test file..." && \
echo "import unittest\nfrom unittest.mock import MagicMock, patch\n\nclass Test$${MODULE_NAME_PASCAL}Module(unittest.TestCase):\n \"\"\"Test suite for the $$MODULE_NAME_PASCAL module.\"\"\"\n \n def setUp(self):\n \"\"\"Set up test fixtures.\"\"\"\n pass\n \n def tearDown(self):\n \"\"\"Tear down test fixtures.\"\"\"\n pass\n \n def test_assistant(self):\n \"\"\"Test the $$MODULE_NAME_PASCAL Assistant.\"\"\"\n try:\n from ..assistants.$${MODULE_NAME_PASCAL}Assistant import create_agent\n \n # Test agent creation\n agent = create_agent()\n self.assertIsNotNone(agent)\n \n # Additional tests for the assistant would go here\n except ImportError:\n self.skipTest(\"Assistant not implemented yet\")\n \n def test_workflow(self):\n \"\"\"Test the $$MODULE_NAME_PASCAL Workflow.\"\"\"\n try:\n from ..workflows.$${MODULE_NAME_PASCAL}Workflow import $${MODULE_NAME_PASCAL}Workflow, $${MODULE_NAME_PASCAL}WorkflowConfiguration, $${MODULE_NAME_PASCAL}WorkflowParameters\n \n # Test workflow initialization\n config = $${MODULE_NAME_PASCAL}WorkflowConfiguration()\n workflow = $${MODULE_NAME_PASCAL}Workflow(config)\n self.assertIsNotNone(workflow)\n \n # Test workflow execution\n params = $${MODULE_NAME_PASCAL}WorkflowParameters(query=\"test\")\n result = workflow.run(params)\n self.assertIsNotNone(result)\n \n # Test tool creation\n tools = workflow.as_tools()\n self.assertTrue(len(tools) > 0)\n except ImportError:\n self.skipTest(\"Workflow not implemented yet\")\n \n def test_pipeline(self):\n \"\"\"Test the $$MODULE_NAME_PASCAL Pipeline.\"\"\"\n try:\n from ..pipelines.$${MODULE_NAME_PASCAL}Pipeline import $${MODULE_NAME_PASCAL}Pipeline, $${MODULE_NAME_PASCAL}PipelineConfiguration, $${MODULE_NAME_PASCAL}PipelineParameters\n from abi.services.ontology_store import OntologyStoreService\n \n # Create a mock ontology store\n mock_store = MagicMock(spec=OntologyStoreService)\n \n # Test pipeline initialization\n config = $${MODULE_NAME_PASCAL}PipelineConfiguration(ontology_store=mock_store)\n pipeline = $${MODULE_NAME_PASCAL}Pipeline(config)\n self.assertIsNotNone(pipeline)\n \n # Test pipeline execution\n params = $${MODULE_NAME_PASCAL}PipelineParameters(entity_id=\"123\")\n result = pipeline.run(params)\n self.assertIsNotNone(result)\n \n # Test if result is a graph\n self.assertTrue(hasattr(result, 'serialize'))\n except ImportError:\n self.skipTest(\"Pipeline not implemented yet\")\n \n def test_integration(self):\n \"\"\"Test the $$MODULE_NAME_PASCAL Integration.\"\"\"\n try:\n from ..integrations.$${MODULE_NAME_PASCAL}Integration import $${MODULE_NAME_PASCAL}Integration, $${MODULE_NAME_PASCAL}IntegrationConfiguration, $${MODULE_NAME_PASCAL}SearchParameters\n from pydantic import SecretStr\n \n # Test integration initialization\n config = $${MODULE_NAME_PASCAL}IntegrationConfiguration(api_key=SecretStr(\"test_key\"))\n integration = $${MODULE_NAME_PASCAL}Integration(config)\n self.assertIsNotNone(integration)\n \n # Test search function\n params = $${MODULE_NAME_PASCAL}SearchParameters(query=\"test\")\n results = integration.search(params)\n self.assertIsNotNone(results)\n self.assertTrue(isinstance(results, list))\n except ImportError:\n self.skipTest(\"Integration not implemented yet\")\n\nif __name__ == '__main__':\n unittest.main()" > src/custom/modules/$$MODULE_NAME/tests/test_module.py && \
echo "Module '$$MODULE_NAME' built successfully in src/custom/modules/$$MODULE_NAME/"
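The workflow template embedded in the recipe above follows a configuration → parameters → result pattern. Stripped of the pydantic/LangChain/FastAPI machinery, the shape it scaffolds looks roughly like this stdlib-only sketch (class names mirror the template with a `Bitcoin` prefix substituted; everything else is illustrative):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BitcoinWorkflowConfiguration:
    """Static settings, e.g. credentials for an external service."""
    api_key: Optional[str] = None

@dataclass
class BitcoinWorkflowParameters:
    """Per-call inputs."""
    query: str
    max_results: int = 10

@dataclass
class BitcoinWorkflowResult:
    """Structured output."""
    results: list = field(default_factory=list)
    count: int = 0

class BitcoinWorkflow:
    def __init__(self, configuration: BitcoinWorkflowConfiguration):
        # Kept private, as in the template; real logic would read it in run().
        self.__configuration = configuration

    def run(self, parameters: BitcoinWorkflowParameters) -> BitcoinWorkflowResult:
        # Placeholder logic mirroring the template: canned rows,
        # truncated to max_results.
        rows = [
            {"id": 1, "name": "Result 1"},
            {"id": 2, "name": "Result 2"},
        ][: parameters.max_results]
        return BitcoinWorkflowResult(results=rows, count=len(rows))

workflow = BitcoinWorkflow(BitcoinWorkflowConfiguration())
print(workflow.run(BitcoinWorkflowParameters(query="price", max_results=1)).count)  # → 1
```

In the real template the same class additionally exposes `as_tools()` for LangChain agents and `as_api()` for FastAPI routing, so one `run()` implementation serves both surfaces.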

# Add debug target
debug-agents: .venv
@ docker compose run abi bash -c 'poetry install && poetry run python -m src.debug_agents'
28 changes: 11 additions & 17 deletions docs/storage/local.md
@@ -27,18 +27,18 @@ storage/
│ ├── intermediate/ # Temporary processing results
│ └── output/ # Final data products
├── triple_store/ # Semantic data storage
├── triplestore/ # Semantic data storage
│ ├── ontologies/ # Ontology definitions (.owl, .rdf)
│ └── triples/ # RDF triple data (.ttl)
└── vector_store/ # Vector embeddings
└── vectorstore/ # Vector embeddings
├── embeddings/ # Raw vector data
├── indexes/ # Vector search indexes
└── metadata/ # Associated metadata

### Triple Store Structure

The `triple_store/` directory follows semantic web standards:
The `triplestore/` directory follows semantic web standards:

- **ontologies/**: Contains schema definitions and ontology models
- `.owl` files define formal ontologies with classes, properties, and rules
@@ -50,7 +50,7 @@ The `triple_store/` directory follows semantic web standards:

### Vector Store Structure

The `vector_store/` directory is optimized for machine learning applications:
The `vectorstore/` directory is optimized for machine learning applications:

- **embeddings/**: Contains raw vector data, typically in binary formats
- Organized by model and dimension (e.g., `bert-base-768d/`)
@@ -86,7 +86,7 @@ storage_service = ObjectStorageFactory.ObjectStorageServiceFS("/path/to/storage"

# Basic operations
# Store a file
storage_service.put_object("triple_store/triples", "people.ttl", ttl_content)
storage_service.put_object("triplestore/triples", "people.ttl", ttl_content)

# Retrieve a file
content = storage_service.get_object("data_lake/processed", "customers.json")
@@ -95,7 +95,7 @@ content = storage_service.get_object("data_lake/processed", "customers.json")
files = storage_service.list_objects("documents/pdf")

# Delete a file
storage_service.delete_object("vector_store/embeddings", "temp_vectors.bin")
storage_service.delete_object("vectorstore/embeddings", "temp_vectors.bin")
```

## Synchronization with Remote Storage
@@ -117,11 +117,11 @@ These commands automatically handle the authentication and execute the AWS S3 sy
1. **Follow the standard directory structure** to ensure consistency and compatibility with other system components.

2. **Use appropriate directories** for different types of data:
- Document files → `documents/`
- Raw data → `data_lake/raw/`
- Processed data → `data_lake/processed/`
- RDF triples → `triple_store/triples/`
- Vector embeddings → `vector_store/embeddings/`
- Document files → `datastore/documents/`
- Raw data → `datastore/[module_name]/raw/`
- Processed data → `datastore/[module_name]/processed/`
- RDF triples → `triplestore/[module_name]/triples/`
- Vector embeddings → `vectorstore/[module_name]/embeddings/`

3. **Use consistent naming conventions**:
- Use lowercase for directories and filenames
@@ -132,9 +132,3 @@ These commands automatically handle the authentication and execute the AWS S3 sy
4. **Regularly synchronize** with remote storage to ensure data persistence and backup.

5. **Clean up temporary files** to prevent storage bloat and keep the system organized.
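The module-scoped prefixes in point 2 are easy to mistype by hand; a small helper that resolves them could look like this (illustrative only — ABI does not ship such a function; the mapping simply restates the list in point 2):

```python
# Data kind → (top-level store, path template), per the conventions above.
STORES = {
    "documents": ("datastore", "documents"),
    "raw": ("datastore", "{module}/raw"),
    "processed": ("datastore", "{module}/processed"),
    "triples": ("triplestore", "{module}/triples"),
    "embeddings": ("vectorstore", "{module}/embeddings"),
}

def storage_prefix(kind: str, module: str = None) -> str:
    """Resolve the storage prefix for `kind`, scoped to `module` when
    the convention requires one."""
    store, template = STORES[kind]
    if "{module}" in template:
        if module is None:
            raise ValueError(f"{kind!r} paths are module-scoped")
        template = template.format(module=module)
    return f"{store}/{template}"

print(storage_prefix("triples", "bitcoin"))  # → triplestore/bitcoin/triples
```

Centralizing the mapping this way keeps every module writing to the same layout, which matters for the remote-sync commands described earlier.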

## Related Documentation

- [Remote Storage](./remote.md)
- [Triple Store Architecture](../architecture/triple_store.md)
- [Vector Embeddings](../machine_learning/embeddings.md)
5 changes: 5 additions & 0 deletions lib/abi/utils/Module.py
@@ -38,6 +38,11 @@ def load(self):


def __load_agents(self):
"""Loads all agents defined in the module."""
# Skip modules with a .skip file
if os.path.exists(os.path.join(self.module_path, '.skip')):
return

for file in os.listdir(os.path.join(self.module_path, 'assistants')):
if file.endswith('.py'):
assistant_path = self.module_import_path + '.assistants.' + file[:-3]