ThinkHub is a lightweight Python package designed for small and simple projects or for quickly testing LLM services. It provides a unified interface for interacting with multiple AI providers, making it easy to configure and switch between different services. Built with extensibility in mind, users can effortlessly integrate new providers by creating and registering their own plugins or classes. ThinkHub prioritizes simplicity, flexibility, and user-friendly customization, making it an ideal tool for rapid prototyping and experimentation with AI models.
- Lazy Loading: Only loads the dependencies required for the selected service, reducing memory and installation overhead.
- Multi-Service Integration: Interact seamlessly with multiple AI services (e.g., chat, transcription, image processing).
- Plugin System: Register and use custom classes to extend functionality (a short sketch follows this list).
- Dynamic Configuration: Load and manage configurations with environment variable overrides.
- Error Handling: Robust exception system for identifying and managing provider-related issues.
- Poetry and pip Support: Flexible dependency and environment management.
- Python 3.11+: Leverages the latest features of Python for performance and simplicity.
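The plugin system can be sketched roughly as follows. Only `get_chat_service` and the `stream_chat_response` method are confirmed by the usage examples later in this README; the `register_chat_service` hook in the comments is a hypothetical name used purely for illustration, so check the package source for the actual registration API.

```python
# Minimal sketch of extending ThinkHub with a custom provider class.
# NOTE: `register_chat_service` is a hypothetical registration hook shown only
# for illustration; the real hook may have a different name or signature.

class EchoChatService:
    """Toy provider that streams the prompt back, mirroring the chat interface."""

    def __init__(self, model: str = "echo-1", **kwargs):
        self.model = model

    async def stream_chat_response(self, input_data, system_prompt=None):
        # Yield response chunks the same way the built-in providers do.
        yield f"[{self.model}] {input_data}"

# Hypothetical registration, after which the factory could return the class:
# register_chat_service("echo", EchoChatService)
# chat_service = get_chat_service("echo")
```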
Supported transcription providers:

- OpenAI: using the `whisper-1` model
- Google Speech-to-Text
Chat provider capabilities:

| Provider | Completion | Streaming | Async Completion | Async Streaming | Async Embedding | Async Image Generation | Image Input |
|---|---|---|---|---|---|---|---|
| OpenAI | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ |
| Google Gemini | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ |
| Anthropic (Claude.ai) | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ |
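Because every provider in the table exposes the same async streaming interface, switching services is just a matter of changing the arguments to `get_chat_service`. Here is a minimal sketch: only the `"openai"` provider string and the `gpt-4o` model appear in this README's examples, so the Anthropic provider string and Claude model name below are placeholders, not confirmed values.

```python
import asyncio
from thinkhub.chat import get_chat_service

async def ask(provider: str, model: str, prompt: str) -> None:
    # The same factory call and streaming loop work for any supported provider.
    chat_service = get_chat_service(provider, model=model)
    async for chunk in chat_service.stream_chat_response(prompt):
        print(chunk, end="")

asyncio.run(ask("openai", "gpt-4o", "Say hello."))
# Swap providers by changing the arguments (provider string and model name
# below are placeholders for illustration):
# asyncio.run(ask("anthropic", "claude-3-5-sonnet-latest", "Say hello."))
```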
ThinkHub uses a lazy-loading strategy to optimize memory usage and avoid installing unused dependencies. You can install ThinkHub using either Poetry or pip, as shown below:
- Poetry: `poetry add thinkhub`
- pip: `pip install thinkhub`
Install only the required dependencies based on the service(s) you plan to use:
- OpenAI Chat:
  - Poetry: `poetry add thinkhub --extras openai`
  - pip: `pip install thinkhub[openai]`
- Google Transcription:
  - Poetry: `poetry add thinkhub --extras google`
  - pip: `pip install thinkhub[google]`
- Anthropic Chat:
  - Poetry: `poetry add thinkhub --extras anthropic`
  - pip: `pip install thinkhub[anthropic]`
- Gemini Chat:
  - Poetry: `poetry add thinkhub --extras google-generativeai`
  - pip: `pip install thinkhub[google-generativeai]`
- Multiple Services (e.g., OpenAI and Anthropic):
  - Poetry: `poetry add thinkhub --extras openai --extras anthropic`
  - pip: `pip install thinkhub[openai,anthropic]`

If you want to install all available services:

- Poetry: `poetry add thinkhub --extras all`
- pip: `pip install thinkhub[all]`
If you are using Poetry, activate the project's virtual environment with `poetry shell`.
ThinkHub uses lazy loading to dynamically import the dependencies required for a specific provider. This means that:
- Dependencies are only loaded when needed.
- Missing dependencies are flagged with clear error messages.
For example, if you attempt to use OpenAI services without the `openai` extra installed, ThinkHub will raise an error like this:

```text
ImportError: Missing dependencies for provider 'openai': tiktoken.
Install them using 'poetry install --extras openai' or 'pip install thinkhub[openai]'.
```
To use a chat service like OpenAI:

```python
import asyncio
from thinkhub.chat import get_chat_service

async def main():
    chat_service = get_chat_service("openai", model="gpt-4o")
    # Responses are streamed back in chunks.
    async for response in chat_service.stream_chat_response("Hello, ThinkHub!"):
        print(response)

asyncio.run(main())
```
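Since `stream_chat_response` yields chunks, you can also collect them into a single string. The sketch below reuses the calls shown in this README (the `system_prompt` argument appears in the image example further down) and assumes the streamed chunks are plain text:

```python
import asyncio
from thinkhub.chat import get_chat_service

async def full_reply(prompt: str) -> str:
    chat_service = get_chat_service("openai", model="gpt-4o")
    chunks = []
    # Accumulate the streamed chunks into one complete response string.
    async for chunk in chat_service.stream_chat_response(
        prompt, system_prompt="Answer in one short paragraph."
    ):
        chunks.append(chunk)
    return "".join(chunks)

print(asyncio.run(full_reply("What does lazy loading mean?")))
```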
To use a transcription service like Google:

```python
import asyncio
from thinkhub.transcription import get_transcription_service

async def main():
    transcription_service = get_transcription_service("google")
    result = await transcription_service.transcribe("path/to/audio.flac")
    print(result)

asyncio.run(main())
```
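Because `transcribe` is a coroutine, several files can be processed concurrently with `asyncio.gather`. This is only a sketch: the file paths are placeholders, and it assumes a single service instance can be shared across concurrent calls (otherwise create one per task).

```python
import asyncio
from thinkhub.transcription import get_transcription_service

async def transcribe_all(paths):
    transcription_service = get_transcription_service("google")
    # Run all transcription requests concurrently; results keep input order.
    return await asyncio.gather(
        *(transcription_service.transcribe(path) for path in paths)
    )

results = asyncio.run(transcribe_all(["audio1.flac", "audio2.flac"]))
print(results)
```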
ThinkHub supports image processing with OpenAI. Here’s an example of how to process multiple images asynchronously:

```python
import asyncio
from thinkhub.chat import get_chat_service

async def process_image_with_openai(image_payloads):
    chat_service = get_chat_service("openai", model="gpt-4o")
    async for response in chat_service.stream_chat_response(
        input_data=image_payloads,
        system_prompt="Analyze these images.",
    ):
        print(response)

# Prepare image payloads
image_payloads = [
    {"image_path": "path/to/image1.jpg"},
    {"image_path": "path/to/image2.jpg"},
]

# Process images with OpenAI
asyncio.run(process_image_with_openai(image_payloads))
```
ThinkHub includes robust error handling to simplify debugging and configuration:
- Dependency Validation: ThinkHub will check for required dependencies dynamically and provide clear installation instructions.
- Custom Exceptions:
  - `ProviderNotFoundError`: raised when a requested provider is not found.
  - `ImportError`: raised when dependencies for a provider are missing.
Example:

```python
from thinkhub.chat import get_chat_service
from thinkhub.exceptions import ProviderNotFoundError

try:
    service = get_chat_service("unsupported_provider")
except ProviderNotFoundError as e:
    print(e)
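```

The missing-dependency case can be handled the same way. Below is a small sketch that catches both exceptions; the fallback messages are illustrative only.

```python
from thinkhub.chat import get_chat_service
from thinkhub.exceptions import ProviderNotFoundError

try:
    service = get_chat_service("openai", model="gpt-4o")
except ProviderNotFoundError as e:
    # Unknown provider name.
    print(f"Provider not registered: {e}")
except ImportError as e:
    # Lazy loading found the provider, but its optional extra is not installed;
    # the error message includes the exact install command to run.
    print(f"Missing optional dependencies: {e}")
```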
Development commands:

- Run Tests: `poetry run pytest`
- Code Linting: `poetry run pre-commit run -a`
- Build the Project: `poetry build`
This project is licensed under the MIT License.
Special thanks to the open-source community for providing the tools and libraries that made this project possible.