A simple command-line chat application using the OpenAI API.
Use the Snyk CLI with an AI-BOM command:
```
snyk aibom --experimental --json
```
Pipe the Snyk CLI output of AI-BOM findings to `jq` to easily find and match your results:
```
snyk aibom --experimental --json | jq '.components[] | select(."bom-ref" | startswith("model:")) | ."bom-ref"'
```
Get a fancy visual view of the AI-BOM findings using the ai-bom-visualizer npm package:
```
snyk aibom --experimental --json | npx ai-bom-visualizer --open
```
- uv - Python package and project manager
  - Install via curl:
    ```
    curl -LsSf https://astral.sh/uv/install.sh | sh
    ```
- An OpenAI API key
1. Clone the repository and navigate to the project directory
2. Set your OpenAI API key as an environment variable:
   ```
   export OPENAI_API_KEY="your_api_key_here"
   ```
3. Install dependencies:
   ```
   uv sync
   ```
Run the application:
```
uv run main.py
```
Type your messages and press Enter to chat with the AI. Type `quit` to exit.
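The chat loop in `main.py` might look roughly like this; a minimal sketch, not the repository's actual code — the model name is illustrative, and it assumes the official `openai` Python package with `OPENAI_API_KEY` set in the environment:

```python
import os

def build_messages(history: list[dict], user_input: str) -> list[dict]:
    # Append the new user turn to the running conversation history.
    return history + [{"role": "user", "content": user_input}]

def chat() -> None:
    # Requires the `openai` package and OPENAI_API_KEY in the environment.
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    history: list[dict] = []
    while True:
        user_input = input("> ")
        if user_input.strip().lower() == "quit":
            break
        history = build_messages(history, user_input)
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=history,
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print(reply)

# The history helper can be exercised without an API key:
print(build_messages([], "Hello"))  # [{'role': 'user', 'content': 'Hello'}]
```

Keeping the full `history` in each request is what gives the assistant conversational memory across turns.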
Run the Whisper example:
```
uv run whisper.py
```
This script demonstrates speech-to-text transcription and translation using OpenAI's Whisper model:

- Loads a fine-tuned Whisper model (`whisper-medium-fleurs-lang-id`) for language identification
- Streams French audio samples from the Common Voice dataset
- Translates the French speech into English text
- Uses the Hugging Face `transformers` and `datasets` libraries
Note: The first run will download the model (~1.5GB) and stream audio samples.
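A note on "streams": the audio samples are fetched lazily, one at a time, rather than downloading the whole Common Voice dataset up front. Conceptually it works like this generator-based sketch (pure Python for illustration, not the actual `datasets` API):

```python
from itertools import islice

def stream_samples(source):
    # Yield one sample at a time instead of materializing the whole
    # dataset, which is how streaming mode behaves conceptually.
    for sample in source:
        yield sample

# Illustrative stand-in for a large remote dataset
fake_dataset = ({"audio": f"clip_{i}.wav"} for i in range(1_000_000))

# Only the samples actually consumed are ever produced
first_three = list(islice(stream_samples(fake_dataset), 3))
print(first_three)
# [{'audio': 'clip_0.wav'}, {'audio': 'clip_1.wav'}, {'audio': 'clip_2.wav'}]
```

In the real script, the `datasets` library's streaming mode plays the role of this generator over remote audio files, so only the clips you iterate over are downloaded.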
Run the BLIP example:
```
uv run blip.py
```
This script demonstrates visual question answering (VQA) using Salesforce's BLIP model:

- Loads the BLIP VQA model (`blip-vqa-base`) from Hugging Face
- Downloads a sample image from the web
- Asks a question about the image ("how many men are in the picture?")
- Returns a natural language answer based on the image content
Note: The first run will download the model (~1GB).
Other example projects you can clone and scan for AI-BOM findings: