Parakeet is a Flutter app for generating and practicing AI-powered language-learning dialogues. It uses Cloud Functions (Python) to call LLMs and synthesize audio via Google Cloud Text-to-Speech, OpenAI TTS, and ElevenLabs.
- AI-Generated Dialogues: Create custom lessons with adjustable topics, proficiency levels, and lengths.
- Multi-Provider TTS: High-quality audio synthesis using Google, OpenAI, and ElevenLabs.
- Interactive Learning: Practice with spaced repetition, active recall, and vocabulary reviews.
- Cross-Platform: Mobile-first Flutter app (iOS/Android) with web support.
- Firebase Backend: Robust serverless architecture using Cloud Functions, Firestore, and Storage.
- `lib/`: Flutter application code (screens, services, widgets, utils).
- `functions/`: Main Cloud Functions (Python) for lesson generation and audio processing.
- `functions_plot_twist/`: Secondary Cloud Functions codebase for specific features (e.g., donations).
- `payment_verification_backend/`: Node.js/TypeScript backend for in-app purchase verification.
- `assets/`: Static assets (images, icons, sounds).
- `narrator_audio/`: Pre-generated audio files for the narrator.
- `third_party/`: External dependencies (e.g., Vosk for speech recognition).
- `data_analytics/`: Analytics scripts and utilities.
- Flutter SDK: Latest stable version.
- Dart SDK: Compatible with the Flutter version.
- Firebase CLI: For deploying functions and managing the project.
- Python 3.10+: For running Cloud Functions locally or deploying them.
- Node.js 18+: For the payment verification backend.
- Google Cloud Project: With Text-to-Speech API enabled.
The application relies on several environment variables for API keys and configuration. Create a `.env` file in the `functions/` directory (and in `functions_plot_twist/` if needed).
Required Variables:
| Variable | Description |
|---|---|
| `OPEN_AI_API_KEY` | API key for OpenAI (GPT models and TTS). |
| `ELEVENLABS_API_KEY` | API key for ElevenLabs TTS. |
| `GOOGLE_APPLICATION_CREDENTIALS` | Path to the Google Cloud service account JSON key. |
| `KOFI_TOKEN` | (Optional) Token for Ko-fi webhook verification in `functions_plot_twist`. |
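A minimal `functions/.env` sketch with the variables above (all values are illustrative placeholders, not real keys or paths):

```env
OPEN_AI_API_KEY=your_openai_key_here
ELEVENLABS_API_KEY=your_elevenlabs_key_here
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
# Only needed for functions_plot_twist:
KOFI_TOKEN=your_kofi_token_here
```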
1. Install Dependencies:

   ```bash
   flutter pub get
   ```

2. Run Locally:

   ```bash
   flutter run
   ```

3. Build for Release:

   ```bash
   # Android
   flutter build appbundle --obfuscate --split-debug-info=build/app/outputs/symbols

   # iOS
   flutter build ipa --obfuscate --split-debug-info=build/app/outputs/symbols

   # Web
   flutter build web
   ```
1. Install Python Dependencies:

   ```bash
   cd functions
   pip install -r requirements.txt
   ```

2. Run Locally (using Functions Framework):

   ```bash
   functions-framework --target second_API_calls --debug
   ```

3. Deploy to Firebase:

   ```bash
   firebase deploy --only functions
   ```

4. Deploy Plot Twist Functions: The secondary codebase `functions_plot_twist/` can be deployed via Firebase or directly with gcloud:

   ```bash
   gcloud functions deploy handle_kofi_donation \
     --region=europe-west1 \
     --gen2 \
     --set-env-vars KOFI_TOKEN=your_kofi_token_here \
     --source functions_plot_twist/
   ```
- Setup and Deploy:

  ```bash
  cd payment_verification_backend
  npm install
  npm run deploy  # runs: firebase deploy --only functions
  ```
The backend logic is handled by Firebase Cloud Functions. Below are the primary endpoints defined in `functions/main.py`.
Generates the initial dialogue script based on user parameters and reserves lesson credits.
- Method: `POST`
- Body Parameters:
  - `requested_scenario` (string): Description of the scenario (e.g., "Ordering coffee").
  - `category` (string): Lesson category.
  - `native_language` (string): User's native language.
  - `target_language` (string): Language to learn.
  - `length` (string): Length of the dialogue.
  - `user_ID` (string): Firebase User ID.
  - `document_id` (string): Unique ID for the lesson document.
  - `tts_provider` (int): ID of the TTS provider (1: Google, 2: OpenAI, 3: ElevenLabs).
  - `language_level` (string): Proficiency level (e.g., "A1", "C2").
  - `keywords` (string, optional): Specific keywords to include.
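A sketch of the request body for this first call, built from the parameter list above. All values are illustrative placeholders (the endpoint URL is not shown here, so only the payload shape is demonstrated):

```python
# Example request body for the initial dialogue-generation call.
# Field names follow the parameter list above; values are placeholders.
payload = {
    "requested_scenario": "Ordering coffee",
    "category": "Daily life",
    "native_language": "English",
    "target_language": "Spanish",
    "length": "short",
    "user_ID": "example-user-id",
    "document_id": "example-doc-id",
    "tts_provider": 1,  # 1: Google, 2: OpenAI, 3: ElevenLabs
    "language_level": "A1",
    "keywords": "espresso, por favor",  # optional
}

# Sanity-check that every required field is present and non-empty.
required = [
    "requested_scenario", "category", "native_language",
    "target_language", "length", "user_ID", "document_id",
    "tts_provider", "language_level",
]
assert all(payload.get(k) not in (None, "") for k in required)
```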
Processes the generated dialogue, synthesizes audio for each turn, and constructs the full lesson structure with breakdowns and explanations.
- Method: `POST`
- Body Parameters:
  - `dialogue` (array): The dialogue objects generated by the first call.
  - `document_id` (string): Lesson document ID.
  - `user_ID` (string): Firebase User ID.
  - `title` (string): Lesson title.
  - `speakers` (object): Speaker details (name, gender).
  - `native_language` (string): User's native language.
  - `target_language` (string): Target language.
  - `language_level` (string): Proficiency level.
  - `length` (string): Lesson length.
  - `voice_1_id` (string): Voice ID for Speaker 1.
  - `voice_2_id` (string): Voice ID for Speaker 2.
  - `words_to_repeat` (array): List of words for vocabulary practice.
  - `tts_provider` (int): TTS provider ID.
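Both calls select the synthesis backend with the same `tts_provider` integer. A small Python sketch makes the mapping explicit (the helper name is illustrative; the real dispatch lives in `functions/main.py`):

```python
# Map the tts_provider ID from the request body to a provider name.
# The mapping (1: Google, 2: OpenAI, 3: ElevenLabs) follows the
# parameter descriptions above; resolve_tts_provider is a made-up name.
TTS_PROVIDERS = {1: "google", 2: "openai", 3: "elevenlabs"}

def resolve_tts_provider(provider_id: int) -> str:
    try:
        return TTS_PROVIDERS[provider_id]
    except KeyError:
        raise ValueError(f"Unknown tts_provider: {provider_id}")

print(resolve_tts_provider(2))  # openai
```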
Deletes audio files associated with a specific lesson to manage storage usage.
- Method: `POST`
- Body Parameters:
  - `document_id` (string): ID of the document/lesson.
  - `user_id` (string): Firebase User ID.
Generates a spoken audio file for the user's nickname.
- Method: `POST`
- Body Parameters:
  - `text` (string): The nickname text.
  - `user_id` (string): Firebase User ID.
  - `user_id_N` (string): Normalized User ID or filename prefix.
  - `language` (string): Language for the pronunciation.
Suggests a specific lesson topic based on a category and selected words.
- Method: `POST`
- Body Parameters:
  - `category` (string): General category.
  - `selectedWords` (string): Words to incorporate.
  - `target_language` (string): Target language.
  - `native_language` (string): Native language.
  - `level_number` (int): Difficulty level.
Translates a list of keywords into the target language.
- Method: `POST`
- Body Parameters:
  - `keywords` (string): Comma-separated keywords.
  - `target_language` (string): Target language.
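Since `keywords` is a single comma-separated string rather than an array, a client would typically join its word list before sending. A minimal sketch (the helper name is illustrative, not part of the codebase):

```python
def format_keywords(words):
    """Join a list of words into the comma-separated string expected
    by the translation endpoint, trimming whitespace and dropping blanks."""
    return ", ".join(w.strip() for w in words if w.strip())

print(format_keywords(["hello ", " thank you", ""]))  # hello, thank you
```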
Provides suggestions for custom lesson scenarios.
- Method: `POST`
- Body Parameters:
  - `target_language` (string): Target language.
  - `native_language` (string): Native language.
- Cheatsheet: Check `dev-cheatsheet.md` for useful commands (deploys, bundling, SHA-1, etc.).
- Firebase Project: Ensure you are using the correct project with `firebase use <your-project>`.
- Google TTS: Enable the API and provide credentials via `GOOGLE_APPLICATION_CREDENTIALS`.
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
This repository does not currently include a license. All rights reserved.