A scalable notebook generation platform that creates downloadable Jupyter notebooks from Hugging Face models, complete with real code examples. Built with a Next.js frontend and a FastAPI backend that runs Celery background tasks.
- Model Selection: Browse and select from popular Hugging Face models
- Asynchronous Notebook Generation: Background processing with real-time progress updates
- Notebook Validation: Automated syntax checking and runtime validation to ensure notebooks actually run
- Downloadable Notebooks: Get ready-to-run `.ipynb` files
- Shareable Results: Generate share links for generated notebooks
- Model Categories: Filter models by task type (Text Generation, Classification, etc.)
- Real Examples: Uses actual code from model documentation
- Real-time Progress: WebSocket updates with polling fallback
- Scalable Architecture: FastAPI + Celery + Redis for long-running tasks
Copy `.env.local.example` to `.env.local`:

```bash
cp .env.local.example .env.local
```

Update the environment variables:

```bash
# Supabase Configuration
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key

# Hugging Face API
HF_API_TOKEN=your-huggingface-token
```

- Create a Supabase project
- Run the migration script:

```sql
-- Run this in your Supabase SQL Editor
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE IF NOT EXISTS public.notebooks (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  share_id TEXT UNIQUE NOT NULL,
  hf_model_id TEXT NOT NULL,
  notebook_content JSONB NOT NULL,
  metadata JSONB,
  download_count INTEGER DEFAULT 0
);

CREATE INDEX IF NOT EXISTS notebooks_share_idx ON public.notebooks(share_id);
CREATE INDEX IF NOT EXISTS notebooks_model_idx ON public.notebooks(hf_model_id);
CREATE INDEX IF NOT EXISTS notebooks_created_idx ON public.notebooks(created_at DESC);

ALTER TABLE public.notebooks DISABLE ROW LEVEL SECURITY;
GRANT ALL ON public.notebooks TO authenticated;
GRANT ALL ON public.notebooks TO anon;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO anon;
```

Install the frontend dependencies:

```bash
npm install
```

Set up the Python backend:

```bash
cd packages/backend
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements-minimal.txt
cd ../..
```

Option A: Quick Start (Recommended)

```bash
./quick-start.sh
```

Option B: Full Start Script

```bash
./start.sh
```

Option C: Manual Start
```bash
# Start Redis (if not running)
redis-server --daemonize yes --port 6379

# Start Celery worker
cd packages/backend
source venv/bin/activate
celery -A app.core.celery_app worker --loglevel=info --pool=solo &
cd ../..

# Start FastAPI backend
cd packages/backend
source venv/bin/activate
uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload &
cd ../..

# Start Next.js frontend
pnpm dev
```

Open http://localhost:3000 in your browser.
- Generator Page: Model selection and notebook generation interface
- Share Page: View and download generated notebooks
- Home Page: Landing page with feature overview
- `/api/models/popular` - Get popular HF models
- `/api/models/search` - Search models by category
- `/api/notebook/generate` - Generate notebooks from models (legacy)
- `/api/notebook/[shareId]` - Fetch notebook metadata
- `/api/notebook/download/[shareId]` - Download notebook files
- `GET /api/v1/models/popular` - Get popular HF models
- `GET /api/v1/models/search` - Search models by category
- `POST /api/v1/notebook/generate` - Start asynchronous notebook generation task
- `GET /api/v1/notebook/task/{task_id}` - Get task status and progress
- `GET /api/v1/notebook/{share_id}` - Get notebook metadata
- `GET /api/v1/notebook/download/{share_id}` - Download `.ipynb` file
- `GET /api/v1/notebook/{share_id}/validation` - Get notebook validation results
- `WebSocket /ws/progress/{task_id}` - Real-time progress updates
- Task Queue: Redis-based distributed task processing
- Progress Tracking: Real-time progress updates via Redis pub/sub
- Error Handling: Comprehensive error management and retry logic
- Async Integration: Seamless async/await support for background processing
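As an in-memory sketch of the progress-tracking flow (the real tracker publishes each update over Redis pub/sub so the WebSocket endpoint can relay it; the class and method names here are assumptions, not the backend's actual API):

```python
class ProgressTracker:
    """In-memory stand-in for the Redis-backed progress tracker.

    A sketch only: the real service would publish each payload to a Redis
    pub/sub channel keyed by task_id instead of appending to a list.
    """

    def __init__(self, task_id: str):
        self.task_id = task_id
        self.updates = []

    def update(self, progress: int, current_step: str, message: str = "") -> dict:
        # Payload mirrors the documented WebSocket message shape.
        payload = {
            "type": "progress",
            "data": {
                "progress": progress,
                "current_step": current_step,
                "message": message,
            },
        }
        self.updates.append(payload)  # a Redis client would publish() here
        return payload
```

A Celery task would call `update()` at each stage of generation, and the WebSocket endpoint would forward the resulting payloads to subscribed clients.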
- notebooks table - Store generated notebooks and metadata
- Public access for demo (no authentication)
- Share IDs for public notebook access
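The schema only requires `share_id` to be unique `TEXT`; how ids are minted is left to the application. One illustrative approach (an assumption, not necessarily what the backend actually does) is a short URL-safe random token:

```python
import secrets

def new_share_id(nbytes: int = 6) -> str:
    """Mint a short, URL-safe share id.

    6 random bytes yield an 8-character base64url token; collisions are
    astronomically unlikely, and the UNIQUE constraint catches any that occur.
    """
    return secrets.token_urlsafe(nbytes)
```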
- `GET /api/models/popular` - Get popular HF models
- `GET /api/models/search?category=text-generation` - Search models by category
- `POST /api/notebook/generate` - Generate notebook from model (synchronous)
- `GET /api/notebook/[shareId]` - Get notebook metadata
- `GET /api/notebook/download/[shareId]` - Download `.ipynb` file
- `GET /api/v1/models/popular` - Get popular HF models
- `GET /api/v1/models/search?category=text-generation` - Search models by category
- `POST /api/v1/notebook/generate` - Start asynchronous notebook generation

  Request body:

  ```json
  { "hf_model_id": "meta-llama/Llama-3.1-8B-Instruct" }
  ```

  Response:

  ```json
  { "task_id": "uuid", "estimated_time": 30 }
  ```

- `GET /api/v1/notebook/task/{task_id}` - Get task status

  ```json
  {
    "status": "processing|completed|failed",
    "progress": 75,
    "current_step": "Extracting code from README",
    "share_id": "abc123"
  }
  ```

  (`share_id` is returned only when the task has completed)

- `GET /api/v1/notebook/{share_id}` - Get notebook metadata
- `GET /api/v1/notebook/download/{share_id}` - Download `.ipynb` file
- `WebSocket /ws/progress/{task_id}` - Real-time progress updates
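The asynchronous flow above (start a task, poll its status, fetch the result) can be sketched as a small client. The transport is injected so the control flow can be shown without a live server; `post(path, body)` and `get(path)` returning parsed JSON are assumptions (in practice, thin wrappers around `requests.post`/`requests.get` against the FastAPI base URL):

```python
import time

def run_generation(post, get, hf_model_id, poll_interval=0.0):
    """Drive the async API: POST /api/v1/notebook/generate, then poll
    GET /api/v1/notebook/task/{task_id} until the task finishes.

    Returns the share_id, which can then be used with
    /api/v1/notebook/download/{share_id}.
    """
    task = post("/api/v1/notebook/generate", {"hf_model_id": hf_model_id})
    task_id = task["task_id"]
    while True:
        status = get(f"/api/v1/notebook/task/{task_id}")
        if status["status"] == "completed":
            return status["share_id"]
        if status["status"] == "failed":
            raise RuntimeError(f"generation failed at: {status.get('current_step')}")
        time.sleep(poll_interval)
```

In the real client, the WebSocket endpoint replaces this polling loop when available; polling remains the fallback.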
The FastAPI backend provides real-time progress updates via WebSockets:
```json
{
  "type": "progress",
  "data": {
    "progress": 45,
    "current_step": "Generating notebook cells",
    "message": "Processing model README..."
  }
}
```

- Text Generation: Story writing, content creation models
- Chat & Dialogue: Conversational AI, instruction following
- Classification & NER: Text analysis, entity extraction
- Summarization: Long-form text summarization
- Instruction Following: Complex instruction comprehension
- Translation: Multi-language translation models
- Code Generation: Programming and code completion
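The `/ws/progress/{task_id}` messages shown earlier can be consumed with a small dispatch helper. This is a sketch that assumes frames arrive as JSON text and takes a caller-supplied callback (the function and callback names are illustrative, not part of the actual client):

```python
import json

def handle_progress_frame(raw: str, on_progress) -> bool:
    """Process one frame from /ws/progress/{task_id}.

    Calls on_progress(progress, current_step) for progress messages and
    returns True once progress reaches 100; non-progress frames are ignored.
    """
    msg = json.loads(raw)
    if msg.get("type") != "progress":
        return False
    data = msg["data"]
    on_progress(data["progress"], data["current_step"])
    return data["progress"] >= 100
```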
Generated notebooks follow a consistent 7-cell structure:
1. Title & Attribution - Model information and links
2. Environment Setup - Install required packages
3. Hello Cell - Basic model verification
4. Model Information - Pipeline details and usage
5. README Example - Real code from model documentation
6. Generic Example - Fallback template
7. Next Steps - Additional resources and links
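The 7-cell layout above can be sketched as nbformat-style JSON. The cell contents here are placeholder headings, not the generator's actual templates:

```python
def build_skeleton(model_id: str) -> dict:
    """Build an nbformat v4 skeleton with the 7 documented sections.

    Each section becomes a markdown cell; the real generator fills in
    code cells and README-derived content.
    """
    sections = [
        "Title & Attribution", "Environment Setup", "Hello Cell",
        "Model Information", "README Example", "Generic Example", "Next Steps",
    ]
    cells = [
        {"cell_type": "markdown", "metadata": {}, "source": [f"## {s}"]}
        for s in sections
    ]
    cells[0]["source"] = [f"# Notebook for {model_id}"]  # title cell
    return {
        "nbformat": 4,
        "nbformat_minor": 5,
        "metadata": {"language_info": {"name": "python"}},
        "cells": cells,
    }
```

Serialized with `json.dump`, a dict like this is a valid `.ipynb` file that Jupyter can open.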
alacard/
├── packages/backend/ # FastAPI backend package
│ ├── app/
│ │ ├── api/v1/ # API routes
│ │ │ ├── endpoints/ # Individual endpoints
│ │ │ └── __init__.py # API router
│ │ ├── core/ # Core configuration
│ │ │ ├── config.py # Settings and config
│ │ │ ├── database.py # Database layer
│ │ │ └── celery_app.py # Celery configuration
│ │ ├── models/ # Pydantic models
│ │ │ └── notebook.py # API data models
│ │ ├── services/ # Business logic
│ │ │ ├── huggingface.py # HF API integration
│ │ │ ├── notebook_generator.py # Notebook generation
│ │ │ └── progress_tracker.py # Progress tracking
│ │ ├── tasks/ # Celery tasks
│ │ │ └── notebook_tasks.py # Background tasks
│ │ └── main.py # FastAPI application
│ ├── requirements-minimal.txt # Python dependencies
│ └── .env.example # Environment template
├── app/ # Next.js frontend
│ ├── api/ # Legacy Next.js API routes
│ ├── generator/ # Model selection interface
│ ├── share/[shareId]/ # Share page for notebooks
│ └── layout.tsx # Root layout
├── components/ # React components
│ ├── ModelCard.tsx
│ ├── GenerateButton.tsx
│ ├── CategoryFilter.tsx
│ ├── LoadingSpinner.tsx
│ └── NotebookResult.tsx
├── lib/ # Frontend utilities
│ ├── backend-api.ts # FastAPI client
│ ├── presets.ts # Model presets
│ ├── supabase.ts # Supabase client
│ └── hooks/ # React hooks
│ └── useWebSocketProgress.ts # WebSocket progress hook
├── types/ # TypeScript definitions
├── quick-start.sh # Quick start script
├── start.sh # Full start script
└── pnpm-workspace.yaml # Monorepo configuration
- Update `lib/presets.ts` with new model entries
- Add category if needed
- Update notebook generation logic if model requires special handling
Edit the notebook generation logic in `lib/notebook-generator.ts` to customize:
- Cell structure and content
- Template variations by model type
- Code extraction strategies
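To illustrate the last point, one simple code-extraction strategy is pulling fenced Python blocks out of a model README. This is a minimal sketch, not the platform's actual extractor (the fence marker is built dynamically so this snippet can itself live inside a fenced block):

```python
import re

FENCE = "`" * 3  # literal triple-backtick
PATTERN = re.compile(FENCE + r"python\s*\n(.*?)" + FENCE, re.DOTALL)

def extract_python_blocks(readme: str) -> list:
    """Return the contents of every fenced Python code block in a README."""
    return [m.strip() for m in PATTERN.findall(readme)]
```

A production extractor would also handle untagged fences, indented code blocks, and blocks that only make sense with earlier setup cells.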
- Push code to GitHub
- Connect repository to Vercel
- Set environment variables in Vercel dashboard
- Deploy
- Build the application:

  ```bash
  npm run build
  ```

- Start the production server:

  ```bash
  npm start
  ```

- Set up reverse proxy (nginx, etc.)
- Configure SSL certificates
- Open Generator: Browse popular models and select one
- Generate Notebook: Click "Generate Notebook" and watch the process
- Download Results: Get the `.ipynb` file and open it in Jupyter
- Share Notebook: Copy the share link and open it in a fresh browser
- Generate More: Use the "Generate New" button for the same model
A comprehensive workflow test script is available to validate the entire notebook generation pipeline:
```bash
# Set up Python environment for testing
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install requests

# Run the workflow test
python3 workflow_test.py
```

The workflow test script (`workflow_test.py`) performs comprehensive testing of the Alacard platform:
- Service Availability: Checks if FastAPI, Redis, and other services are running
- API Validation: Tests request validation and error handling for malformed requests
- Error Handling: Validates proper handling of invalid models and error cases
- Model Validation: Verifies HuggingFace model existence before generation
- Success Workflow: Tests complete notebook generation with real models
- Notebook Retrieval: Validates notebook metadata and content retrieval
- Validation Results: Confirms generated notebooks pass syntax and runtime validation
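The syntax half of that last check can be sketched with `compile()`. This illustrates the kind of check the validation step performs, not its actual implementation (the real validator also does runtime validation, which this does not attempt):

```python
def syntax_errors(notebook: dict) -> list:
    """Syntax-check every code cell of an nbformat-style notebook dict.

    Returns a list of human-readable error strings; an empty list means
    every code cell at least parses.
    """
    errors = []
    for i, cell in enumerate(notebook.get("cells", [])):
        if cell.get("cell_type") != "code":
            continue
        source = "".join(cell.get("source", []))
        try:
            compile(source, f"<cell {i}>", "exec")
        except SyntaxError as exc:
            errors.append(f"cell {i}: {exc.msg}")
    return errors
```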
Run the test script while services are running (`./start.sh` or `./quick-start.sh`):

```bash
python3 workflow_test.py
```

The script will prompt for a HuggingFace model ID to test with:

- Default model: `microsoft/DialoGPT-medium` (press Enter to use)
- Custom models: Enter any valid HuggingFace model ID (e.g., `facebook/bart-large-cnn`)
- ✅ Service Check - Verifies all required services are running
- ✅ API Validation - Tests request validation and error responses
- ✅ Error Handling - Tests invalid model handling
- 🆕 Model Validation - Checks if the specified model exists on HuggingFace
- 🧪 Success Workflow - Full notebook generation pipeline
- 📊 Results Validation - Verifies notebook content and validation results
The script provides colored output with:
- ✅ PASS - Successful tests
- ❌ FAIL - Failed tests with error details
- ⏳ PROCESSING - Background task monitoring
- 📊 RESULTS - Detailed test summary
- Python 3.6+
- `requests` library
- Running Alacard services (FastAPI, Redis)
- Internet connection for HuggingFace model validation
- Next.js 14: React framework with TypeScript
- Tailwind CSS: Utility-first CSS framework
- WebSocket Client: Real-time progress tracking with fallback
- FastAPI: Modern Python web framework
- Celery: Distributed task queue
- Redis: Message broker and caching
- Pydantic: Data validation and serialization
- Supabase: PostgreSQL database with real-time features
- PostgreSQL: Primary data storage for notebooks
- Hugging Face: Model discovery and metadata
- Hugging Face Files API: README content extraction
- pnpm: Package manager with monorepo support
- Poetry: Python dependency management
- Docker: Containerization support (optional)
- ✅ Asynchronous Processing: FastAPI + Celery for long-running tasks
- ✅ Real-time Progress: WebSocket updates with polling fallback
- ✅ Scalable Architecture: Monorepo with separate backend service
- ✅ Modern Tech Stack: FastAPI, Celery, Redis, Next.js 14
- ✅ Notebook Validation: Automated syntax and runtime validation
- ✅ Quality Assurance: Ensures generated notebooks actually execute successfully
- No user authentication (anonymous access only)
- Limited to predefined popular models (easily extensible)
- Basic notebook template structure (functional but could be enhanced)
- User accounts for personal notebook libraries
- Enhanced model search and filtering
- Notebook customization options
- Integration with Google Colab
- Advanced template variations for different model types
- Model fine-tuning support
MIT License - see LICENSE file for details.