This project is part of Code & Context's AI Drop of the Week.
This bot posts four times a day to X.com. See the results here.
I often feel pressured to create content for social media platforms like X, Facebook, and Instagram, but I hate the hassle of posting to them, so I never get around to it.
This project addresses that challenge by using AI to generate daily social media content, and it gives me a chance to play with various AI APIs and models.
I'm open sourcing the project so you can adapt it and become socially compliant too!
This version has been completely modernized with 2025 AI tools:
| Component | v1.0 (2024) | v2.0 (2025) |
|---|---|---|
| Scheduler | Trigger.dev | GitHub Actions (cron) |
| Content Source | Random topic list | Live AI news search (Tavily) |
| LLM | Claude Haiku (hardcoded) | OpenRouter (configurable) |
| Image Generation | OpenAI DALL-E 3 | Google Nano Banana Pro |
| Audio Generation | HuggingFace MusicGen | Built into Sora video |
| Video Generation | FFmpeg (image + audio) | OpenAI Sora 2 |
| X API | v1.1 + v2 hybrid | OAuth 2.0 only |
A GitHub Actions workflow runs four times daily. On each run, it:
- Searches for the latest AI/ML news using the Tavily API
- Generates content using OpenRouter (configurable LLM) to select the most interesting topic and write a post
- Creates an image using Google Nano Banana Pro (Gemini 3 Pro Image)
- Generates a video using OpenAI Sora 2 from the image with synchronized audio
- Posts to X using the X API v2 with OAuth 2.0
```
Tavily (AI news) → OpenRouter LLM (content) → Nano Banana Pro (image)
                                                      ↓
X API v2 (post)  ←  Sora 2 (video with audio)
```
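The pipeline above can be sketched as a single orchestration function with injected services (the names below are illustrative, not the project's actual exports; the real orchestration lives in `src/index.ts`):

```typescript
// Illustrative shape of the four-stage pipeline. Each service is injected
// so the flow is visible in one place; none of these names are guaranteed
// to match the repo's real code.
type Services = {
  searchNews: () => Promise<string[]>;                                          // Tavily
  writePost: (headlines: string[]) => Promise<{ topic: string; text: string }>; // OpenRouter
  makeImage: (topic: string) => Promise<Uint8Array>;                            // Nano Banana Pro
  makeVideo: (image: Uint8Array, topic: string) => Promise<Uint8Array | null>;  // Sora 2
  postToX: (text: string, media: Uint8Array) => Promise<string>;                // returns post ID
};

async function runPipeline(s: Services): Promise<string> {
  const headlines = await s.searchNews();
  const { topic, text } = await s.writePost(headlines);
  const image = await s.makeImage(topic);
  const video = await s.makeVideo(image, topic); // null when Sora 2 is unavailable
  return s.postToX(text, video ?? image);        // fall back to an image-only post
}
```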
You'll need accounts and API keys for:
- OpenRouter - LLM access (supports 200+ models)
- Tavily - AI-powered web search
- Google Cloud / Vertex AI - Gemini image generation (see Image Generation Setup below)
- OpenAI Platform - Sora 2 video generation (requires API access)
- X Developer Portal - OAuth 2.0 credentials
- Cloudflare - KV storage for OAuth refresh tokens
- Cloudflare R2 (Optional) - Object storage for workflow persistence and replay
See README-AUTH.md for detailed setup instructions.
```bash
git clone https://github.com/intertwine/social-compliance-generator.git
cd social-compliance-generator
npm install
cp .env.example .env
# Edit .env with your API keys
npm run generate
```
1. Go to your repository's Settings → Secrets and variables → Actions
2. Add the following secrets:
   - `OPENROUTER_API_KEY`
   - `OPENROUTER_MODEL` (optional, defaults to `anthropic/claude-sonnet-4.5-20250929`)
   - `TAVILY_API_KEY`
   - `GOOGLE_CLOUD_PROJECT` - Your Google Cloud project ID
   - `GOOGLE_CLOUD_CREDENTIALS` - Service account JSON key (see Image Generation Setup)
   - `OPENAI_API_KEY`
   - `X_API_CLIENT_ID`
   - `X_API_CLIENT_SECRET`
   - `X_API_ACCESS_TOKEN`
   - `X_API_REFRESH_TOKEN`
   - `CLOUDFLARE_ACCOUNT_ID` - Your Cloudflare account ID
   - `CLOUDFLARE_KV_NAMESPACE_ID` - KV namespace ID for token storage
   - `CLOUDFLARE_KV_API_TOKEN` - API token with KV write permissions
   - `CLOUDFLARE_R2_BUCKET` (optional) - R2 bucket name for workflow storage
   - `CLOUDFLARE_R2_ACCESS_KEY_ID` (optional) - R2 API access key
   - `CLOUDFLARE_R2_SECRET_ACCESS_KEY` (optional) - R2 API secret key
3. The workflow will run automatically at 6am, 12pm, 6pm, and midnight UTC
4. You can also trigger it manually from the Actions tab
```
social-compliance-generator/
├── .github/workflows/
│   └── generate-post.yml      # Cron-triggered GitHub Action
├── src/
│   ├── index.ts               # Main orchestration
│   ├── replay.ts              # Workflow replay utility
│   ├── services/
│   │   ├── search.ts          # Tavily web search
│   │   ├── llm.ts             # OpenRouter LLM
│   │   ├── image.ts           # Google Nano Banana Pro
│   │   ├── video.ts           # OpenAI Sora 2
│   │   ├── x.ts               # X API posting
│   │   ├── token-storage.ts   # Cloudflare KV token storage
│   │   └── workflow-storage.ts # Cloudflare R2 workflow storage
│   └── types/
│       └── workflow.ts        # Workflow data types
├── .env.example               # Environment template
├── package.json
└── tsconfig.json
```
Set `OPENROUTER_MODEL` in your environment to any model from OpenRouter's catalog:

```bash
OPENROUTER_MODEL=openai/gpt-4o
OPENROUTER_MODEL=meta-llama/llama-3.1-70b-instruct
OPENROUTER_MODEL=google/gemini-pro-1.5
```

Edit `.github/workflows/generate-post.yml` and modify the cron schedule:
```yaml
schedule:
  - cron: '0 */6 * * *' # Every 6 hours
  - cron: '0 9 * * *'   # Once daily at 9am UTC
```

Edit `src/services/x.ts` to change the hashtags and links:
```typescript
const POST_TAGS = ["YourTag1", "YourTag2"];
const POST_LINKS = [
  { title: "Your Link", url: "https://your-url.com" }
];
```

The image service uses Google's Gemini models (Nano Banana Pro / Gemini 2.5 Flash) with automatic fallback on rate limits.
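The rate-limit fallback can be expressed as a small generic helper. This is a sketch of the described behavior, not code from `src/services/image.ts`; the error check in particular is an assumption:

```typescript
// Try the primary model; on a rate-limit error, retry once with the
// fallback model. Any other error is re-thrown unchanged.
async function withFallback<T>(
  primary: () => Promise<T>,   // e.g. Nano Banana Pro
  fallback: () => Promise<T>,  // e.g. Gemini 2.5 Flash
  isRateLimit: (err: unknown) => boolean,
): Promise<T> {
  try {
    return await primary();
  } catch (err) {
    if (isRateLimit(err)) return fallback();
    throw err; // non-rate-limit errors still surface to the caller
  }
}
```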
Vertex AI provides higher rate limits and better reliability. Required for GitHub Actions.
1. Create a Google Cloud project at console.cloud.google.com

2. Enable billing for the project:

   ```bash
   gcloud billing projects link YOUR_PROJECT_ID --billing-account=YOUR_BILLING_ACCOUNT
   ```

3. Enable the Vertex AI API:

   ```bash
   gcloud services enable aiplatform.googleapis.com --project=YOUR_PROJECT_ID
   ```

4. Create a service account and grant it Vertex AI access:

   ```bash
   gcloud iam service-accounts create github-vertex-ai \
     --project=YOUR_PROJECT_ID \
     --display-name="GitHub Actions Vertex AI"

   gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
     --member="serviceAccount:github-vertex-ai@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
     --role="roles/aiplatform.user"
   ```

5. Create and download the key:

   ```bash
   gcloud iam service-accounts keys create key.json \
     --iam-account=github-vertex-ai@YOUR_PROJECT_ID.iam.gserviceaccount.com
   ```

6. Add to GitHub Secrets:
   - `GOOGLE_CLOUD_PROJECT`: Your project ID
   - `GOOGLE_CLOUD_CREDENTIALS`: Contents of `key.json`

7. Delete the local key file:

   ```bash
   rm key.json
   ```
For local development, you can use the simpler Gemini Developer API:
- Get an API key from Google AI Studio
- Set `GOOGLE_API_KEY` in your `.env` file
Note: The free tier has strict rate limits. For production use, set up Vertex AI.
The workflow storage feature uses Cloudflare R2 to persist intermediate results (news search, content, images, videos) from each workflow run. This enables:
- Replay failed posts: If X posting fails, you can manually repost later
- Debugging: Inspect the full workflow state for any run
- Audit trail: Keep a history of all generated content
1. Create an R2 bucket at Cloudflare Dashboard → R2 → Create bucket

2. Create an R2 API token:
   - Go to Dashboard → R2 → Manage R2 API Tokens → Create API token
   - Select "Object Read & Write" permission
   - Apply to your specific bucket or all buckets
   - Copy the Access Key ID and Secret Access Key

3. Add to your environment (`.env` for local, GitHub Secrets for Actions):

   ```bash
   CLOUDFLARE_R2_BUCKET=your-bucket-name
   CLOUDFLARE_R2_ACCESS_KEY_ID=your-access-key-id
   CLOUDFLARE_R2_SECRET_ACCESS_KEY=your-secret-access-key
   ```
Note: R2 storage is optional. If not configured, workflows will run normally without persistence.
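Because R2 is S3-compatible, each run can be persisted as a JSON object under a predictable key. Here is a minimal sketch of that idea; the field names and key layout below are assumptions for illustration, while the project's real schema lives in `src/types/workflow.ts`:

```typescript
// Hypothetical persisted shape of a workflow run. The real WorkflowRun
// type in src/types/workflow.ts may differ.
interface WorkflowRun {
  runId: string;       // e.g. "run-20241115-103000-abc123"
  startedAt: string;   // ISO 8601 timestamp
  topic?: string;
  postId?: string;
  status: "ok" | "fail";
}

// Store each run as JSON under a predictable, listable key prefix so the
// replay utility can enumerate runs with a single prefix query.
function runKey(run: WorkflowRun): string {
  return `runs/${run.runId}.json`;
}

function serializeRun(run: WorkflowRun): string {
  return JSON.stringify(run, null, 2);
}
```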
The replay utility allows you to manage workflow runs stored in R2:
```bash
# List recent workflow runs
npm run replay list

# Show details of a specific run
npm run replay show <runId>

# Repost a failed workflow to X
npm run replay post <runId>

# Delete a workflow run from storage
npm run replay delete <runId>
```

Example output from `npm run replay list`:
```
[OK] run-20241115-103000-abc123
  Started: 2024-11-15T10:30:00.000Z
  Completed: 2024-11-15T10:32:15.000Z
  Topic: OpenAI announces GPT-5
  Post ID: 1234567890 (video)

[FAIL] run-20241115-063000-def456
  Started: 2024-11-15T06:30:00.000Z
  Topic: Google releases Gemini 3
```
The Sora 2 API requires explicit invitation from OpenAI. If video generation fails, the system will automatically fall back to posting an image-only post.
Ensure your Cloudflare KV namespace is set up correctly and the API token has write permissions. The initial OAuth tokens should be set in your environment variables (X_API_ACCESS_TOKEN and X_API_REFRESH_TOKEN).
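As a sketch of how refreshed tokens could be written back to Cloudflare KV via its REST API: the URL shape below follows Cloudflare's documented KV write endpoint, while the helper names and token payload are illustrative, not this project's actual code:

```typescript
// Build the KV REST endpoint for a single key
// (per Cloudflare's "write key-value pair" API).
function kvValueUrl(accountId: string, namespaceId: string, key: string): string {
  return `https://api.cloudflare.com/client/v4/accounts/${accountId}` +
         `/storage/kv/namespaces/${namespaceId}/values/${encodeURIComponent(key)}`;
}

// Persist the rotated token pair so the next run can refresh again.
// Requires Node 18+ (global fetch); the payload shape is an assumption.
async function saveTokens(
  url: string,
  apiToken: string,
  tokens: { access: string; refresh: string },
): Promise<void> {
  const res = await fetch(url, {
    method: "PUT",
    headers: { Authorization: `Bearer ${apiToken}` },
    body: JSON.stringify(tokens),
  });
  if (!res.ok) throw new Error(`KV write failed: ${res.status}`);
}
```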
The image service automatically falls back from Nano Banana Pro to Gemini 2.5 Flash on rate limit errors. If both fail:
- Verify billing is enabled: `gcloud billing projects describe YOUR_PROJECT_ID`
- Check the error message - `free_tier` means billing isn't properly linked
- Wait a few minutes if you just enabled billing/APIs
If you're seeing R2-related errors:
- Verify all three R2 environment variables are set: `CLOUDFLARE_R2_BUCKET`, `CLOUDFLARE_R2_ACCESS_KEY_ID`, `CLOUDFLARE_R2_SECRET_ACCESS_KEY`
- Ensure the API token has "Object Read & Write" permissions
- Check that the bucket name matches exactly (case-sensitive)
- R2 storage is optional - workflows will continue without it if not configured
The X API v2 video upload uses chunked uploads with dedicated endpoints (as of January 2025):
- 413 Payload Too Large: Videos are uploaded in 1MB chunks. If you see this error, the chunk size may need adjustment.
- Invalid media IDs: After upload, videos need processing time (10-60 seconds depending on size). The system waits automatically.
- Rate limits: Free tier has low limits (17 initialize/finalize per 24h, 85 appends). Consider upgrading your X API tier for production use.
Technical details:
- Endpoints: `/2/media/upload/initialize`, `/{id}/append`, `/{id}/finalize`
- Media category: `amplify_video` (required for video uploads)
- Authentication: OAuth 2.0 with `media.write` scope
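The chunking and processing-wait behavior described above can be sketched with two small helpers. The 1 MB chunk size and the status values are assumptions drawn from the notes above, not the project's actual upload code:

```typescript
const CHUNK_SIZE = 1024 * 1024; // 1 MB per APPEND call

// Yield successive ~1 MB slices of the video buffer for chunked upload.
function* chunks(data: Uint8Array, size = CHUNK_SIZE): Generator<Uint8Array> {
  for (let offset = 0; offset < data.length; offset += size) {
    yield data.subarray(offset, offset + size);
  }
}

// After FINALIZE, poll a status check until the media leaves "pending".
// Returns true when processing succeeds, false on failure or timeout.
async function waitForProcessing(
  check: () => Promise<"pending" | "succeeded" | "failed">,
  delayMs = 5000,
  maxTries = 12,
): Promise<boolean> {
  for (let i = 0; i < maxTries; i++) {
    const state = await check();
    if (state === "succeeded") return true;
    if (state === "failed") return false;
    await new Promise((r) => setTimeout(r, delayMs));
  }
  return false; // still pending after maxTries
}
```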
OpenRouter and other APIs have rate limits. If you're hitting limits, consider:
- Reducing post frequency
- Using a different LLM model
- Adding retry logic with exponential backoff
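A minimal retry-with-exponential-backoff helper, as one way to implement the last suggestion (a sketch, not code from this repo):

```typescript
// Retry an async operation with exponential backoff: waits baseMs,
// 2*baseMs, 4*baseMs, ... between attempts, then re-throws the last error.
async function retry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 500,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        await new Promise((r) => setTimeout(r, baseMs * 2 ** i));
      }
    }
  }
  throw lastErr;
}
```

Wrapping a rate-limited API call, e.g. `retry(() => generatePost(), 5)`, gives each run several chances before the workflow fails.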
MIT
For more fun AI projects and tools, subscribe to the AI Drop of the Week Newsletter.