Applied AI Safety Engineer | Building transparent AI systems for real-world impact
I focus on model evaluation, interpretability, and low-resource NLP, making AI systems that are not only capable, but trustworthy and safe to deploy.
- π‘οΈ Red-teaming LLMs for adversarial robustness testing
- π Mechanistic interpretability analysis of transformer attention patterns
- π Low-resource NLP for African languages (Igala-English translation)
- β‘ Custom GPT architectures built from scratch
| Project | Description | Tech Stack | Links |
|---|---|---|---|
| π‘οΈ Red-Teaming LLMs | Automated adversarial testing framework for LLM vulnerabilities | Python, Transformers, Streamlit | Live Demo Β· Code |
| π Igala-English NMT | Neural machine translation for low-resource African languages | PyTorch, mBERT, Transformers | Live Demo Β· Code |
| π¬ Interpretability Analysis | Visualizing transformer attention patterns in low-resource NMT | PyTorch, TransformerLens, Plotly | Live Demo Β· Code |
| β‘ Igala GPT from Scratch | Custom decoder-only transformer with BPE tokenizer | PyTorch, NumPy, Custom Tokenizer | Live Demo Β· Code |
AI/ML: PyTorch β’ Transformers β’ mBERT β’ GPT β’ TransformerLens β’ Scikit-learn
Languages: Python β’ JavaScript β’ TypeScript
Frameworks: Next.js β’ React β’ FastAPI β’ Streamlit
Cloud: Google Cloud Run β’ HuggingFace Spaces β’ Vercel
Tools: Docker β’ Git β’ VS Code
- Building production-grade safety evaluation tools
- Exploring selective prediction methods for uncertainty quantification
- Contributing to AI safety frameworks for underserved communities
- π Portfolio:
- πΌ LinkedIn: faruna-godwin-abuh
- π€ HuggingFace: @Faruna01
- π§ Email: farunagodwin01@gmail.com
- Applied AI Safety roles (evaluation, red-teaming, interpretability)
- Research fellowships and residencies
- Collaboration on low-resource NLP and safety tooling
"AI should serve communities, not replace human judgment."

