
# Hi, I'm Faruna Godwin Abuh 👋

**Applied AI Safety Engineer | Building transparent AI systems for real-world impact**

I focus on model evaluation, interpretability, and low-resource NLP, building AI systems that are not only capable but also trustworthy and safe to deploy.


## 🔬 What I'm Working On

- 🛡️ Red-teaming LLMs for adversarial robustness testing
- 🔍 Mechanistic interpretability analysis of transformer attention patterns
- 🌍 Low-resource NLP for African languages (Igala-English translation)
- ⚡ Custom GPT architectures built from scratch
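The from-scratch GPT work above centers on causal self-attention, the mechanism that keeps a decoder-only model from looking at future tokens. A minimal PyTorch sketch of one such attention layer (class and variable names here are illustrative, not taken from the actual repository):

```python
# Minimal causal self-attention, the core of a decoder-only GPT block.
import torch
import torch.nn as nn

class CausalSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)   # fused Q, K, V projection
        self.proj = nn.Linear(d_model, d_model)      # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # split heads: (B, T, C) -> (B, n_heads, T, head_dim)
        shape = (B, T, self.n_heads, C // self.n_heads)
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))
        # scaled dot-product scores with a causal mask: each position
        # may only attend to itself and earlier positions
        att = (q @ k.transpose(-2, -1)) / (k.size(-1) ** 0.5)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        att = att.masked_fill(mask, float("-inf")).softmax(dim=-1)
        out = (att @ v).transpose(1, 2).contiguous().view(B, T, C)
        return self.proj(out)

x = torch.randn(2, 8, 64)                 # (batch, seq_len, d_model)
y = CausalSelfAttention(64, 4)(x)
print(y.shape)                            # same shape as the input
```

Stacking blocks of this attention plus an MLP, with token and position embeddings underneath, gives the standard decoder-only architecture.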

## 🎯 Featured Projects

| Project | Description | Tech Stack | Links |
| --- | --- | --- | --- |
| 🛡️ Red-Teaming LLMs | Automated adversarial testing framework for LLM vulnerabilities | Python, Transformers, Streamlit | Live Demo · Code |
| 🌐 Igala-English NMT | Neural machine translation for low-resource African languages | PyTorch, mBERT, Transformers | Live Demo · Code |
| 🔬 Interpretability Analysis | Visualizing transformer attention patterns in low-resource NMT | PyTorch, TransformerLens, Plotly | Live Demo · Code |
| ⚡ Igala GPT from Scratch | Custom decoder-only transformer with BPE tokenizer | PyTorch, NumPy, custom tokenizer | Live Demo · Code |
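The BPE tokenizer mentioned for the from-scratch GPT can be sketched in a few lines: count adjacent symbol pairs, then merge the most frequent pair into a new symbol, repeating until the vocabulary is full. This toy corpus and the helper names are illustrative, not the project's actual code:

```python
# One training step of byte-pair encoding (BPE): find and merge the
# most frequent adjacent pair of symbols.
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("low lower lowest")
pair = most_frequent_pair(tokens)    # ('l', 'o') occurs three times
tokens = merge_pair(tokens, pair)
print(tokens[:4])                    # ['lo', 'w', ' ', 'lo']
```

Running the loop for N iterations yields N merge rules, which are then applied in order to tokenize new text.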

πŸ› οΈ Tech Stack

AI/ML: PyTorch β€’ Transformers β€’ mBERT β€’ GPT β€’ TransformerLens β€’ Scikit-learn
Languages: Python β€’ JavaScript β€’ TypeScript
Frameworks: Next.js β€’ React β€’ FastAPI β€’ Streamlit
Cloud: Google Cloud Run β€’ HuggingFace Spaces β€’ Vercel
Tools: Docker β€’ Git β€’ VS Code


## 📈 Current Focus

- Building production-grade safety evaluation tools
- Exploring selective prediction methods for uncertainty quantification
- Contributing to AI safety frameworks for underserved communities
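Selective prediction, as used above, lets a model abstain on inputs where it is uncertain rather than guess. A minimal sketch using a max-softmax-probability threshold (the threshold value and toy logits are illustrative assumptions, not from any of the projects listed here):

```python
# Selective prediction: answer only when the top softmax probability
# clears a confidence threshold; otherwise abstain (return None).
import numpy as np

def selective_predict(logits: np.ndarray, threshold: float = 0.7):
    """Return (predicted_class, confidence), with None for an abstention."""
    z = logits - logits.max()                 # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum()
    conf = float(probs.max())
    pred = int(probs.argmax()) if conf >= threshold else None
    return pred, conf

print(selective_predict(np.array([4.0, 0.5, 0.1])))   # confident: predicts class 0
print(selective_predict(np.array([1.0, 0.9, 0.8])))   # near-uniform: abstains
```

Sweeping the threshold trades coverage (fraction of inputs answered) against accuracy on the answered subset, which is the standard evaluation for these methods.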

## 📫 Connect With Me

## 💡 Open To

- Applied AI Safety roles (evaluation, red-teaming, interpretability)
- Research fellowships and residencies
- Collaboration on low-resource NLP and safety tooling


"AI should serve communities, not replace human judgment."

## Pinned Repositories

1. **llm-red-teaming-framework** (Python): Automated adversarial testing framework for evaluating LLM vulnerabilities across 5 attack categories
2. **igala-mbert-interpretability** (Python): Mechanistic interpretability analysis of mBERT attention patterns in low-resource Igala-English translation
3. **igala-gpt-from-scratch** (Python): Custom decoder-only transformer for Igala language generation, built from first principles with multi-head attention and a BPE tokenizer
4. **gpt2-safety-calibration** (Jupyter Notebook): Model calibration and mechanistic interpretability analysis of GPT-2 using Direct Logit Attribution and selective prediction methods
5. **igala-english-nmt** (Python): Neural machine translation for Igala (a low-resource Nigerian language) using mBERT fine-tuned on 3,253 parallel sentences