🎇
Deep into AI robustness
  • Skoltech
  • Moscow

nkalmykovsk/README.md

🧠 About Me

AI researcher focusing on AI Safety, adversarial robustness for computer vision models, and VLM safety alignment. My work spans adversarial attacks & defenses for neural image compression, secure CV pipelines, and scalable evaluation of foundation models.

🎓 Education

  • Ph.D. in Computational & Data Science and Engineering, Skoltech, Moscow (Sep 2024 – Present)
  • M.Sc. in Data Science, Skoltech, Moscow (Sep 2022 – Jun 2024)
  • B.Sc. in Physics, Novosibirsk State University, Novosibirsk (Sep 2017 – Jun 2021)

📚 Publications

For a complete list, please visit:

📫 Reach Me

📌 Pinned Repositories

  1. tmla (Public)

     T-MLA: A Targeted Multiscale Log-Exponential Attack Framework for Neural Image Compression

     Jupyter Notebook · 9 stars

  2. Skoltech-courses (Public)

     Jupyter Notebook · 7 stars

  3. AlexeyKKov/llm_planning (Public)

     Framework for evaluating planning with Large Language Models

     Jupyter Notebook · 7 stars · 1 fork