
CS-25-339-Publicly-Detectable-Watermarking-for-Large-Language-Models

VCU College of Engineering

Short Project Description

Watermarking techniques have been used to safeguard AI-generated content. In this project, we study publicly detectable watermarking schemes. We investigate how to reconcile two important security properties, soundness and robustness, in watermarking for large language models.
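As a toy illustration of the idea (not this project's construction), the sketch below shows a hash-based detector: because the hash function is public, anyone can run detection without a secret key. All names here (`is_green`, `detect`, `GREEN_FRACTION`, the threshold value) are hypothetical and chosen for the example.

```python
import hashlib

GREEN_FRACTION = 0.5  # fraction of the vocabulary that is "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """A token is 'green' if a public hash of (prev_token, token) falls in the
    green fraction. The hash is public, so detection needs no secret key."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def green_count(tokens: list[str]) -> int:
    """Count green bigrams in a token sequence."""
    return sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))


def detect(tokens: list[str], threshold: float = 0.7) -> bool:
    """Flag text as watermarked when the green rate is well above the
    ~GREEN_FRACTION expected of unwatermarked text."""
    n = len(tokens) - 1
    return n > 0 and green_count(tokens) / n >= threshold


def generate_watermarked(vocab: list[str], length: int, start: str = "<s>") -> list[str]:
    """Toy 'generator' that prefers green tokens, biasing the sequence so the
    public detector fires."""
    out = [start]
    for _ in range(length):
        greens = [t for t in vocab if is_green(out[-1], t)]
        out.append((greens or vocab)[0])
    return out
```

The sketch also makes the project's tension concrete: raising `threshold` lowers false positives (soundness), but an adversary who paraphrases and flips some tokens lowers the green rate, so detection fails sooner (robustness).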

Folder Description

  • Documentation - all documentation the project team has created to describe the architecture, design, installation, and configuration of the project
  • Notes and Research - helpful information for understanding the tools and techniques used in the project
  • Project Deliverables - final PDF versions of all Fall and Spring major deliverables
  • Status Reports - project management documentation: weekly reports, milestones, etc.
  • scr - source code; create as many subdirectories as needed

Note: Commits behind this fork could be automatically synced, meaning that changes made in the template are pushed into your repo. Please do not discard commits ahead (these are the updates you make to this repository).

Project Team

  • Hongsheng Zhou - College of Engineering - Faculty Advisor
  • Joseph Hughes - CS - Project Manager/Financial Manager
  • Ronit Sharma - CS - Test Engineer
  • Neil Inge - CS - Logistics Manager
  • Waleed Elbanna - CS - Systems Engineer
