Watermarking techniques are used to safeguard AI-generated content. In this project, we study publicly detectable watermarking schemes for large language models and investigate how to reconcile two important security properties: soundness (an unwatermarked text is not falsely flagged) and robustness (the watermark survives edits to the text).
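To make the soundness/robustness tension concrete, here is a minimal toy sketch of a publicly detectable statistical watermark check in the style of "green list" schemes. All names, the hash-based green-list rule, and the seed string are illustrative assumptions for this sketch, not the project's actual scheme: anyone holding the public seed can run detection, honest text scores near 0.5, and a detection threshold trades off soundness (false positives) against robustness (tolerated edits).

```python
import hashlib

PUBLIC_SEED = "public-detection-key"  # illustrative public detection key

def is_green(token: str, seed: str = PUBLIC_SEED) -> bool:
    """A token is 'green' if its seeded hash lands in half of hash space.
    A watermarking generator would bias sampling toward green tokens."""
    digest = hashlib.sha256((seed + token).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str], seed: str = PUBLIC_SEED) -> float:
    """Fraction of green tokens. Unwatermarked text sits near 0.5;
    watermarked text is pushed well above it. Soundness: a threshold
    comfortably above 0.5 avoids flagging honest text. Robustness:
    editing a few tokens only nudges this fraction slightly."""
    return sum(is_green(t, seed) for t in tokens) / len(tokens)
```

For example, `green_fraction` over ordinary text hovers around 0.5, while text generated with a green-token bias scores close to 1.0, so a detector might flag text above, say, 0.75; raising that threshold improves soundness but lets an adversary erase the watermark with fewer edits.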
| Folder | Description |
|---|---|
| Documentation | All documentation the project team has created to describe the architecture, design, installation, and configuration of the project |
| Notes and Research | Relevant information for understanding the tools and techniques used in the project |
| Project Deliverables | Final PDF versions of all Fall and Spring major deliverables |
| Status Reports | Project management documentation: weekly reports, milestones, etc. |
| src | Source code (create as many subdirectories as needed) |
Note: Commits behind this fork may be automatically synced, meaning that changes made in the template are pushed into your repo. Please do not discard commits that are ahead (these are the updates you have made to this repository).
- Hongsheng Zhou - College of Engineering - Faculty Advisor
- Joseph Hughes - CS - Project Manager/Financial Manager
- Ronit Sharma - CS - Test Engineer
- Neil Inge - CS - Logistics Manager
- Waleed Elbanna - CS - Systems Engineer