Real-time face swap and video deepfake with a single click and only a single image.
## Disclaimer
This software is intended as a productive contribution to the AI-generated media industry. It aims to assist artists with tasks like animating custom characters or using them as models for clothing, etc.

We are aware of the potential for unethical applications and are committed to preventative measures. A built-in check prevents the program from processing inappropriate media (nudity, graphic content, sensitive material like war footage, etc.). We will continue to develop this project responsibly, adhering to law and ethics. We may shut down the project or add watermarks if legally required.

Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online. We are not responsible for end-user actions.
## Features
### Resizable Preview Window
Dynamically improve performance using the `--live-resizable` parameter.
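For example, a minimal invocation (a sketch; combine the flag with whatever other options you normally pass):

```bash
python run.py --live-resizable
```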

### Face Mapping
Track and change faces on the fly.

**Source Video:**

**Enable Face Mapping:**

**Map the Faces:**

**See the Magic!**
## Quick Start (Windows / Nvidia)
[Download pre-built version with CUDA support](https://hacksider.gumroad.com/l/vccdmm)
## Installation (Manual)
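The steps below assume the repository is already on your machine. If not, a typical way to get it (a sketch, assuming `git` is installed) is:

```bash
git clone https://github.com/hacksider/Deep-Live-Cam.git
cd Deep-Live-Cam
```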
### Basic Installation (CPU)
**3. Download Models**

2. [inswapper_128_fp16.onnx](https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128.onnx) (Note: use this [replacement version](https://github.com/facefusion/facefusion-assets/releases/download/models/inswapper_128.onnx) if you encounter issues.)

Place these files in the "**models**" folder.
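If you prefer the command line, one way to fetch the model linked above (a sketch, assuming `wget` is available; the URL is the same as in the link):

```bash
# Create the models folder and download the inswapper model into it
mkdir -p models
wget -O models/inswapper_128_fp16.onnx \
  "https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128.onnx"
```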
**4. Install Dependencies**

We highly recommend using a `venv` to avoid issues.
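A typical `venv` setup looks like this (a sketch, assuming a suitable Python 3 interpreter is on your PATH); activate it before installing the requirements:

```bash
# Create and activate a virtual environment in the project folder
python -m venv venv
# Windows
venv\Scripts\activate
# macOS / Linux
source venv/bin/activate
```

Then install the project's requirements: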
```bash
pip install -r requirements.txt
```
**For macOS:** Install or upgrade the `python-tk` package:
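For example, with Homebrew (the exact formula name and version suffix may differ on your system):

```bash
brew install python-tk@3.10
```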
**Run:** If you don't have a GPU, you can run Deep-Live-Cam using `python run.py`. Note that initial execution will download models (~300MB).
## Usage
**1. Image/Video Mode**
- Execute `python run.py`.
- Choose a source face image and a target image/video.
- Click "Start".
- The output will be saved in a directory named after the target video.

**2. Webcam Mode**
- Execute `python run.py`.
- Select a source face image.
- Click "Live".
- Wait for the preview to appear (10-30 seconds).
- Use a screen capture tool like OBS to stream.
- To change the face, select a new source image.

![demo-gif](demo.gif)
## Command Line Arguments
```
options:
  ...
```
Looking for a CLI mode? Using the `-s`/`--source` argument will make the program run in CLI mode.
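A minimal CLI invocation might look like the following (a sketch: `-s/--source` is documented above, while the `-t/--target` and `-o/--output` names are the usual roop-style flags; check `python run.py --help` for the exact option names):

```bash
python run.py \
  -s path/to/source_face.jpg \
  -t path/to/target_video.mp4 \
  -o path/to/output.mp4
```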
## Webcam Mode on WSL2 Ubuntu (Optional)
<details>
<summary>Click to see the details</summary>

If you use WSL2 on Windows 11, you will notice that Ubuntu WSL2 does not support USB webcams out of the box. This tutorial will guide you through the process of setting up WSL2 Ubuntu with USB webcam support, rebuilding the kernel, and preparing the environment for the Deep-Live-Cam project.

**1. Install WSL2 Ubuntu**

Install WSL2 Ubuntu from the Microsoft Store or using PowerShell:
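For example, from an elevated PowerShell prompt (the distribution name may vary):

```powershell
wsl --install -d Ubuntu
```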
5. Start Deep-Live-Cam with `python run.py --execution-provider cuda --max-memory 8`, where 8 can be changed to your GPU's VRAM in GB minus 1-2 GB (leave some memory for Windows). With an RTX 3080 (10 GB), 8 is a good value.
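For instance, the RTX 3080 (10 GB) case mentioned above:

```bash
python run.py --execution-provider cuda --max-memory 8
```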
**Final Notes**
- Steps 6 and 7 may be optional if the modules are built into the kernel and permissions are already set correctly.
- Always ensure you're using compatible versions of CUDA, ONNX, and other dependencies (see the quick check below).
- If issues persist, consider checking the Deep-Live-Cam project's specific requirements and troubleshooting steps.
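- A quick way to check that ONNX Runtime can see the CUDA execution provider (a sketch, assuming `onnxruntime-gpu` is installed in the active environment):

```bash
python -c "import onnxruntime as ort; print(ort.get_available_providers())"
# 'CUDAExecutionProvider' should appear in the printed list
```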
By following these steps, you should have a WSL2 Ubuntu environment with USB webcam support ready for the Deep-Live-Cam project. If you encounter any issues, refer back to the specific error messages and troubleshooting steps provided.

</details>

## Future Updates & Roadmap
For the latest experimental builds and features, see the [experimental branch](https://github.com/hacksider/Deep-Live-Cam/tree/experimental).

**TODO:**
- [x] Support multiple faces
- [ ] Develop a version for web app/service
- [ ] UI/UX enhancements for desktop app
- [ ] Speed up model loading
- [ ] Speed up real-time face swapping

This is an open-source project developed in our free time. Updates may be delayed.
## Credits
- [ffmpeg](https://ffmpeg.org/): for making video related operations easy
- [deepinsight](https://github.com/deepinsight): for their [insightface](https://github.com/deepinsight/insightface) project, which provided a well-made library and models. Please be reminded that the [use of the model is for non-commercial research purposes only](https://github.com/deepinsight/insightface?tab=readme-ov-file#license).
- [havok2-htwo](https://github.com/havok2-htwo): for sharing the webcam code
- [GosuDRM](https://github.com/GosuDRM): for the open version of roop
- [pereiraroland26](https://github.com/pereiraroland26): for multiple faces support
- [vic4key](https://github.com/vic4key): for supporting and contributing to this project
- [KRSHH](https://github.com/KRSHH): for updating the UI
- and [all developers](https://github.com/hacksider/Deep-Live-Cam/graphs/contributors) behind libraries used in this project.
- Footnote: this project was originally roop-cam; see the [full history of the code](https://github.com/hacksider/roop-cam). Please be informed that the base author of the code is [s0md3v](https://github.com/s0md3v/roop).