README.md (+19 −7)

@@ -10,6 +10,24 @@ capabilities (but hopefully not its complexity!).
This repository adds the following (not yet the complete list):

* Dataset tool
    * Add `--center-crop-tall`: adds vertical black bars to the sides of tall images, in the same vein as the horizontal bars added by `--center-crop-wide` (see the padding sketch after this list).
    * Grayscale images in the dataset are converted to `RGB`.
    * If the dataset tool encounters an error, it is printed along with the offending image, but the tool continues with the rest of the dataset ([pull #39](https://github.com/NVlabs/stylegan3/pull/39) from [Andreas Jansson](https://github.com/andreasjansson)).
    * **TODO**: Add multi-crop, as used in [Earth View](https://github.com/PDillis/earthview#multi-crop---data_augmentpy).
* Training
    * `--mirrory`: Added vertical mirroring for doubling the dataset size
    * `--gamma`: If no R1 regularization weight is provided, the heuristic formula from [StyleGAN2](https://github.com/NVlabs/stylegan2) will be used (see the gamma sketch after this list)
    * `--augpipe`: [StyleGAN2-ADA's](https://github.com/NVlabs/stylegan2-ada-pytorch) full list of augmentation pipelines is now available, e.g., `blit`, `geom`, `bgc`, `bgcfnc`, etc.
    * `--img-snap`: When to save snapshot images, now independent of when the model itself is saved
    * `--snap-res`: The resolution of the snapshots, depending on your screen resolution or how many images you wish to see per tick. Available resolutions: `1080p`, `4k`, and `8k`
    * `--resume-kimg`: Starting number of `kimg`, useful when continuing training from a previous run
    * `--outdir`: Automatically set to `training-runs`
    * `--metrics`: Now set to `None` by default, so there's no need to worry about this one
    * `--resume`: All available pre-trained models from NVIDIA can be found in a simple dictionary, depending on the `--cfg` used (sketched after this list).
      For example, if `--cfg=stylegan3-r`, then to transfer-learn from FFHQU at 1024 resolution, set `--resume=ffhqu1024`. The full list is available [here](https://github.com/PDillis/stylegan3-fun/blob/0bfa8e108487b50d6ecb73718c60497f063d8c17/train.py#L297).
* Generate a static image or a [video](https://youtu.be/hEJKWL2VQTE) with a feedback loop
    * Start from a random image (`random` or `perlin`, using [Mathieu Duchesneau's implementation](https://github.com/duchesneaumathieu/pyperlin)) or from an existing one
* Expansion on GUI/`visualizer.py`
    * Added the rest of the affine transformations
* General model and code additions
    * No longer necessary to specify `--outdir` when running the code, as the output directory will be automatically generated
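As a quick illustration of the `--center-crop-tall` option described in the list above, here is a minimal, hypothetical sketch of such a transform; it is not the repository's actual `dataset_tool.py` code, and the scaling step and function name are assumptions:

```python
# Illustrative sketch of a "center-crop-tall"-style transform: assume the image
# is first scaled so its height matches the target resolution, then pasted onto
# a black square canvas, producing vertical black bars on the left and right.
from PIL import Image

def center_crop_tall(img: Image.Image, output_size: int) -> Image.Image:
    width, height = img.size
    new_width = max(1, round(width * output_size / height))  # keep aspect ratio, fit the height
    img = img.resize((new_width, output_size), Image.LANCZOS)
    canvas = Image.new('RGB', (output_size, output_size), (0, 0, 0))  # black background
    canvas.paste(img, ((output_size - new_width) // 2, 0))            # center horizontally
    return canvas
```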
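The `--gamma` fallback mentioned in the list above can be sketched in one function; the sketch assumes the same constants as StyleGAN2-ADA's auto configuration (`gamma = 0.0002 * resolution**2 / batch_size`):

```python
# Sketch of the heuristic R1 weight used when --gamma is not given, assuming
# the StyleGAN2-ADA auto-config formula: gamma = 0.0002 * resolution**2 / batch.
def heuristic_gamma(resolution: int, batch_size: int) -> float:
    return 0.0002 * (resolution ** 2) / batch_size

print(heuristic_gamma(1024, 32))  # 6.5536 for a 1024x1024 run with total batch size 32
```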
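The `--resume` shorthand boils down to a dictionary lookup keyed by `--cfg`. The sketch below is hypothetical (shorthand names other than `ffhqu1024` and all URL values are placeholders); the real mapping lives at the `train.py` line linked above:

```python
# Hypothetical sketch of the --resume shorthand lookup; the real dictionary with
# NVIDIA's .pkl URLs lives in train.py (linked above). URL values are placeholders.
resume_specs = {
    'stylegan3-r': {'ffhqu1024': '<URL of the stylegan3-r FFHQ-U 1024x1024 .pkl>'},
    'stylegan3-t': {'ffhqu1024': '<URL of the stylegan3-t FFHQ-U 1024x1024 .pkl>'},
}

def resolve_resume(cfg: str, resume: str) -> str:
    # Known shorthand -> pre-trained network URL; anything else is treated as a local path/URL.
    return resume_specs.get(cfg, {}).get(resume, resume)
```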
The diff also changes the default of the `--layers` option (the Discriminator layers used as features in the gradient-ascent scripts) from `['b16_conv1']` to `'b16_conv0'`:

@@ -385,4 +397,4 @@ @click.option('--iterations', '-it', type=click.IntRange(min=1), help='Number of gradient ascent steps per octave', default=10, show_default=True)
 # Layer options
-@click.option('--layers', type=parse_layers, help='Layers of the Discriminator to use as the features. If None, will default to the output of D.', default=['b16_conv1'], show_default=True)
+@click.option('--layers', type=parse_layers, help='Layers of the Discriminator to use as the features. If None, will default to the output of D.', default='b16_conv0', show_default=True)
 @click.option('--normed', 'norm_model_layers', is_flag=True, help='Add flag to divide the features of each layer of D by its number of elements')
 @click.option('--sqrt-normed', 'sqrt_norm_model_layers', is_flag=True, help='Add flag to divide the features of each layer of D by the square root of its number of elements')
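As a rough picture of what the `--normed` and `--sqrt-normed` flags above do, the sketch below rescales a layer's features by its number of elements (or the square root of it); where exactly this scaling happens in the repository is an assumption, so treat this as illustrative only:

```python
import torch

def scale_features(feat: torch.Tensor, normed: bool = False, sqrt_normed: bool = False) -> torch.Tensor:
    """Rescale a Discriminator layer's features as selected by --normed / --sqrt-normed."""
    if normed:
        return feat / feat.numel()          # divide by the number of elements
    if sqrt_normed:
        return feat / feat.numel() ** 0.5   # divide by the square root of that number
    return feat                             # default: leave the features untouched
```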