README.md (+1 −1)

```diff
@@ -29,7 +29,7 @@ pip
 ## Models
 
 1. Download models from [the latest release of this repository](https://github.com/GateNLP/ToxicClassifier/releases/latest) (currently available `kaggle.tar.gz`, `olid.tar.gz`)
-2. Decompress file inside `models/en/`
+2. Decompress file inside `models/en/` (which will create `models/en/kaggle` or `models/en/olid` respectively)
```
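The download-and-decompress steps above can be sketched as a short shell snippet. The archive layout (a top-level `kaggle/` directory inside `kaggle.tar.gz`) follows from the updated wording; the stand-in tarball and the `model.bin` file name below are hypothetical, used only to demonstrate the resulting directory structure without a network download:

```shell
# Simulate unpacking a release archive into models/en/.
# In practice the archive comes from the repository's releases page,
# e.g. curl -LO <release asset URL for kaggle.tar.gz>.
mkdir -p models/en kaggle
touch kaggle/model.bin            # hypothetical model file, for illustration
tar -czf kaggle.tar.gz kaggle     # stand-in for the downloaded archive
tar -xzf kaggle.tar.gz -C models/en/
ls models/en/                     # shows the created kaggle/ directory
```

The `-C models/en/` flag tells `tar` to extract into that directory, which is what produces `models/en/kaggle` (or `models/en/olid` for the other archive).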
docker/README.md (+1 −1)

```diff
@@ -4,7 +4,7 @@ The toxic and offensive classifiers are deployed on GATE Cloud via a two step pr
 
 ## Building the Python classifier images
 
-The Python-based classifiers for toxic and offensive language can be built using the `./build.sh` script in this directory. The images are pushed to the GitHub container registry:
+The Python-based classifiers for toxic (kaggle dataset) and offensive (olid dataset) language can be built using the `./build.sh` script in this directory. The relevant model files must be downloaded and unpacked in `../models` as described in [the main README](../README.md). The images are pushed to the GitHub container registry:
```