
Commit 2b3713b

Merge pull request #54 from google/model-version-update
Model version update (only metadata changes, no changes to weights)
2 parents 3d85c45 + 755e921

4 files changed (+21, -15 lines)

README.md

Lines changed: 11 additions & 9 deletions
```diff
@@ -139,8 +139,8 @@ Note that in this example, we have specified the country code only for the ensemble
 The `run_model.py` script recommended above will download model weights automatically. If you want to use the SpeciesNet model weights outside of our script, or if you plan to be offline when you first run the script, you can download model weights directly from Kaggle. Running our ensemble also requires [MegaDetector](https://github.com/agentmorris/MegaDetector), so in this list of links, we also include a direct link to the MegaDetector model weights.

 - [SpeciesNet page on Kaggle](https://www.kaggle.com/models/google/speciesnet)
-- [Direct link to version 4.0.1a weights](https://www.kaggle.com/api/v1/models/google/speciesnet/pyTorch/v4.0.1a/1/download) (the crop classifier)
-- [Direct link to version 4.0.1b weights](https://www.kaggle.com/api/v1/models/google/speciesnet/pyTorch/v4.0.1b/1/download) (the whole-image classifier)
+- [Direct link to version 4.0.2a weights](https://www.kaggle.com/api/v1/models/google/speciesnet/pyTorch/v4.0.2a/1/download) (the crop classifier)
+- [Direct link to version 4.0.2b weights](https://www.kaggle.com/api/v1/models/google/speciesnet/pyTorch/v4.0.2b/1/download) (the whole-image classifier)
 - [Direct link to MegaDetector weights](https://github.com/agentmorris/MegaDetector/releases/download/v5.0/md_v5a.0.0.pt)

 ## Contacting us
```
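If you want to script those direct downloads, here is a minimal sketch using only the Python standard library. The URL is the v4.0.2a link from the diff above; the output filename is illustrative, and Kaggle may require authentication for model downloads, in which case the Kaggle website or CLI is the fallback.

```python
# Minimal sketch: fetch the v4.0.2a weights from the direct link above.
# The output filename is illustrative; if Kaggle requires authentication,
# this request will fail and you should download via the website or CLI.
import urllib.request

WEIGHTS_URL = (
    "https://www.kaggle.com/api/v1/models/google/speciesnet"
    "/pyTorch/v4.0.2a/1/download"
)

urllib.request.urlretrieve(WEIGHTS_URL, "speciesnet-v4.0.2a.tar.gz")
```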
```diff
@@ -189,18 +189,20 @@ Depending on how you plan to run SpeciesNet, you may want to install additional

 There are two variants of the SpeciesNet classifier, which lend themselves to different ensemble strategies:

-- [v4.0.1a](model_cards/v4.0.1a) (default): Always-crop model, i.e. we run the detector first and crop the image to the top detection bounding box before feeding it to the species classifier.
-- [v4.0.1b](model_cards/v4.0.1b): Full-image model, i.e. we run both the detector and the species classifier on the full image, independently.
+- [v4.0.2a](model_cards/v4.0.1a) (default): Always-crop model, i.e. we run the detector first and crop the image to the top detection bounding box before feeding it to the species classifier.
+- [v4.0.2b](model_cards/v4.0.1b): Full-image model, i.e. we run both the detector and the species classifier on the full image, independently.

-run_model.py defaults to v4.0.1a, but you can specify one model or the other using the --model option, for example:
+Both links point to the model cards for the 4.0.1 models; model cards were not updated for the 4.0.2 release, which only included changes to geofencing rules and minor taxonomy updates.

-- `--model kaggle:google/speciesnet/pyTorch/v4.0.1a`
-- `--model kaggle:google/speciesnet/pyTorch/v4.0.1b`
+run_model.py defaults to v4.0.2a, but you can specify one model or the other using the --model option, for example:
+
+- `--model kaggle:google/speciesnet/pyTorch/v4.0.2a/1`
+- `--model kaggle:google/speciesnet/pyTorch/v4.0.2b/1`

 If you are a DIY type and you plan to run the models outside of our ensemble, a couple of notes:

-- The crop classifier (v4.0.1a) expects images to be cropped tightly to animals, then resized to 480x480px.
-- The whole-image classifier (v4.0.1b) expects images to have been cropped vertically to remove some pixels from the top and bottom, then resized to 480x480px.
+- The crop classifier (v4.0.2a) expects images to be cropped tightly to animals, then resized to 480x480px.
+- The whole-image classifier (v4.0.2b) expects images to have been cropped vertically to remove some pixels from the top and bottom, then resized to 480x480px.

 See [classifier.py](https://github.com/google/cameratrapai/blob/master/speciesnet/classifier.py) to see how preprocessing is implemented for both classifiers.
```
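To make the DIY preprocessing notes concrete, here is a rough sketch of the two input pipelines using Pillow. The bounding-box format and the vertical trim fraction are placeholders; the authoritative geometry and resampling settings are in classifier.py.

```python
# Rough sketch of the two preprocessing paths described in the README
# diff above. The bbox format and trim fraction are placeholders; see
# speciesnet/classifier.py for the authoritative implementation.
from PIL import Image

TARGET_SIZE = (480, 480)

def preprocess_crop(image: Image.Image, bbox_xywh: tuple) -> Image.Image:
    """v4.0.2a: crop tightly to the top detection, then resize."""
    x, y, w, h = bbox_xywh  # assumed absolute pixel coordinates
    return image.crop((x, y, x + w, y + h)).resize(TARGET_SIZE)

def preprocess_whole_image(image: Image.Image, trim_frac: float = 0.1) -> Image.Image:
    """v4.0.2b: trim pixels from the top and bottom, then resize.

    The 10% trim fraction is a placeholder, not the value classifier.py uses.
    """
    w, h = image.size
    trim = int(h * trim_frac)
    return image.crop((0, trim, w, h - trim)).resize(TARGET_SIZE)
```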

speciesnet/__init__.py

Lines changed: 7 additions & 3 deletions
```diff
@@ -23,8 +23,12 @@
 from speciesnet.multiprocessing import *
 from speciesnet.utils import *

-DEFAULT_MODEL = "kaggle:google/speciesnet/pyTorch/v4.0.1a"
+DEFAULT_MODEL = "kaggle:google/speciesnet/pyTorch/v4.0.2a/1"
+
+# This represents the model URLs that will be tested via pytest;
+# this does not indicate that only these models will work with
+# the speciesnet package.
 SUPPORTED_MODELS = [
-    "kaggle:google/speciesnet/pyTorch/v4.0.1a",
-    "kaggle:google/speciesnet/pyTorch/v4.0.1b",
+    "kaggle:google/speciesnet/pyTorch/v4.0.2a/1",
+    "kaggle:google/speciesnet/pyTorch/v4.0.2b/1",
 ]
```
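The model strings follow the pattern `kaggle:<owner>/<model>/<framework>/<version>/<variation>`; the trailing `/1` pinned in this commit is the Kaggle variation number. A small sketch of consuming these constants (the field names are inferred from the URL structure, not documented names):

```python
# Sketch: inspect the package's model-string constants. The field names
# below are inferred from the Kaggle URL structure, not documented names.
from speciesnet import DEFAULT_MODEL, SUPPORTED_MODELS

assert DEFAULT_MODEL in SUPPORTED_MODELS

# "kaggle:google/speciesnet/pyTorch/v4.0.2a/1"
provider, _, path = DEFAULT_MODEL.partition(":")
owner, model, framework, version, variation = path.split("/")
print(provider, owner, model, framework, version, variation)
```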

speciesnet/classifier_test.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -33,7 +33,7 @@

 AFRICAN_ELEPHANT = "55631055-3e0e-4b7a-9612-dedebe9f78b0;mammalia;proboscidea;elephantidae;loxodonta;africana;african elephant"
 AMERICAN_BLACK_BEAR = "436ddfdd-bc43-44c3-a25d-34671d3430a0;mammalia;carnivora;ursidae;ursus;americanus;american black bear"
-DOMESTIC_CATTLE = "aca65aaa-8c6d-4b69-94de-842b08b13bd6;mammalia;cetartiodactyla;bovidae;bos;taurus;domestic cattle"
+DOMESTIC_CATTLE = "aca65aaa-8c6d-4b69-94de-842b08b13bd6;mammalia;artiodactyla;bovidae;bos;taurus;domestic cattle"
 DOMESTIC_DOG = "3d80f1d6-b1df-4966-9ff4-94053c7a902a;mammalia;carnivora;canidae;canis;familiaris;domestic dog"
 OCELOT = "22976d14-d424-4f18-a67a-d8e1689cefcc;mammalia;carnivora;felidae;leopardus;pardalis;ocelot"
```
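These constants are semicolon-delimited taxonomy labels: a UUID followed by class, order, family, genus, species, and a common name (the field names are inferred from the examples, not taken from package docs). A small parsing sketch:

```python
# Parse a SpeciesNet taxonomy label; field names are inferred from the
# examples in the diff above, not from package documentation.
from typing import NamedTuple

class TaxonomyLabel(NamedTuple):
    uuid: str
    class_: str
    order: str
    family: str
    genus: str
    species: str
    common_name: str

def parse_label(label: str) -> TaxonomyLabel:
    return TaxonomyLabel(*label.split(";"))

cattle = parse_label(
    "aca65aaa-8c6d-4b69-94de-842b08b13bd6;mammalia;artiodactyla;"
    "bovidae;bos;taurus;domestic cattle"
)
assert cattle.order == "artiodactyla"  # updated from "cetartiodactyla"
```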

speciesnet/detector_test.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -611,5 +611,5 @@ def test_detections(self, predicted_vs_expected) -> None:
         for pred_det, exp_det in zip(predicted, expected):
             assert pred_det["category"] == exp_det["category"]
             assert pred_det["label"] == Detection.from_category(exp_det["category"])
-            assert pred_det["conf"] == pytest.approx(pred_det["conf"], abs=1e-3)
-            assert pred_det["bbox"] == pytest.approx(exp_det["bbox"], abs=1e-3)
+            assert pred_det["conf"] == pytest.approx(pred_det["conf"], abs=1.5e-3)
+            assert pred_det["bbox"] == pytest.approx(exp_det["bbox"], abs=1.5e-3)
```
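Both assertions loosen the absolute tolerance from 1e-3 to 1.5e-3. Note that the `conf` assertion compares `pred_det["conf"]` to itself, so it always passes; presumably `exp_det["conf"]` was intended, as in this sketch:

```python
# Presumed intent of the confidence check: compare predicted confidence
# against the expected value, within the same absolute tolerance.
import pytest

def check_detection(pred_det: dict, exp_det: dict) -> None:
    assert pred_det["conf"] == pytest.approx(exp_det["conf"], abs=1.5e-3)
    assert pred_det["bbox"] == pytest.approx(exp_det["bbox"], abs=1.5e-3)
```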
