
Commit cc1a717

Merge pull request #425 from allenai/spacy_32_upgrade: Spacy 32 upgrade
2 parents cc0ace9 + 8ff659f

17 files changed: +367 -148 lines

Dockerfile (+1 -1)

@@ -18,7 +18,7 @@ WORKDIR /work
 COPY requirements.in .

 RUN pip install -r requirements.in
-RUN pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_sm-0.4.0.tar.gz
+RUN pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_core_sci_sm-0.5.0.tar.gz
 RUN python -m spacy download en_core_web_sm
 RUN python -m spacy download en_core_web_md

README.md (+9 -9)

@@ -19,7 +19,7 @@ pip install scispacy
 to install a model (see our full selection of available models below), run a command like the following:

 ```bash
-pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_sm-0.4.0.tar.gz
+pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_core_sci_sm-0.5.0.tar.gz
 ```

 Note: We strongly recommend that you use an isolated Python environment (such as virtualenv or conda) to install scispacy.
@@ -76,14 +76,14 @@ pip install CMD-V(to paste the copied URL)

 | Model | Description | Install URL
 |:---------------|:------------------|:----------|
-| en_core_sci_sm | A full spaCy pipeline for biomedical data with a ~100k vocabulary. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_sm-0.4.0.tar.gz)|
-| en_core_sci_md | A full spaCy pipeline for biomedical data with a ~360k vocabulary and 50k word vectors. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_md-0.4.0.tar.gz)|
-| en_core_sci_lg | A full spaCy pipeline for biomedical data with a ~785k vocabulary and 600k word vectors. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_lg-0.4.0.tar.gz)|
-| en_core_sci_scibert | A full spaCy pipeline for biomedical data with a ~785k vocabulary and `allenai/scibert-base` as the transformer model. You may want to [use a GPU](https://spacy.io/usage#gpu) with this model. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_scibert-0.4.0.tar.gz)|
-| en_ner_craft_md| A spaCy NER model trained on the CRAFT corpus.|[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_ner_craft_md-0.4.0.tar.gz)|
-| en_ner_jnlpba_md | A spaCy NER model trained on the JNLPBA corpus.| [Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_ner_jnlpba_md-0.4.0.tar.gz)|
-| en_ner_bc5cdr_md | A spaCy NER model trained on the BC5CDR corpus. | [Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_ner_bc5cdr_md-0.4.0.tar.gz)|
-| en_ner_bionlp13cg_md | A spaCy NER model trained on the BIONLP13CG corpus. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_ner_bionlp13cg_md-0.4.0.tar.gz)|
+| en_core_sci_sm | A full spaCy pipeline for biomedical data with a ~100k vocabulary. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_core_sci_sm-0.5.0.tar.gz)|
+| en_core_sci_md | A full spaCy pipeline for biomedical data with a ~360k vocabulary and 50k word vectors. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_core_sci_md-0.5.0.tar.gz)|
+| en_core_sci_lg | A full spaCy pipeline for biomedical data with a ~785k vocabulary and 600k word vectors. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_core_sci_lg-0.5.0.tar.gz)|
+| en_core_sci_scibert | A full spaCy pipeline for biomedical data with a ~785k vocabulary and `allenai/scibert-base` as the transformer model. You may want to [use a GPU](https://spacy.io/usage#gpu) with this model. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_core_sci_scibert-0.5.0.tar.gz)|
+| en_ner_craft_md| A spaCy NER model trained on the CRAFT corpus.|[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_ner_craft_md-0.5.0.tar.gz)|
+| en_ner_jnlpba_md | A spaCy NER model trained on the JNLPBA corpus.| [Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_ner_jnlpba_md-0.5.0.tar.gz)|
+| en_ner_bc5cdr_md | A spaCy NER model trained on the BC5CDR corpus. | [Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_ner_bc5cdr_md-0.5.0.tar.gz)|
+| en_ner_bionlp13cg_md | A spaCy NER model trained on the BIONLP13CG corpus. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_ner_bionlp13cg_md-0.5.0.tar.gz)|


 ## Additional Pipeline Components
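
For context on what the bumped URLs provide: a minimal sketch of checking that the newly pinned 0.5.0 release is the model actually loaded. The example sentence and the expected version string are illustrative assumptions, not output from this commit.

```python
import spacy

# Assumes the README's install step has run, e.g.:
# pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_core_sci_sm-0.5.0.tar.gz
nlp = spacy.load("en_core_sci_sm")

print(nlp.meta["version"])  # expected to report the 0.5.0 model release
print(spacy.__version__)    # the spaCy version line the upgraded models target

doc = nlp("EGFR mutations are common in non-small cell lung cancer.")
print([(ent.text, ent.label_) for ent in doc.ents])
```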

configs/base_ner.cfg (+13 -10)

@@ -1,3 +1,6 @@
+[vars]
+include_static_vectors = null
+
 [paths]
 vectors = null
 init_tok2vec = null
@@ -31,26 +34,26 @@ moves = null
 update_with_oracle_cut_size = 100

 [components.ner.model]
-@architectures = "spacy.TransitionBasedParser.v1"
+@architectures = "spacy.TransitionBasedParser.v2"
 state_type = "ner"
 extra_state_tokens = false
-hidden_width = 64
-maxout_pieces = 2
+hidden_width = 128
+maxout_pieces = 3
 use_upper = true
 nO = null

 [components.ner.model.tok2vec]
-@architectures = "spacy.Tok2Vec.v1"
+@architectures = "spacy.Tok2Vec.v2"

 [components.ner.model.tok2vec.embed]
-@architectures = "spacy.MultiHashEmbed.v1"
+@architectures = "spacy.MultiHashEmbed.v2"
 width = 96
-attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE"]
-rows = [5000, 2500, 2500, 2500]
-include_static_vectors = true
+attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE", "SPACY"]
+rows = [5000, 2500, 2500, 2500, 100]
+include_static_vectors = ${vars.include_static_vectors}

 [components.ner.model.tok2vec.encode]
-@architectures = "spacy.MaxoutWindowEncoder.v1"
+@architectures = "spacy.MaxoutWindowEncoder.v2"
 width = 96
 depth = 4
 window_size = 1
@@ -82,7 +85,7 @@ dev_corpus = "corpora.dev"
 train_corpus = "corpora.train"
 seed = ${system.seed}
 gpu_allocator = ${system.gpu_allocator}
-dropout = 0.2
+dropout = 0.1
 accumulate_gradient = 1
 patience = 0
 max_epochs = 7
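
The new [vars] block leaves include_static_vectors unset so one base config can serve both vector-less and vector-backed pipelines, with the value supplied as an override at train time. A minimal sketch of applying such an override, assuming spaCy 3.x's standard config loader and this repo's configs/base_ner.cfg path:

```python
from spacy.util import load_config

# Fill the [vars] placeholder with an override, using the same dot-notation
# form that `spacy train` accepts on the command line.
config = load_config(
    "configs/base_ner.cfg",
    overrides={"vars.include_static_vectors": True},
)

# The override is applied to the raw config; interpolation later resolves
# ${vars.include_static_vectors} inside the embed block to this value.
print(config["vars"]["include_static_vectors"])  # True
```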

configs/base_ner_scibert.cfg (+8 -8)

@@ -5,7 +5,7 @@ parser_tagger_path = null
 vocab_path = null

 [system]
-gpu_allocator = null
+gpu_allocator = "pytorch"
 seed = 0

 [nlp]
@@ -31,7 +31,7 @@ moves = null
 update_with_oracle_cut_size = 100

 [components.ner.model]
-@architectures = "spacy.TransitionBasedParser.v1"
+@architectures = "spacy.TransitionBasedParser.v2"
 state_type = "ner"
 extra_state_tokens = false
 hidden_width = 64
@@ -40,17 +40,17 @@ use_upper = true
 nO = null

 [components.ner.model.tok2vec]
-@architectures = "spacy.Tok2Vec.v1"
+@architectures = "spacy.Tok2Vec.v2"

 [components.ner.model.tok2vec.embed]
-@architectures = "spacy.MultiHashEmbed.v1"
+@architectures = "spacy.MultiHashEmbed.v2"
 width = 96
-attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE"]
-rows = [5000, 2500, 2500, 2500]
+attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE", "SPACY"]
+rows = [5000, 2500, 2500, 2500, 100]
 include_static_vectors = false

 [components.ner.model.tok2vec.encode]
-@architectures = "spacy.MaxoutWindowEncoder.v1"
+@architectures = "spacy.MaxoutWindowEncoder.v2"
 width = 96
 depth = 4
 window_size = 1
@@ -83,7 +83,7 @@ dev_corpus = "corpora.dev"
 train_corpus = "corpora.train"
 seed = ${system.seed}
 gpu_allocator = ${system.gpu_allocator}
-dropout = 0.2
+dropout = 0.1
 accumulate_gradient = 1
 patience = 0
 max_epochs = 7
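
With gpu_allocator switched to "pytorch", this transformer NER config expects to run on a GPU and lets PyTorch manage the memory pool. A minimal, illustrative sketch of requesting the GPU from Python before exercising the pipeline; spacy.prefer_gpu is spaCy's standard helper, and the model name assumes en_core_sci_scibert 0.5.0 is installed:

```python
import spacy

# Use the GPU if one is visible; fall back to CPU otherwise.
if spacy.prefer_gpu():
    print("running on GPU")
else:
    print("no GPU available, running on CPU")

nlp = spacy.load("en_core_sci_scibert")
doc = nlp("Myeloid derived suppressor cells (MDSC) are immature myeloid cells.")
print([(ent.text, ent.label_) for ent in doc.ents])
```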

configs/base_parser_tagger.cfg (+11 -8)

@@ -1,3 +1,6 @@
+[vars]
+include_static_vectors = null
+
 [paths]
 genia_train = "project_data/genia_train.spacy"
 genia_dev = "project_data/genia_dev.spacy"
@@ -35,7 +38,7 @@ moves = null
 update_with_oracle_cut_size = 100

 [components.parser.model]
-@architectures = "spacy.TransitionBasedParser.v1"
+@architectures = "spacy.TransitionBasedParser.v2"
 state_type = "parser"
 extra_state_tokens = false
 hidden_width = 128
@@ -64,17 +67,17 @@ upstream = "*"
 factory = "tok2vec"

 [components.tok2vec.model]
-@architectures = "spacy.Tok2Vec.v1"
+@architectures = "spacy.Tok2Vec.v2"

 [components.tok2vec.model.embed]
-@architectures = "spacy.MultiHashEmbed.v1"
+@architectures = "spacy.MultiHashEmbed.v2"
 width = ${components.tok2vec.model.encode.width}
-attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE"]
-rows = [5000, 2500, 2500, 2500]
-include_static_vectors = true
+attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE", "SPACY"]
+rows = [5000, 2500, 2500, 2500, 100]
+include_static_vectors = ${vars.include_static_vectors}

 [components.tok2vec.model.encode]
-@architectures = "spacy.MaxoutWindowEncoder.v1"
+@architectures = "spacy.MaxoutWindowEncoder.v2"
 width = 96
 depth = 4
 window_size = 1
@@ -106,7 +109,7 @@ dev_corpus = "corpora.dev"
 train_corpus = "corpora.train"
 seed = ${system.seed}
 gpu_allocator = ${system.gpu_allocator}
-dropout = 0.2
+dropout = 0.1
 accumulate_gradient = 1
 patience = 0
 max_epochs = 20

configs/base_parser_tagger_scibert.cfg (+8 -15)

@@ -7,7 +7,7 @@ init_tok2vec = null
 vocab_path = null

 [system]
-gpu_allocator = "pytorch"
+gpu_allocator = null
 seed = 0

 [nlp]
@@ -36,12 +36,12 @@ moves = null
 update_with_oracle_cut_size = 100

 [components.parser.model]
-@architectures = "spacy.TransitionBasedParser.v1"
+@architectures = "spacy.TransitionBasedParser.v2"
 state_type = "parser"
 extra_state_tokens = false
 hidden_width = 128
 maxout_pieces = 3
-use_upper = true
+use_upper = false
 nO = null

 [components.parser.model.tok2vec]
@@ -69,9 +69,10 @@ max_batch_items = 4096
 set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}

 [components.transformer.model]
-@architectures = "spacy-transformers.TransformerModel.v1"
+@architectures = "spacy-transformers.TransformerModel.v3"
 name = "allenai/scibert_scivocab_uncased"
 tokenizer_config = {"use_fast": true}
+mixed_precision = true

 [components.transformer.model.get_spans]
 @span_getters = "spacy-transformers.strided_spans.v1"
@@ -105,7 +106,7 @@ dev_corpus = "corpora.dev"
 train_corpus = "corpora.train"
 seed = ${system.seed}
 gpu_allocator = ${system.gpu_allocator}
-dropout = 0.2
+dropout = 0.1
 accumulate_gradient = 1
 patience = 0
 max_epochs = 8
@@ -120,8 +121,8 @@ get_length = null

 [training.batcher.size]
 @schedules = "compounding.v1"
-start = 16
-stop = 64
+start = 4
+stop = 12
 compound = 1.001
 t = 0.0

@@ -157,14 +158,6 @@ ents_r = 0.0
 [pretraining]

 [initialize]
-vectors = ${paths.vectors}
-init_tok2vec = ${paths.init_tok2vec}
-vocab_data = ${paths.vocab_path}
-lookups = null
-
-[initialize.components]
-
-[initialize.tokenizer]

 [initialize.before_init]
 @callbacks = "replace_tokenizer"
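
The batcher change shrinks the compounding batch-size schedule (start 16→4, stop 64→12), trading throughput for GPU memory with the larger transformer. A rough, self-contained sketch of what compounding.v1 with these settings yields; it mirrors the registered schedule's documented behaviour rather than importing it:

```python
from itertools import islice

def compounding(start: float, stop: float, compound: float):
    """Yield batch sizes that grow geometrically from start, capped at stop."""
    size = start
    while True:
        yield min(size, stop)
        size *= compound

# start=4, stop=12, compound=1.001 as in the updated [training.batcher.size] block.
print([round(s, 3) for s in islice(compounding(4.0, 12.0, 1.001), 5)])
# [4.0, 4.004, 4.008, 4.012, 4.016] -- sizes creep up toward 12 over training.
```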

configs/base_specialized_ner.cfg (+13 -10)

@@ -1,3 +1,6 @@
+[vars]
+include_static_vectors = null
+
 [paths]
 vectors = null
 init_tok2vec = null
@@ -33,26 +36,26 @@ moves = null
 update_with_oracle_cut_size = 100

 [components.ner.model]
-@architectures = "spacy.TransitionBasedParser.v1"
+@architectures = "spacy.TransitionBasedParser.v2"
 state_type = "ner"
 extra_state_tokens = false
-hidden_width = 64
-maxout_pieces = 2
+hidden_width = 128
+maxout_pieces = 3
 use_upper = true
 nO = null

 [components.ner.model.tok2vec]
-@architectures = "spacy.Tok2Vec.v1"
+@architectures = "spacy.Tok2Vec.v2"

 [components.ner.model.tok2vec.embed]
-@architectures = "spacy.MultiHashEmbed.v1"
+@architectures = "spacy.MultiHashEmbed.v2"
 width = 96
-attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE"]
-rows = [5000, 2500, 2500, 2500]
-include_static_vectors = true
+attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE", "SPACY"]
+rows = [5000, 2500, 2500, 2500, 100]
+include_static_vectors = ${vars.include_static_vectors}

 [components.ner.model.tok2vec.encode]
-@architectures = "spacy.MaxoutWindowEncoder.v1"
+@architectures = "spacy.MaxoutWindowEncoder.v2"
 width = 96
 depth = 4
 window_size = 1
@@ -82,7 +85,7 @@ dev_corpus = "corpora.dev"
 train_corpus = "corpora.train"
 seed = ${system.seed}
 gpu_allocator = ${system.gpu_allocator}
-dropout = 0.2
+dropout = 0.1
 accumulate_gradient = 1
 patience = 0
 max_epochs = 7

docs/index.md (+16 -17)

@@ -17,15 +17,14 @@ pip install <Model URL>

 | Model | Description | Install URL
 |:---------------|:------------------|:----------|
-| en_core_sci_sm | A full spaCy pipeline for biomedical data. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_sm-0.4.0.tar.gz)|
-| en_core_sci_md | A full spaCy pipeline for biomedical data with a larger vocabulary and 50k word vectors. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_md-0.4.0.tar.gz)|
-| en_core_sci_scibert | A full spaCy pipeline for biomedical data with a ~785k vocabulary and `allenai/scibert-base` as the transformer model. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_scibert-0.4.0.tar.gz)|
-| en_core_sci_lg | A full spaCy pipeline for biomedical data with a larger vocabulary and 600k word vectors. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_lg-0.4.0.tar.gz)|
-| en_ner_craft_md| A spaCy NER model trained on the CRAFT corpus.|[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_ner_craft_md-0.4.0.tar.gz)|
-| en_ner_jnlpba_md | A spaCy NER model trained on the JNLPBA corpus.| [Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_ner_jnlpba_md-0.4.0.tar.gz)|
-| en_ner_bc5cdr_md | A spaCy NER model trained on the BC5CDR corpus. | [Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_ner_bc5cdr_md-0.4.0.tar.gz)|
-| en_ner_bionlp13cg_md | A spaCy NER model trained on the BIONLP13CG corpus. | [Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_ner_bionlp13cg_md-0.4.0.tar.gz)|
-
+| en_core_sci_sm | A full spaCy pipeline for biomedical data. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_core_sci_sm-0.5.0.tar.gz)|
+| en_core_sci_md | A full spaCy pipeline for biomedical data with a larger vocabulary and 50k word vectors. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_core_sci_md-0.5.0.tar.gz)|
+| en_core_sci_scibert | A full spaCy pipeline for biomedical data with a ~785k vocabulary and `allenai/scibert-base` as the transformer model. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_core_sci_scibert-0.5.0.tar.gz)|
+| en_core_sci_lg | A full spaCy pipeline for biomedical data with a larger vocabulary and 600k word vectors. |[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_core_sci_lg-0.5.0.tar.gz)|
+| en_ner_craft_md| A spaCy NER model trained on the CRAFT corpus.|[Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_ner_craft_md-0.5.0.tar.gz)|
+| en_ner_jnlpba_md | A spaCy NER model trained on the JNLPBA corpus.| [Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_ner_jnlpba_md-0.5.0.tar.gz)|
+| en_ner_bc5cdr_md | A spaCy NER model trained on the BC5CDR corpus. | [Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_ner_bc5cdr_md-0.5.0.tar.gz)|
+| en_ner_bionlp13cg_md | A spaCy NER model trained on the BIONLP13CG corpus. | [Download](https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.0/en_ner_bionlp13cg_md-0.5.0.tar.gz)|



@@ -35,18 +34,18 @@ Our models achieve performance within 3% of published state of the art dependenc

 | model | UAS | LAS | POS | Mentions (F1) | Web UAS |
 |:---------------|:----|:------|:------|:---|:---|
-| en_core_sci_sm | 89.54| 87.62 | 98.32 | 68.15 | 87.62 |
-| en_core_sci_md | 89.61| 87.77 | 98.56 | 69.64 | 88.05 |
-| en_core_sci_lg | 89.63| 87.81 | 98.56 | 69.61 | 88.08 |
-| en_core_sci_scibert | 92.03| 90.25 | 98.91 | 67.91 | 92.21 |
+| en_core_sci_sm | 89.27| 87.33 | 98.29 | 68.05 | 87.61 |
+| en_core_sci_md | 89.86| 87.92 | 98.43 | 69.32 | 88.05 |
+| en_core_sci_lg | 89.54| 87.66 | 98.29 | 69.52 | 87.68 |
+| en_core_sci_scibert | 92.28| 90.83 | 98.93 | 67.84 | 92.63 |


 | model | F1 | Entity Types|
 |:---------------|:-----|:--------|
-| en_ner_craft_md | 76.11|GGP, SO, TAXON, CHEBI, GO, CL|
-| en_ner_jnlpba_md | 71.62| DNA, CELL_TYPE, CELL_LINE, RNA, PROTEIN |
-| en_ner_bc5cdr_md | 84.49| DISEASE, CHEMICAL|
-| en_ner_bionlp13cg_md | 77.75| AMINO_ACID, ANATOMICAL_SYSTEM, CANCER, CELL, CELLULAR_COMPONENT, DEVELOPING_ANATOMICAL_STRUCTURE, GENE_OR_GENE_PRODUCT, IMMATERIAL_ANATOMICAL_ENTITY, MULTI-TISSUE_STRUCTURE, ORGAN, ORGANISM, ORGANISM_SUBDIVISION, ORGANISM_SUBSTANCE, PATHOLOGICAL_FORMATION, SIMPLE_CHEMICAL, TISSUE |
+| en_ner_craft_md | 78.35|GGP, SO, TAXON, CHEBI, GO, CL|
+| en_ner_jnlpba_md | 70.89| DNA, CELL_TYPE, CELL_LINE, RNA, PROTEIN |
+| en_ner_bc5cdr_md | 84.70| DISEASE, CHEMICAL|
+| en_ner_bionlp13cg_md | 76.79| AMINO_ACID, ANATOMICAL_SYSTEM, CANCER, CELL, CELLULAR_COMPONENT, DEVELOPING_ANATOMICAL_STRUCTURE, GENE_OR_GENE_PRODUCT, IMMATERIAL_ANATOMICAL_ENTITY, MULTI-TISSUE_STRUCTURE, ORGAN, ORGANISM, ORGANISM_SUBDIVISION, ORGANISM_SUBSTANCE, PATHOLOGICAL_FORMATION, SIMPLE_CHEMICAL, TISSUE |


 ### Example Usage
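
To complement the "Example Usage" heading above, a minimal sketch of exercising the components whose scores changed in these tables, namely the tagger/parser and the NER component. The sentence and the printed fields are illustrative, not taken from this commit.

```python
import spacy

nlp = spacy.load("en_core_sci_sm")  # assumes the 0.5.0 model is installed
doc = nlp("The acetylation of histone H3 regulates gene expression.")

# POS tags and dependency arcs from the tagger and parser scored above.
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# Entity mentions from the NER component.
print([(ent.text, ent.label_) for ent in doc.ents])
```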
