Commit 39f3c01

Merge pull request #12993 from JohnSnowLabs/release/422-release-candidate
Release/422 release candidate
2 parents: fa6963e + 35fc067

File tree

1,433 files changed: +17,682 −10,369 lines changed


CHANGELOG (+22)

@@ -1,3 +1,25 @@
+========
+4.2.2
+========
+----------------
+New Features & Enhancements
+----------------
+
+* Add support for importing TensorFlow SavedModel from remote storages like DBFS, S3, and HDFS
+* Add support for `fullAnnotate` in `LightPipeline` for path of images in Scala
+* Add `fullAnnotate` method in `PretrainedPipeline` for Scala
+* Add `fullAnnotateJava` method in `PretrainedPipeline` for Java
+* Add `fullAnnotateImage` to `PretrainedPipeline` for Scala
+* Add `fullAnnotateImageJava` to `PretrainedPipeline` for Java
+* Add support for QA in `fullAnnotate` method in `PretrainedPipeline`
+* Add `Predicted Entities` to all Vision Transformers (ViT) models and pipelines
+
+----------------
+Bug Fixes
+----------------
+* Unify `annotatorType` name in Python and Scala for Spark schema in Annotation, AnnotationImage and AnnotationAudio
+* Fix missing indexes in `RecursiveTokenizer` annotator
+
 ========
 4.2.1
 ========

README.md (+44 −44)

@@ -152,7 +152,7 @@ To use Spark NLP you need the following requirements:
 
 **GPU (optional):**
 
-Spark NLP 4.2.1 is built with TensorFlow 2.7.1 and the following NVIDIA® software are only required for GPU support:
+Spark NLP 4.2.2 is built with TensorFlow 2.7.1 and the following NVIDIA® software are only required for GPU support:
 
 - NVIDIA® GPU drivers version 450.80.02 or higher
 - CUDA® Toolkit 11.2
@@ -168,7 +168,7 @@ $ java -version
 $ conda create -n sparknlp python=3.7 -y
 $ conda activate sparknlp
 # spark-nlp by default is based on pyspark 3.x
-$ pip install spark-nlp==4.2.1 pyspark==3.2.1
+$ pip install spark-nlp==4.2.2 pyspark==3.2.1
 ```
 
 In Python console or Jupyter `Python3` kernel:
@@ -213,7 +213,7 @@ For more examples, you can visit our dedicated [repository](https://github.com/J
 
 ## Apache Spark Support
 
-Spark NLP *4.2.1* has been built on top of Apache Spark 3.2 while fully supports Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:
+Spark NLP *4.2.2* has been built on top of Apache Spark 3.2 while fully supports Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:
 
 | Spark NLP | Apache Spark 2.3.x | Apache Spark 2.4.x | Apache Spark 3.0.x | Apache Spark 3.1.x | Apache Spark 3.2.x | Apache Spark 3.3.x |
 |-----------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
@@ -247,7 +247,7 @@ Find out more about `Spark NLP` versions from our [release notes](https://github
 
 ## Databricks Support
 
-Spark NLP 4.2.1 has been tested and is compatible with the following runtimes:
+Spark NLP 4.2.2 has been tested and is compatible with the following runtimes:
 
 **CPU:**
 
@@ -288,7 +288,7 @@ NOTE: Spark NLP 4.0.x is based on TensorFlow 2.7.x which is compatible with CUDA
 
 ## EMR Support
 
-Spark NLP 4.2.1 has been tested and is compatible with the following EMR releases:
+Spark NLP 4.2.2 has been tested and is compatible with the following EMR releases:
 
 - emr-6.2.0
 - emr-6.3.0
@@ -326,23 +326,23 @@ Spark NLP supports all major releases of Apache Spark 3.0.x, Apache Spark 3.1.x,
 ```sh
 # CPU
 
-spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1
+spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
 
-pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1
+pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
 
-spark-submit --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1
+spark-submit --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
 ```
 
 The `spark-nlp` has been published to the [Maven Repository](https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp).
 
 ```sh
 # GPU
 
-spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.1
+spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.2
 
-pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.1
+pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.2
 
-spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.1
+spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.2
 
 ```
 
@@ -351,11 +351,11 @@ The `spark-nlp-gpu` has been published to the [Maven Repository](https://mvnrepo
 ```sh
 # AArch64
 
-spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.1
+spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.2
 
-pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.1
+pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.2
 
-spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.1
+spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.2
 
 ```
 
@@ -364,11 +364,11 @@ The `spark-nlp-aarch64` has been published to the [Maven Repository](https://mvn
 ```sh
 # M1
 
-spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.1
+spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.2
 
-pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.1
+pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.2
 
-spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.1
+spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.2
 
 ```
 
@@ -380,7 +380,7 @@ The `spark-nlp-m1` has been published to the [Maven Repository](https://mvnrepos
 spark-shell \
   --driver-memory 16g \
   --conf spark.kryoserializer.buffer.max=2000M \
-  --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1
+  --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
 ```
 
 ## Scala
@@ -396,7 +396,7 @@ Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp_2.12</artifactId>
-    <version>4.2.1</version>
+    <version>4.2.2</version>
 </dependency>
 ```
 
@@ -407,7 +407,7 @@ Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp-gpu_2.12</artifactId>
-    <version>4.2.1</version>
+    <version>4.2.2</version>
 </dependency>
 ```
 
@@ -418,7 +418,7 @@ Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp-aarch64_2.12</artifactId>
-    <version>4.2.1</version>
+    <version>4.2.2</version>
 </dependency>
 ```
 
@@ -429,7 +429,7 @@ Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp-m1_2.12</artifactId>
-    <version>4.2.1</version>
+    <version>4.2.2</version>
 </dependency>
 ```
 
@@ -439,28 +439,28 @@ Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "4.2.1"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "4.2.2"
 ```
 
 **spark-nlp-gpu:**
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-gpu
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-gpu" % "4.2.1"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-gpu" % "4.2.2"
 ```
 
 **spark-nlp-aarch64:**
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-aarch64
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-aarch64" % "4.2.1"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-aarch64" % "4.2.2"
```
 
 **spark-nlp-m1:**
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-m1
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-m1" % "4.2.1"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-m1" % "4.2.2"
 ```
 
 Maven Central: [https://mvnrepository.com/artifact/com.johnsnowlabs.nlp](https://mvnrepository.com/artifact/com.johnsnowlabs.nlp)
@@ -480,7 +480,7 @@ If you installed pyspark through pip/conda, you can install `spark-nlp` through 
 Pip:
 
 ```bash
-pip install spark-nlp==4.2.1
+pip install spark-nlp==4.2.2
 ```
 
 Conda:
@@ -508,7 +508,7 @@ spark = SparkSession.builder \
     .config("spark.driver.memory","16G")\
     .config("spark.driver.maxResultSize", "0") \
     .config("spark.kryoserializer.buffer.max", "2000M")\
-    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1")\
+    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2")\
     .getOrCreate()
 ```
 
@@ -576,7 +576,7 @@ Use either one of the following options
 - Add the following Maven Coordinates to the interpreter's library list
 
 ```bash
-com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1
+com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
 ```
 
 - Add a path to pre-built jar from [here](#compiled-jars) in the interpreter's library list making sure the jar is available to driver path
@@ -586,7 +586,7 @@ com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1
 Apart from the previous step, install the python module through pip
 
 ```bash
-pip install spark-nlp==4.2.1
+pip install spark-nlp==4.2.2
 ```
 
 Or you can install `spark-nlp` from inside Zeppelin by using Conda:
@@ -611,7 +611,7 @@ The easiest way to get this done on Linux and macOS is to simply install `spark-
 $ conda create -n sparknlp python=3.8 -y
 $ conda activate sparknlp
 # spark-nlp by default is based on pyspark 3.x
-$ pip install spark-nlp==4.2.1 pyspark==3.2.1 jupyter
+$ pip install spark-nlp==4.2.2 pyspark==3.2.1 jupyter
 $ jupyter notebook
 ```
 
@@ -627,7 +627,7 @@ export PYSPARK_PYTHON=python3
 export PYSPARK_DRIVER_PYTHON=jupyter
 export PYSPARK_DRIVER_PYTHON_OPTS=notebook
 
-pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1
+pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
 ```
 
 Alternatively, you can mix in using `--jars` option for pyspark + `pip install spark-nlp`
@@ -652,7 +652,7 @@ This script comes with the two options to define `pyspark` and `spark-nlp` versi
 # -s is for spark-nlp
 # -g will enable upgrading libcudnn8 to 8.1.0 on Google Colab for GPU usage
 # by default they are set to the latest
-!wget https://setup.johnsnowlabs.com/colab.sh -O - | bash /dev/stdin -p 3.2.1 -s 4.2.1
+!wget https://setup.johnsnowlabs.com/colab.sh -O - | bash /dev/stdin -p 3.2.1 -s 4.2.2
 ```
 
 [Spark NLP quick start on Google Colab](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/quick_start_google_colab.ipynb) is a live demo on Google Colab that performs named entity recognitions and sentiment analysis by using Spark NLP pretrained pipelines.
@@ -673,7 +673,7 @@ This script comes with the two options to define `pyspark` and `spark-nlp` versi
 # -s is for spark-nlp
 # -g will enable upgrading libcudnn8 to 8.1.0 on Kaggle for GPU usage
 # by default they are set to the latest
-!wget https://setup.johnsnowlabs.com/colab.sh -O - | bash /dev/stdin -p 3.2.1 -s 4.2.1
+!wget https://setup.johnsnowlabs.com/colab.sh -O - | bash /dev/stdin -p 3.2.1 -s 4.2.2
 ```
 
 [Spark NLP quick start on Kaggle Kernel](https://www.kaggle.com/mozzie/spark-nlp-named-entity-recognition) is a live demo on Kaggle Kernel that performs named entity recognitions by using Spark NLP pretrained pipeline.
@@ -691,9 +691,9 @@ This script comes with the two options to define `pyspark` and `spark-nlp` versi
 
 3. In `Libraries` tab inside your cluster you need to follow these steps:
 
-    3.1. Install New -> PyPI -> `spark-nlp==4.2.1` -> Install
+    3.1. Install New -> PyPI -> `spark-nlp==4.2.2` -> Install
 
-    3.2. Install New -> Maven -> Coordinates -> `com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1` -> Install
+    3.2. Install New -> Maven -> Coordinates -> `com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2` -> Install
 
 4. Now you can attach your notebook to the cluster and use Spark NLP!
 
@@ -741,7 +741,7 @@ A sample of your software configuration in JSON on S3 (must be public access):
     "spark.kryoserializer.buffer.max": "2000M",
     "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
     "spark.driver.maxResultSize": "0",
-    "spark.jars.packages": "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1"
+    "spark.jars.packages": "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2"
   }
 }]
 ```
@@ -750,7 +750,7 @@ A sample of AWS CLI to launch EMR cluster:
 
 ```.sh
 aws emr create-cluster \
---name "Spark NLP 4.2.1" \
+--name "Spark NLP 4.2.2" \
 --release-label emr-6.2.0 \
 --applications Name=Hadoop Name=Spark Name=Hive \
 --instance-type m4.4xlarge \
@@ -814,7 +814,7 @@ gcloud dataproc clusters create ${CLUSTER_NAME} \
   --enable-component-gateway \
   --metadata 'PIP_PACKAGES=spark-nlp spark-nlp-display google-cloud-bigquery google-cloud-storage' \
   --initialization-actions gs://goog-dataproc-initialization-actions-${REGION}/python/pip-install.sh \
-  --properties spark:spark.serializer=org.apache.spark.serializer.KryoSerializer,spark:spark.driver.maxResultSize=0,spark:spark.kryoserializer.buffer.max=2000M,spark:spark.jars.packages=com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1
+  --properties spark:spark.serializer=org.apache.spark.serializer.KryoSerializer,spark:spark.driver.maxResultSize=0,spark:spark.kryoserializer.buffer.max=2000M,spark:spark.jars.packages=com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
 ```
 
 2. On an existing one, you need to install spark-nlp and spark-nlp-display packages from PyPI.
@@ -853,7 +853,7 @@ spark = SparkSession.builder \
     .config("spark.kryoserializer.buffer.max", "2000m") \
     .config("spark.jsl.settings.pretrained.cache_folder", "sample_data/pretrained") \
     .config("spark.jsl.settings.storage.cluster_tmp_dir", "sample_data/storage") \
-    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1") \
+    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2") \
     .getOrCreate()
 ```
 
@@ -867,7 +867,7 @@ spark-shell \
   --conf spark.kryoserializer.buffer.max=2000M \
   --conf spark.jsl.settings.pretrained.cache_folder="sample_data/pretrained" \
   --conf spark.jsl.settings.storage.cluster_tmp_dir="sample_data/storage" \
-  --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1
+  --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
 ```
 
 **pyspark:**
@@ -880,7 +880,7 @@ pyspark \
   --conf spark.kryoserializer.buffer.max=2000M \
   --conf spark.jsl.settings.pretrained.cache_folder="sample_data/pretrained" \
   --conf spark.jsl.settings.storage.cluster_tmp_dir="sample_data/storage" \
-  --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.1
+  --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
 ```
 
 **Databricks:**
@@ -1144,12 +1144,12 @@ spark = SparkSession.builder \
     .config("spark.driver.memory","16G")\
     .config("spark.driver.maxResultSize", "0") \
    .config("spark.kryoserializer.buffer.max", "2000M")\
-    .config("spark.jars", "/tmp/spark-nlp-assembly-4.2.1.jar")\
+    .config("spark.jars", "/tmp/spark-nlp-assembly-4.2.2.jar")\
     .getOrCreate()
 ```
 
 - You can download provided Fat JARs from each [release notes](https://github.com/JohnSnowLabs/spark-nlp/releases), please pay attention to pick the one that suits your environment depending on the device (CPU/GPU) and Apache Spark version (3.0.x, 3.1.x, 3.2.x, and 3.3.x)
-- If you are local, you can load the Fat JAR from your local FileSystem, however, if you are in a cluster setup you need to put the Fat JAR on a distributed FileSystem such as HDFS, DBFS, S3, etc. (i.e., `hdfs:///tmp/spark-nlp-assembly-4.2.1.jar`)
+- If you are local, you can load the Fat JAR from your local FileSystem, however, if you are in a cluster setup you need to put the Fat JAR on a distributed FileSystem such as HDFS, DBFS, S3, etc. (i.e., `hdfs:///tmp/spark-nlp-assembly-4.2.2.jar`)
 
 Example of using pretrained Models and Pipelines in offline:
 
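Every install snippet in the README diff above differs only in the artifact suffix (`spark-nlp`, `spark-nlp-gpu`, `spark-nlp-aarch64`, `spark-nlp-m1`) and the version being bumped to 4.2.2. As a minimal sketch (the helper function and variant keys are our own, not part of Spark NLP), the `--packages` coordinate can be assembled like this:

```python
# Hypothetical helper: builds the Maven coordinate for a Spark NLP hardware
# variant, mirroring the README's install matrix. All variants target Scala 2.12.
def spark_nlp_coordinate(version: str = "4.2.2", variant: str = "cpu") -> str:
    suffixes = {"cpu": "", "gpu": "-gpu", "aarch64": "-aarch64", "m1": "-m1"}
    if variant not in suffixes:
        raise ValueError(f"unknown variant: {variant!r}")
    artifact = f"spark-nlp{suffixes[variant]}_2.12"
    return f"com.johnsnowlabs.nlp:{artifact}:{version}"

print(spark_nlp_coordinate())                # com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.2
print(spark_nlp_coordinate(variant="gpu"))   # com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.2
```

The resulting string is what the README passes to `spark-shell`, `pyspark`, and `spark-submit` via `--packages`, and to `SparkSession.builder` via `spark.jars.packages`.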
build.sbt (+1 −1)

@@ -6,7 +6,7 @@ name := getPackageName(is_m1, is_gpu, is_aarch64)
 
 organization := "com.johnsnowlabs.nlp"
 
-version := "4.2.1"
+version := "4.2.2"
 
 (ThisBuild / scalaVersion) := scalaVer
 
conda/README.md (+1 −1)

@@ -35,7 +35,7 @@ conda config --set anaconda_upload no
 Build `spark-nlp` from the latest PyPI tar:
 
 ```bash
-conda build . --python=3.7 && conda build . --python=3.8
+conda build . --python=3.7 && conda build . --python=3.8 && conda build . --python=3.9
 ```
 
 Example of uploading Conda package to Anaconda Cloud:

conda/meta.yaml (+4 −4)

@@ -1,15 +1,15 @@
 package:
   name: "spark-nlp"
-  version: 4.2.1
+  version: 4.2.2
 
 app:
   entry: spark-nlp
   summary: Natural Language Understanding Library for Apache Spark.
 
 source:
-  fn: spark-nlp-4.2.1.tar.gz
-  url: https://files.pythonhosted.org/packages/f4/7e/9e4a789d30f9e917c41267bc852ca63dd9bd9b326d90bd558edbe3fd23fe/spark-nlp-4.2.1.tar.gz
-  sha256: 297134d0012c95743515904bc485b7cd2e9c4de6e419233327b434a181525097
+  fn: spark-nlp-4.2.2.tar.gz
+  url: https://files.pythonhosted.org/packages/78/7e/1ed94f903c0dfe0e6d4900bf61d0210cb39dadf918c7a21f9cfdf924fc50/spark-nlp-4.2.2.tar.gz
+  sha256: 276abca3fc807a4dd0ffa5a299f11359c402670ad20166a01dd7ff6392719f65
 build:
   noarch: generic
   number: 0
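The `conda/meta.yaml` change pins the new PyPI tarball by its sha256 digest. As a generic sketch (standard-library `hashlib` only; the expected digest is the one shown in the diff above, and the local path is an assumption), a downloaded tarball can be checked against that pin like this:

```python
import hashlib

# sha256 pinned for spark-nlp-4.2.2.tar.gz in conda/meta.yaml
EXPECTED_SHA256 = "276abca3fc807a4dd0ffa5a299f11359c402670ad20166a01dd7ff6392719f65"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large tarballs never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (hypothetical local path; assumes the tarball was already downloaded):
# assert sha256_of("spark-nlp-4.2.2.tar.gz") == EXPECTED_SHA256
```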
