
Commit 9ccd267

Release 25.04.0 [skip ci] (#912)
Merge branch-25.04 into main. Note: merge this PR with **Create a merge commit to merge**.

Parents: 34a7b3a + af75541

File tree: 75 files changed, +3556 -796 lines

README.md

Lines changed: 19 additions & 27 deletions
@@ -1,6 +1,6 @@
 # Spark Rapids ML
 
-Spark Rapids ML enables GPU accelerated distributed machine learning on [Apache Spark](https://spark.apache.org/). It provides several PySpark ML compatible algorithms powered by the [RAPIDS cuML](https://docs.rapids.ai/api/cuml/stable/) library, along with a compatible Scala API for the PCA algorithm.
+Spark Rapids ML enables GPU accelerated distributed machine learning on [Apache Spark](https://spark.apache.org/). It provides several PySpark ML compatible algorithms powered by the [RAPIDS cuML](https://docs.rapids.ai/api/cuml/stable/) library.
 
 These APIs seek to minimize any code changes to end user Spark code. After your environment is configured to support GPUs (with drivers, CUDA toolkit, and RAPIDS dependencies), you should be able to just change an import statement or class name to take advantage of GPU acceleration. See [here](./python/README.md#clis-enabling-no-package-import-change) for experimental CLIs that enable GPU acceleration without the need for changing the `pyspark.ml` package names in an existing pyspark ml application.
 
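In PySpark, the import swap described above is all that changes for PCA. A minimal sketch, matching the README's own example and assuming `df` is a Spark DataFrame with a `features` vector column:

```python
# CPU baseline would be: from pyspark.ml.feature import PCA
from spark_rapids_ml.feature import PCA  # GPU accelerated drop-in

pca = (
    PCA()                          # same estimator interface as pyspark.ml
    .setK(3)                       # number of principal components
    .setInputCol("features")
    .setOutputCol("pca_features")
)
pca_model = pca.fit(df)            # df: Spark DataFrame with a "features" column
```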
@@ -18,39 +18,31 @@ pca = (
 pca.fit(df)
 ```
 
-**Scala**
-```scala
-// val pca = new org.apache.spark.ml.feature.PCA()
-val pca = new com.nvidia.spark.ml.feature.PCA()
-  .setK(3)
-  .setInputCol("features")
-  .setOutputCol("pca_features")
-  .fit(df)
-```
-
 ## Supported Algorithms
 
 The following table shows the currently supported algorithms. The goal is to expand this over time with support from the underlying RAPIDS cuML libraries. If you would like support for a specific algorithm, please file a [git issue](https://github.com/NVIDIA/spark-rapids-ml/issues) to help us prioritize.
 
-| Supported Algorithms   | Python | Scala |
-| :--------------------- | :----: | :---: |
-| CrossValidator         |   √    |       |
-| DBSCAN (*)             |   √    |       |
-| KMeans                 |   √    |       |
-| approx/exact k-NN (*)  |   √    |       |
-| LinearRegression       |   √    |       |
-| LogisticRegression     |   √    |       |
-| PCA                    |   √    |   √   |
-| RandomForestClassifier |   √    |       |
-| RandomForestRegressor  |   √    |       |
-| UMAP (*)               |   √    |       |
-
-Note: Spark does not provide a k-Nearest Neighbors (k-NN) implementation, but it does have an [LSH-based Approximate Nearest Neighbor](https://spark.apache.org/docs/latest/ml-features.html#approximate-nearest-neighbor-search) implementation. As an alternative to PCA, we also provide a Spark API for GPU accelerated Uniform Manifold Approximation and Projection (UMAP), a non-linear dimensionality reduction algorithm in the RAPIDS cuML library. As an alternative to KMeans, we also provide a Spark API for GPU accelerated Density-Based Spatial Clustering of Applications with Noise (DBSCAN), a density based clustering algorithm in the RAPIDS cuML library.
+| Supported Algorithms   | Python |
+| :--------------------- | :----: |
+| CrossValidator         |   √    |
+| DBSCAN (*)             |   √    |
+| KMeans                 |   √    |
+| approx/exact k-NN (*)  |   √    |
+| LinearRegression       |   √    |
+| LogisticRegression     |   √    |
+| PCA                    |   √    |
+| RandomForestClassifier |   √    |
+| RandomForestRegressor  |   √    |
+| UMAP (*)               |   √    |
+
+(*) Notes:
+- As an alternative to KMeans, we also provide a Spark API for GPU accelerated Density-Based Spatial Clustering of Applications with Noise (DBSCAN), a density based clustering algorithm in the RAPIDS cuML library.
+- Spark does not provide a k-Nearest Neighbors (k-NN) implementation, but it does have an [LSH-based Approximate Nearest Neighbor](https://spark.apache.org/docs/latest/ml-features.html#approximate-nearest-neighbor-search) implementation.
+- As an alternative to PCA, we also provide a Spark API for GPU accelerated Uniform Manifold Approximation and Projection (UMAP), a non-linear dimensionality reduction algorithm in the RAPIDS cuML library.
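To make the UMAP alternative mentioned in the notes concrete, here is a minimal PySpark sketch. The `spark_rapids_ml.umap.UMAP` class path and the `n_components`/output column parameter names are assumptions based on the project's PySpark-style API; consult the Python guide for the exact signature:

```python
# Illustrative only: class path and parameter names below are assumptions,
# not confirmed by this commit.
from spark_rapids_ml.umap import UMAP

umap = (
    UMAP(n_components=2)           # assumed: target embedding dimensionality
    .setFeaturesCol("features")
    .setOutputCol("embedding")     # assumed output column parameter
)
umap_model = umap.fit(df)          # learns the manifold embedding on GPU
embedded = umap_model.transform(df)
```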
 
 ## Getting started
 
-- For PySpark (Python) users, see [this guide](python/README.md).
-- For Spark (Scala) users, see [this guide](jvm/README.md).
+For PySpark (Python) users, see [this guide](python/README.md).
 
 ## Performance

ci/Dockerfile

Lines changed: 3 additions & 2 deletions
@@ -15,9 +15,10 @@
 #
 
 ARG CUDA_VERSION=11.8.0
-FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu20.04
+FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu22.04
 
 # Install packages to build spark-rapids-ml
+RUN chmod 1777 /tmp
 RUN apt update -y \
     && DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt install -y openjdk-8-jdk \
     && apt install -y git numactl software-properties-common wget zip \

@@ -37,6 +38,6 @@ RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86
     && conda config --set solver libmamba
 
 # install cuML
-ARG CUML_VER=25.02
+ARG CUML_VER=25.04
 RUN conda install -y -c rapidsai -c conda-forge -c nvidia cuml=$CUML_VER cuvs=$CUML_VER python=3.10 cuda-version=11.8 numpy~=1.0 \
     && conda clean --all -f -y
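A quick way to sanity-check the built image is to confirm the conda-installed libraries match the pinned `CUML_VER`. A minimal sketch to run inside the container, assuming the conda environment is active and both packages expose `__version__`:

```python
# Verify that the RAPIDS libraries installed by the Dockerfile above
# report the expected 25.04 version.
import cuml   # RAPIDS cuML, installed via conda
import cuvs   # RAPIDS cuVS, installed alongside cuML

print(cuml.__version__)  # expected to start with "25.04"
print(cuvs.__version__)  # expected to start with "25.04"
```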

ci/Jenkinsfile.premerge

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 #!/usr/local/env groovy
 /*
- * Copyright (c) 2023-2024, NVIDIA CORPORATION.
+ * Copyright (c) 2023-2025, NVIDIA CORPORATION.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.

@@ -30,7 +30,7 @@ import ipp.blossom.*
 
 def githubHelper // blossom github helper
 def TEMP_IMAGE_BUILD = true
-def IMAGE_PREMERGE = "${common.ARTIFACTORY_NAME}/sw-spark-docker/spark-rapids-ml:ubuntu20.04-blossom-ci"
+def IMAGE_PREMERGE = "${common.ARTIFACTORY_NAME}/sw-spark-docker/rapids:ml-ubuntu22-cuda11.8.0-py310"
 def cpuImage = pod.getCPUYAML("${common.ARTIFACTORY_NAME}/sw-spark-docker/spark:rapids-databricks") // tooling image
 def PREMERGE_DOCKERFILE = 'ci/Dockerfile'
 def PREMERGE_TAG

ci/lint_python.py

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Copyright (c) 2024, NVIDIA CORPORATION.
+# Copyright (c) 2024-2025, NVIDIA CORPORATION.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

deprecated/README.md

Lines changed: 123 additions & 0 deletions
@@ -0,0 +1,123 @@
+# Spark Rapids ML (Scala)
+
+**NOTE**: The Scala algorithm is deprecated as of v25.04.
+
+### PCA
+
+Compared to the original PCA training API:
+
+```scala
+val pca = new org.apache.spark.ml.feature.PCA()
+  .setInputCol("feature_vector_type")
+  .setOutputCol("feature_value_3d")
+  .setK(3)
+  .fit(vectorDf)
+```
+
+we provide a customized class, and users need to make `no code change` to enjoy the GPU acceleration:
+
+```scala
+val pca = new com.nvidia.spark.ml.feature.PCA()
+  .setInputCol("feature_array_type") // accepts an ArrayType column; no need to convert it to Vector type
+  .setOutputCol("feature_value_3d")
+  .setK(3)
+  .fit(vectorDf)
+...
+```
+
+Note: In the `CPU` version, `setInputCol` targets an input column of `Vector` type for the training
+process. In the GPU version, users don't need the extra preprocessing step to convert a column of
+`ArrayType` to `Vector` type; `setInputCol` accepts the raw `ArrayType` column.
+
+## Build
+
+### Build in Docker:
+
+We provide a Dockerfile to build the project in a container. See [docker](../docker/README.md) for more instructions.
+
+### Prerequisites:
+
+1. Essential build tools:
+   - [cmake(>=3.23.1)](https://cmake.org/download/),
+   - [ninja(>=1.10)](https://github.com/ninja-build/ninja/releases),
+   - [gcc(>=9.3)](https://gcc.gnu.org/releases.html)
+2. [CUDA Toolkit(>=11.5)](https://developer.nvidia.com/cuda-toolkit)
+3. conda: use [miniconda](https://docs.conda.io/en/latest/miniconda.html) to maintain header files
+   and cmake dependencies
+4. [cuDF](https://github.com/rapidsai/cudf):
+   - install the cuDF shared library via conda:
+     ```bash
+     conda install -c rapidsai -c conda-forge cudf=22.04 python=3.8 -y
+     ```
+5. [RAFT(22.12)](https://github.com/rapidsai/raft):
+   - RAFT provides only header files, so there are no build instructions for it. Note we pin the version to
+     22.12 to avoid potential API compatibility issues in the future.
+     ```bash
+     $ git clone -b branch-22.12 https://github.com/rapidsai/raft.git
+     ```
+6. export RAFT_PATH:
+   ```bash
+   export RAFT_PATH=ABSOLUTE_PATH_TO_YOUR_RAFT_FOLDER
+   ```
+
+Note: For those using other types of GPUs which do not have CUDA forward compatibility (for example, GeForce), CUDA 11.5 or later is required.
+
+### Build target jar
+
+Spark-rapids-ml uses the [spark-rapids](https://github.com/NVIDIA/spark-rapids) plugin as a dependency.
+To build the _SNAPSHOT_ jar, users need to build and install the dependency jar _rapids-4-spark_ first,
+because there is no snapshot jar for the spark-rapids plugin in public maven repositories.
+See [build instructions](https://github.com/NVIDIA/spark-rapids/blob/branch-23.04/CONTRIBUTING.md#building-a-distribution-for-multiple-versions-of-spark) to get the dependency jar installed.
+
+Users can also modify the pom file to use the _release_ version of the spark-rapids plugin as the dependency. In this case users don't need to manually build and install the spark-rapids plugin jar themselves.
+
+Once _rapids-4-spark_ is installed in your local maven repository, you can build the project directly in
+the _project root path_ with:
+```
+cd jvm
+mvn clean package
+```
+Then `rapids-4-spark-ml_2.12-24.04.1-SNAPSHOT.jar` will be generated under the `target` folder.
+
+Users can also use the _release_ version of the spark-rapids plugin as the dependency if it has already been
+released in public maven repositories; see the [rapids-4-spark maven repository](https://mvnrepository.com/artifact/com.nvidia/rapids-4-spark)
+for release versions. In this case, users don't need to manually build and install the spark-rapids
+plugin jar themselves. Remember to replace the [dependency](https://github.com/NVIDIA/spark-rapids-ml/blob/branch-23.04/pom.xml#L94-L96)
+in the pom file.
+
+_Note_: This module contains both native and Java/Scala code. The native library build instructions
+have been added to the pom.xml file so that the maven build command builds the native library all
+the way through. Make sure the prerequisites are all met, or the build will fail with error messages
+such as "cmake not found" or "ninja not found".
+
+## How to use
+
+After the build process, the spark-rapids plugin jar will be installed in your local maven
+repository, usually in your `~/.m2/repository`.
+
+Add the artifact jar to Spark, for example:
+```bash
+ML_JAR="target/rapids-4-spark-ml_2.12-24.04.1-SNAPSHOT.jar"
+PLUGIN_JAR="~/.m2/repository/com/nvidia/rapids-4-spark_2.12/24.04.1/rapids-4-spark_2.12-24.04.1.jar"
+
+$SPARK_HOME/bin/spark-shell --master $SPARK_MASTER \
+ --driver-memory 20G \
+ --executor-memory 30G \
+ --conf spark.driver.maxResultSize=8G \
+ --jars ${ML_JAR},${PLUGIN_JAR} \
+ --conf spark.plugins=com.nvidia.spark.SQLPlugin \
+ --conf spark.rapids.sql.enabled=true \
+ --conf spark.task.resource.gpu.amount=0.08 \
+ --conf spark.executor.resource.gpu.amount=1 \
+ --conf spark.executor.resource.gpu.discoveryScript=./getGpusResources.sh \
+ --files ${SPARK_HOME}/examples/src/main/scripts/getGpusResources.sh
+```
+
+### PCA examples
+
+Please refer to the
+[PCA examples](https://github.com/NVIDIA/spark-rapids-examples/blob/branch-23.04/examples/ML+DL-Examples/Spark-cuML/pca/) for
+more details about the example code. We provide both
+[Notebook](https://github.com/NVIDIA/spark-rapids-examples/blob/branch-23.04/examples/ML+DL-Examples/Spark-cuML/pca/notebooks/Spark_PCA_End_to_End.ipynb)
+and [jar](https://github.com/NVIDIA/spark-rapids-examples/blob/branch-23.04/examples/ML+DL-Examples/Spark-cuML/pca/scala/src/com/nvidia/spark/examples/pca/Main.scala)
+versions there. Instructions to run these examples are described in the
+[README](https://github.com/NVIDIA/spark-rapids-examples/blob/branch-23.04/examples/ML+DL-Examples/Spark-cuML/pca/README.md).
5 files renamed without changes.
