docs/explanation/cryptography.md (2 additions, 2 deletions)

@@ -17,7 +17,7 @@ All three charms in the Charmed Apache Spark solution employ pinned revisions of
 * Integration Hub for Apache Spark
 * Charmed Apache Kyuubi

-The Spark History Server and Charmed Apache Kyuubi use different flavours of the
+The Spark History Server and Charmed Apache Kyuubi use different flavors of the
 [Charmed Apache Spark Rock image](https://github.com/canonical/charmed-spark-rock/) whereas the Integration Hub for
 Apache Spark uses the [Integration Hub for Apache Spark Rock image](https://github.com/canonical/spark-integration-hub-rock).

@@ -166,4 +166,4 @@ For more information about the Kyuubi Client, see the
 The Spark History Server provides authentication integrated using the Canonical Identity Platform in combination
 with a custom design [servlet filter](https://code.launchpad.net/~data-platform/soss/+source/charmed-spark-servlet-filters/+git/charmed-spark-servlet-filters).
 This feature can be enabled by following the steps in the
-[Spark History Server authorisation How-to guide](https://charmhub.io/spark-k8s-bundle/docs/h-history-server-authorization).
+[Spark History Server authorization How-to guide](https://charmhub.io/spark-k8s-bundle/docs/h-history-server-authorization).
docs/explanation/security.md (3 additions, 3 deletions)

@@ -80,7 +80,7 @@ Charmed Apache Spark K8s runs on top of a set of Rockcraft-based images, all bas
 available in the [Apache Spark release page](https://launchpad.net/spark-releases), on top of Ubuntu 22.04.
 The images that can be found in the [Charmed Apache Spark rock images GitHub repository](https://github.com/canonical/charmed-spark-rock) are used as the base
 images for pods both for Spark jobs and charms.
-The following table summarises the relation between the component and its underlying base image.
+The following table summarizes the relation between the component and its underlying base image.
docs/how-to/enable-monitoring.md (1 addition, 1 deletion)

@@ -165,7 +165,7 @@ Congratulations, the COS now monitors the two charms.

 ## Customize dashboards and alert rules

-The `cos-configuration-k8s` charm included in the bundle can be used to customise the Grafana dashboards and alert rules.
+The `cos-configuration-k8s` charm included in the bundle can be used to customize the Grafana dashboards and alert rules.
 If needed, we provide a [default dashboard](https://github.com/canonical/spark-k8s-bundle/blob/main/releases/3.4/resources/grafana) as part
 of the bundle and dedicated dashboards for the [Spark History Server](https://github.com/canonical/spark-history-server-k8s-operator/tree/3/edge/src/grafana_dashboards)
 and [Charmed Apache Kyuubi](https://github.com/canonical/kyuubi-k8s-operator/tree/3.4/edge/src/grafana_dashboards) charms.
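A customization workflow with `cos-configuration-k8s` typically means pointing the charm at a Git repository holding your dashboards and alert rules. The sketch below shows the general shape under Juju 3.x; the repository URL is a placeholder, and the option names (`git_repo`, `git_branch`, `grafana_dashboards_path`) and the `grafana` application name are assumptions to verify against the charm's configuration reference on Charmhub:

```shell
# Sketch only: deploy cos-configuration-k8s and point it at a Git repo
# containing custom Grafana dashboards. Option names are assumptions --
# check the charm's Charmhub configuration page before use.
juju deploy cos-configuration-k8s \
    --config git_repo=https://github.com/your-org/your-dashboards-repo

# Select the branch and the path inside the repo where dashboards live.
juju config cos-configuration-k8s \
    git_branch=main \
    grafana_dashboards_path=dashboards/

# Relate it to Grafana (assumed application name) so the dashboards
# are forwarded over the relation.
juju integrate cos-configuration-k8s grafana
```

The same charm can feed Prometheus alert rules from the same repository, which keeps dashboards and alerts versioned together.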
docs/tutorial/2-distributed-data-processing.md (1 addition, 1 deletion)

@@ -316,5 +316,5 @@ The result should look similar to the following:
 2025-04-07T11:32:44.872Z [sparkd] Number of tweets containing Ubuntu: 54
 ```

-By default, Apache Spark stores the logs of drivers and executors as pod logs in the local file systems, which are lost once the pods are deleted. Apache Spark can store these logs in a persistent object storage system, like S3, so that they can later be retrieved and visualised by a component called Spark History Server.
+By default, Apache Spark stores the logs of drivers and executors as pod logs in the local file systems, which are lost once the pods are deleted. Apache Spark can store these logs in a persistent object storage system, like S3, so that they can later be retrieved and visualized by a component called Spark History Server.
 Wait for the deployment to complete and check the model status with the `juju status` command.
 The `kyuubi-k8s` units are all in the `blocked` state now, with a message “Missing ZooKeeper integration”.
-That is because multi-node Apache Kyuubi deployments require Apache ZooKeeper for synchronisation.
+That is because multi-node Apache Kyuubi deployments require Apache ZooKeeper for synchronization.

 Now, deploy [Apache ZooKeeper K8s](https://charmhub.io/zookeeper-k8s) charm (three nodes, since we want high availability) and integrate it with Apache Kyuubi:
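The deploy-and-integrate step described above can be sketched as follows. This is a minimal sketch, not the tutorial's exact commands: it assumes the `kyuubi-k8s` application name used earlier, Juju 3.x syntax, and default relation endpoints, and it omits any channel flags the tutorial may specify:

```shell
# Sketch: deploy three Apache ZooKeeper K8s units for high availability.
juju deploy zookeeper-k8s -n 3

# Integrate with Kyuubi so its units can leave the blocked state
# ("Missing ZooKeeper integration"). Endpoint names are assumed defaults.
juju integrate kyuubi-k8s zookeeper-k8s

# Watch the model until all units settle into active/idle.
juju status --watch 2s
```

On Juju 2.9 controllers, `juju relate` is the equivalent of `juju integrate`.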