docs/how-to/manage-service-accounts/using-spark-client-snap.md (+2 −2)

@@ -17,7 +17,7 @@ using Juju relations. For more information about how to use the configuration hu
 ```

 ```{caution}
-The following commands assume that you have administrative permission on the namespaces (or on the Kubernetes cluster) so that the corresponding resources (such as service accounts, secrets, roles, and role bindings) can be created and deleted.
+The following commands assume that you have administrative permission on the namespaces (or on the Kubernetes cluster) so that the corresponding resources (such as ServiceAccounts, Secrets, Roles, and RoleBindings) can be created and deleted.
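
For reference, whether the current credentials carry that administrative permission can be checked up front with `kubectl auth can-i`. A minimal sketch, assuming the `spark` namespace used later in the tutorial:

```bash
# Check that the current user may create each resource the spark-client snap manages.
# "spark" is an example namespace; substitute the one you plan to use.
for resource in serviceaccounts secrets roles.rbac.authorization.k8s.io rolebindings.rbac.authorization.k8s.io; do
  kubectl auth can-i create "$resource" --namespace spark
done
```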
docs/tutorial/1-environment-setup.md (+4 −4)

@@ -166,7 +166,7 @@ The MicroK8s setup is complete.
 
 ## The `spark-client` snap
 
-For Apache Spark jobs to be running run on top of Kubernetes, a set of resources (service account, associated roles, role bindings etc.) need to be created and configured.
+For Apache Spark jobs to run on top of Kubernetes, a set of resources (ServiceAccount, associated Roles, RoleBindings, etc.) needs to be created and configured.
 
 To simplify this task, the Charmed Apache Spark solution offers the `spark-client` snap. Install the snap:
 
 ```bash
@@ -179,14 +179,14 @@ Let's create a Kubernetes namespace for us to use as a playground in this tutori
 kubectl create namespace spark
 ```
 
-We will now create a Kubernetes service account that will be used to run the Spark jobs. The creation of the service account can be done using the `spark-client` snap, which will create necessary roles, role bindings and other necessary configurations along with the creation of the service account:
+We will now create a ServiceAccount that will be used to run the Spark jobs. The creation of the ServiceAccount can be done using the `spark-client` snap, which will create necessary Roles, RoleBindings and other necessary configurations along with the creation of the ServiceAccount:
 
 ```bash
 spark-client.service-account-registry create \
   --username spark --namespace spark
 ```
 
-This command does a number of things in the background. First, it creates a service account in the `spark` namespace with the name `spark`. Then it creates a role with name `spark-role` with all the required RBAC permissions and binds that role to the service account by creating a role binding.
+This command does a number of things in the background. First, it creates a ServiceAccount in the `spark` namespace with the name `spark`. Then it creates a Role with name `spark-role` with all the required RBAC permissions and binds that Role to the ServiceAccount by creating a RoleBinding.
 
 These resources can be viewed with `kubectl get` commands as follows:
 
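
The `kubectl get` commands themselves fall outside this hunk; a minimal sketch of what that inspection could look like, using the `spark` namespace and the resource names given above:

```bash
# List the ServiceAccount created by the spark-client snap, plus the Role and
# RoleBinding ("spark-role") that grant it the required RBAC permissions.
kubectl get serviceaccount spark --namespace spark
kubectl get roles,rolebindings --namespace spark
```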
@@ -363,7 +363,7 @@ With the access key, secret key, and the endpoint properly configured, you shoul
 
 For Apache Spark to be able to access and use our local S3 bucket, we need to provide a few configuration options including the bucket endpoint, access key and secret key.
 
-In the Charmed Apache Spark solution, these configurations are stored in a Kubernetes secret and bound to a Kubernetes service account. When Spark jobs are executed using that service account, all associated configurations are automatically retrieved and supplied to Apache Spark.
+In the Charmed Apache Spark solution, these configurations are stored in a Secret object and bound to a ServiceAccount. When Spark jobs are executed using that service account, all associated configurations are automatically retrieved and supplied to Apache Spark.
 
 The S3 configurations can be added to the existing `spark` service account with the following command:
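
The command itself is cut off by this hunk. For context, the options being stored are the standard Hadoop S3A settings that Spark reads at submission time; a sketch of the relevant keys (values are placeholders, and the exact `spark-client` invocation is not shown here):

```
# Standard S3A options Spark needs for an S3-compatible bucket (placeholder values).
spark.hadoop.fs.s3a.endpoint=http://<s3-endpoint>:<port>
spark.hadoop.fs.s3a.access.key=<access-key>
spark.hadoop.fs.s3a.secret.key=<secret-key>
spark.hadoop.fs.s3a.path.style.access=true
```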
docs/tutorial/4-history-server.md (+3 −3)

@@ -136,16 +136,16 @@ The Spark History Server comes with a Web UI for users to view and monitor the S
 
 ### Setup
 
-The web UI can be accessed at port 18080 of the IP address of the `spark-history-server-k8s/0` unit. However, it's good practice to access it via a Kubernetes Ingress rather than directly accessing the unit's IP address. Using an Ingress will allow us to have a common entrypoint to the applications running in the Juju model.
+The web UI can be accessed at port 18080 of the IP address of the `spark-history-server-k8s/0` unit. However, it's good practice to access it via an ingress rather than directly accessing the unit's IP address. Using an ingress will allow us to have a common entrypoint to the applications running in the Juju model.
 
-Let's add an Ingress by deploying and integrating the [`traefik-k8s`](https://charmhub.io/traefik-k8s) charm with `spark-history-server-k8s`:
+Let's add an ingress by deploying and integrating the [`traefik-k8s`](https://charmhub.io/traefik-k8s) charm with `spark-history-server-k8s`:
...
-Now that Traefik has been deployed and configured, we can fetch the Ingress URL of the Spark History Server by running the `show-proxied-endpoints` action on the Traefik charm:
+Now that Traefik has been deployed and configured, we can fetch the ingress URL of the Spark History Server by running the `show-proxied-endpoints` action on the Traefik charm:
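
The deploy and integrate steps sit between these two changes and are not shown in the hunk. A sketch of the general flow, assuming a Juju 3.x client (the channel is omitted and the default relation endpoints are assumptions):

```bash
# Deploy Traefik with cluster permissions, relate it to the history server,
# then read back the proxied (ingress) URL via the charm action named above.
juju deploy traefik-k8s --trust
juju integrate traefik-k8s spark-history-server-k8s
juju run traefik-k8s/0 show-proxied-endpoints
```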
python/CONTRIBUTING.md (+13 −0)

@@ -89,7 +89,20 @@ For documentation in this repository the following conventions are applied (see
 The full form must be used at least once per page.
 The full form must be used at the first entry to the page’s headings, body of text, callouts, and graphics.
 For subsequent usage, the full form can be substituted by alternatives.
+
+### Kubernetes Terminology
+
+When documenting Kubernetes concepts, follow the [Kubernetes Documentation Style Guide](https://kubernetes.io/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects) for API object capitalization:
+
+- **Use UpperCamelCase** (PascalCase) when referring specifically to Kubernetes API objects: Pod, Service, Namespace, ConfigMap, Secret, Ingress, Deployment, Node, etc.
+- **Use lowercase** when generally discussing the concept or referring to instances in prose.
+
+For example, "The Pod object contains a `hostPath` field" (API reference) vs "the pod is running on node-1" (general discussion).
+
+Special cases:
+
+- **kubeconfig**: Use lowercase when referring to the configuration file (e.g., "update the kubeconfig file").
+- **KUBECONFIG**: Use uppercase when referring to the environment variable.

Canonical welcomes contributions to Charmed Apache Spark. Please check out our [contributor agreement](https://ubuntu.com/legal/contributors) if you're interested in contributing to the solution.