Commit b603f82

[PRA-154] docs: Polish the terminology capitalisation (#188)

* Updated terminology capitalisation
* Sorted the custom_wordlist

1 parent: 6276026

9 files changed: 69 additions & 64 deletions

File tree

docs/.custom_wordlist.txt

Lines changed: 40 additions & 48 deletions

@@ -3,39 +3,74 @@ addon
 addons
 backend
 backends
+Canonical's
 Charmcraft
 cjk
+conformant
 cryptographically
+csv
+CVEs
 databag
 databags
+datalake
+datalakes
+DBeaver
+dbeaver
 dvipng
 fonts
 freefont
+Furo
 github
+GitHub
 GPG
 GPU
 GPUs
 gyre
-https
 html
-io
+https
 Intersphinx
+io
+ip
+Kaggle
+kaggle
+kubeconfig
+lakehouse
+landscape
 lang
+lastmod
 LaTeX
 latexmk
+loadbalancer
+loadbalancers
+metastore
 Multipass
+MyST
+Open Graph
 otf
+PDF
+performant
+plaintext
 plantuml
 PNG
 postgres
 postgresql
+PR
+PVCs
 Pygments
 pymarkdown
 QEMU
-Rockcraft
+Read the Docs
 readthedocs
+reStructuredText
+Rockcraft
 rst
+serverless
+Servlet
 sitemapindex
+Sphinx
+Spread
+spread_test_example
+Storages
 subproject
 subprojects
 SVG
@@ -45,6 +80,7 @@ TOC
 toctree
 txt
 uncommenting
+URL
 utils
 VMs
 WCAG
@@ -54,49 +90,5 @@ wordlist
 xetex
 xindy
 xml
-ip
-spread_test_example
-Furo
-PDF
-Open Graph
-MyST
-YouTube
-reStructuredText
-GitHub
-Sphinx
-URL
-PR
-Read the Docs
-Spread
-landscape
-lastmod
 yaml
-conformant
-PVCs
-Servlet
-Storages
-datalake
-datalakes
-kubeconfig
-CVEs
-Canonical's
-metastore
-DBeaver
-lakehouse
-serverless
-Kaggle
-GPUs
-Kubeconfig
-ConfigMaps
-plaintext
-databag
-serverless
-performant
-configmap
-configmaps
-loadbalancer
-loadbalancers
-kaggle
-csv
-dbeaver
-
+YouTube
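The commit message notes the wordlist was sorted, and the resulting file reads as case-insensitive alphabetical order. As a hedged sketch (not part of the repository's tooling), such an ordering could be verified like this:

```python
def is_sorted_wordlist(words):
    """Check that wordlist entries are in case-insensitive alphabetical order."""
    keys = [w.casefold() for w in words]
    return keys == sorted(keys)

# A slice of the sorted list from this commit passes the check:
sample = ["addon", "addons", "backend", "backends", "Canonical's", "Charmcraft", "cjk"]
print(is_sorted_wordlist(sample))  # True
# The pre-commit tail had "YouTube" before "yaml", which fails the check:
print(is_sorted_wordlist(["YouTube", "yaml"]))  # False
```

Using `casefold` keeps mixed-case pairs such as `DBeaver`/`dbeaver` adjacent, matching how the entries appear in the diff above.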

docs/explanation/configuration.md

Lines changed: 2 additions & 2 deletions

@@ -26,7 +26,7 @@ the latter sources on top of previous ones in case of multi-level definitions.
 
 ## Group configuration
 
-Group configurations are centrally stored as secrets in K8s,
+Group configurations are centrally stored as Secret objects,
 and managed by `spark-integration-hub-k8s` charm that takes care of managing
 their lifecycle from creation, modification and deletion.
 See the
@@ -36,7 +36,7 @@ setting up group configurations. These are valid across users, machines and sess
 
 ## User configuration
 
-User configurations are centrally stored as secrets in K8s, but they are
+User configurations are centrally stored as Secret objects, but they are
 managed by the user using the `spark-client` snap and/or `spark8t` Python library.
 For more information, please refer to
 [here](/how-to/manage-service-accounts/using-spark-client-snap) for the `spark-client`
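The hunk context above mentions layering "the latter sources on top of previous ones in case of multi-level definitions". A minimal sketch of that precedence rule (illustrative only, not the actual spark8t implementation; the property names are examples):

```python
def resolve_config(*levels):
    """Merge configuration levels; later levels override earlier ones,
    mirroring the multi-level precedence the page describes."""
    merged = {}
    for level in levels:
        merged.update(level)
    return merged

group = {"spark.executor.instances": "2", "spark.eventLog.enabled": "true"}
user = {"spark.executor.instances": "4"}  # user-level value wins on conflict
print(resolve_config(group, user))
# {'spark.executor.instances': '4', 'spark.eventLog.enabled': 'true'}
```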

docs/explanation/cryptography.md

Lines changed: 2 additions & 2 deletions

@@ -76,7 +76,7 @@ The communication between K8s and Spark8t is always encrypted by default with au
 Since Apache Spark Client snap is a wrapper around this library, the communication between the snap and the K8s cluster
 is also authenticated and encrypted.
 
-The Spark8t Python library stores Apache Spark properties as Kubernetes Secrets because these properties may contain
+The Spark8t Python library stores Apache Spark properties as Secret objects because these properties may contain
 credentials like database password, AWS S3 access key, secret keys, etc.
 These are encrypted both at rest and in transit by default by the underlying K8s engine.
 
@@ -135,7 +135,7 @@ Azure object storages. Credentials are stored in peer-relation data for S3 Integ
 the Azure Object storage, and they are communicated to other charms (Spark History Server, Integration Hub
 and Apache Kyuubi) via relation databag.
 
-Integration Hub stores the credentials into Kubernetes Secrets to be made available for Spark jobs.
+Integration Hub stores the credentials into Secret objects to be made available for Spark jobs.
 Driver and executors store configuration files (where credential information is stored) unencrypted in `/etc/spark8t/conf`.
 Apache Kyuubi stores credentials in unencrypted configuration files in `/opt/spark/conf`, to be used to configure
 Apache Spark Engine, and `/opt/kyuubi/conf`, to be used to configure the Kyuubi Server.

docs/explanation/monitoring.md

Lines changed: 2 additions & 2 deletions

@@ -49,9 +49,9 @@ Every pod needs to be run using a service account that needs to be set up with t
 correct permissions. Charmed Apache Spark tooling, i.e. the `spark-client` snap
 and the `spark8t` Python libraries, make sure that service account are
 created correctly and configured appropriately. The driver pod must be running
-with a service account that is able to create pods, services and configmaps in
+with a service account that is able to create Pods, Services and ConfigMaps in
 order to correctly spawn executors. Moreover, Charmed Apache Spark service accounts
-also store Apache Spark configuration centrally in Kubernetes as secrets, that must
+also store Apache Spark configuration centrally in Kubernetes as Secret objects, that must
 be readable/writable depending on their scope. Please refer to the explanations
 about the [Charmed Apache Spark hierarchical configuration](explanation-configuration) for more information.
 

docs/how-to/deploy/environment.md

Lines changed: 1 addition & 1 deletion

@@ -308,7 +308,7 @@ terraform output
 # resource_group_name = "TestSparkAKSRG"
 ```
 
-#### Generating the Kubeconfig file
+#### Generating the kubeconfig file
 
 To generate the `kubeconfig` file for connecting the client to the newly created cluster:
 

docs/how-to/manage-service-accounts/using-spark-client-snap.md

Lines changed: 2 additions & 2 deletions

@@ -17,7 +17,7 @@ using Juju relations. For more information about how to use the configuration hu
 ```
 
 ```{caution}
-The following commands assume that you have administrative permission on the namespaces (or on the Kubernetes cluster) so that the corresponding resources (such as service accounts, secrets, roles, and role bindings) can be created and deleted.
+The following commands assume that you have administrative permission on the namespaces (or on the Kubernetes cluster) so that the corresponding resources (such as ServiceAccounts, Secrets, Roles, and RoleBindings) can be created and deleted.
 ```
 
 ## Create service account
@@ -98,7 +98,7 @@ spark-client.service-account-registry get-primary
 
 ## Cleanup a service account
 
-To delete the service account together with the other resources created, e.g. secrets, role, role-bindings, etc.:
+To delete the service account together with the other resources created, e.g. Secrets, Roles, RoleBindings, etc.:
 
 ```bash
 spark-client.service-account-registry delete --username demouser --namespace demonamespace

docs/tutorial/1-environment-setup.md

Lines changed: 4 additions & 4 deletions

@@ -166,7 +166,7 @@ The MicroK8s setup is complete.
 
 ## The `spark-client` snap
 
-For Apache Spark jobs to be running run on top of Kubernetes, a set of resources (service account, associated roles, role bindings etc.) need to be created and configured.
+For Apache Spark jobs to be running run on top of Kubernetes, a set of resources (ServiceAccount, associated Roles, RoleBindings etc.) need to be created and configured.
 To simplify this task, the Charmed Apache Spark solution offers the `spark-client` snap. Install the snap:
 
 ```bash
@@ -179,14 +179,14 @@ Let's create a Kubernetes namespace for us to use as a playground in this tutori
 kubectl create namespace spark
 ```
 
-We will now create a Kubernetes service account that will be used to run the Spark jobs. The creation of the service account can be done using the `spark-client` snap, which will create necessary roles, role bindings and other necessary configurations along with the creation of the service account:
+We will now create a ServiceAccount that will be used to run the Spark jobs. The creation of the ServiceAccount can be done using the `spark-client` snap, which will create necessary Roles, RoleBindings and other necessary configurations along with the creation of the ServiceAccount:
 
 ```bash
 spark-client.service-account-registry create \
 --username spark --namespace spark
 ```
 
-This command does a number of things in the background. First, it creates a service account in the `spark` namespace with the name `spark`. Then it creates a role with name `spark-role` with all the required RBAC permissions and binds that role to the service account by creating a role binding.
+This command does a number of things in the background. First, it creates a ServiceAccount in the `spark` namespace with the name `spark`. Then it creates a Role with name `spark-role` with all the required RBAC permissions and binds that Role to the ServiceAccount by creating a RoleBinding.
 
 These resources can be viewed with `kubectl get` commands as follows:
 
@@ -363,7 +363,7 @@ With the access key, secret key, and the endpoint properly configured, you shoul
 
 For Apache Spark to be able to access and use our local S3 bucket, we need to provide a few configuration options including the bucket endpoint, access key and secret key.
 
-In the Charmed Apache Spark solution, these configurations are stored in a Kubernetes secret and bound to a Kubernetes service account. When Spark jobs are executed using that service account, all associated configurations are automatically retrieved and supplied to Apache Spark.
+In the Charmed Apache Spark solution, these configurations are stored in a Secret object and bound to a ServiceAccount. When Spark jobs are executed using that service account, all associated configurations are automatically retrieved and supplied to Apache Spark.
 
 The S3 configurations can be added to the existing `spark` service account with the following command:
 

docs/tutorial/4-history-server.md

Lines changed: 3 additions & 3 deletions

@@ -136,16 +136,16 @@ The Spark History Server comes with a Web UI for users to view and monitor the S
 
 ### Setup
 
-The web UI can be accessed at port 18080 of the IP address of the `spark-history-server-k8s/0` unit. However, it's good practice to access it via a Kubernetes Ingress rather than directly accessing the unit's IP address. Using an Ingress will allow us to have a common entrypoint to the applications running in the Juju model.
+The web UI can be accessed at port 18080 of the IP address of the `spark-history-server-k8s/0` unit. However, it's good practice to access it via an ingress rather than directly accessing the unit's IP address. Using an ingress will allow us to have a common entrypoint to the applications running in the Juju model.
 
-Let's add an Ingress by deploying and integrating the [`traefik-k8s`](https://charmhub.io/traefik-k8s) charm with `spark-history-server-k8s`:
+Let's add an ingress by deploying and integrating the [`traefik-k8s`](https://charmhub.io/traefik-k8s) charm with `spark-history-server-k8s`:
 
 ```bash
 juju deploy traefik-k8s --channel latest/stable --trust
 juju integrate traefik-k8s spark-history-server-k8s
 ```
 
-Now that Traefik has been deployed and configured, we can fetch the Ingress URL of the Spark History Server by running the `show-proxied-endpoints` action on the Traefik charm:
+Now that Traefik has been deployed and configured, we can fetch the ingress URL of the Spark History Server by running the `show-proxied-endpoints` action on the Traefik charm:
 
 ```bash
 juju run traefik-k8s/0 show-proxied-endpoints

python/CONTRIBUTING.md

Lines changed: 13 additions & 0 deletions

@@ -89,7 +89,20 @@ For documentation in this repository the following conventions are applied (see
 The full form must be used at least once per page.
 The full form must be used at the first entry to the page’s headings, body of text, callouts, and graphics.
 For subsequent usage, the full form can be substituted by alternatives.
+### Kubernetes Terminology
 
+When documenting Kubernetes concepts, follow the [Kubernetes Documentation Style Guide](https://kubernetes.io/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects) for API object capitalization:
+
+- **Use UpperCamelCase** (PascalCase) when referring specifically to Kubernetes API objects: Pod, Service, Namespace, ConfigMap, Secret, Ingress, Deployment, Node, etc.
+- **Use lowercase** when generally discussing the concept or referring to instances in prose.
+
+For example, "The Pod object contains a `hostPath` field" (API reference) vs "the pod is running on node-1" (general discussion).
+
+Special cases:
+
+- **kubeconfig**: Use lowercase when referring to the configuration file (e.g., "update the kubeconfig file").
+- **KUBECONFIG**: Use uppercase when referring to the environment variable.
+- **Product names**: Capitalize properly (e.g., "Azure Kubernetes Service", "Amazon Elastic Kubernetes Service").
 ## Canonical Contributor Agreement
 
 Canonical welcomes contributions to Charmed Apache Spark. Please check out our [contributor agreement](https://ubuntu.com/legal/contributors) if you're interested in contributing to the solution.
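The capitalization convention added above is the kind of rule a reviewer could screen for mechanically. A hypothetical sketch (not part of the repository's tooling) that lists lowercase occurrences of API object names as candidates for human review, since the guideline itself permits lowercase in general prose:

```python
import re

# Subset of API object names for illustration; extend as needed.
API_OBJECTS = ["ConfigMap", "Secret", "Pod", "Service", "Ingress", "RoleBinding", "ServiceAccount"]

def candidate_lowercase_objects(text):
    """Return lowercase occurrences of API object names (singular or plural)
    so a reviewer can decide whether UpperCamelCase is required."""
    hits = []
    for name in API_OBJECTS:
        pattern = rf"\b{name.lower()}s?\b"  # case-sensitive: only flags lowercase
        hits.extend(m.group() for m in re.finditer(pattern, text))
    return hits

print(candidate_lowercase_objects("Credentials are stored in a configmap and a secret."))
# ['configmap', 'secret']
```

Text already using the convention, such as "stored as Secret objects", produces no candidates because the match is case-sensitive.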
