Commit ef4d6b5

Fix markdown linting test
1 parent d281a33 commit ef4d6b5

3 files changed: 15 additions & 5 deletions

Lines changed: 3 additions & 0 deletions

```diff
@@ -0,0 +1,3 @@
+plugins:
+  md022:
+    enabled: false
```
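The rule identifier `md022` matches markdownlint's MD022 (`blanks-around-headings`), which requires a blank line before and after every heading; the exact `plugins:` schema above depends on which linter this repository runs, so treat the mapping as an assumption. A hypothetical fragment of the kind MD022 flags, next to its compliant form:

```markdown
Some introductory text.
## Heading jammed against text (MD022 would flag this)
More text.

Some introductory text.

## Heading with blank lines around it (compliant)

More text.
```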

releases/3.4/yaml/docs/tutorial/1-environment-setup.md

Lines changed: 12 additions & 4 deletions
````diff
@@ -190,7 +190,10 @@ kubectl get roles -n spark
 kubectl get rolebindings -n spark
 ```
 
-[details="Output example"]
+<details>
+
+<summary> Output example</summary>
+
 ```text
 kubectl get serviceaccounts -n spark
 NAME      SECRETS   AGE
@@ -206,7 +209,8 @@ kubectl get rolebindings -n spark
 NAME                 ROLE              AGE
 spark-role-binding   Role/spark-role   2m48s
 ```
-[/details]
+
+</details>
 
 Now, launch a PySpark shell using the service account you created earlier to verify that it works:
 
@@ -394,7 +398,10 @@ You can also see the configuration stored in a Kubernetes secret:
 kubectl get secret -n spark -o yaml
 ```
 
-[details="Output example"]
+<details>
+
+<summary> Output example</summary>
+
 ```text
 apiVersion: v1
 items:
@@ -418,7 +425,8 @@ kind: List
 metadata:
   resourceVersion: ""
 ```
-[/details]
+
+</details>
 
 With that, the tutorial’s environment setup is complete!
 
````
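The edits in this file replace Discourse-style `[details]` blocks with standard HTML `<details>`/`<summary>` elements, which GitHub's Markdown renderer collapses natively; the added blank lines around the tags let the content between them still be parsed as Markdown. The resulting pattern looks like this (the output text is a placeholder, not from the source):

````markdown
<details>

<summary> Output example</summary>

```text
(command output here)
```

</details>
````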

releases/3.4/yaml/docs/tutorial/2-distributed-data-processing.md

Lines changed: 0 additions & 1 deletion

````diff
@@ -106,7 +106,6 @@ You can do that by manually setting the version of image used on the cluster sid
 ```bash
 spark-client.service-account-registry add-config --username spark --namespace spark --conf spark.kubernetes.container.image=ghcr.io/canonical/charmed-spark:3.4.2-22.04_edge
 ```
-```
 
 Here, we split the data into two parts, generating a distributed data structure, where each line is stored in one of the (possibly many) executors.
 The number of vowels in each line is computed, line by line, with the `count_vowels` function on each executor in parallel.
````
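The `count_vowels` step described in the closing context lines can be sketched in plain Python. The function name comes from the tutorial, but this standalone body (with no Spark dependency) is an assumed implementation for illustration only:

```python
# Standalone sketch of the per-line vowel count the tutorial runs on
# Spark executors. `count_vowels` is named in the tutorial; this plain
# Python body is an assumption made for illustration.

def count_vowels(line: str) -> int:
    """Return the number of vowels in a single line of text."""
    return sum(1 for ch in line.lower() if ch in "aeiou")

# Each line would live on one executor; summing the per-line results
# mirrors collecting the distributed counts back to the driver.
lines = ["Hello world", "Apache Spark"]
total = sum(count_vowels(line) for line in lines)
print(total)  # 3 vowels in the first line + 4 in the second = 7
```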
