
Commit e5842d5

committed: copy from english
1 parent 670c896 commit e5842d5


60 files changed (+1010 -0 lines)
Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
---
title: Architecture
linkTitle: 1. Architecture
weight: 2
time: 5 minutes
---

The Spring PetClinic Java application is a simple microservices application that consists of frontend and backend services. The frontend service is a Spring Boot application that serves a web interface for interacting with the backend services. The backend services are Spring Boot applications that serve RESTful APIs for interacting with a MySQL database.
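For orientation, each of these services runs as its own Kubernetes Deployment once the application is deployed later in this workshop. A quick way to list them at that point (the Deployment names in the comment are taken from the deployment step that follows):

``` bash
# After the PetClinic deployment step, this lists one Deployment per service:
# api-gateway (frontend); customers-service, vets-service, visits-service (backend);
# config-server, discovery-server, admin-server (supporting); petclinic-db (database)
kubectl get deployments
```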

By the end of this workshop, you will have a better understanding of how to enable **automatic discovery and configuration** for your Java-based applications running in Kubernetes.

The diagram below details the architecture of the Spring PetClinic Java application running in Kubernetes with the Splunk OpenTelemetry Operator and automatic discovery and configuration enabled.

![Splunk Otel Architecture](../images/auto-instrumentation-java-diagram.png)

---

Based on the [**example**](https://github.com/signalfx/splunk-otel-collector-chart/blob/main/examples/enable-operator-and-auto-instrumentation/spring-petclinic-java.md) created by **Josh Voravong**.
Lines changed: 138 additions & 0 deletions
@@ -0,0 +1,138 @@
---
title: Deploy the Splunk OpenTelemetry Collector
linkTitle: 1. Deploy OpenTelemetry Collector
weight: 1
---

To get Observability signals (**metrics**, **traces** and **logs**) into **Splunk Observability Cloud** we need to deploy the Splunk OpenTelemetry Collector into the Kubernetes cluster.

For this workshop, we will be using the Splunk OpenTelemetry Collector Helm Chart. First, we need to add the Helm chart repository to Helm and run `helm repo update` to ensure we have the latest version of the chart:

{{< tabs >}}
{{% tab title="Install Helm Chart" %}}

``` bash
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart && helm repo update
```

{{% /tab %}}
{{% tab title="Output" %}}

```text
Using ACCESS_TOKEN={REDACTED}
Using REALM=eu0
"splunk-otel-collector-chart" has been added to your repositories
Using ACCESS_TOKEN={REDACTED}
Using REALM=eu0
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "splunk-otel-collector-chart" chart repository
Update Complete. ⎈Happy Helming!⎈
```

{{% /tab %}}
{{< /tabs >}}

**Splunk Observability Cloud** offers wizards in the UI to walk you through the setup of the OpenTelemetry Collector on Kubernetes, but in the interest of time, we will use the Helm install command below. Additional parameters are set to enable the operator for automatic discovery and configuration, and to enable code profiling:

* `--set="operatorcrds.install=true"` - this installs the Custom Resource Definitions (CRDs) required by the operator.
* `--set="operator.enabled=true"` - this will install the OpenTelemetry operator that will be used to handle automatic discovery and configuration.
* `--set="splunkObservability.profilingEnabled=true"` - this enables Code Profiling via the operator.
40+
41+
To install the collector run the following command. Do **NOT** edit this:
42+
43+
{{< tabs >}}
44+
{{% tab title="Helm Install" %}}
45+
46+
```bash
47+
helm install splunk-otel-collector --version {{< otel-version >}} \
48+
--set="operatorcrds.install=true", \
49+
--set="operator.enabled=true", \
50+
--set="splunkObservability.realm=$REALM" \
51+
--set="splunkObservability.accessToken=$ACCESS_TOKEN" \
52+
--set="clusterName=$INSTANCE-k3s-cluster" \
53+
--set="splunkObservability.profilingEnabled=true" \
54+
--set="agent.service.enabled=true" \
55+
--set="environment=$INSTANCE-workshop" \
56+
--set="splunkPlatform.endpoint=$HEC_URL" \
57+
--set="splunkPlatform.token=$HEC_TOKEN" \
58+
--set="splunkPlatform.index=splunk4rookies-workshop" \
59+
splunk-otel-collector-chart/splunk-otel-collector \
60+
-f ~/workshop/k3s/otel-collector.yaml
61+
62+
{{% /tab %}}
63+
{{% tab title="Output" %}}
64+
65+
``` plaintext
66+
LAST DEPLOYED: Fri Apr 19 09:39:54 2024
67+
NAMESPACE: default
68+
STATUS: deployed
69+
REVISION: 1
70+
NOTES:
71+
Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Platform endpoint "https://http-inputs-o11y-workshop-eu0.splunkcloud.com:443/services/collector/event".
72+
73+
Splunk OpenTelemetry Collector is installed and configured to send data to Splunk Observability realm eu0.
74+
75+
[INFO] You've enabled the operator's auto-instrumentation feature (operator.enabled=true)! The operator can automatically instrument Kubernetes hosted applications.
76+
- Status: Instrumentation language maturity varies. See `operator.instrumentation.spec` and documentation for utilized instrumentation details.
77+
- Splunk Support: We offer full support for Splunk distributions and best-effort support for native OpenTelemetry distributions of auto-instrumentation libraries.
78+
```
79+
80+
{{% /tab %}}
81+
{{< /tabs >}}
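
Before checking the Pods, you can confirm that Helm recorded the release (a quick sanity check; the version and timestamp in your output will differ):

``` bash
# Verify the splunk-otel-collector release is deployed
helm list --filter splunk-otel-collector
```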

Ensure the Pods are reported as **Running** before continuing (this typically takes around 30 seconds).

{{< tabs >}}
{{% tab title="kubectl get pods" %}}

``` bash
kubectl get pods | grep splunk-otel
```

{{% /tab %}}
{{% tab title="Output" %}}

``` text
splunk-otel-collector-k8s-cluster-receiver-6bd5567d95-5f8cj   1/1     Running   0          10m
splunk-otel-collector-agent-tspd2                             1/1     Running   0          10m
splunk-otel-collector-operator-69d476cb7-j7zwd                2/2     Running   0          10m
```

{{% /tab %}}
{{< /tabs >}}
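
If you would rather wait programmatically than re-run the command above, here is a minimal sketch using `kubectl wait`, assuming the Collector Pods carry the `app=splunk-otel-collector` label (the same label used by the logs command below):

``` bash
# Block until all Collector Pods report Ready, or give up after 90 seconds
kubectl wait --for=condition=Ready pod -l app=splunk-otel-collector --timeout=90s
```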

Ensure there are no errors reported by the Splunk OpenTelemetry Collector (press `ctrl + c` to exit), or use the **awesome** `k9s` terminal UI that is already installed for bonus points!

{{< tabs >}}
{{% tab title="kubectl logs" %}}

``` bash
kubectl logs -l app=splunk-otel-collector -f --container otel-collector
```

{{% /tab %}}
{{% tab title="Output" %}}

```text
2021-03-21T16:11:10.900Z INFO service/service.go:364 Starting receivers...
2021-03-21T16:11:10.900Z INFO builder/receivers_builder.go:70 Receiver is starting... {"component_kind": "receiver", "component_type": "prometheus", "component_name": "prometheus"}
2021-03-21T16:11:11.009Z INFO builder/receivers_builder.go:75 Receiver started. {"component_kind": "receiver", "component_type": "prometheus", "component_name": "prometheus"}
2021-03-21T16:11:11.009Z INFO builder/receivers_builder.go:70 Receiver is starting... {"component_kind": "receiver", "component_type": "k8s_cluster", "component_name": "k8s_cluster"}
2021-03-21T16:11:11.009Z INFO [email protected]/watcher.go:195 Configured Kubernetes MetadataExporter {"component_kind": "receiver", "component_type": "k8s_cluster", "component_name": "k8s_cluster", "exporter_name": "signalfx"}
2021-03-21T16:11:11.009Z INFO builder/receivers_builder.go:75 Receiver started. {"component_kind": "receiver", "component_type": "k8s_cluster", "component_name": "k8s_cluster"}
2021-03-21T16:11:11.009Z INFO healthcheck/handler.go:128 Health Check state change {"component_kind": "extension", "component_type": "health_check", "component_name": "health_check", "status": "ready"}
2021-03-21T16:11:11.009Z INFO service/service.go:267 Everything is ready. Begin running and processing data.
2021-03-21T16:11:11.009Z INFO [email protected]/receiver.go:59 Starting shared informers and wait for initial cache sync. {"component_kind": "receiver", "component_type": "k8s_cluster", "component_name": "k8s_cluster"}
2021-03-21T16:11:11.281Z INFO [email protected]/receiver.go:75 Completed syncing shared informer caches. {"component_kind": "receiver", "component_type": "k8s_cluster", "component_name": "k8s_cluster"}
```

{{% /tab %}}
{{< /tabs >}}
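
To scan recent Collector output for problems without following the live stream, here is a quick non-interactive variant (a sketch; it assumes problems are logged at `error` severity and prints nothing when the logs are clean):

``` bash
# Print only error-level lines from the last five minutes of Collector logs
kubectl logs -l app=splunk-otel-collector --container otel-collector --since=5m | grep -i error
```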

>[!INFO] Deleting a failed installation
>If you make an error installing the OpenTelemetry Collector you can start over by deleting the
>installation with the following command:
>
>``` bash
>helm delete splunk-otel-collector
>```
Lines changed: 120 additions & 0 deletions
@@ -0,0 +1,120 @@
---
title: Deploy the PetClinic Application
linkTitle: 2. Deploy PetClinic Application
weight: 3
---

For the first deployment of our application, we will use prebuilt containers to give us the base scenario: a regular Java microservices-based application running in Kubernetes that we want to start observing. Let's deploy the application:

{{< tabs >}}
{{% tab title="kubectl apply" %}}

``` bash
kubectl apply -f ~/workshop/petclinic/deployment.yaml
```

{{% /tab %}}
{{% tab title="Output" %}}

``` text
deployment.apps/config-server created
service/config-server created
deployment.apps/discovery-server created
service/discovery-server created
deployment.apps/api-gateway created
service/api-gateway created
service/api-gateway-external created
deployment.apps/customers-service created
service/customers-service created
deployment.apps/vets-service created
service/vets-service created
deployment.apps/visits-service created
service/visits-service created
deployment.apps/admin-server created
service/admin-server created
service/petclinic-db created
deployment.apps/petclinic-db created
configmap/petclinic-db-initdb-config created
deployment.apps/petclinic-loadgen-deployment created
configmap/scriptfile created
```

{{% /tab %}}
{{< /tabs >}}

At this point, we can verify the deployment by checking that the Pods are running. The containers need to be downloaded and started, so this may take a couple of minutes.

{{< tabs >}}
{{% tab title="kubectl get pods" %}}

``` bash
kubectl get pods
```

{{% /tab %}}
{{% tab title="Output" %}}

``` text
NAME                                                          READY   STATUS    RESTARTS   AGE
splunk-otel-collector-k8s-cluster-receiver-655dcd9b6b-dcvkb   1/1     Running   0          114s
splunk-otel-collector-agent-dg2vj                             1/1     Running   0          114s
splunk-otel-collector-operator-57cbb8d7b4-dk5wf               2/2     Running   0          114s
petclinic-db-64d998bb66-2vzpn                                 1/1     Running   0          58s
api-gateway-d88bc765-jd5lg                                    1/1     Running   0          58s
visits-service-7f97b6c579-bh9zj                               1/1     Running   0          58s
admin-server-76d8b956c5-mb2zv                                 1/1     Running   0          58s
customers-service-847db99f79-mzlg2                            1/1     Running   0          58s
vets-service-7bdcd7dd6d-2tcfd                                 1/1     Running   0          58s
petclinic-loadgen-deployment-5d69d7f4dd-xxkn4                 1/1     Running   0          58s
config-server-67f7876d48-qrsr5                                1/1     Running   0          58s
discovery-server-554b45cfb-bqhgt                              1/1     Running   0          58s
```

{{% /tab %}}
{{< /tabs >}}

Make sure the output of `kubectl get pods` matches the output shown in the Output tab above, and ensure all the Pods are reported as **Running** (or use `k9s` to continuously monitor the status).
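
If some Pods are still starting, you can keep watching until everything settles (press `ctrl + c` to stop watching):

``` bash
# Re-print the Pod list on every state change until all Pods reach Running
kubectl get pods --watch
```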

To test the application, you need to obtain the public IP address of your instance. You can do this by running the following command:

``` bash
curl http://ifconfig.me
```

Validate that the application is running by visiting **http://<IP_ADDRESS>:81** (replace **<IP_ADDRESS>** with the IP address you obtained above). You should see the PetClinic application running. The application is also available on ports **80** & **443** if you prefer to use those, or if port **81** is unreachable.
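
As an alternative to the browser, here is a scripted check that combines both steps (a minimal sketch; it assumes the instance accepts inbound traffic on port **81**, and prints the HTTP status code, which should be `200` when the application is up):

``` bash
# Fetch the public IP, then ask PetClinic for a status code on port 81
IP_ADDRESS=$(curl -s http://ifconfig.me)
curl -s -o /dev/null -w "%{http_code}\n" "http://$IP_ADDRESS:81"
```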

![Pet shop](../../images/petclinic.png)

Make sure the application is working correctly by visiting the **All Owners** **(1)** and **Veterinarians** **(2)** tabs, confirming that you see a list of names on each page.

![Owners](../../images/petclinic-owners.png)

<!--
Once they are all running, the application will take a few minutes to fully start up, create the database and synchronize all the services, so let's use the time to check that the local private repository is active.

#### Verify the local Private Registry

Later on, when we test our **automatic discovery and configuration**, we are going to build new containers to highlight some of the additional features of Splunk Observability Cloud.

As configuration files and source code will be changed, the containers will need to be built and stored in a local private registry (which has already been enabled for you).

To check if the private registry is available, run the following command (this will return an empty list):

{{< tabs >}}
{{% tab title="Check the local Private Registry" %}}

``` bash
curl -X GET http://localhost:9999/v2/_catalog
```

{{% /tab %}}
{{% tab title="Output" %}}

```text
{"repositories":[]}
```

{{% /tab %}}
{{< /tabs >}}
-->
Lines changed: 61 additions & 0 deletions
@@ -0,0 +1,61 @@
---
title: Preparation of the Workshop instance
linkTitle: 2. Preparation
weight: 3
archetype: chapter
time: 15 minutes
---

The instructor will provide you with the login information for the instance that we will be using during the workshop.

When you first log into your instance, you will be greeted by the Splunk logo as shown below. If you have any issues connecting to your workshop instance, please reach out to your Instructor.

``` text
$ ssh -p 2222 splunk@<IP-ADDRESS>

███████╗██████╗ ██╗     ██╗   ██╗███╗   ██╗██╗  ██╗    ██╗
██╔════╝██╔══██╗██║     ██║   ██║████╗  ██║██║ ██╔╝    ╚██╗
███████╗██████╔╝██║     ██║   ██║██╔██╗ ██║█████╔╝      ╚██╗
╚════██║██╔═══╝ ██║     ██║   ██║██║╚██╗██║██╔═██╗      ██╔╝
███████║██║     ███████╗╚██████╔╝██║ ╚████║██║  ██╗    ██╔╝
╚══════╝╚═╝     ╚══════╝ ╚═════╝ ╚═╝  ╚═══╝╚═╝  ╚═╝    ╚═╝
Last login: Mon Feb 5 11:04:54 2024 from [Redacted]
splunk@show-no-config-i-0d1b29d967cb2e6ff ~ $
```

To ensure your instance is configured correctly, we need to confirm that the required environment variables for this workshop are set. In your terminal, run the following script and check that each environment variable is present and set with an actual, valid value:

{{< tabs >}}
{{% tab title="Script" %}}

``` bash
. ~/workshop/petclinic/scripts/check_env.sh
```

{{% /tab %}}
{{% tab title="Example Output" %}}

``` text
ACCESS_TOKEN = <redacted>
REALM = <e.g. eu0, us1, us2, jp0, au0 etc.>
RUM_TOKEN = <redacted>
HEC_TOKEN = <redacted>
HEC_URL = https://<...>/services/collector/event
INSTANCE = <instance_name>
```

{{% /tab %}}
{{< /tabs >}}

Please make a note of the `INSTANCE` environment variable value, as this will be used later to filter data in **Splunk Observability Cloud**.

For this workshop, **all** of the above environment variables are required. If any values are missing, please contact your Instructor.
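
If the check script is unavailable for any reason, a minimal stand-in that performs the same test is sketched below (it uses bash indirect expansion and only verifies that each variable is non-empty, not that the values are valid):

``` bash
# Flag any required workshop variable that is unset or empty
for var in ACCESS_TOKEN REALM RUM_TOKEN HEC_TOKEN HEC_URL INSTANCE; do
  [ -n "${!var}" ] && echo "OK:      $var" || echo "MISSING: $var"
done
```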

> [!SPLUNK] Delete any existing OpenTelemetry Collectors
>If you have previously completed a Splunk Observability workshop using this EC2 instance, you
>need to ensure that any existing installation of the Splunk OpenTelemetry Collector is
>deleted. This can be achieved by running the following command:
>
>``` bash
>helm delete splunk-otel-collector
>```
Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
---
title: Verify Kubernetes Cluster metrics
linkTitle: 3. Verify Cluster Metrics
weight: 4
time: 10 minutes
---

Once the installation has completed, you can log in to **Splunk Observability Cloud** and verify that the metrics are flowing in from your Kubernetes cluster.

From the left-hand menu, click on **Infrastructure** and select **Kubernetes**, then select the **Kubernetes nodes** tile.

![NavigatorList](../images/navigatorlist.png)

Once you are in the **Kubernetes nodes** overview, change the **Time** filter from **-1h** to the last 15 minutes **(-15m)** to focus on the latest data, then select **Table** to list all the nodes that are reporting metrics.

Next, in the **Refine by:** panel, select **Cluster name** and choose your cluster from the list.

{{% notice title="Tip" style="info" icon="lightbulb" %}}
To identify your specific cluster, use the `INSTANCE` value from the shell script output you ran during setup. This unique identifier helps you locate your workshop cluster among other nodes in the list.
{{% /notice %}}
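
For reference, the cluster name was set to `$INSTANCE-k3s-cluster` during the Helm install earlier, so you can print the exact name to look for straight from your workshop terminal:

``` bash
# The Cluster name to select in the Refine by: panel
echo "$INSTANCE-k3s-cluster"
```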

This will filter the list to show only the nodes from your cluster.

![K8s Nodes](../images/k8s-nodes.png)

Switch to the **K8s node logs** view to see the logs from your nodes.

![Logs](../images/k8s-peek-at-logs.png)
