Update training from Kibana to Loki #219

Draft · wants to merge 14 commits into base: master
5 changes: 3 additions & 2 deletions .vscode/extensions.json
@@ -1,5 +1,6 @@
{
"recommendations": [
"streetsidesoftware.code-spell-checker"
"streetsidesoftware.code-spell-checker",
"DavidAnson.vscode-markdownlint"
]
}
}
12 changes: 10 additions & 2 deletions .vscode/settings.json
@@ -7,5 +7,13 @@
"openshift",
"OCIO",
"parkade"
]
}
],
"files.eol": "\n",
"files.insertFinalNewline": true,
"markdownlint.config": {

"MD033": {
"allowed_elements": ["kbd"]
}
}
}
46 changes: 26 additions & 20 deletions 101-lab/content/12_logging_and_visualizations.md
@@ -2,18 +2,19 @@

<kbd>[![Video Walkthrough Thumbnail](././images/12_logging_thumb.png)](https://youtu.be/zDAJcN5yTCg)</kbd>

### EFK for Aggregated Logs
The OpenShift platform provides an aggregated logging stack that is automatically configured to centralize and store logs from application pods. These logs are only retained for a short period of time, currently about 14 days, but can be used to help identify issues with application pods.
## EFK for Aggregated Logs

Kibana is the primary interface for viewing and querying logs.
The OpenShift platform provides an aggregated logging stack that is automatically configured to centralize and store logs from application pods. These logs are only retained for a short period of time, currently about 14 days, but can be used to help identify issues with application pods.

#### Access the archive link from a pod
The shortcut towards accessing the Kibana is from the `Logs` tab of a running pod. Kibana can also be accessed directly at its [url](https://kibana-openshift-logging.apps.silver.devops.gov.bc.ca/).
Kibana is the primary interface for viewing and querying logs.

### Access the archive link from a pod

The shortcut for accessing Kibana is the `Logs` tab of a running pod. Kibana can also be accessed directly at its [url](https://kibana-openshift-logging.apps.silver.devops.gov.bc.ca/).

- Select the running `rocketchat-[username]` pod and open the Logs tab

<kbd>![](./images/10_logging_01.png)</kbd>
<kbd>![10_logging_01](./images/10_logging_01.png)</kbd>

- Click on the "Show in Kibana" link to go to Kibana
- Kibana login is set up with SSO; you will see the same login page as the OpenShift console
@@ -23,41 +24,46 @@ The shortcut towards accessing the Kibana is from the `Logs` tab of a running po
- Index pattern: `app*`
- Timestamp field name: `@timestamp`

<kbd>![](./images/10_logging_setup_01.png)</kbd>

<kbd>![](./images/10_logging_setup_02.png)</kbd>
<kbd>![10_logging_setup_01](./images/10_logging_setup_01.png)</kbd>

<kbd>![10_logging_setup_02](./images/10_logging_setup_02.png)</kbd>

- Click the 'Discover' tab and review the logging interface and the query that has been automatically populated (there are more examples to explore at the end of this section)

<kbd>![](./images/10_logging_02.png)</kbd>
<kbd>![10_logging_02](./images/10_logging_02.png)</kbd>

- Modify the query and time picker to select the entire namespace within the last few hours. First, I'm going to edit the query `kubernetes.namespace_name:"[-dev]"`, but replacing `[-dev]` with the name of the namespace I'm using. In the example, this is `kubernetes.namespace_name:"d8f105-dev"`. Next, I'll add a filter based on the `@timestamp` of the log messages, checking if each log entry `is between` the time periods `now-3h` and `now` and only displaying those logs.

- Modify the query and time picker to select the entire namespace within the last few hours. First, I'm going to edit the query `kubernetes.namespace_name:"[-dev]"`, but replacing `[-dev]` with the name of the namespace I'm using. In the example, this is `kubernetes.namespace_name:"d8f105-dev"`. Next, I'll add a filter based on the `@timestamp` of the log messages, checking if each log entry `is between` the time periods `now-3h` and `now` and only displaying those logs.
<kbd>![10_logging_03](./images/10_logging_03.png)</kbd>

<kbd>![](./images/10_logging_03.png)</kbd>
<kbd>![12_kibana_filter](./images/12_kibana_filter.png)</kbd>

<kbd>![](./images/12_kibana_filter.png)</kbd>
- To see quick summary charts of a particular field, click the field name in the left menu of selected or available fields. In this case, let's click on the `kubernetes.pod_name` field.

- To see quick summary charts of a particular field, click the field name in the left menu of selected or available fields. In this case, let's click on the `kubernetes.pod_name` field.
<kbd>![12_kibana_timestamp](./images/12_kibana_timestamp.png)</kbd>

<kbd>![](./images/12_kibana_timestamp.png)</kbd>
- Let's visualize which pods have been generating the most logs in the last 15 minutes. Click the 'visualize' button at the bottom of our `kubernetes.pod_name` field. Then, in the top right hand corner of the Kibana interface, look for the option to change the time range. Change this to `Last 15 minutes` from the `Quick` menu. You may wish to explore experimenting with visualizing other fields and time ranges.

- Let's visualize which pods have been generating the most logs in the last 15 minutes. Click the 'visualize' button at the bottom of our `kubernetes.pod_name` field. Then, in the top right hand corner of the Kibana interface, look for the option to change the time range. Change this to `Last 15 minutes` from the `Quick` menu. You may wish to explore experimenting with visualizing other fields and time ranges.
<kbd>![12_kibana_time_range](./images/12_kibana_time_range.png)</kbd>

<kbd>![](./images/12_kibana_time_range.png)</kbd>
<kbd>![12_kibana_visualization](./images/12_kibana_visualization.png)</kbd>

<kbd>![](./images/12_kibana_visualization.png)</kbd>
### Some useful queries to try

- Get logs for the whole namespace:

#### Some useful queries you can try on:
- Get logs for the whole namespace:
```sql
kubernetes.namespace_name:"[-dev]"
```

- Use application labels to query logs from the same deployment:

```sql
kubernetes.namespace_name:"[-dev]" AND kubernetes.flat_labels:"deployment=[deployment_name]"
```

- Get error logs only:

```sql
kubernetes.namespace_name:"[-dev]" AND level:error
```
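
- Use the `message` field to find logs containing a specific term (the term `login` below is only an illustrative placeholder — substitute whatever text you are searching for):

```sql
kubernetes.namespace_name:"[-dev]" AND message:"login"
```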
2 changes: 1 addition & 1 deletion openshift-201/README.md
@@ -20,7 +20,7 @@ The Openshift 201 Lab is divided into the following topics:
* [Java Application (Maven)](./pipelines.md)
* [Resource Management](./resource-mgmt.md)
* [Network Policy & ACS](./network-policy.md)
* [Application Logging with Kibana](./logging.md)
* [Application Logging with Loki](./logging.md)
* [Best Practices for Image Management](./image-management.md)
* [Pod Auto Scaling](./rh201-pod-auto-scale.md)
* [Post Outage Checkup](./post-outage-checkup.md)
Binary file added openshift-201/images/logging/loki-logs-1.png
Binary file added openshift-201/images/logging/loki-main.png
Binary file modified openshift-201/images/logging/pod-logs-02.png
4 changes: 2 additions & 2 deletions openshift-201/instructor-resources/201-workshop-notes.md
@@ -174,9 +174,9 @@ Tekton: Pipeline is kind of like a template, the pipeline run takes the paramete

### Recovery Checklist

## Application Logging with Kibana
## Application Logging with Loki

### Kibana Logging
### Loki Logging
### Usage

## Network Policy and ACS
71 changes: 41 additions & 30 deletions openshift-201/logging.md
@@ -4,83 +4,87 @@

[Video walkthrough](https://youtu.be/VnpelRzTjOw)

## Objectives:
## Objectives

After completing this section, you should know how to view application logs in Kibana, navigate the list of fields, and create/save queries.

## Setup

We will set up a sample application that will produce a log entry every 5 seconds.

### Create a new application
### Create a new application

```bash
oc -n [-dev] new-app --name logging-app \
--context-dir=openshift-201/materials/logging \
https://github.com/BCDevOps/devops-platform-workshops

```

You should see output similar to the following:
<pre>

```text
...output omitted...
imagestream.image.openshift.io "logging-app-jmacdonald" created
buildconfig.build.openshift.io "logging-app-jmacdonald" created
deployment.apps "logging-app-jmacdonald" created
service "logging-app-jmacdonald" created
--> Success
...output omitted...
</pre>

```

### Follow Build

Use the `oc -n [-dev] logs` command to check the build logs from the `logging-app` build:

```bash
oc -n [-dev] logs -f bc/logging-app
```
<pre>

```text
...output omitted...
Writing manifest to image destination
Storing signatures
...output omitted...
Push successful
</pre>
```
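
Once the build completes and the deployment rolls out, you can optionally confirm the pod is running. This extra check is not part of the original lab steps; it relies on the `app=logging-app` label that `new-app` applies (the same label used in the clean-up step at the end of this lab):

```bash
# List the pods created for the sample app; expect one pod in the Running state
oc -n [-dev] get pods -l app=logging-app
```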

## Kibana
## Loki

### Accessing Kibana
You can access Kibana directly at this [url](https://kibana-openshift-logging.apps.silver.devops.gov.bc.ca/) or it is also accessible from the OpenShift console.
### Accessing Logs

Note: If you receive an unauthorized error (e.g. `{"statusCode":401,"error":"Unauthorized","message":"Authentication Exception"}`), follow steps here to fix: https://stackoverflow.developer.gov.bc.ca/a/119/16
You can access Loki from the OpenShift console in Developer mode, under Observe -> Logs.

Select the running pod that was just created:
<kbd>![loki-logs-1](images/logging/loki-logs-1.png)</kbd>

<kbd>![pod-logs-1](images/logging/pod-logs-01.png)</kbd>
Alternatively, you can access it directly from a pod.

Navigate to the Logs tab and click the `Show in Kibana` link
Select the running pod that was just created

<kbd>![pod-logs-2](images/logging/pod-logs-02.png)</kbd>
<kbd>![pod-logs-1](images/logging/pod-logs-01.png)</kbd>

### First time Setup
If this is your first time logging in to Kibana you may see a screen to setup a search index. See the steps in the Logging and Visualizations 101 lab [here](https://github.com/BCDevOps/devops-platform-workshops/blob/master/101-lab/content/12_logging_and_visualizations.md#logging-and-visualizations).
Navigate to the Aggregated Logs tab

<kbd>![pod-logs-2](images/logging/pod-logs-02.png)</kbd>

### View Logs
To view logs click on the `Discover` tab on the left navigation pane.

<kbd>![kibana-discover](images/logging/kibana-discover.png)</kbd>

By default you will see something like this:

<kbd>![kibana-main](images/logging/kibana-main.png)</kbd>
<kbd>![loki-main](images/logging/loki-main.png)</kbd>

1. Index Pattern you created above.
2. Fields selected to show (`_source` is selected by default)
3. Available Fields to add to your display
4. Log entries that match the filter, search, etc.
5. Current activity given the time frame chosen
6. Search bar used to search for specific entries
7. Time frame chosen for the logs shown (default is last 15 minutes)
1. You can filter on the content of logs, or by namespace, pod, or container name.
2. Currently applied filters
3. Shows a bar chart of the number of log entries per time period that match your filter
4. Time range to show logs for
5. Set the page to refresh the log results every X time period
6. Adds the namespace, pod, and container names to all the log entries displayed below
7. Some detailed stats on how your query was performed
8. Button to run the query again
9. Shows the LogQL query being used (see the sample query sketched after this list)
10. Log entries that match the filter, search, etc.
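
For reference, the LogQL behind this view is a stream selector (labels in curly braces) plus optional line filters. A minimal sketch of such a query is shown below; the label names `kubernetes_namespace_name` and `kubernetes_pod_name` are assumptions based on typical OpenShift Loki labelling and may differ on your cluster, and `[-dev]` is the usual namespace placeholder:

```text
{kubernetes_namespace_name="[-dev]", kubernetes_pod_name=~"logging-app.*"} |= "error"
```

This selects log streams from your namespace, narrows them to pods whose name starts with `logging-app`, and keeps only lines containing the text `error`.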

### Fields

Let's select 2 fields for viewing from the `Available fields` panel on the left.

1. `kubernetes.container_name` - this is the name of the container running in kubernetes. This should be `logging-app`
@@ -93,9 +97,11 @@ Your screen should look similar to the following:
### Queries

Let's say we are only interested in the messages with the number 10 in them. Change the search terms to be the following:
```

```text
kubernetes.container_name:"logging-app" AND message:10
```
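
If you are viewing these logs through the Loki interface instead of Kibana, a roughly equivalent LogQL line filter might look like the sketch below; the `kubernetes_container_name` label name is an assumption based on common OpenShift Loki labels:

```text
{kubernetes_container_name="logging-app"} |= "10"
```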

__NOTE__: if you aren't seeing results, it may have been more than 15 minutes since the entry with the number 10 was logged. If so, change the timeframe in the upper right corner to `Last 30 minutes` or higher if needed.

<kbd>![kibana-search-10](images/logging/kibana-search-10.png)</kbd>
@@ -107,6 +113,7 @@ If you want to save your query (including the selected fields) click the save bu
<kbd>![kibana-save-search](images/logging/kibana-save-search.png)</kbd>

### Filters

If you want a free-text, Google-style search, use a query. If you are selecting a value from a drop-down, such as `kubernetes.container_name`, it can be faster to use a filter.

Clear out the text in your search bar and then click the `Add a filter +` button just below the search bar:
@@ -120,17 +127,21 @@ Choose the `kubernetes.container_name` for the field, `is` as the operator and `
You should now see only the matching entries in the list, similar to the query we performed above. You can also save this filter by clicking the save button at the top, just like we did with the query.

## Conclusion

There are many fields available to choose from. Feel free to experiment with adding other fields to your results. For example, you could add `kubernetes.container_image` to your list if you are interested in which version of the app the logs are from.

The queries we did in this lab are pretty simple. Take a look at the [Kibana Query Language](https://www.elastic.co/guide/en/kibana/current/kuery-query.html) for more information on how to write complex queries.

### Clean up

To clean up the lab environment, run the following command to delete all of the resources we created:

```bash
oc -n [-dev] delete all -l app=logging-app
```

You should see output similar to the following:

```text
deployment.apps "logging-app" deleted
buildconfig.build.openshift.io "logging-app" deleted
imagestream.image.openshift.io "logging-app" deleted
```
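
If you want to confirm everything was removed, you can list any resources still carrying the label (an optional check, not part of the original lab):

```bash
# Expect "No resources found" once the clean-up has finished
oc -n [-dev] get all -l app=logging-app
```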

Next topic - [Best Practices of Image Management](https://github.com/BCDevOps/devops-platform-workshops/blob/master/openshift-201/image-management.md)
8 changes: 4 additions & 4 deletions openshift-201/materials/quiz.md
Original file line number Diff line number Diff line change
Expand Up @@ -96,13 +96,13 @@
- All of the above


## Kibana Logging
## Loki Logging

1\. How long are logs stored in Kibana?
1\. How long are logs stored in Loki?

- Logs are stored indefinitely
- Logs are stored indefinitely
- Log stores build up annually but then are burned in the winter
- Logs are stored for 3 days
- Logs are stored for 3 days
- Logs are stored for 14 days

## Network Policy & ACS
2 changes: 1 addition & 1 deletion openshift-201/network-policy.md
@@ -387,7 +387,7 @@ ACS will generate a baseline network flow for our deployments this can be viewed

From the `baseline settings` tab you can also click on `simulate baseline as network policy`. This will generate a YAML network policy file with rules for the observed baseline traffic.

Next topic - [Application Logging With Kibana](https://github.com/BCDevOps/devops-platform-workshops/blob/master/openshift-201/logging.md)
Next topic - [Application Logging With Loki](https://github.com/BCDevOps/devops-platform-workshops/blob/master/openshift-201/logging.md)

## Links
