
Week 2 — Distributed Tracing

Required Homework/Tasks

Following the Week 2 live stream on Observability, I completed the required assignments. I ran into a few challenges along the way, but worked through them.

1. Setting up Honeycomb

Let's set up our environment in Honeycomb.

First, log in to your account through this link

Once logged in, go to ENVIRONMENT on the left panel of your page

Image of the ENVIRONMENT panel

Click Manage Environments to create one. I named mine bootcamp

Image of environment creation

After creating the environment, you should see the environment you just created.

Image of bootcamp environment

Next, I copied the API key under View API keys. The API key you use determines the environment the data lands in.

Let's export our API key to Gitpod using the commands below:

# to export it to our gitpod terminal 
export HONEYCOMB_API_KEY="..."

# confirm export 
env | grep HONEY

# to have it permanently saved into Gitpod env variables
# when it starts next time, we don't have to export it again  
gp env HONEYCOMB_API_KEY="..."

2. Instrument backend flask to use OpenTelemetry (OTEL) with Honeycomb.io as the provider

Honeycomb is a cloud-based observability platform: an all-in-one solution for distributed tracing, metrics, and logging that provides deep visibility into the performance and behavior of complex systems and applications. It ingests data sent through the OTEL (OpenTelemetry) libraries. Read more here

Let's set our Honeycomb env variables for the backend in the docker-compose.yml file. Add the following lines of code to the file:

OTEL_SERVICE_NAME: 'backend-flask'
OTEL_EXPORTER_OTLP_ENDPOINT: "https://api.honeycomb.io"
OTEL_EXPORTER_OTLP_HEADERS: "x-honeycomb-team=${HONEYCOMB_API_KEY}"
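For context, none of these values needs to be read explicitly in app.py: the OTLP exporter and the SDK pick them up from the environment. Here is a stdlib-only sketch of how the values are consumed (the key below is a dummy placeholder, not a real credential):

```python
import os

# The three values from docker-compose.yml; the key is a dummy placeholder.
os.environ["OTEL_SERVICE_NAME"] = "backend-flask"
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.honeycomb.io"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-honeycomb-team=dummy-key"

# OTLPSpanExporter reads the endpoint directly and parses the headers
# variable as comma-separated key=value pairs, so the API key rides along
# as the x-honeycomb-team header on every export request.
endpoint = os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"]
headers = dict(
    pair.split("=", 1)
    for pair in os.environ["OTEL_EXPORTER_OTLP_HEADERS"].split(",")
)
print(endpoint, list(headers))
```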

Now, let's install the OTEL libraries into our application. Add the following lines to the requirements.txt file located in the backend-flask/ directory

opentelemetry-api 
opentelemetry-sdk 
opentelemetry-exporter-otlp-proto-http 
opentelemetry-instrumentation-flask 
opentelemetry-instrumentation-requests # this should instrument outgoing HTTP calls

Go ahead and install the dependencies you listed in the requirements.txt file. In the backend-flask directory, run the following command:

pip install -r requirements.txt

Image of installation

Let's create and initialize honeycomb by adding the following lines of code to app.py file

# Honeycomb
from opentelemetry import trace
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Initialize tracing and an exporter that can send data to Honeycomb
provider = TracerProvider()
processor = BatchSpanProcessor(OTLPSpanExporter())
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

# Initialize automatic instrumentation with Flask
app = Flask(__name__) # if this line already exists, DON'T call it again
FlaskInstrumentor().instrument_app(app)
RequestsInstrumentor().instrument()

After the updates, test out your configuration by running this command: docker compose up

Image of Datasets creation

Confirm that Honeycomb is receiving your data. If not, here are some troubleshooting steps:

  • Check which API key your environment is using. Do that by checking your env variables: env | grep HONEYCOMB_API_KEY

  • If it is still not set, you can restart or create a new workspace, but make sure you commit and push your changes to your repository to avoid data loss.

3. Working with traces/spans/attributes in Honeycomb.io

Check out this doc for more detailed documentation

Configure a Tracer

Add the following lines of code to your backend-flask/services/home_activities.py file:

from opentelemetry import trace

# add before the HomeActivities class
tracer = trace.get_tracer("home.activities") 

Create a Span

Create a span with our configured tracer. A span describes what is happening in your application.

Add the following lines of code to your backend-flask/services/home_activities.py file:

# add under def run():
with tracer.start_as_current_span("home-activities-mock-data"):
# make sure every other line beneath is properly indented under the code you pasted 
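To make the indentation concrete, here is a minimal runnable sketch of the resulting shape of run(). The tracer here is a stdlib stand-in (a contextmanager), so the structure is visible without the OTEL packages installed; in the real file, the with line uses the tracer configured above and the body is the existing mock-data code:

```python
from contextlib import contextmanager
from datetime import datetime, timezone

@contextmanager
def start_as_current_span(name):
    # Stand-in for tracer.start_as_current_span(): open a span,
    # run the indented body, then close the span on exit.
    print(f"span start: {name}")
    try:
        yield
    finally:
        print(f"span end: {name}")

def run():
    # Everything that was in run() moves one indent level to the
    # right, under the `with` block, so it executes inside the span.
    with start_as_current_span("home-activities-mock-data"):
        now = datetime.now(timezone.utc).astimezone()
        results = [{"message": "mock activity", "created_at": now.isoformat()}]
        return results
```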

After the updates, test out your configuration by running this command: docker compose up

To create some spans, append this path to your backend URL: .../api/activities/home

Image of Created Spans

Add Attribute to Span

Attributes give us more context on our traces. Go ahead and add a few by including these lines of code in your backend-flask/services/home_activities.py file:

# in the def run(): section
# add attribute -> app.now 
span = trace.get_current_span()
span.set_attribute("app.now", now.isoformat())

# at the bottom -> app.result_length
span.set_attribute("app.result_length", len(results))

After the updates, test out your configuration by running this command: docker compose up
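The two attributes have different value types, which matters when querying later: numeric aggregations apply to app.result_length but not to the string app.now. Here is a stdlib stand-in for span.set_attribute() showing the shapes:

```python
from datetime import datetime, timezone

# Hypothetical stand-in: collect attributes the way they would
# appear on the span in Honeycomb.
attributes = {}

def set_attribute(key, value):
    attributes[key] = value

now = datetime.now(timezone.utc).astimezone()
results = ["mock activity 1", "mock activity 2"]

set_attribute("app.now", now.isoformat())          # string (ISO 8601 timestamp)
set_attribute("app.result_length", len(results))   # integer (queryable numerically)
print(attributes)
```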

4. Run queries to explore traces within Honeycomb.io

With the attributes app.now and app.result_length we just added, let's create and run some queries

Query 1

Image of Query 1 Image of Query 1

Query 2

Image of Query 2

Query 3

Latency - checks how long these requests take Image of Query 3

5. Instrument AWS X-Ray into the backend

AWS X-Ray is a distributed tracing system that helps developers analyze and debug distributed applications. Read more here

Install AWS X-Ray

We need to install the AWS X-Ray SDK

Add the following lines to the requirements.txt file located in the backend-flask/ directory

aws-xray-sdk

Go ahead and install the dependencies you listed in the requirements.txt file. In the backend-flask directory, run the following command:

pip install -r requirements.txt

Image of AWS XRay Installation

Instrument X-ray for Flask

To instrument our backend-flask, add the following lines of code to the app.py file

from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.ext.flask.middleware import XRayMiddleware

xray_url = os.getenv("AWS_XRAY_URL")
xray_recorder.configure(service='backend-flask', dynamic_naming=xray_url)

XRayMiddleware(app, xray_recorder)

Create Sampling Rule

Create a JSON file in the aws/json directory

aws/json/xray.json

Add the following lines of code to your newly created file:

{
  "SamplingRule": {
      "RuleName": "Cruddur",
      "ResourceARN": "*",
      "Priority": 9000,
      "FixedRate": 0.1,
      "ReservoirSize": 5,
      "ServiceName": "backend-flask",
      "ServiceType": "*",
      "Host": "*",
      "HTTPMethod": "*",
      "URLPath": "*",
      "Version": 1
  }
}
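Before sending the rule to AWS, it can help to sanity-check the fields locally. This is a hypothetical pre-flight script, not a bootcamp step; the ranges in the comments reflect the X-Ray sampling-rule limits as I understand them:

```python
import json

# Load the rule as the CLI would (contents from aws/json/xray.json).
rule_doc = json.loads("""
{
  "SamplingRule": {
      "RuleName": "Cruddur",
      "ResourceARN": "*",
      "Priority": 9000,
      "FixedRate": 0.1,
      "ReservoirSize": 5,
      "ServiceName": "backend-flask",
      "ServiceType": "*",
      "Host": "*",
      "HTTPMethod": "*",
      "URLPath": "*",
      "Version": 1
  }
}
""")

rule = rule_doc["SamplingRule"]
assert 0.0 <= rule["FixedRate"] <= 1.0   # fraction of requests sampled after the reservoir
assert 1 <= rule["Priority"] <= 9999     # lower number = evaluated first
assert rule["ReservoirSize"] >= 0        # requests/second sampled before FixedRate applies
print("rule", rule["RuleName"], "looks sane")
```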

Create an X-Ray trace group and a sampling rule. First, run the following command:

# create a group in AWS X-Ray from the Gitpod terminal
aws xray create-group \
   --group-name "Cruddur" \
   --filter-expression "service(\"backend-flask\")"

Image of XRay Group Image of AWS XRay Group

Create a sampling rule:

aws xray create-sampling-rule --cli-input-json file://aws/json/xray.json

Image of Sample Group Image of AWS Sample Group

Configure the X-Ray daemon with docker-compose

Set up the daemon in the docker-compose.yml file by adding the following lines:

# add these env variables above in the ENV section
AWS_XRAY_URL: "*4567-${GITPOD_WORKSPACE_ID}.${GITPOD_WORKSPACE_CLUSTER_HOST}*"
AWS_XRAY_DAEMON_ADDRESS: "xray-daemon:2000"

xray-daemon:
    image: "amazon/aws-xray-daemon"
    environment:
      AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}"
      AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}"
      AWS_REGION: "us-east-1"
    command:
      - "xray -o -b xray-daemon:2000"
    ports:
      - 2000:2000/udp

After the updates, test out your configuration by running this command: docker compose up

Check your x-ray container logs to make sure logs were successfully sent to AWS X-ray.

Image of XRay logs

Image of AWS XRay logs
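For context on what those logs represent: the SDK does not talk to AWS directly; it fires segment documents at the daemon over UDP port 2000, and the daemon batches and uploads them. A sketch of the wire format (the header line comes from the X-Ray daemon protocol; the segment fields are illustrative values, and the actual send is commented out so this runs without a daemon):

```python
import json

# Header line that precedes every segment document sent to the daemon.
header = json.dumps({"format": "json", "version": 1})

# Illustrative (made-up) segment document.
segment = json.dumps({
    "name": "backend-flask",
    "id": "70de5b6f19ff9a0a",
    "trace_id": "1-581cf771-a006649127e371903a2de979",
    "start_time": 1478293361.271,
    "in_progress": True,
})

payload = (header + "\n" + segment).encode("utf-8")

# To actually send it (requires a running daemon):
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(payload, ("xray-daemon", 2000))
print(payload.decode().splitlines()[0])
```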

6. Creating a Custom Segment and Subsegment with AWS X-Ray

In the backend-flask/services/user_activities.py file, add the following lines of code:

from aws_xray_sdk.core import xray_recorder

# Start a subsegment
subsegment = xray_recorder.begin_subsegment('mock-data')

dict = {
  "now": now.isoformat(),
  "results-size": len(model['data'])
}

subsegment.put_metadata('key', dict, 'namespace')

# Close the subsegment
xray_recorder.end_subsegment()

Image of XRay Segment Image of AWS XRay Segment Image of XRay SubSegment

Add the following to the app.py file:

@app.route("/api/activities/home", methods=['GET'])
@xray_recorder.capture('activities_home')
def data_home():
  data = HomeActivities.run(logger=LOGGER)
  return data, 200

@app.route("/api/activities/@<string:handle>", methods=['GET'])
@xray_recorder.capture('activities_users')
def data_handle(handle):
  model = UserActivities.run(handle)
  if model['errors'] is not None:
    return model['errors'], 422
  else:
    return model['data'], 200

@app.route("/api/activities/<string:activity_uuid>", methods=['GET'])
@xray_recorder.capture('activities_show')
def data_show_activity(activity_uuid):
  data = ShowActivity.run(activity_uuid=activity_uuid)
  return data, 200

7. Installing Watchtower and writing a custom logger to send app log data to a CloudWatch Log Group

Add the following lines to the requirements.txt file located in the backend-flask/ directory

watchtower

Go ahead and install the dependency you listed in the requirements.txt file. In the backend-flask directory, run the following command:

pip install -r requirements.txt

Setting environment variables for Watchtower

In the docker-compose.yml file, add the following lines:

AWS_DEFAULT_REGION: "${AWS_DEFAULT_REGION}"
AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}"
AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}"

Configure our CloudWatch logger by adding the following lines to the app.py file:

# CloudWatch Logs
import watchtower
import logging
from time import strftime

# Configuring Logger to Use CloudWatch
LOGGER = logging.getLogger(__name__)
LOGGER.setLevel(logging.DEBUG)
console_handler = logging.StreamHandler()
cw_handler = watchtower.CloudWatchLogHandler(log_group='cruddur')
LOGGER.addHandler(console_handler)
LOGGER.addHandler(cw_handler)
LOGGER.info("test message")
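The pattern above simply attaches two handlers to one logger, so every record goes to both the console and CloudWatch. Here is a stdlib sketch of the same shape, with logging.handlers.MemoryHandler standing in for watchtower.CloudWatchLogHandler so it runs offline:

```python
import logging
from logging.handlers import MemoryHandler

LOGGER = logging.getLogger("cruddur-sketch")
LOGGER.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler()        # prints to the terminal
buffer_handler = MemoryHandler(capacity=100)     # stand-in for the CloudWatch handler
LOGGER.addHandler(console_handler)
LOGGER.addHandler(buffer_handler)

LOGGER.info("test message")

# The same record reached both handlers; the buffered copy proves it.
print(buffer_handler.buffer[0].getMessage())
```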

Paste the following just before the first @app.route... definition:

@app.after_request
def after_request(response):
    timestamp = strftime('[%Y-%b-%d %H:%M]')
    LOGGER.error('%s %s %s %s %s %s', timestamp, request.remote_addr, request.method, request.scheme, request.full_path, response.status)
    return response
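To see what each access-log line will look like, here is the same format string evaluated with mock stand-ins for Flask's request/response values. (Note the hook above logs these lines at ERROR level, as the bootcamp code does; INFO would be the more conventional level for access logs.)

```python
from time import strftime

# Mock stand-ins for request.remote_addr, request.method, request.scheme,
# request.full_path, and response.status.
timestamp = strftime('[%Y-%b-%d %H:%M]')
line = '%s %s %s %s %s %s' % (
    timestamp, '127.0.0.1', 'GET', 'http', '/api/activities/home?', '200 OK'
)
print(line)  # e.g. [2023-Mar-02 14:05] 127.0.0.1 GET http /api/activities/home? 200 OK
```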

Log something in one of the API endpoints by adding the following lines to our backend-flask/services/home_activities.py file:

# in the class HomeActivities: section, update and add these lines
def run(logger):
    logger.info("HomeActivities")

In the app.py file, update your code to look like this:

@app.route("/api/activities/home", methods=['GET'])
def data_home():
  data = HomeActivities.run(logger=LOGGER)
  return data, 200

After the updates, test out your configuration by running this command: docker compose up

Image of CloudWatch Log Image of CloudWatch Log Details Image of CloudWatch Log Events

8. Integrate Rollbar for Error Logging

Rollbar is a cloud-based platform for detecting and resolving software errors and exceptions. Rollbar integrates with popular development tools and languages, such as JavaScript, Ruby, Python, PHP, and Java, to provide a comprehensive error tracking solution for web and mobile applications. Read more here

Add the following lines to the requirements.txt file located in the backend-flask/ directory

blinker 
rollbar

Now install the dependencies you listed in the requirements.txt file. In the backend-flask directory, run the following command:

pip install -r requirements.txt

Let's add the Rollbar access token:

export ROLLBAR_ACCESS_TOKEN=""
gp env ROLLBAR_ACCESS_TOKEN=""

Image of Rollbar Image of Rollbar Access Token

Add ACCESS_TOKEN to docker-compose.yml

ROLLBAR_ACCESS_TOKEN: "${ROLLBAR_ACCESS_TOKEN}"

Import Rollbar Libraries

Let's import the Rollbar libraries. Add the following lines in the app.py file:

# Rollbar
import rollbar
import rollbar.contrib.flask
from flask import got_request_exception

# after app = Flask(__name__),
# add these lines 
rollbar_access_token = os.getenv('ROLLBAR_ACCESS_TOKEN')
@app.before_first_request
def init_rollbar():
    """init rollbar module"""
    rollbar.init(
        # access token
        rollbar_access_token,
        # environment name
        'production',
        # server root directory, makes tracebacks prettier
        root=os.path.dirname(os.path.realpath(__file__)),
        # flask already sets up logging
        allow_logging_basic_config=False)

    # send exceptions from `app` to rollbar, using flask's signal system.
    got_request_exception.connect(rollbar.contrib.flask.report_exception, app)

Add Rollbar Endpoint

Below your existing @app.route... definitions, add a Rollbar test endpoint:

@app.route('/rollbar/test')
def rollbar_test():
    rollbar.report_message('Hello World!', 'warning')
    return "Hello World!"

Image of Rollbar Hello World

After the updates, test out your configuration by running this command: docker compose up

Then test the new endpoint by appending /rollbar/test to your backend URL Image of Rollbar Test

9. Trigger an error and observe it with Rollbar

I created an error log for Rollbar. To trigger an error, delete some words in one of the ../service/ files.

Navigate to this URL .../api/activities/home to view the error page

Image of Rollbar Error page

You should see the error message logged in your Rollbar account, under "Items"

Image of Rollbar Error message