This repository contains the different resources used during Port's live webinar from 30.05.2023 - Building a fully operational internal developer platform.
You can view the webinar here.
# Building A Fully Operational Internal Developer Platform
The webinar starts with an environment that has some initial setup:

- It has 2 K8s clusters (specifically provisioned on EKS, but any K8s cluster will do): one will be the `test` cluster and one the `prod` cluster
- It has a GitHub organization that Port's GitHub App will be installed on. In addition to containing some initial repositories that will automatically be ingested as microservice entities, the organization also includes a repository with a scaffolder GitHub workflow that will be used to create a new microservice repository
Here are the different resources provided in this repository:
Not all blueprints used during the webinar are listed here; only the ones that are not provided by a Port template, or ones that are extended from their original template format. In order to follow along and avoid any errors due to missing blueprints or relations, we recommend going over the following flow when using the blueprints from this repository:
- Create the domain blueprint
- Create the system blueprint
- Create the environment blueprint
- Deploy the GitHub template
- Deploy the K8s template
- Update the microservice blueprint
- Update the workload blueprint
- Update the workflowRun blueprint
- Continue following along with the rest of the resources
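
As a reference for the create/update steps above, a Port blueprint is defined as a JSON object. The following is a minimal sketch of what the `domain` blueprint definition might look like (the exact properties and icon are assumptions for illustration; use the `domain` blueprint file in this repository for the real definition):

```json
{
  "identifier": "domain",
  "title": "Domain",
  "icon": "Server",
  "schema": {
    "properties": {
      "description": {
        "type": "string",
        "title": "Description"
      }
    },
    "required": []
  },
  "relations": {}
}
```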
- `domain` blueprint
- `system` blueprint
- `environment` blueprint
- `microservice` blueprint - note that this blueprint is created by the GitHub template used during the webinar, but it is extended over the course of the webinar, so all of these extensions can be seen in the file provided here
- `workload` blueprint - note that this blueprint is created by the K8s template used during the webinar, but it is extended over the course of the webinar, so all of these extensions can be seen in the file provided here
- `workflowRun` blueprint - note that this blueprint is created by the GitHub template used during the webinar, but it is extended over the course of the webinar, so all of these extensions can be seen in the file provided here
Most entities in the webinar are created automatically by the different apps, actions and exporters, but some are created manually to tie everything together. These entity definitions are provided here:
- `subscription` domain entity
- `authentication` system entity
- `prod` environment entity
- `test` environment entity
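
For reference, a manually created entity is a small JSON object that targets its blueprint. A minimal sketch of what the `prod` environment entity could look like (the property names here are assumptions; the actual definition is in the entity file provided in this repository):

```json
{
  "identifier": "prod",
  "title": "Prod",
  "properties": {
    "type": "Production"
  },
  "relations": {}
}
```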
During the webinar we use 3 different self-service actions, they are all provided here along with the payloads used to trigger them.
- `scaffoldMicroservice.json` - the definition of the scaffold microservice action
- `deployToTest.json` - the definition of the deploy to test action
- `deployToProd.json` - the definition of the deploy to prod action
- `auth-service.json` - scaffold the auth microservice
- `notification-service-test.json` - deploy the notification service to the test cluster
- `notification-service-prod.json` - deploy the notification service to the prod cluster
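
For context, an action trigger payload pairs the action's input properties with their values. A hypothetical sketch of what a payload like `notification-service-test.json` might contain (the property names are assumptions for illustration; refer to the actual payload files above):

```json
{
  "properties": {
    "service": "notification-service",
    "environment": "test"
  }
}
```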
During the webinar we use 3 different GitHub workflows, they are all provided here along with an explanation of the secrets required to run them.
- `scaffold-cookiecutter` - this is the code for the GitHub workflow that will scaffold a new microservice by creating a new repository based on a Cookiecutter template
  - The workflow requires the following secrets to run:
    - `PORT_CLIENT_ID` - a client ID used to get an access token from Port
    - `PORT_CLIENT_SECRET` - a client secret used to get an access token from Port
    - `ORG_TOKEN` - a GitHub personal access token with permissions to create a new repository
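
In a GitHub workflow, these secrets are consumed through the `secrets` context. A hypothetical excerpt showing the general shape (step names and the script name are illustrative, not the exact workflow in this repository):

```yaml
jobs:
  scaffold:
    runs-on: ubuntu-latest
    steps:
      - name: Scaffold new microservice
        env:
          # Values come from the repository/organization secrets described above
          PORT_CLIENT_ID: ${{ secrets.PORT_CLIENT_ID }}
          PORT_CLIENT_SECRET: ${{ secrets.PORT_CLIENT_SECRET }}
          ORG_TOKEN: ${{ secrets.ORG_TOKEN }}
        run: ./scaffold.sh # hypothetical script name
```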
- `deploy-to-test` - this is the code for the GitHub workflow that will deploy the notification service to the test EKS cluster
  - The workflow requires the following secrets to run:
    - `PORT_CLIENT_ID` - a client ID used to get an access token from Port
    - `PORT_CLIENT_SECRET` - a client secret used to get an access token from Port
    - `AWS_ACCESS_KEY_ID` - an AWS access key ID used to authenticate with AWS (only required if connecting to an EKS cluster)
    - `AWS_SECRET_ACCESS_KEY` - an AWS secret access key used to authenticate with AWS (only required if connecting to an EKS cluster)
    - `KUBE_CONFIG_DATA_TEST` - a base64 encoded kubeconfig with the information to connect to the test K8s cluster
  - The workflow will create a new namespace and deployment according to the `app.yml` file; it makes use of a public Docker image
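
The `KUBE_CONFIG_DATA_TEST` secret holds the kubeconfig as a single base64 line, since GitHub secrets are plain strings. A small sketch of how you might encode it before pasting it into the secret (assumes GNU coreutils `base64`; on macOS, drop the `-w 0` flag):

```shell
#!/usr/bin/env bash
# Encode a kubeconfig file into a single-line base64 string for a GitHub secret.
encode_kubeconfig() {
  # -w 0 disables line wrapping so the whole output lands on one line
  base64 -w 0 < "$1"
}

# Demo with a throwaway file; point this at your real kubeconfig in practice.
tmp_cfg=$(mktemp)
printf 'apiVersion: v1\nkind: Config\n' > "$tmp_cfg"
encode_kubeconfig "$tmp_cfg"
```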
- `deploy-to-prod` - this is the code for the GitHub workflow that will deploy the notification service to the prod EKS cluster
  - The workflow requires the following secrets to run:
    - `PORT_CLIENT_ID` - a client ID used to get an access token from Port
    - `PORT_CLIENT_SECRET` - a client secret used to get an access token from Port
    - `AWS_ACCESS_KEY_ID` - an AWS access key ID used to authenticate with AWS (only required if connecting to an EKS cluster)
    - `AWS_SECRET_ACCESS_KEY` - an AWS secret access key used to authenticate with AWS (only required if connecting to an EKS cluster)
    - `KUBE_CONFIG_DATA_PROD` - a base64 encoded kubeconfig with the information to connect to the prod K8s cluster
  - The workflow will create a new namespace and deployment according to the `app.yml` file; it makes use of a public Docker image
During the webinar we make an update to the K8s exporter config, adding some more mappings that extend the defaults provided by Port's template.
- `exporter-config-test.yml` - the updated K8s exporter config for the test cluster
- `exporter-config-prod.yml` - the updated K8s exporter config for the prod cluster
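
For orientation, the exporter config maps K8s resources to Port entities using JQ expressions. A simplified sketch of the general shape of such a mapping (the exact kinds, queries, and properties here are illustrative; see the config files above for the real mappings):

```yaml
resources:
  - kind: apps/v1/deployments
    selector:
      # JQ query that filters which resources get ingested
      query: '.metadata.namespace | startswith("kube-system") | not'
    port:
      entity:
        mappings:
          - identifier: .metadata.name
            title: .metadata.name
            blueprint: '"workload"'
            properties:
              replicas: .spec.replicas
```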
In order to deploy Port's K8s exporter, you will need to use the following commands. This snippet will also be provided to you by Port when initially using the K8s template inside Port.

Important: please set up the K8s template in Port before deploying the K8s exporter!
Deploy the exporter to the test K8s cluster:

```shell
export CLUSTER_NAME="test"
export PORT_CLIENT_ID="PORT_CLIENT_ID"
export PORT_CLIENT_SECRET="PORT_CLIENT_SECRET"
curl -s https://raw.githubusercontent.com/port-labs/template-assets/main/kubernetes/install.sh | bash
```

Deploy the exporter to the prod K8s cluster:

```shell
export CLUSTER_NAME="prod"
export PORT_CLIENT_ID="PORT_CLIENT_ID"
export PORT_CLIENT_SECRET="PORT_CLIENT_SECRET"
curl -s https://raw.githubusercontent.com/port-labs/template-assets/main/kubernetes/install.sh | bash
```

After extending the data model, we deploy an updated config to Port's K8s exporter on both clusters; that updated config file is provided in `exporter-config-test.yml` and `exporter-config-prod.yml`.
Deploy the updated exporter configuration to the test K8s cluster:

```shell
helm upgrade --install port-k8s-exporter port-labs/port-k8s-exporter \
  --create-namespace --namespace port-k8s-exporter \
  --set secret.secrets.portClientId=PORT_CLIENT_ID --set secret.secrets.portClientSecret=PORT_CLIENT_SECRET \
  --set-file configMap.config=./config/test/exporter-config-test.yml
```

Deploy the updated exporter configuration to the prod K8s cluster:

```shell
helm upgrade --install port-k8s-exporter port-labs/port-k8s-exporter \
  --create-namespace --namespace port-k8s-exporter \
  --set secret.secrets.portClientId=PORT_CLIENT_ID --set secret.secrets.portClientSecret=PORT_CLIENT_SECRET \
  --set-file configMap.config=./config/prod/exporter-config-prod.yml
```

During the webinar we create a single scorecard on the microservice blueprint; you can find its definition in `microservice-ownership.json`.
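
As a rough reference, a Port scorecard groups rules under levels, each rule holding a query over the blueprint's properties. A hypothetical sketch of an ownership scorecard (the rule names and the checked property are assumptions; the actual definition lives in `microservice-ownership.json`):

```json
{
  "identifier": "ownership",
  "title": "Ownership",
  "rules": [
    {
      "identifier": "hasTeam",
      "title": "Has a team",
      "level": "Gold",
      "query": {
        "combinator": "and",
        "conditions": [
          {
            "operator": "isNotEmpty",
            "property": "team"
          }
        ]
      }
    }
  ]
}
```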
During the webinar we add some visualizations to the microservice and domain blueprints. Here are the definitions for these visualizations:
Here is the definition for the cycle time chart (to configure the chart open the page of a specific microservice entity and click on "Add Visualization"):
Here is the definition for the tech radar chart (to configure the chart open the page of a specific domain entity and click on "Add Visualization"):

