This workshop will guide you through building Continuous Integration (CI) and Continuous Deployment (CD) pipelines with Jenkins for use with Azure Kubernetes Service (AKS). The pipeline will use Azure Container Registry to store the container images and Helm to update the application.
- Clone this repo in Azure Cloud Shell.
- Complete previous labs:
The general workflow/result will be as follows:
- Push code to source control (GitHub)
- Trigger a continuous integration (CI) build pipeline when project code is updated via Git
- Package app code into a container image (Docker Image) created and stored with Azure Container Registry
- Trigger a continuous deployment (CD) release pipeline upon a successful build
- Deploy the container image to AKS upon a successful release (via Helm chart)
- Rinse and repeat upon each code update via Git
- Profit
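The list above maps onto two concrete command families: an ACR build on the CI side and a Helm upgrade on the CD side. The sketch below only prints the commands, since running them needs an Azure login and a live cluster; the registry name, image tag, and chart path are illustrative assumptions, not taken from the lab's actual Jenkinsfile:

```shell
# Dry-run sketch of the CI/CD flow; commands are printed, not executed.
ACRNAME=youracrname                                # assumption: your ACR name
IMAGE="$ACRNAME.azurecr.io/service-tracker-ui:v1"  # assumption: image name/tag
# CI stage: build and store the image in Azure Container Registry
echo "az acr build --registry $ACRNAME --image $IMAGE ."
# CD stage: roll the image out to AKS via a Helm chart (chart path is illustrative)
echo "helm upgrade --install service-tracker-ui ./charts/service-tracker-ui --set image.tag=v1"
```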
In order to trigger this pipeline you will need your own GitHub account and a forked copy of this repo. Log into GitHub in the browser and get started.
- Browse to https://github.com/azure/kubernetes-hackfest and click "Fork" in the top right.
- Grab your clone URL from GitHub, which will look something like: `https://github.com/thedude-lebowski/kubernetes-hackfest.git`
- Clone your repo in Azure Cloud Shell.

  Note: If you have cloned the repo in earlier labs, the directory name will conflict. Either delete the old one or rename it before this step.

      git clone https://github.com/<your-github-account>/kubernetes-hackfest.git
      cd kubernetes-hackfest/labs/cicd-automation/jenkins
- Modify the Jenkinsfile pipeline. The pipeline file references your Azure Container Registry in a variable. Edit the `labs/cicd-automation/jenkins/Jenkinsfile` file and modify line 4 of the code:

      def ACRNAME = 'youracrname'
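If you prefer a one-liner to hand-editing, sed can rewrite the variable. The sketch below targets a stand-in file in /tmp so it is safe to run anywhere; in the lab you would point it at the real Jenkinsfile, and the registry name is a placeholder:

```shell
ACRNAME=youracrname                      # placeholder: substitute your ACR name
# Stand-in for line 4 of the Jenkinsfile; use the actual file path in the lab.
printf "def ACRNAME = 'changeme'\n" > /tmp/Jenkinsfile.demo
# Rewrite the ACRNAME assignment in place.
sed -i "s/^def ACRNAME = .*/def ACRNAME = '${ACRNAME}'/" /tmp/Jenkinsfile.demo
cat /tmp/Jenkinsfile.demo   # → def ACRNAME = 'youracrname'
```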
- Initialize Helm with RBAC.

  Note: You may have already installed Helm in an earlier lab. You can validate with `helm version`.

      kubectl apply -f helm-rbac.yaml
      helm init --service-account tiller
      helm install stable/jenkins --name jenkins -f values.yaml

  This will take a couple of minutes to fully deploy.
- Get the credentials and IP to log in to Jenkins.

      printf $(kubectl get secret --namespace default jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
      export SERVICE_IP=$(kubectl get svc --namespace default jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
      echo http://$SERVICE_IP:8080/login

  Log in with the password from the previous step and the username `admin`.

  Note: The Jenkins pod can take a couple of minutes to start. Ensure it is `Running` prior to attempting to log in.
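The password command works because Kubernetes stores secret values base64-encoded: the jsonpath expression pulls the raw field, and `base64 --decode` recovers the plaintext. Here is the decode step in isolation, using a made-up value rather than a real credential:

```shell
# "c3dvcmRmaXNo" is base64 for the made-up string "swordfish".
ENCODED="c3dvcmRmaXNo"
DECODED=$(echo -n "$ENCODED" | base64 --decode)
echo "$DECODED"   # → swordfish
```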
- Browse to the Jenkins default admin screen.
- Click **Credentials**.
- Select **System** under Credentials.
- On the right side, click the **Global Credentials** drop down and select **Add Credentials**.
- Enter the following (example below):
  - Kind = Azure Service Principal
  - Scope = Global
  - Subscription ID = use the Subscription ID from cluster creation
  - Client ID = use the Client ID from cluster creation
  - Client Secret = use the Client Secret from cluster creation
  - Tenant ID = use the Tenant ID from cluster creation
  - Azure Environment = Azure
  - Id = azurecli
  - Description = Azure CLI Credentials
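If the values from cluster creation are no longer handy, the subscription and tenant IDs can be recovered with the Azure CLI (the client secret cannot be; it would have to be reset). The commands are printed rather than executed here because they require an authenticated `az login` session:

```shell
# Printed, not executed: both commands need an authenticated Azure CLI session.
SUB_CMD='az account show --query id -o tsv'          # prints the Subscription ID
TENANT_CMD='az account show --query tenantId -o tsv' # prints the Tenant ID
echo "$SUB_CMD"
echo "$TENANT_CMD"
```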
- Click **Verify Service Principal**.
- Click **Save**.
- Edit the Jenkinsfile with the following command:

      code Jenkinsfile

- Replace the following variable with the Azure Container Registry created previously:

      def ACRNAME = '<container_registry_name>'
- Open the Jenkins main admin interface.
- Click **Create New Project**.
- Enter "aks-hackfest" for the item name.
- Select **Multibranch Pipeline**.
- Under Branch Sources, click **Add** -> **Git**.
- In Project Repository, enter your forked Git repo.
- In Build Configuration -> Script Path, use the following path: `labs/cicd-automation/jenkins/Jenkinsfile`
- Scroll to the bottom of the page and click **Save**.
- Go back to the Jenkins main page.
- Select the newly created pipeline.
- Select **Scan Multibranch Pipeline Now**. This will scan your Git repo and run the Jenkinsfile build steps: clone the repository, build the Docker image, and deploy the app to your AKS cluster.
- Select **master** under Branches.
- Select **build #1** under Build History.
- Select **Console Output**.
- Check the streaming console output for any errors.
- Confirm the pods are running:

      kubectl get pods

- Get the service IP of the deployed app:

      kubectl get service/service-tracker-ui

- Open a browser and test the application at `http://<EXTERNAL-IP>:8080`.