Part 3: Deploy a Kubernetes orchestrator with Azure Container Service


Create a Kubernetes cluster on Azure Container Service using acs-engine, then validate the deployment by fetching the cluster info and running a few kubectl commands.


This post assumes you have read Part 1 and Part 2. If you missed the first two parts, please read them first.


Cheat sheet:

# Install gofish
curl -fsSL | bash
gofish init

# Install/configure acs-engine
gofish install acs-engine

# Install the Kubernetes CLI
brew install kubernetes-cli

Let's make sure acs-engine and the Kubernetes CLI are installed properly:

acs-engine version
kubectl version

Note: The prerequisites from Part 1 and Part 2 are mandatory.

Follow the steps

Five easy steps to achieve this:

1. Create the Azure resource group, Azure AD application, and service principal
2. Create the cluster definition file
3. Generate the ARM template
4. Deploy the ARM template
5. Validate the cluster

Create the Azure resource group, Azure AD application, and Azure service principal

az account set --subscription <subscriptionName>
azureSubscriptionId=$(az account show --query id -o tsv)
# Azure resource group to deploy the cluster into
az group create --name $clusterResourceGroupName --location westeurope
az ad app create --display-name $appName --homepage "$appName" --identifier-uris "$appName"
aadappId=$(az ad app list --display-name $appName --query '[].appId' -o tsv)
echo $aadappId
# Note: if no scope or role is provided, the default grants the Contributor role over the whole subscription
az ad sp create-for-rbac --name $aadappId --password $spnPwd --role "Contributor" --scopes "/subscriptions/$azureSubscriptionId/resourceGroups/$clusterResourceGroupName"
spnAppId=$(az ad sp list --display-name $aadappId --query "[].appId" -o tsv)
echo $spnAppId
# List the roles assigned to the SPN
az role assignment list --assignee $spnAppId --all
# Optional: insert additional role assignments here
# az role assignment create --assignee $spnAppId --role "Contributor" --scope "/subscriptions/$azureSubscriptionId/resourceGroups/$clusterResourceGroupName"
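The commands above assume a few shell variables are already set. A minimal sketch with placeholder values (the names below are hypothetical examples; choose your own):

```shell
# Hypothetical example values; substitute your own.
clusterResourceGroupName="dik8sscenario01-rg"  # matches the resource group used in the deploy step
appName="di-k8s-aad-app"                       # display name for the Azure AD application
spnPwd="$(openssl rand -base64 24)"            # strong random password for the service principal
echo "Resource group: $clusterResourceGroupName"
```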

Create the cluster definition file

The supported orchestrators can be listed by running the following command:

acs-engine orchestrators

Use the client ID generated in Step 1 and the same secret used in Step 1. Generate the SSH keys and update keyData with the public key. To get started, the other default values should be good enough, but feel free to change them as you need. You can also add more addons if required (covered in the next blog post). This template is based on the acs-engine examples.
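A quick way to generate the key pair and grab the public key for keyData (the file path here is just an example; any RSA key works):

```shell
# Generate an RSA key pair without a passphrase (example path).
ssh-keygen -t rsa -b 2048 -f ./k8s_id_rsa -N "" -q
# The contents of the .pub file go into "keyData" in the definition file.
cat ./k8s_id_rsa.pub
```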

"dnsPrefix": ""
"keyData": "ss"
"clientId": "OUTPUTFROMSTEP01"

The definition file is shown below.

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.12",
      "kubernetesConfig": {
        "addons": [
          {
            "name": "kubernetes-dashboard",
            "enabled": true
          }
        ]
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "di-k8s-1",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 1,
        "vmSize": "Standard_D2_v2",
        "availabilityProfile": "AvailabilitySet"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "ssh-rsa AAAuRprvQUOPt3luHq/Q1GGyx75I/NAD6baRr xyz@abc-MacBook-Pro.local"
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "OUTPUTFROMSTEP01",
      "secret": "SECRETFROMSTEP01"
    }
  }
}
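Since a hand-edited definition file is easy to break, a quick JSON syntax check before generating the template can save a failed run (the file path matches the one used in the generate step; adjust it to wherever you saved the file):

```shell
# Fails with a parse error if the definition file is not valid JSON.
f=acsinfrastructure/k8s-scenario01.json
if [ -f "$f" ]; then
  python3 -m json.tool "$f" > /dev/null && echo "valid JSON"
else
  echo "definition file not found: $f"
fi
```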


Generate the ARM template

Let's generate the ARM template and the associated resources required for the deployment. Once the command completes, you will find an _output folder containing all the generated files.

acs-engine generate acsinfrastructure/k8s-scenario01.json
# Note: update the path to match the location of your file

Deploy the ARM template

It's time to push the big deploy button. Once the deployment is kicked off, your cluster should be up and running in under 20 minutes.

az group deployment create \
--name di-k8s-1-deployment \
--resource-group dik8sscenario01-rg \
--template-file azuredeploy.json \
--parameters azuredeploy.parameters.json

Make sure the above deployment is completed successfully without any errors.

Validating the cluster

The first step is to merge (or use directly) the kubeconfig from the _output folder generated by acs-engine. Note: choose the file matching your region; there is one file per region.
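One way to use the new config alongside your existing one is the KUBECONFIG environment variable. The path below follows acs-engine's _output layout for the di-k8s-1 dnsPrefix and westeurope region used in this walkthrough; adjust it to match your own prefix and region:

```shell
# Point kubectl at both the existing config and the new cluster's config.
export KUBECONFIG="$HOME/.kube/config:_output/di-k8s-1/kubeconfig/kubeconfig.westeurope.json"
echo "$KUBECONFIG"
```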

# To view the merged config.
kubectl config view
# make sure your new cluster info is merged, below command should output the cluster name
kubectl config get-clusters

#Get the cluster details
kubectl cluster-info

Sample output (endpoint URLs omitted):

Kubernetes master is running at ...
Heapster is running at ...
KubeDNS is running at ...
kubernetes-dashboard is running at ...
Metrics-server is running at ...
tiller-deploy is running at ...

kubectl get pods --all-namespaces

This will list all the pods running in the cluster, across all namespaces.

Next blog post is about creating helm charts and deploying applications.

Note: If you get stuck on any of the steps, please reach out; I'm happy to help.
