Goal:
Create a Kubernetes orchestrator on Azure Container Service using acs-engine, then validate the deployment by retrieving the cluster info and running a few kubectl commands.
Assumptions:
You have read through Part 1 and Part 2. If you missed the first two parts, please read them first.
Pre-requisites:
Cheat sheet : https://kubernetes.io/docs/reference/kubectl/cheatsheet/
# Install gofish
curl -fsSL https://raw.githubusercontent.com/fishworks/gofish/master/scripts/install.sh | bash
gofish init

# Install/configure acs-engine
gofish install acs-engine

# Install the Kubernetes CLI (detailed steps here: https://kubernetes.io/docs/tasks/tools/install-kubectl/)
brew install kubernetes-cli
Let's make sure acs-engine and the Kubernetes CLI are installed properly:
acs-engine version
kubectl version
Note: The pre-requisites from Part 1 and Part 2 are mandatory.
Follow the steps
Five easy steps will get us there.
Create the Azure resource group, Azure AD application, and Azure service principal
az account set --subscription <subscriptionName>
azureSubscriptionId=$(az account show --query id -o tsv)

# Azure resource group to deploy the cluster into
clusterResourceGroupName="dik8sscenario01-rg"
az group create --name $clusterResourceGroupName --location westeurope

appName="dik8sscenario01"
az ad app create --display-name $appName --homepage "http://dinventive.com/$appName" --identifier-uris "http://dinventive.com/$appName"
aadappId=$(az ad app list --display-name $appName --query '[].appId' -o tsv)
echo $aadappId

spnPwd="ReplacewithyourPassword"
# Note: if no scope or role is provided, the default grants the Contributor role on the whole subscription
az ad sp create-for-rbac --name $aadappId --password $spnPwd --role "Contributor" --scopes "/subscriptions/$azureSubscriptionId/resourceGroups/$clusterResourceGroupName"
spnAppId=$(az ad sp list --display-name $aadappId --query "[].appId" -o tsv)
echo $spnAppId

# List the roles assigned to the SPN
az role assignment list --assignee $spnAppId --all

# Optional: insert additional role assignments here
# az role assignment create --assignee $spnAppId --role "Contributor" --scope "/subscriptions/$azureSubscriptionId/resourceGroups/$clusterResourceGroupName"
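As an extra sanity check not in the original steps, you can confirm that the new service principal can actually authenticate before using it in the cluster definition. This sketch reuses spnAppId and spnPwd from the script above and looks up the tenant ID; the guard around the az CLI is just defensive.

```shell
# Hedged sanity check: log in as the new service principal, then log out.
# spnAppId and spnPwd come from the script above.
if command -v az >/dev/null 2>&1; then
  tenantId=$(az account show --query tenantId -o tsv) || true
  az login --service-principal -u "$spnAppId" -p "$spnPwd" --tenant "$tenantId" \
    || echo "SPN login failed - check the app id and password"
  az logout || true
else
  echo "az CLI not found; run this where the Azure CLI is installed"
fi
spn_check_ran=yes
```

If the login succeeds, the service principal is ready for the `servicePrincipalProfile` section in the next step.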
Create cluster definition file
The supported orchestrators can be found by running the following command
acs-engine orchestrators
Use the client ID generated in step 1 and the same secret used in step 1. Generate the SSH keys (https://docs.microsoft.com/en-us/azure/virtual-machines/linux/mac-create-ssh-keys) and update keyData with the public key. To get started, the other default values should be good enough, but feel free to change them as you need. You can also add more add-ons if required (this will be covered in the next blog post). This template is based on the acs-engine examples.
"dnsPrefix": ""
"keyData": "ss"
"clientId": "OUTPUTFROMSTEP01"
"secret": "REPLACETHISWITHYOURSECRET"
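Generating the SSH key pair for keyData can be sketched as follows; the file name and comment here are arbitrary examples, not values the post prescribes.

```shell
# Generate a 4096-bit RSA key pair; the file name is just an example
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -N "" -C "acs-engine-demo" -f ~/.ssh/acs_demo_rsa
# Paste the contents of the .pub file into "keyData" in the cluster definition
cat ~/.ssh/acs_demo_rsa.pub
```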
The definition file can be found below.
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.12",
      "kubernetesConfig": {
        "addons": [
          {
            "name": "kubernetes-dashboard",
            "enabled": true
          }
        ]
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "di-k8s-1",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 1,
        "vmSize": "Standard_D2_v2",
        "availabilityProfile": "AvailabilitySet"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "ssh-rsa AAAuRprvQUOPt3luHq/Q1GGyx75I/NAD6baRr xyz@abc-MacBook-Pro.local"
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "OUTPUTFROMSTEP01",
      "secret": "REPLACETHISWITHYOURSECRET"
    }
  }
}
Generate the ARM template
Let's generate the ARM template and the associated resources required for the deployment. Once the command completes, you will find an _output folder where all the files are available.
acs-engine generate acsinfrastructure/k8s-scenario01.json
# Note: update this to the location of your file
Deploy the ARM template
It's time to push the big deploy button. Once the deployment kicks off, you should have your cluster up and running in under 20 minutes.
az group deployment create \
  --name di-k8s-1-deployment \
  --resource-group dik8sscenario01-rg \
  --template-file azuredeploy.json \
  --parameters azuredeploy.parameters.json
Make sure the above deployment completes successfully, without any errors.
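One hedged way to check this from the CLI, using the deployment and resource group names from the command above:

```shell
# Query the deployment's provisioning state; expect "Succeeded" on success
if command -v az >/dev/null 2>&1; then
  az group deployment show \
    --name di-k8s-1-deployment \
    --resource-group dik8sscenario01-rg \
    --query properties.provisioningState -o tsv \
    || echo "query failed - are you logged in to the right subscription?"
else
  echo "az CLI not found"
fi
deploy_check_ran=yes
```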
Validating the cluster
The first step is to merge/use the kubeconfig from the _output folder generated by acs-engine. Note: choose the file matching your region; there will be one file per region.
# To view the merged config
kubectl config view

# Make sure your new cluster info is merged; the command below should output the cluster name
kubectl config get-clusters

# Get the cluster details
kubectl cluster-info
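One way to do the merge step is via the KUBECONFIG environment variable, which kubectl treats as a colon-separated list of config files to merge. The path below assumes the default acs-engine output layout, with the dnsPrefix (di-k8s-1) and region (westeurope) used in this walkthrough:

```shell
# Point kubectl at both the existing config and the generated one;
# kubectl merges every file listed in KUBECONFIG (colon-separated)
export KUBECONFIG="$HOME/.kube/config:_output/di-k8s-1/kubeconfig/kubeconfig.westeurope.json"
echo "$KUBECONFIG"
```

To make the merge permanent, you can instead run `kubectl config view --flatten` with KUBECONFIG set and write the result back to ~/.kube/config.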
Kubernetes master is running at https://nameofthecluster.westeurope.cloudapp.azure.com
Heapster is running at https://nameofthecluster.westeurope.cloudapp.azure.com/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://nameofthecluster.westeurope.cloudapp.azure.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://nameofthecluster.westeurope.cloudapp.azure.com/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://nameofthecluster.westeurope.cloudapp.azure.com/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
tiller-deploy is running at https://nameofthecluster.westeurope.cloudapp.azure.com/api/v1/namespaces/kube-system/services/tiller-deploy:tiller/proxy
kubectl get pods --all-namespaces
This will list all the pods running in the cluster, across all namespaces.
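A few more hedged sanity checks you can run at this point; these assume kubectl is already pointed at the new cluster, and the guards are just defensive:

```shell
# Extra validation commands (nothing here is destructive)
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes -o wide || echo "cluster not reachable"   # master + agent nodes should be Ready
  kubectl get svc --all-namespaces || true                    # cluster services, incl. the dashboard
else
  echo "kubectl not found; install it first"
fi
extra_checks_ran=yes
```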
Next blog post is about creating helm charts and deploying applications.
Note: If you get stuck on any of the steps, please reach out; happy to help.