In this post I'll cover creating a containerised Azure Function V2 app, setting up Kubernetes clusters on two cloud providers (Google Cloud Kubernetes Engine & Digital Ocean), then deploying our function app to these clusters.

This post is split into the following sections:

Part 0: Introduction
Part 1: Setup
Part 2: Create Function App
Part 3: Docker Image Creation and Push to Hub
Part 4: Creating a Kubernetes Cluster on Google Kubernetes Engine (GKE)
Part 5: Creating a Kubernetes Cluster on Digital Ocean with Containership

Part 0: Introduction

There's a lot to like with Azure Functions. I find they offer near-frictionless development & deployment, allowing me to go from idea to actually getting something out there & usable in a short space of time. But what would happen if I wanted to move functions away from Azure for some reason - either entirely, or perhaps as part of a multi-cloud strategy? ASP.NET Core web applications are a known quantity and relatively simple to host anywhere, but functions not so much - and I don't feel comfortable being completely tied to one platform.

Azure Functions are open source and the runtime is portable - so, in theory at least, they can run anywhere. However, what about some of the things functions hosted on Azure bring to the table - high availability, automatic scaling, and so on? Kubernetes is one solution to this, so I thought I'd explore it further and find out how easy it is to create a self-hosted Kubernetes cluster with my functions deployed to it. If you want a quick intro to Kubernetes there's one in comic form, or a good introductory article here.

The various parts here are applicable to different environments. I'll be looking at Kubernetes hosted on two cloud platforms - Google Kubernetes Engine (GKE) and Digital Ocean, using Containership as the cluster installer/management tool for DO. All of the major players have offerings in this space - you could use a cloud platform other than Google or Digital Ocean, and set up Kubernetes yourself or via one of the various services out there. The major cloud providers all have managed Kubernetes offerings in various states of readiness, there are a number of SaaS options for cluster setup, and there are plenty of other ways to try out Kubernetes, including running it on your local machine(s), self-deploying it - or indeed using another managed service such as Azure AKS.

If you just want to get Kubernetes up and running quickly I would recommend following the GKE section of the guide. I've found Google's offering to be one of the simplest to get started with as well as the most complete, and if you're signing up for a new account you'll get $300 worth of credit to play with.

And finally, I use a mixture of Windows 10, Server 2016 & Mac development environments, and these instructions should work across any of them with the correct software installed. In this example I'm going to be using my Windows environment, although it should be noted that creating Linux container images from Windows may not be recommended, due to the file permissions the containers are created with.

Part 1: Setup

Account Setup

As I mentioned above, this guide will provide walkthroughs for both GKE & Digital Ocean platforms.

For both of them you'll want to set up a free Docker Hub account so we can push/pull our Azure Functions image once we've built it. (Note - you could actually use the Google Container Registry instead if you're going to be using GKE, but I'm not using it for this guide.)

For GKE, you'll need a (free) Google Cloud account. We don't need any other accounts to set up Kubernetes.

If using Digital Ocean you'll need an account with them. If you're feeling kind you can sign up with my referral link and we'll both receive free credit. Digital Ocean don't have a managed Kubernetes service themselves yet (or more accurately, they don't have one generally available - it's currently in preview) so to create the Kubernetes cluster I'll be using a free Containership.io account. Containership can build Kubernetes clusters on Azure, Amazon, Google and other providers such as Digital Ocean.

Summary:

Docker hub (free) account - link

Google Cloud account - link

Digital Ocean account (referral link)

Containership (free) account - link

Software Setup

You'll need to install Docker and Azure Functions Core Tools. We'll be creating our Function App via the Azure Functions Core Tools CLI, then creating a Docker image of the app and pushing it to a Docker Hub repo, ready to be deployed onto our cluster.

1. Download & install Docker

You can download Docker for Windows here

**2. Install the Azure Functions Core Tools**

The GitHub repo has installation instructions, but you can just install the tools with the following command (note - I'm assuming you have Node and npm installed; download and install from here first if you don't):

npm i -g azure-functions-core-tools@core --unsafe-perm true

The Core Tools version at the time of writing is 2.0.1-beta.37.
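You can check which version you have installed at any time by running:

func --version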

3. Optional (for Google) - Install Google Cloud SDK

This provides command line tools for working with Google Cloud & GKE. It's recommended, but you don't necessarily need it - a lot of functionality (including most of what we'll cover here) is available in the cloud dashboard. You can download it from here
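If you do install the SDK, running gcloud init will walk you through authenticating and picking a default project. You can also grab kubectl through the SDK if you don't already have it (the Docker Desktop Kubernetes integration ships with one too):

gcloud components install kubectl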

Part 2: Create Function App

In this part we'll create a simple function app with two functions available.

1. Create the function project via the CLI

Create a new directory for our project and run the following from the command line:

func init TestFunction --docker

This handily sets us up with a .csproj file & the Dockerfile we'll need later.

2. Create a HelloWorld function

Open the project in Visual Studio and add an HTTP-triggered HelloWorld function

[Screenshot: adding the HelloWorld HTTP trigger function in Visual Studio]
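(As an aside, if you'd rather stay at the command line you can scaffold the function with func new instead - template names vary a little between Core Tools versions and languages, so check func templates list if the one below isn't recognised.)

func new --name HelloWorld --template "HttpTrigger"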

3. Create a GoodbyeWorld function

Copy the HelloWorld.cs file, rename it to GoodbyeWorld.cs and change it as below:
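I won't reproduce the whole listing, but as a rough sketch GoodbyeWorld ends up as the standard V2 HTTP trigger template with the names and message changed - something like the following (the namespace is assumed to match the project name, and I'm assuming AuthorizationLevel.Anonymous so the function can be called without a key once it's running in a container):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

namespace TestFunction
{
    public static class GoodbyeWorld
    {
        // Same shape as the HelloWorld template, with the names & message changed.
        // Anonymous auth level is an assumption here, so no function key is needed.
        [FunctionName("GoodbyeWorld")]
        public static IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("GoodbyeWorld function processed a request.");

            // Read the 'name' query string parameter, e.g. /api/GoodbyeWorld?name=Tom
            string name = req.Query["name"];

            return name != null
                ? (ActionResult)new OkObjectResult($"Goodbye, {name}")
                : new BadRequestObjectResult("Please pass a name on the query string");
        }
    }
}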

4. Test our functions

Debug our functions from Visual Studio & check that both functions are working

[Screenshot: testing both functions locally in the browser]

Ok, we're ready to move on to the next step - image creation and push to our docker hub repository.

Part 3: Docker Image Creation and Push to Hub

Microsoft make it very simple to deploy a function app to Kubernetes, with a single command - at least in theory. The Azure Functions Core Tools documentation has an example of deploying to Kubernetes with a minimum of 3 and a maximum of 10 instances using the following command (switch out 'your registry name' with your Docker Hub ID):

func deploy --platform kubernetes --name myfunction --registry <your registry name> --min 3 --max 10

However, unfortunately this did not work for me (once I had actually setup Kubernetes) - I always received the following error:

The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.

I fiddled with this for a while but couldn't fix it. I thought it could be related to connection strings but changing these didn't fix the issue. Perhaps it needs AKS to work. So, the following instructions will cover manual creation of our Docker image, ready for deployment to Kubernetes.

1. Build the Docker image

Back in the command line, go to the TestFunction directory and run:

dotnet build
docker build -t testfunction .

This will build the Docker image. It will most likely take a while. Here's a screenshot of it in progress:

[Screenshot: docker build in progress]

Important note: if you're building the image on Windows you'll receive the following message - it's obviously not ideal to build from Windows:

SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

2. Test the Docker image locally

Run the following command:

docker run -p 7071:80 testfunction

This runs our Azure Functions container locally in Docker on port 7071 (mapped to port 80 inside the container, which the functions runtime listens on). Check to make sure our functions are running ok.
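For example, the default HTTP trigger template reads a name query string parameter, so something like this (or browsing to the same URL) should come back with a greeting:

curl "http://localhost:7071/api/HelloWorld?name=Tom"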

3. Push the Docker image to Docker Hub

We want to make our image available to our cluster, and an easy way to do this is to push our newly created function image to Docker Hub. You'll need to log Docker into your account first if you haven't already done so (docker login). Once Docker is logged in, run the following commands (note: replace 'tomfalk' with your docker username):

docker tag testfunction tomfalk/testfunction

docker push tomfalk/testfunction

Depending on your connection speed this could take a while to upload the image. Might be worth making a cup of tea while you're waiting.

4. Optional - make Docker repository private

Once the image has finished uploading you can mark the repository as private if you want to, from the Docker Hub website (you can access this quickly from the Docker desktop interface). However, if you do so you'll need to setup your cluster with credentials to access your repository. For simplicity I'm just leaving mine public for now as it doesn't contain anything sensitive.
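If you do make the repository private, the usual approach is to create an image pull secret in the cluster and reference it from your deployment's imagePullSecrets - something along these lines (the name 'regcred' here is just an example):

kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<your-username> --docker-password=<your-password> --docker-email=<your-email>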

Part 4: Creating a Kubernetes Cluster on Google Kubernetes Engine (GKE)

Google make it really simple to get started with Kubernetes. Which makes sense, as Kubernetes was originally created at Google, who open-sourced the project in 2014.

I should say that I'm by no means an expert with Kubernetes. If you're new to it I'd suggest reading some background, including an overview of the project.

I've been exploring various options for hosting it, including Azure AKS, RancherOS, self-deployed on Digital Ocean, and of course via GKE. I've found GKE to be one of the easier & more complete ways to get started. In terms of billing, compared to Digital Ocean you don't have to pay for a master node - so it could also be more cost-effective.

Let's get started.

1. Login to Google Cloud & navigate to Kubernetes Engine

You'll be presented with our (empty) clusters dashboard

[Screenshot: empty GKE clusters dashboard]

2. Create cluster

Click the 'Create cluster' button. Note the option to follow the walkthrough, which gives you some help if you're doing this for the first time on your own. We don't need to do this right now though.

I'm setting up my cluster with a single node pool containing 2 nodes, each with 1vCPU & 2GB memory.

[Screenshot: GKE cluster creation options]

Google provide some really nice setup options here in the advanced options area, including automatic cluster scaling (of nodes) and high availability options (including regional scaling, with nodes spread over zones in a region).

I noticed an option in the advanced section for trying the 'New Stackdriver beta Monitoring and Logging experience' which sounds exciting, so I've enabled that. I'm leaving everything else defaulted for now. All of the settings can be changed later.

[Screenshot: GKE advanced options]

Once you're ready hit the 'Create' button. This will take a couple of minutes, looking like this when it's ready:

[Screenshot: the cluster created and running]

We can view information & status of our cluster and change any settings as well.

[Screenshots: cluster details and settings]

And a view of one of our nodes:

[Screenshot: node details]

3. Create our function app workload

Navigate to the workloads section of the dashboard.

[Screenshot: GKE workloads section]

Click on Deploy and you'll be presented with the following screen:

[Screenshot: GKE deployment screen]

Make the following changes to deploy our test function app, substituting the container image location with your docker hub details. It should look similar to this:

[Screenshot: deployment configuration for the test function app]

And click deploy. If you give it a minute or so, you can then view the details of our deployment:

[Screenshot: deployment details]

What I really like about this is that Google has provisioned our deployment automatically with an autoscaler based on CPU utilisation, with a minimum of 1 instance of our function app running, scaling up to a maximum of 5.

4. Expose the deployment with a service

Click on the 'Expose' button on the deployment details screen.

[Screenshot: expose deployment options]

The defaults here are fine for us to test with. We want to use port 80 for our functions, and we'll use the service type 'Load Balancer'. This will create a load balancer for us automatically with an external IP to use to access our functions.
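For reference, once kubectl is connected to the cluster (step 5 below), the equivalent from the command line is roughly the following, assuming your deployment is named testfunction:

kubectl expose deployment testfunction --type=LoadBalancer --port=80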

When it's finished creating you'll see the following screen:

[Screenshot: service details with external endpoint]

And now we can copy the external endpoint IP address from this screen, and paste it into our browser...we have our working function app, all ready to test & scale.

[Screenshots: the HelloWorld and GoodbyeWorld functions responding on the external IP]

5. Configure kubectl command-line access

You'll need to have the GCloud SDK installed for this.

Go back to the clusters area in the dashboard and click the 'Connect' button. You can then copy & paste the command to connect your local kubectl install to the cluster. We can now run commands directly against the cluster on our local computer.
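The command it gives you follows this pattern, with the cluster name, zone and project filled in from your own setup:

gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>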

I'm not going to cover many commands in this section. I do cover more kubectl commands in the Digital Ocean section. However, one quick example I'll provide is changing how our testfunction deployment autoscales.

You can view our autoscaler (known as an 'hpa' in Kubernetes) via the kubectl command:

kubectl describe hpa

We can delete it and replace it with a new one that scales on a lower CPU percent as follows:

kubectl delete hpa testfunction-hpa

kubectl autoscale deployment testfunction --min=1 --max=5 --cpu-percent=50

6. Stop our cluster (without deleting it)

Google Kubernetes Engine only charges you for the nodes that are running & any associated resources they use, not for the cluster management itself. If you want to scale the cluster down (effectively stopping it) without deleting it, you can set the node count to 0 in the portal, or run the following command:

gcloud container clusters resize --region=$regionName $clusterName --size=0

You can then start the cluster again by running the same command with the size set to non-zero.
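For example, to bring my two nodes back:

gcloud container clusters resize --region=$regionName $clusterName --size=2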

And that's it for our GKE section. I'll post soon on how our app performs under load on GKE. I hope you enjoyed reading this & found it useful, and as always please feel free to reach out with any comments or questions.

Part 5: Creating a Kubernetes Cluster on Digital Ocean with Containership

Digital Ocean is currently trialling their own managed Kubernetes service that I unfortunately haven't been able to test yet. However, during one of my dives into the Kubernetes rabbit-hole I came across Containership.io. According to their website they:

Provision, manage, and scale your Kubernetes infrastructure on-prem, in the cloud, or both, all within a single pane of glass. Whether you have one cluster or one thousand we make it easy to maintain your infrastructure.

In a nutshell, they offer a dashboard that allows you to automatically provision Kubernetes clusters on the major cloud platforms, manage them, deploy workloads, and so on. I found their solution - although not quite as intuitive as Google's offering - allowed me to get something up and running on Digital Ocean quickly, so I'm using it here for the initial cluster provisioning. They offer a completely free community plan with unlimited users, clusters, and so on, with a paid enterprise version available that includes on-prem provisioning and additional security options. I'm impressed with their free offering.

Finally, even though the Containership platform allows you to provision workloads from their interface, for our function app deployment I'll be using the command line from my local machine. We'll be able to connect our local Docker install (with kubectl, the command line interface for managing Kubernetes clusters) to our newly provisioned cluster. You can do all of the cluster management from the command line; you don't actually need to touch the Containership dashboard again after the initial provisioning.

Anyway, enough talk - let's get started.

1. Create account & login to Containership

You'll be presented with the dashboard below, ready to create a cluster.

[Screenshot: Containership dashboard]

2. Create new cluster

Select the option to create a cluster on any major cloud provider and continue.

[Screenshot: cluster creation options]

3. Select provider

I've chosen Digital Ocean here. Note: you'll need to add your credentials to the platform - for Digital Ocean it's in the form of an API key which you generate from the Digital Ocean dashboard.

4. Select region

I'm using London 1.

[Screenshot: region selection]

5. Set cluster options

I've set my cluster name to 'lbi-test-cluster' and labelled the environment as 'dev'.

[Screenshot: cluster name and environment settings]

6. Setup master pool

Kubernetes has the concept of 'master' and 'node' components - basically, one or more master controllers & one or more worker nodes. Here we're configuring the master node. I'm leaving everything set to default here, which is going to give us a single master node with an instance type with 1 vCPU & 2GB RAM. This instance costs $10 per month to run.

[Screenshot: master pool configuration]

7. Setup worker pool(s)

I'm changing the node count here to 2, so we have 2 worker nodes. I'm leaving the instance size at the default, which is the same as the master - so these workers will cost us $20 per month.

You can enable droplet backups & monitoring here in the advanced options as well, and add additional worker pools if you like. I'm enabling droplet monitoring, although you get monitoring anyway through the Containership dashboard.

[Screenshot: worker pool configuration]

8. Select plugins

I'm leaving everything defaulted here, but you can see the default plugins include Prometheus for metrics, which is nice.

[Screenshot: plugin selection]

9. Review options and build our cluster!

You should get a summary similar to this:

[Screenshot: cluster configuration summary]

This includes a nice summary of the total price you'll be paying. Click continue to start the cluster deployment. You'll then see the following screens:

[Screenshot: cluster deployment starting]

And we can watch our cluster being built:

[Screenshots: cluster provisioning in progress]

And after a few minutes:

[Screenshot: the cluster up and running]

We can go in and explore our cluster in more detail:

[Screenshots: cluster overview, workloads and node pools in Containership]

And view more information about one of our nodes:

[Screenshot: node details]

And finally, let's have a look at our cluster from the Digital Ocean dashboard:

[Screenshot: the cluster droplets in the Digital Ocean dashboard]

And one of our nodes:

[Screenshot: node droplet details]

10. Connect kubectl to our cluster

Finally, we'll connect our local kubectl install to our new cluster. Containership handily provide the connection information for us - just view the cluster details, and you'll see it on the main page:

[Screenshot: kubectl connection details in Containership]

Just copy the connection string and run it from the command line. This will switch our local kubectl context to our cluster. If you right click on the Docker application in the taskbar, the Kubernetes option should now display this newly created context.

11. Check our kubectl context

In the previous step we connected kubectl to our newly deployed Digital Ocean cluster. We can check this by running the following from the command line:

kubectl config current-context

You should get back 'cs-ctx-' followed by a GUID.
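You can also sanity-check the connection by listing the nodes, which should show the droplets we've just provisioned:

kubectl get nodes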

12. Deploy to the cluster

Run the following command to deploy two instances of our function app (known as 'pods' in Kubernetes) to our cluster, substituting the docker image repository details for your own:

kubectl run --image=tomfalk/testfunction testfunction --port=80 --replicas=2

This should return 'deployment.apps "testfunction" created' if successful.
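You can check that both pods are up and running with:

kubectl get pods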

Now, if we check out our cluster dashboard on Containership, we can see our deployment in the 'Workloads' section:

[Screenshot: the testfunction deployment in the Workloads section]

And an overview of our function app workload:

[Screenshot: workload overview]

We can even drill down to the container itself:

[Screenshot: container details]

Awesome! Now let's open up access to our workload - or expose the deployment in Kubernetes terms.

**13. Expose deployment**

We expose workloads by creating services - abstractions which define logical sets of pods, and a policy by which to access them (in their words). Basically, we will be creating an endpoint by which we can access all of the instances of our containerised function app, which could be scaled up or down. We'll create a service using NodePort directly to start with, then we'll introduce a Digital Ocean load balancer.

Exposing services is a big topic and there's a huge amount to set up & configure in Kubernetes, including firewalls and so on - far more than I can cover here.

NodePort

The first (and simplest, lowest-cost) way is to expose our workload via NodePort. We run the command:

kubectl expose deployment testfunction --type=NodePort --name=testfunctionnp

And we should see: 'service "testfunctionnp" exposed'

Our Containership load balancers dashboard area should now show two new entries - type 'NodePort' and type 'Load Balancer'.

[Screenshot: Containership load balancers dashboard]

We can also see more information about this service by running:

kubectl describe services testfunctionnp

Which will output:

[Screenshot: kubectl describe output showing the NodePort]

Make a note of the NodePort here - we'll need this to access our function app. If I now browse to the IP of any of our nodes on the port opened up via NodePort we can see our function working:

[Screenshot: the function responding on a node IP and NodePort]
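If you need the external IPs of your nodes, kubectl will list them alongside each node:

kubectl get nodes -o wide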

Awesome! But let's go one step further and expose via a Digital Ocean load balancer on port 80...

Digital Ocean Load Balancer

Exposing via a load balancer service creates a Digital Ocean load balancer. The load balancer will have a rule setup on it to forward traffic on port 80 to the port opened up for the node in Kubernetes via NodePort.

Firstly, delete the service we created previously:

kubectl delete svc testfunctionnp

The command to create the service is:

kubectl expose deployment testfunction --type=LoadBalancer --name=testfunctionlb

You should near-instantly see a load balancer provisioned in your Digital Ocean dashboard. Give it a few moments and it'll have an IP address and settings you can view, with our forwarding rules setup:

[Screenshots: the Digital Ocean load balancer with its forwarding rules]

And we can now browse to our functions using the load balancer IP, on port 80:

[Screenshot: the function responding on the load balancer IP]

**14. Scaling the deployment**

The Kubernetes documentation covers scaling in more depth. To manually scale the number of running instances of our deployment we can run the following command - for example, to drop the instance count down to 1:

kubectl scale deployment testfunction --replicas=1

Our workloads area in Containership shows us we have just 1/1 instances running.

[Screenshot: Workloads area showing 1/1 instances]

Or we could raise it to 10:

kubectl scale deployment testfunction --replicas=10

That seems a bit excessive though. Let's set it to scale up to 10 instances as a maximum, increasing instances at 50% CPU threshold, and see if that works for us. Let's drop the number down to 1 again:

kubectl scale deployment testfunction --replicas=1

Then use the following command to create an autoscaler for our deployment:

kubectl autoscale deployment testfunction --min=2 --max=10 --cpu-percent=50

Containership doesn't seem to have any UI for autoscaling, but we can see our deployment has 2 instances again (the autoscaler has brought it back up to our minimum):

[Screenshot: Workloads area showing 2 instances]

We can view our autoscaler from the command line though. You can use the commands:

kubectl get hpa

kubectl describe hpa

However, you'll notice the targets show 'unknown/50%'. After some research I discovered that the autoscaler will not work without resource requests being set on our testfunction deployment. We can set requests & limits for CPU & memory quite easily, either by editing configuration files (via the Containership UI for example) or from the kubectl command line. Let's delete our function deployment & autoscaler and create them again with resource requests & limits.

kubectl delete hpa testfunction

kubectl delete deployment testfunction

And create it again, this time with some resource requests & limits:

kubectl run testfunction --image=tomfalk/testfunction --limits="cpu=800m,memory=512Mi" --requests="cpu=100m,memory=150Mi"

And create an autoscaler:

kubectl autoscale deployment testfunction --min=2 --max=5 --cpu-percent=50

And describing the autoscaler gives us...the same. Hmm. Now I've noticed that it's actually unable to fetch metrics, complaining about being unable to find the metrics API. A bit of digging shows that we need a metrics server running on our cluster, which we don't seem to have. We can fix that by cloning the metrics server repo from GitHub and deploying its manifests to the cluster.
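The repo lived at kubernetes-incubator/metrics-server at the time of writing (it has since moved to the kubernetes-sigs organisation), so the clone step looks roughly like this:

git clone https://github.com/kubernetes-incubator/metrics-server.git

cd metrics-server

Then, from inside the repo directory, run the following command: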

kubectl create -f deploy/1.8+/

This deploys our metrics server successfully.

Unfortunately, the above still didn't get metrics working. It looks like the API either isn't getting registered or needs some proxy configuration changes made. I'm working on this and will update when I've managed to get it up and running. I'll also post soon on how our app performs under load on Digital Ocean.

That's it for now. I hope you found this useful, and as always please feel free to reach out with any comments or questions.