About
In this post, I’ll show you how to get started with C# Azure Functions.
To quote Microsoft: “Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running. You focus on the code that matters most to you, in the most productive language for you, and Azure Functions handles the rest.”
I have been using Azure Functions at work for a few years now (often for a microservice architecture). Azure is a nice ecosystem with lots of services that easily integrate and connect with each other. The drawback of this is that it creates some vendor lock-in, as a lot of your infrastructure is now tied to a specific cloud provider. Azure Functions can, however, be run in Docker containers and deployed wherever you desire (self-hosted or another cloud provider). You can then use Kubernetes for container orchestration.
Note: Azure uses Kudu for this purpose internally, and the Portal is just a nice web UI for the user.
Serverless Cloud Infrastructure
A lot of people say “Cloud is just someone else’s computer” and “Serverless isn’t really serverless as it actually runs on a server”. While these points are technically true, they miss the point.
A “serverless cloud” allows for dynamic resource allocation/deallocation based on the load. It also means not having to deal with a server yourself. More specifically, to deploy your application you don’t have to buy and set up the hardware, install/maintain the OS, provision a VM, install your DB and runtime, configure the networking, keep the OS up to date, etc.
This makes development of the entire system much easier and faster. Within a few clicks, you can select your hosting plan (dynamically allocated/consumption mode or a fixed plan) and deploy the code straight from your code editor. Or you can very easily integrate a CI/CD pipeline (see this post I made). Then you can simply go to your deployed function in Azure and get its URL, or connect it to an API gateway where you can further configure the endpoints and set up other policies (require an API key, limit requests for said key, CORS settings, …).
This is often cheaper than the on-premises or IaaS approach, unless your system is big enough to benefit from economies of scale, or unless you have some other specific needs/requirements. For example, at work we have some services hosted on-premises because we want to be able to access them even if our network or the cloud goes down.
Microservices/Distributed Services
Serverless functions are often used to build a microservice/distributed services architecture.
Some advantages:
1. Different parts/modules/functions of your system can scale independently.
2. Can be written by different teams using different tech stacks.
3. If one service goes down, the others are still usable.
4. A CI/CD pipeline for each module. This can simplify the development, release and gitflow process. In this post, I show you how to make a CI/CD pipeline for Azure Functions.
Some disadvantages:
1. Communication failure between modules. You can help mitigate this by using fault handling as I described in this post.
2. Sharing common code like models becomes harder. You can solve this by creating a shared library/NuGet package like I described in this post.
3. You need a CI/CD pipeline for each module, which means creating and maintaining multiple pipelines. Deploying updates becomes more complicated when the changes are related and need to be deployed at the same time. On the other hand, it can make things easier when the changes are not related, as you only have to update a single function.
Limitations and Drawbacks
1. No Multithreading
You can’t do multithreading with Azure Functions, or at least not the same way you would in a regular .NET application (like, for example, in this post I made). Each request made to your function will instantiate a new single-threaded instance of your function for that specific request (if horizontal scaling is disabled, the requests will simply get queued).
To achieve parallel processing, Durable Azure Functions can be used, as described in this post I made.
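To give you an idea, here is a rough sketch of the fan-out/fan-in pattern with Durable Functions (assuming the isolated worker model and the Microsoft.Azure.Functions.Worker.Extensions.DurableTask package; ProcessItem is a hypothetical activity):

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.DurableTask;

public static class FanOutFanIn
{
    [Function("Orchestrator")]
    public static async Task<int[]> RunOrchestrator(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        //Fan out: start all activities in parallel.
        var tasks = new List<Task<int>>();
        for (int i = 0; i < 10; i++)
        {
            tasks.Add(context.CallActivityAsync<int>("ProcessItem", i));
        }

        //Fan in: wait until all parallel activities have finished.
        return await Task.WhenAll(tasks);
    }

    [Function("ProcessItem")]
    public static int ProcessItem([ActivityTrigger] int item)
    {
        //Hypothetical work; replace with your actual processing.
        return item * 2;
    }
}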
2. Dynamic IPs
You don’t get a static IP for your functions. By default, you will get a pool of possible IP addresses for inbound and outbound requests. In one case this became a problem for me as I had to access an API that used IP whitelisting for access control instead of an API key.
The solution was to create a virtual network and connect the function to it. Then create a public static IP and use a NAT gateway to “bridge” the IP and the virtual network. Here’s a more detailed explanation of the solution.
3. GDI32, System.Drawing, PDF Generation
For security reasons, you can’t make GDI calls, which means you can’t use the System.Drawing namespace. This became a big problem for me when I wanted to create PDFs, because a lot of libraries out there use the System.Drawing namespace for this purpose.
Trying to generate PDFs was a giant pain. At first, I used a C# wrapper library for the wkhtmltopdf library (free), but eventually I moved to the IronPDF library (not free) for better performance and better HTML/CSS support (wkhtmltopdf doesn’t even support CSS3 and flexbox). Initially, I was using FPDF/TCPDF in PHP before migrating to C# and Azure Functions, but that wasn’t exactly a great experience either.
Note: Here’s a great video showing you how to create PDFs in C# for free (suitable for some scenarios).
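For illustration, converting HTML to a PDF with IronPDF looks roughly like this (a minimal sketch assuming IronPDF’s ChromePdfRenderer API; the license key is a placeholder):

using IronPdf;

class PdfDemo
{
    static void Main()
    {
        //Placeholder license key for the paid product.
        License.LicenseKey = "YOUR-LICENSE-KEY";

        var renderer = new ChromePdfRenderer();

        //Render an HTML string (with proper CSS3/flexbox support) into a PDF.
        var pdf = renderer.RenderHtmlAsPdf("<h1>Invoice</h1><p>Hello from Azure Functions.</p>");
        pdf.SaveAs("invoice.pdf");
    }
}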
4. Cold Start Problem
Serverless functions can have a cold start (updated post from Microsoft) ranging from a few seconds all the way up to a minute. With Azure Functions, the startup time has improved over the years, but it can still be significant in certain cases. Also, this only happens on startup, so if a sufficient number of requests keep coming in, the response time will stay low.
And sometimes response time just doesn’t matter that much. For example, let’s say you have an IoT sensor reporting back some data a few times an hour/day/month, or maybe only when a specific condition is met (which could be only a couple of times a year or even less). Here it really doesn’t make any financial sense to keep the backend running 24/7/365, because it doesn’t matter whether the response time is 200 ms or 20 s.
But if needed, you can improve the response time in a few ways: see the recommendations from Microsoft, use caching, implement a queue and process the actual request/message a bit later once the function has started up, or simply keep at least one instance online. You can do this using a warmer function as shown in this post, or you can enable the “Always On” option in the settings. Sure, it’s slightly more expensive than scaling down to 0 running instances, but this way you have a guaranteed fast first response and autoscaling in case of an increase in traffic.
Note: I have also made a post here about warming your functions to avoid cold starts.
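For instance, a warmer can be as simple as a timer-triggered function pinging the target function every few minutes (a minimal sketch; TARGET_URL is a hypothetical app setting, and the Microsoft.Azure.Functions.Worker.Extensions.Timer package is assumed):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class Warmer
{
    private static readonly HttpClient _http = new();
    private readonly ILogger _logger;

    public Warmer(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<Warmer>();
    }

    //Runs every 5 minutes, which is frequent enough to keep an instance warm.
    [Function("Warmer")]
    public async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timerInfo)
    {
        //TARGET_URL points at the function you want to keep warm.
        var url = Environment.GetEnvironmentVariable("TARGET_URL");
        var response = await _http.GetAsync(url);
        _logger.LogInformation("Warm-up ping returned {StatusCode}", response.StatusCode);
    }
}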
5. Other
Table Of Contents
Installing Azure Functions
Creating An Azure Functions Project
POST Requests, URL Parameters, JSON and Environmental Variables
Environmental variables can be found in the local.settings.json file. Any variables added here as "Key":"Value" can be accessed from the code using Environment.GetEnvironmentVariable("Key").
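To illustrate, a minimal local.settings.json could look like this (TEST_ENVAR is the sample variable used in the code below; the first two entries are the standard ones for a locally run isolated-worker project):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "TEST_ENVAR": "test"
  }
}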
Sensitive data such as API keys and other credentials should be stored here instead of being hardcoded. This is safer because this file won’t be committed to the code repository. Another possibility for storing and managing sensitive data is Azure Key Vault.
Later I will show you where to set these variables in Azure once the function is deployed.
Code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

namespace DemoFunctionApp
{
    public class Function1
    {
        private readonly ILogger _logger;

        public Function1(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger<Function1>();
        }

        class TestModel
        {
            public string Test { get; set; }
            public string Text { get; set; }
        }

        [Function("Function1")]
        public async Task<HttpResponseData> Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
        {
            //This is how you can use the logger to log information, errors and warnings.
            _logger.LogInformation("C# HTTP trigger function processed a request.");
            _logger.LogError("This is an error message.");
            _logger.LogWarning("This is a warning message.");

            //This is how you can read URL parameters.
            var query = System.Web.HttpUtility.ParseQueryString(req.Url.Query);
            string message = query["message"];

            //This is how you can get headers from the request.
            req.Headers.TryGetValues("Some-Header", out var someHeaderValues);
            string someHeader = someHeaderValues.First();

            //Read the body of the request.
            string body = await req.ReadAsStringAsync();

            //Parse the JSON body into a model.
            TestModel model = JsonSerializer.Deserialize<TestModel>(body);

            //Get an environmental variable.
            if (Environment.GetEnvironmentVariable("TEST_ENVAR") == model.Test)
            {
                //Create a text response.
                var responseTest = req.CreateResponse(HttpStatusCode.OK);
                responseTest.Headers.Add("Content-Type", "text/plain; charset=utf-8");

                //Add text to the response.
                responseTest.WriteString("Test. Query param: " + message);

                //Return the response.
                return responseTest;
            }
            else
            {
                var response = req.CreateResponse(HttpStatusCode.OK);
                response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
                response.WriteString("Not test. Query param: " + message);
                return response;
            }
        }
    }
}
Publishing/Deploying Your Azure Function
Now let’s deploy our function online to Azure Cloud. This can be done very easily straight from Visual Studio. Simply open the Build menu, select Publish, then select Azure. You can also go directly to the Azure web portal and create everything individually using similar configuration wizards.
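Alternatively, if you prefer the command line, the Azure Functions Core Tools can publish the project too (assuming a function app named MyFunctionApp already exists in Azure):

func azure functionapp publish MyFunctionApp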
Note: If you want to know how to set up a CI/CD pipeline using Azure DevOps check out this other post I made.
When prompted for the storage account, let’s create a new one. If you already have an existing one, you could deploy multiple functions into it; every function needs to be deployed “into” a storage account. A storage account contains Tables, Queues, File Shares, and Blobs (we’ll take a look at storage accounts in more depth later). The function itself will be deployed to a File Share, and it will utilize the Tables (a key-value pair database) to store logs (and state, in the case of Durable Functions).
Next, we’ll create a new Application Insights instance. However, you could direct the logging data from your function to an existing log workspace instead. I would recommend doing that if you have multiple functions that are part of the same project/API/backend, for example a microservices backend for your app.
Azure Portal
If we open up our function, this is the view we get. Let’s explore some of the basic settings available to us in the menu on the left.
Application Insights Logging
For more detailed logs we can look at Application Insights:
Note: You can also get to Application Insights by clicking the first item in the resource group (shown in one of the previous images).
Now the following will open up:
If you were to call other resources (APIs), that information would appear here. If the resources being called are on Azure and connected to the same Application Insights instance, you can drill down further into each resource and see what it’s doing.
On the right, you can see additional information. If an exception occurs, you will be able to see its stack trace and the specific line of code where it occurred.
Finally, if you click the small blue icon on top, this final window will open up.
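As a side note, an exception only shows up there with its full stack trace if it is unhandled or if you log the exception object itself. A minimal sketch (OrderHandler and the order ID are hypothetical):

using System;
using Microsoft.Extensions.Logging;

public class OrderHandler
{
    private readonly ILogger<OrderHandler> _logger;

    public OrderHandler(ILogger<OrderHandler> logger)
    {
        _logger = logger;
    }

    public void Handle(string orderId)
    {
        try
        {
            //Hypothetical work that might throw.
            throw new InvalidOperationException("Demo failure");
        }
        catch (Exception ex)
        {
            //Passing the exception object (not just ex.Message) preserves
            //the stack trace in Application Insights.
            _logger.LogError(ex, "Failed to process order {OrderId}", orderId);
            throw;
        }
    }
}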
Alerts
Storage Account
- Blobs are for storing files.
- File Shares also store files but additionally provide a folder structure.
- Queues can receive messages. A message arriving on a queue can trigger an Azure Function through a queue trigger binding (see the sketch after this list).
- Azure Tables is a key-value pair database that can store large amounts of data and has very fast lookups.
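Here’s a rough sketch of such a queue-triggered function (assuming the isolated worker model and the Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues package; “myqueue-items” is a placeholder queue name):

using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class QueueProcessor
{
    private readonly ILogger _logger;

    public QueueProcessor(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<QueueProcessor>();
    }

    //Runs whenever a message lands on the "myqueue-items" queue of the storage
    //account referenced by the AzureWebJobsStorage connection string.
    [Function("QueueProcessor")]
    public void Run([QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string message)
    {
        _logger.LogInformation("Queue message received: {Message}", message);
    }
}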
Deployment Slots
Resource Scaling
A server can either scale up (vertical scaling) or scale out (horizontal scaling). Vertical scaling means allocating more resources (RAM, disk, CPU cores), while horizontal scaling means creating more instances and balancing the load between them.
Here the Scale up tab is grayed out because we selected the consumption plan earlier on. In the Scale out tab, you can set the maximum number of instances your app can scale to.
CORS
API Gateway
Lastly, let’s see how to connect our Azure function to an API gateway. This is not strictly necessary. However, if you have functions that are only called by other functions or internal services and aren’t supposed to be called directly by end users, it’s a good idea to put all these functions into a virtual network to isolate them from the external world. Then you can connect the virtual network to the API gateway and use it to expose only certain endpoints.
Additionally, an API gateway can offer an easy way to manage API keys, impose API call rate limits, version APIs, re-route an endpoint to a different backend function, set CORS policies, provide caching, handle SSL offloading, etc.