Getting Started With C# Azure Functions

In this post, I’ll show you how to get started with C# Azure Functions.

To quote Microsoft: “Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running. You focus on the code that matters most to you, in the most productive language for you, and Azure Functions handles the rest.”

I have been using Azure Functions at work for a few years now (often as part of a microservice architecture). It’s a nice ecosystem with lots of services that integrate and connect with each other easily. The drawback of this is that it creates some vendor lock-in, as a lot of your infrastructure becomes tied to a specific cloud provider. Azure Functions can, however, be run in Docker containers and deployed wherever you desire (self-hosted or another cloud provider). You can then use Kubernetes for container orchestration.


Note: Under the hood, Azure uses Kudu for this purpose, and the Portal is just a nice web UI for the user.

Serverless Cloud Infrastructure

In the graphic below, serverless would equate to the PaaS (platform as a service) column.
[Diagram comparing On-Premise, IaaS, PaaS and SaaS]

A lot of people say “Cloud is just someone else’s computer” and “Serverless isn’t really serverless as it actually runs on a server”. While these points are technically true, they miss the point.

A “serverless” cloud allows for dynamic resource allocation/deallocation based on the server load. It also means not having to deal with a server yourself. More specifically, to deploy your application you don’t have to buy and set up the hardware, install/maintain the OS, make a VM, install your DB and runtime, configure the networking, keep the OS up to date, etc.

This makes development of the entire system much easier and faster. Within a few clicks you can select your hosting plan (dynamically allocated/consumption mode or a fixed plan) and deploy the code straight from your code editor. Or you can very easily integrate a CI/CD pipeline (see this post I made). Then you can simply go to your deployed function in Azure and get its URL, or connect it to an API gateway where you can further configure the endpoints and set up other policies (add an API key, limit requests for said key, CORS settings, …).

This is often cheaper than the on-premise or IaaS approach, unless your system is big enough to benefit from “economies of scale” or you have some other specific needs/requirements. For example, at work we have some services hosted on premise because we want to be able to access them even if our network or the cloud goes down.

Microservices/Distributed Services

Serverless functions are often used to build a microservice/distributed services architecture.

The pros of this architecture are:

    1. Different parts/modules/functions of your system can scale independently.
    2. Can be written by different teams using different tech stacks.
    3. If one service goes down, others are still usable.
    4. A CI/CD pipeline for each module. This can simplify the development, release and gitflow process. In this post, I show you how to make a CI/CD pipeline for Azure Functions.

Some of the cons are:

1. Communication between modules can fail. You can help mitigate this by using fault handling, as I described in this post (a minimal sketch follows this list).
2. Sharing common code like models becomes harder. You can solve this by creating a shared library/NuGet package, like I described in this post.
3. You need a CI/CD pipeline for each module, which means creating and maintaining multiple pipelines. Updates become more complicated when the changes are related and need to be deployed at the same time. But it can also make things easier when the changes are not related, as you only have to update a single function.
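
For illustration, here is a minimal sketch of what such fault handling could look like using the Polly library (the retry count and backoff values are just examples, not a recommendation from the linked post):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;
using Polly.Retry;

public static class ResilientClient
{
    private static readonly HttpClient _http = new HttpClient();

    //Retry up to 3 times with exponential backoff (2s, 4s, 8s) when the call
    //to another service throws or returns a non-success status code.
    private static readonly AsyncRetryPolicy<HttpResponseMessage> _retryPolicy =
        Policy
            .Handle<HttpRequestException>()
            .OrResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public static Task<HttpResponseMessage> GetWithRetryAsync(string url) =>
        _retryPolicy.ExecuteAsync(() => _http.GetAsync(url));
}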

Limitations and Drawbacks

1. No Multithreading

You can’t do multithreading with Azure Functions, or at least not the same way you would in a regular .NET application (like, for example, in this post I made). Each request made to your function will instantiate a new single-threaded instance of your function for that specific request (if horizontal scaling is disabled, the requests will simply get queued).

To achieve parallel processing, durable Azure Functions can be used, as described in this post I made; a rough sketch is shown below.
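
As an illustration (not the exact code from the linked post), a fan-out/fan-in orchestration in the isolated worker model could look something like this. It assumes the Microsoft.Azure.Functions.Worker.Extensions.DurableTask package, and ProcessItem is a placeholder activity:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.DurableTask;

public static class FanOutExample
{
    //The orchestrator starts all activity calls without awaiting them one by one,
    //then awaits them together so they can run in parallel across instances.
    [Function(nameof(Orchestrator))]
    public static async Task<int[]> Orchestrator(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        var tasks = new List<Task<int>>();
        for (int i = 0; i < 10; i++)
        {
            tasks.Add(context.CallActivityAsync<int>(nameof(ProcessItem), i));
        }
        return await Task.WhenAll(tasks);
    }

    //Each activity invocation is an independent unit of work.
    [Function(nameof(ProcessItem))]
    public static int ProcessItem([ActivityTrigger] int item)
    {
        return item * 2; //Placeholder for the real work.
    }
}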

2. Dynamic IPs

You don’t get a static IP for your functions. By default, you get a pool of possible IP addresses for inbound and outbound requests. In one case this became a problem for me, as I had to access an API that used IP whitelisting for access control instead of an API key.

The solution was to create a virtual network and connect the function to it. Then create a public static IP and use a NAT gateway to “bridge” the IP and the virtual network. Here’s a more detailed explanation of the solution.

3. GDI32, System.Drawing, PDF Generation

For security reasons, you can’t make GDI calls, which means you can’t use the System.Drawing namespace. This became a big problem for me when I wanted to create PDFs, because a lot of libraries out there use the System.Drawing namespace for that purpose.

Trying to generate PDFs was a giant pain. At first I used a C# wrapper library for the wkhtmltopdf library (free), but eventually I moved to the IronPDF library (not free) for better performance and better HTML/CSS support (wkhtmltopdf doesn’t even support CSS3 and flexbox). Initially I was using FPDF/TCPDF in PHP before migrating to C# and Azure Functions, but that wasn’t exactly a great experience either.

Note: Here’s a great video showing you how to create PDFs in C# for free (suitable for some scenarios).

4. Cold Start Problem

Serverless functions can have a cold start (updated post from Microsoft) ranging from a few seconds all the way up to a minute. With Azure Functions, this has improved over the years, but it can still be significant in certain cases. Also, it only happens on startup, so if a steady stream of requests is coming in, the response time will be low.

And sometimes response time just doesn’t matter that much. For example, say you have an IoT sensor reporting back some data a few times an hour/day/month, or maybe only when a specific condition is met (which could be only a couple of times a year or even less). Here it really doesn’t make any financial sense to keep the backend running 24/7/365, because it doesn’t really matter whether the response time is 200 ms or 20 s.

But if needed, you can improve the response time in a few ways: see the recommendations from Microsoft, use caching, implement a queue and process the actual request/message a bit later once the function has started up, or simply keep at least one instance online. You can do the latter using a warmer function, as shown in this post, or by enabling the “Always On” option in the settings. Sure, it’s slightly more expensive than scaling down to 0 running instances, but this way you get a guaranteed fast first response and autoscaling in case of an increase in traffic.

Note: I have also made a post here about warming your functions to avoid cold starts; a minimal sketch of such a warmer is shown below.
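
A minimal warmer sketch (assuming the Timer extension package is installed; WARMUP_URL is a hypothetical app setting holding the endpoint to ping):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class Warmer
{
    private static readonly HttpClient _http = new HttpClient();
    private readonly ILogger _logger;

    public Warmer(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<Warmer>();
    }

    //Runs every 5 minutes and pings the function so at least one instance stays warm.
    [Function("Warmer")]
    public async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        //WARMUP_URL is a hypothetical app setting, not something Azure provides by default.
        var url = Environment.GetEnvironmentVariable("WARMUP_URL");
        await _http.GetAsync(url);
        _logger.LogInformation("Warm-up ping sent.");
    }
}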

5. Other

Here is a list of hosting limitations, while here is a detailed list of other limitations for Azure Web Apps in general.


Installing Azure Functions

Note: In this post, I will be using Visual Studio Community, but it’s possible to use VS Code if you want (see how to here). A few years ago I myself exclusively used VS Code for Azure Functions development at work, as I didn’t have a Visual Studio Professional license.
Open the Visual Studio Installer, click “Modify” for your Visual Studio installation, select the “Azure development” module and install it.

Creating An Azure Functions Project

Select the Azure Functions project.
Here I will set the function to be an “Http trigger”. The other options can be left at their defaults.
Note: You can easily add any of the triggers listed in the dropdown to your function later. I cover how to do that in this post.
Start the Azure function as you would any other project in Visual Studio by pressing the green play button or by pressing F5.
I will use Postman to call the API. You can use any other dev tool, or, since this is a GET endpoint, you can simply paste the URL into your browser and see the same response.

POST Requests, URL Parameters, JSON and Environment Variables

In the following code example, I will show you how to send/read URL parameters, send/receive/read JSON, and how to use environment variables.

Environment variables can be found in local.settings.json. Any variable added here as "Key":"Value" can be accessed from the code using Environment.GetEnvironmentVariable("Key").
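
For example, the TEST_ENVAR variable used in the code below would go into the "Values" section like this (the other two entries are the usual defaults for a local isolated-worker project; the values shown are examples):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "TEST_ENVAR": "test"
  }
}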

Sensitive data such as API keys and other credentials should be stored here instead of being hardcoded. This is safer because this file won’t be committed to the code repository. Another possibility for storing and managing sensitive data is Azure Key Vault.

Later I will show you where to set these variables in Azure once the function is deployed.

Code:

using System;
using System.Linq;
using System.Net;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

namespace DemoFunctionApp
{
    public class Function1
    {
        private readonly ILogger _logger;

        public Function1(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger<Function1>();
        }

        class TestModel
        {
            public string Test { get; set; }
            public string Text { get; set; }
        }

        [Function("Function1")]
        public async Task<HttpResponseData> Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
        {
            //This is how you can use the logger to log information, errors and warnings.
            _logger.LogInformation("C# HTTP trigger function processed a request.");
            _logger.LogError("This is an error message.");
            _logger.LogWarning("This is a warning message.");

            //This is how you can read URL parameters.
            var query = System.Web.HttpUtility.ParseQueryString(req.Url.Query);
            string message = query["message"];

            //This is how you can get headers from the request (null if the header is missing).
            string someHeader = req.Headers.TryGetValues("Some-Header", out var someHeaderValues)
                ? someHeaderValues.First()
                : null;

            //Read the body of the request.
            string body = await req.ReadAsStringAsync();

            //Parse the JSON body into a model (property names are matched case-sensitively by default).
            TestModel model = JsonSerializer.Deserialize<TestModel>(body);

            //Get an environment variable and compare it to a value from the request body.
            if (Environment.GetEnvironmentVariable("TEST_ENVAR") == model?.Test)
            {
                //Create a text response.
                var responseTest = req.CreateResponse(HttpStatusCode.OK);
                responseTest.Headers.Add("Content-Type", "text/plain; charset=utf-8");

                //Add text to the response.
                responseTest.WriteString("Test. Query param: " + message);

                //Return the response.
                return responseTest;
            }
            else 
            {
                var response = req.CreateResponse(HttpStatusCode.OK);
                response.Headers.Add("Content-Type", "text/plain; charset=utf-8");

                response.WriteString("Not test. Query param: " + message);

                return response;
            }
        }
    }
}
If you send the request now, logs will show up in the console, as seen in the second image. When the function is deployed, you can use Log Analytics to view the logs (we’ll take a look at that later).
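
For reference, a request matching the code above could look like this (assuming the default local port 7071; the header and body values are just examples):

POST http://localhost:7071/api/Function1?message=hello
Some-Header: example-value
Content-Type: application/json

{
  "Test": "test",
  "Text": "Hello there"
}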

Publishing/Deploying Your Azure Function

Now let’s deploy our function to Azure. This can be done very easily straight from Visual Studio: simply open the Build menu, select Publish, then select Azure. You can also go directly to the Azure web portal and create everything individually using similar configuration wizards.

Note: If you want to know how to set up a CI/CD pipeline using Azure DevOps check out this other post I made.

Here we’ll go with the serverless option running on Linux. Then let’s create a new function on Azure. If we were just making an update to an already deployed function, we would simply select it from the list and the changes would be uploaded.

When prompted for the storage account, let’s create a new one. If you already have an existing one, you could deploy multiple functions into it; every function needs to be deployed “into” one. A storage account contains Tables, Queues, File Shares and Blobs (we’ll take a look at storage accounts in more depth later). The function itself will be deployed to a File Share, and it will use the Tables (a key-value pair database) to store logs (and state, in the case of Durable Functions).

Next, we’ll create a new Application Insights instance. However, you could direct the logging data from your function to an existing log workspace instead. I would recommend doing that if you have multiple functions that are part of the same project/API/backend, for example a microservices backend for your app.

As you can see, a new function was created. Select it and click Finish. This will create a new deployment profile. Now whenever you press Publish, the code changes will get deployed to the Azure function we just created.

Azure Portal

Finally, let’s take a look at the infrastructure in the Azure portal. This is the resource group that we created earlier using the configuration wizard in Visual Studio. A resource group is simply a “folder” or container that groups together related resources.

If we open up our function this is the view that we get. Let’s explore some of the basic settings available to us in the menu on the left.

Here you can see the endpoints.
If you click on a particular endpoint, the following window will open. Here you can see live logs, make test calls, edit files and get the function URLs (+ API keys as URL parameters).
Metrics tab:
Logs tab:

Application Insights Logging

For more detailed logs, we can look at Application Insights:

Note: you can also get to Application Insights by clicking the first item in the resource group (shown in one of the previous images).

If you get a request, you can click on the “Server Requests” graph and the following will open up. Click on the blue “Samples” button and you can select one of the requests from the list on the right to inspect it further.

Now the following will open up:

If you called other resources (APIs), that information would appear here. If the resources being called are on Azure and connected to the same Application Insights instance, you can drill down further into that resource and see what it’s doing.

On the right, you can see additional information. If an exception occurs, you will be able to see its stack trace and the specific line of code where it occurred.

Finally, if you click the small blue icon on top this final window will open up.

Here you will be able to see the individual logs that were made. You can perform further filtering by modifying the data query if desired.

Alerts

Next, let’s look at alerts, which allow you to configure rules/conditions under which you will get notified about issues such as server overload, errors, latency, … Under “Monitoring” select “Alerts” and then create a new rule.
Here you can define the rule according to which an alert will be triggered. The notification can be an email, SMS, API call, etc.

Storage Account

Now let’s take a closer look at the storage account where the function is actually deployed (the second item in the resource group shown in one of the previous images).
The function itself will be deployed in the File Share, or in Blob Storage if you do a “zip deployment”. You can use the “Storage Browser” from the menu on the left or the Azure Storage Explorer desktop app (which is what I prefer).
As you can see in the menu on the left, the storage account contains Blob Containers, File Shares, Queues and Tables.
    1. Blobs are for storing files.
    2. File Shares are also for storing files but additionally provide a folder structure.
    3. Queues can receive messages. This can be used to trigger Azure Functions via the queue binding (see the sketch after the note below).
    4. Azure Tables is a key-value pair database that can store large amounts of data and has very fast lookups.
Note: The function metrics that we were looking at before get stored in the Tables (the ones prefixed with “$Metrics”).
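
As a quick sketch of the queue binding mentioned above (the queue name "demo-queue" is just an example; it requires the Storage Queues extension package):

using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class QueueFunction
{
    private readonly ILogger _logger;

    public QueueFunction(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<QueueFunction>();
    }

    //Fires whenever a message lands in the "demo-queue" queue of the storage
    //account referenced by the AzureWebJobsStorage connection string.
    [Function("QueueFunction")]
    public void Run([QueueTrigger("demo-queue", Connection = "AzureWebJobsStorage")] string message)
    {
        _logger.LogInformation("Queue message received: {message}", message);
    }
}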

Deployment Slots

Next, let’s look at deployment slots, which can come in very handy. You can essentially create a copy of your function. I use deployment slots to create production, test and development versions of my backends/APIs/services.

Resource Scaling

A server can either scale up (vertical scaling) or scale out (horizontal scaling). Vertical scaling means allocating more resources (RAM, disk, CPU cores), while horizontal scaling means creating more instances and balancing the load between them.

Here the Scale up tab is grayed out because we selected the consumption plan earlier. In the Scale out tab, you can set the maximum number of instances your app can scale out to.

CORS

If you want to be able to call your Azure Function API endpoints from a browser, you have to configure CORS (Cross-Origin Resource Sharing). Find the CORS tab in the left menu and add the domain of the webpage/site the API will be called from, or simply remove all the URLs and use an * to allow the API to be called from any domain.

API Gateway

Lastly, let’s see how to connect our Azure function to an API gateway. This is not strictly necessary. However, if you have functions that are only called by other functions or internal services and aren’t supposed to be called directly by end users, it’s a good idea to put all of these functions into a virtual network to isolate them from the external world. Then you can connect the virtual network to the API gateway and use it to expose only certain endpoints.

Additionally, an API gateway can offer an easy way to manage API keys, impose API call rate limits, version APIs, re-route an endpoint to a different backend function, set CORS, provide caching, SSL offloading, etc.

Note: I won’t cover setting up an API gateway in this post (see the Microsoft docs here if you are interested), as it’s getting quite long already and APIM is not Azure Functions specific, as you can use it with a wide variety of other services.
Under “API” select “API Management”, then select your APIM instance and create a new API.
Next, select the specific endpoint of your Azure function(currently we only have one but you could have multiple). Then create the endpoint in the API Gateway.
This is how the newly created endpoint will look in the API gateway.
If you click the “Add policy” plus sign, you will get the following window where you can configure a bunch of different policies such as IP filtering, rate limiting, CORS, caching, etc.
Note: You can also open the code editor by clicking the “</>” symbol and manually define the policies in XML, for example:
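
A simple inbound policy limiting a caller to 10 calls per minute could look like this (a sketch; the numbers are example values):

<policies>
    <inbound>
        <base />
        <!-- Allow at most 10 calls per 60 seconds per subscription -->
        <rate-limit calls="10" renewal-period="60" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>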