Azure Service Bus to Schedule Azure Function Execution

One of our clients was interested in displaying information about various stocks on their website. They had already identified an API that provided the information they needed. This API, however, rate-limited requests based on the API key: we could make no more than 1 request per second. Because the application was targeting more users than 1 request per second would allow, we had to come up with an alternate solution. This article reviews how we integrated Azure Service Bus to reach our goals.

The Plan

We decided to create our own database and API that would drive their application. Since the target investment vehicles were particularly illiquid, one update a day would be sufficient for our purposes. We would schedule a daily pull of data from the previously identified API and populate our database with only the relevant stocks and their data.

Because the requests were so infrequent and could be executed quickly, we decided to use Azure Functions to make them. This meant we didn’t have to stand up an entire machine just to make requests once a day; we could instead rent a small amount of compute at very low cost. The function was configured to run once a day and kick off the queries for all of the stocks we were interested in.

Trouble in Paradise

However, we quickly learned that this single function still executed the requests too fast, incurring rate-limiting. Additionally, we found that this strategy gave us no retry on failure: when one request failed, the function would go on to the next stock but would never retry the failed one. With over 60 web requests going out, a single failure was not unlikely.

Our requests, as seen by the API host.
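For illustration, the original single-function approach amounted to something like the sketch below. This is a reconstruction using the same client classes that appear later in this article, not the exact production code:

using System;
using System.Collections.Generic;
using Client.Azure.DataIngestion.Clients;
using Microsoft.Extensions.Logging;

namespace Client.Azure.DataIngestion
{
    public static class NaiveDailyPull
    {
        // A sketch of the brittle approach: requests fire back-to-back,
        // far faster than the 1-request-per-second limit allows
        public static void Run(IEnumerable<string> tickers, ILogger log)
        {
            var apiClient = new ApiClient("https://targetApiUri.com");
            var dbClient = new DatabaseClient("databaseConnectionString");

            foreach (string ticker in tickers)
            {
                try
                {
                    var dataFromApi = apiClient.GetDataForSymbol(ticker);
                    dbClient.MergeData(dataFromApi);
                }
                catch (Exception ex)
                {
                    // The failure is logged, but the ticker is never retried
                    log.LogWarning($"Failed to pull data for '{ticker}': {ex.Message}");
                }
            }
        }
    }
}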

In other words, it was brittle and unreliable. We needed to come up with a new strategy that would:

  • Allow for scheduling or offsetting the requests to avoid rate-limiting
  • Be robust to individual failures
  • Preferably allow for implementing retry logic

Azure Service Bus to the Rescue

We decided to use Azure Service Bus to support our application. It fit the bill perfectly. We would use one function to create a message for each stock we were interested in collecting info for. We would then create a second Azure Function to process those messages.

Azure Service Bus messages can be scheduled to be processed at a later time, getting around the rate-limiting issue. Each message is processed individually, ridding us of the problem of one failure causing the whole run to fall apart. And best of all, messages that fail can be put back on the queue, giving us retry logic. In short, it gave us everything we needed.
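As a quick sketch of the scheduling mechanism (the connection string and queue name here are placeholders): a message can be given a ScheduledEnqueueTimeUtc, as in the full function further below, or handed to ScheduleMessageAsync, which also returns a sequence number that can later be used to cancel the delivery:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class SchedulingSketch
{
    public static async Task ScheduleAsync(string ticker, TimeSpan delay)
    {
        var queueClient = new QueueClient("Endpoint=sb://...", "queue-name");
        var message = new Message(Encoding.UTF8.GetBytes(ticker));

        // Service Bus holds the message until this UTC time, then makes it
        // available to consumers; the returned sequence number can be passed
        // to CancelScheduledMessageAsync to cancel the delivery
        long sequenceNumber = await queueClient.ScheduleMessageAsync(
            message, DateTimeOffset.UtcNow.Add(delay));

        await queueClient.CloseAsync();
    }
}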

Our process for creating the messages

Creating the Service Bus Messages

Below is the function that we used to create the messages for the queue. It would consult our own Ticker API to get all the ticker symbols we wished to process. The function had a timer trigger set to run at 2:00 AM every day (in the six-field NCRONTAB expression "0 0 2 */1 * *", the fields are second, minute, hour, day, month, and day of week).

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Client.Azure.DataIngestion.Clients;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

namespace Client.Azure.DataIngestion
{
    public static class CreateServiceBusMessages
    {
        private const string QUEUE_NAME = "QUEUE_NAME";
        private const string SERVICE_BUS_CONNECTION_STRING = "SERVICE_BUS_CONNECTION_STRING";

        [FunctionName("CreateServiceBusMessages")]
        public static async Task Run([TimerTrigger("0 0 2 */1 * *")]TimerInfo myTimer, ILogger log, ExecutionContext context)
        {
            IConfigurationRoot config = new ConfigurationBuilder()
                .SetBasePath(context.FunctionAppDirectory)
                .AddJsonFile("local.settings.json", optional: true, reloadOnChange: true)
                .AddEnvironmentVariables()
                .Build();

            // The ticker client gets us all the ticker symbols we're interested in processing
            log.LogInformation("Creating Ticker client.");
            var tickerClient = new TickerApiClient("http://tickerApiLocation.com");
            IEnumerable<string> tickers = tickerClient.GetTickers();

            // Create the service bus client, pointed at our queue
            string queueName = config.GetValue<string>(QUEUE_NAME);
            string connectionString = config.GetValue<string>(SERVICE_BUS_CONNECTION_STRING);
            var builder = new ServiceBusConnectionStringBuilder(connectionString)
            {
                EntityPath = queueName
            };
            var queueClient = new QueueClient(builder);

            // Each message is scheduled 30 seconds after the previous one
            var executeTimeout = TimeSpan.Zero;

            foreach (string ticker in tickers)
            {
                DateTime scheduledTime = DateTime.UtcNow.Add(executeTimeout);
                log.LogInformation($"Retrieve data for fund with ticker '{ticker}' scheduled for {scheduledTime}");

                // Await each send so the function doesn't exit before the
                // messages actually reach the queue
                await queueClient.SendAsync(new Message()
                {
                    Body = Encoding.UTF8.GetBytes(ticker),
                    ScheduledEnqueueTimeUtc = scheduledTime
                });

                executeTimeout = executeTimeout.Add(TimeSpan.FromSeconds(30));
            }

            await queueClient.CloseAsync();
        }
    }
}

The executeTimeout variable lets us create a 30-second offset between the scheduled enqueue times, so each message is processed 30 seconds after the previous one, keeping us comfortably under the 1-request-per-second limit. With about 60 tickers, the last message is scheduled roughly 30 minutes after the first.

Consuming the Service Bus Messages

Next, we created our message processing function. Its job was to:

  • Grab a message off of the queue
  • Look up the data from the downstream API
  • Push the data into our database

Azure Functions already has a built-in method for grabbing messages from the queue, called a Service Bus trigger. The %QUEUE_NAME% binding expression tells the function to find the name of the queue in the function’s configuration under the setting QUEUE_NAME. Likewise, Connection = “SERVICE_BUS_CONNECTION_STRING” tells the function that the connection string for the Service Bus can be found in the function’s configuration as SERVICE_BUS_CONNECTION_STRING.
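For local development, both of those settings can live in local.settings.json; a minimal sketch with placeholder values might look like this:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "QUEUE_NAME": "stock-tickers",
    "SERVICE_BUS_CONNECTION_STRING": "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>"
  }
}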

In the code below, the content of the message is pushed into the message variable. For us, this is the ticker for which we want to query data. We simply grab that ticker and feed it to our downstream API, then merge whatever comes back into the database.

using Client.Azure.DataIngestion.Clients;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace Client.Azure.DataIngestion
{
    public static class InsertDataForSymbol
    {
        [FunctionName("InsertDataForSymbol")]
        public static void Run([ServiceBusTrigger("%QUEUE_NAME%", Connection = "SERVICE_BUS_CONNECTION_STRING")]string message,
            ILogger log)
        {
            var apiClient = new ApiClient("https://targetApiUri.com");
            var dbClient = new DatabaseClient("databaseConnectionString");

            // The message body is the ticker symbol scheduled by CreateServiceBusMessages
            if (!string.IsNullOrWhiteSpace(message))
            {
                log.LogInformation($"Retrieving data for ticker '{message}'.");

                // If either call throws, the trigger abandons the message and
                // Service Bus redelivers it later, giving us our retry logic
                var dataFromApi = apiClient.GetDataForSymbol(message);
                dbClient.MergeData(dataFromApi);
            }
        }
    }
}
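A note on retries: the Service Bus trigger receives messages in PeekLock mode, so if the function throws, the message is abandoned and redelivered. Once a message has failed more times than the queue’s MaxDeliveryCount allows (10 by default), Service Bus moves it to the dead-letter queue, where failed tickers can be inspected and replayed.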

Costs and Conclusion

With these small changes, we were able to make the client’s code more robust and more reliable and avoid the downstream API’s rate-limiting, all while minimizing cost. These functions run once a day, with each request taking about 1 second. At 60 tickers a day × 30 days a month, that comes to about 1,800 executions per month, which falls comfortably within the free tier for Azure Functions.

As for the Service Bus, we would also have about 1,800 messages per month. The lowest Service Bus tier, Basic, charges $0.05 per million operations. For our client, that comes to about $0.05 per month for Service Bus consumption.

After we presented the costs and benefits, the client was very satisfied with the results. We rolled the process out to Azure about 6 months ago, and it has been reliably feeding us data ever since.

Let us help you out!

Code Vanguard has worked with clients of all sizes, helping them use the cloud to scale with their business. We’ve helped them develop tools and processes ranging from automated business flows to distributed cloud processing. If you’d like Code Vanguard to help your organization with Azure development, feel free to reach out to us!



About the author: James Stephens

James is the founder of Code Vanguard and one of its developers. He is an applied mathematician turned computer programmer. His focuses are security, DevOps, automation, and Microsoft Azure.
