Hosting Suave in the Azure App Service


In my previous post, I spoke about the deployment aspects of the Azure App Service, and how in conjunction with Kudu, F# and FAKE, we can utilise an SCM-based solution for deployment that can essentially follow the exact same build process as is performed locally.

In this post I want to discuss the process behind hosting Suave (or indeed any application that listens to HTTP traffic) in the Azure App Service.

What is Azure App Service?

The Azure App Service is a broad service that contains multiple “sub services”: –

  • Web apps
  • Logic apps
  • Mobile apps
  • API apps

We’re interested in Web Apps in this post. If you’ve used Azure before and had an ASP .NET web application, it was an easy decision to pick the Azure App Service to host your app. What’s not so well known – and I admit that until I spent some time looking into it, I just assumed that it wasn’t possible – is that you can use the service to host any executable application within the IIS process, and have the app service simply act as a pass-through, routing HTTP requests through to your application and back again.

Why would you want to do this though? Why not just use a Cloud Service or raw VM? I would direct you to my previous post on Azure services but in a nutshell, the app service provides a higher-level service than either of the others – think of it as IIS-as-a-service – with support for: –

  • SCM-based deployment e.g. GitHub, BitBucket etc.
  • Metrics and alerting services
  • Scale up application size on demand
  • Automatic load balancing with scale out on demand or based on metrics such as CPU
  • Turn-key authentication features
  • Slots – deploy different versions of code to test before flipping to live
  • A/B testing support
  • Web jobs

So, basically a lot of things that you’d need to manage yourself in a production web application all come out of the box. And the good thing is that you can get all of these features with e.g. a Suave web application as well – it’s not just for ASP .NET.

Creating an Azure Web App

To create a web app, simply log into the Azure portal, select New from the left hand side menu, choose Web and Mobile and finally Web App. Fill in the details, confirm, and you’ll end up with an empty website that you can browse to and receive a stock Azure Website page. So now we have an empty application, how do we put our code into the app?

Binding Suave to an Azure Web App

The first thing you’ll need to do is get your code into the Azure web app that you’ve just created. There are a number of ways that you can achieve this.

Firstly, you can use SCM-based deployment, which I detailed in my previous post. But a quicker way to go for a “one-off” deployment is simply to FTP in and copy the files across. To do this, in the Portal, navigate to your empty web application and hit the Get Publish Settings option from the menu bar of the web app pane. This will give you an XML file, inside of which are the FTP address and credentials. You can then FTP in and simply copy your application up into the wwwroot folder.

Note that you can also use HTTP as well as MS Web Deploy (either through the command line or Visual Studio), although I suspect that that would require making your Suave application appear as a web app through custom project GUIDs.

Configuring the Azure App Service

A standard Suave application (at least, in all the examples I’ve seen) runs as either an .fsx script or an executable. Indeed there are already a few examples of running Suave within an Azure website – and I should give credit to Scott Hanselman and Steffen Forkmann for getting the basic Suave example up and running here. The majority of what I’ve done from here is based on that work – the difference is that rather than hosting FAKE itself, which runs a simple Suave application within an .fsx file, I’m not using FAKE as a host at all (although I do use FAKE for the build stage, as per my previous blog post). Instead, all I’m doing is hosting a .NET executable that launches Suave.

So how do we do it? Bear in mind that the Azure App Service is essentially just IIS as a managed service. It’s actually rather straightforward once you know what’s required, which is simply to instruct IIS to redirect all traffic to our Suave application. How do we do that?

Adding a custom web.config

In addition to your standard executable app.config, which contains all the config and binding redirects etc., you need a slimline web.config which IIS uses to start up and then redirect traffic to your Suave application. It doesn’t contain much: –

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <remove name="httpplatformhandler" />
      <add name="httpplatformhandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified"/>
    </handlers>
    <httpPlatform stdoutLogEnabled="true" stdoutLogFile="suave.log" startupTimeLimit="20" processPath="%HOME%\site\wwwroot\SuaveHost.exe" arguments="%HTTP_PLATFORM_PORT%"/>
  </system.webServer>
</configuration>

The key parts to take away from this are: –

  1. Remove the standard HTTP Platform Handler and replace with another one.
  2. Specify to use SuaveHost.exe (my application name) as the application in the processPath attribute.
  3. Pass in an argument to the application – the internal port that traffic will come in on using the %HTTP_PLATFORM_PORT% variable. You can pass in multiple arguments here, just ensure they are space separated.

Handling arguments in Suave

Now that we have hooked our application into IIS, in Suave we simply run our application and take in the port number as an argument: –

open System
open System.Net
open Suave

[<EntryPoint>]
let main [| port |] =    // the single argument is the port passed in by IIS
    let config =
        { defaultConfig with
              bindings = [ HttpBinding.mk HTTP IPAddress.Loopback (uint16 port) ]
              listenTimeout = TimeSpan.FromMilliseconds 3000. }
    // rest of application e.g. startWebServer config app
    0

That’s it!

Managing Suave through Azure App Service

Now that you have your application up and running in Azure, what can you do? Well, you can log into the Azure portal and get some metrics of your website immediately as a configurable dashboard: –

[Screenshot: the web app’s metrics dashboard in the Azure portal]

The charts are configurable, so you can select which metrics you’d like to show e.g. which HTTP codes etc. over what time period. We can also look at the process explorer – and sure enough, there’s our SuaveHost.exe application: –

[Screenshot: the process explorer showing SuaveHost.exe]

And we can even drill into the process: –

[Screenshot: drilling into the SuaveHost.exe process]

Conclusion

Of course, what I’ve shown you above is just scratching the surface of what you can do with Azure. It’s possible to do all the other things I mentioned at the start of this post, such as scale up the size of the web server, scale out to multiple instances, create multiple deployment slots etc., all from within the portal. Or perhaps you’d like to set up custom alerts based on any of the dashboard metrics over a certain period of time e.g. “> 50 HTTP 404s in the last 5 minutes” and send an email / hit an HTTP endpoint etc.? No problem – that’s supported out of the box.

It’s actually all incredibly easy and really allows you to simply focus on the work of developing an application and let Azure manage the infrastructural challenges. In fact, I can’t imagine self-hosting (or self-managing) any web-facing application when you have a service like this available. Hopefully I’ve shown, though, that it’s not just the stock ASP .NET website that can be run through Azure web apps – we can host Suave as well without much effort at all.

The source code that was used for this post is available here.


Deploying Azure web applications with FAKE


The Azure App Service is a great service that makes hosting web-facing applications extremely easy, with support for many value-adds out of the box – e.g. scale out, A/B testing and authentication are all included. I’ve recently been looking at how you can use this service within the context of some F# frameworks and libraries e.g. Suave. I’ll blog about the Suave side of things in another post – there’s a lot to it – but one of the other parts I wanted to mention was that FAKE now has support for Kudu, the Azure App Service SCM deployment engine.

What is Kudu?

One of the features that the App Service offers is a multitude of deployment options, including FTP, HTTP and source control web hooks. The latter supports a number of providers, including GitHub, BitBucket, VSTS and even a locally hosted git repository. The App Service listens to push events on a specific branch, downloads the source code onto the web server (into a sandboxed location) and then copies it into the website proper. That last stage – the copy – is the most interesting one. Essentially, the app service runs a batch file which can do whatever is needed to build and deploy. For a .NET application, this typically includes: –

  1. Perform an MSBuild of the application.
  2. Copy the outputs to the “staging” directory on the web site.
  3. Run KuduSync to deploy to the actual web application folder.

KuduSync itself essentially does a few things: –

  • Does a diff of the current files to deploy from the previous deployment.
  • Removes any obsolete files from the app.
  • Copies over any new / updated files from the staging directory to the app.
  • Makes a list of the deployed files for comparison the next time it runs.

The Azure CLI extensions come with some commands to “pre-generate” a batch file for specific, common use cases, e.g. an ASP .NET application, but you’ll often need to do something more than just that – and this means getting your hands dirty with a Kudu script.

A Sample Kudu script

So here’s a standard Kudu build script (which I’ve actually minimised as much as possible) which deploys some raw web assets (HTML, JS etc.), builds a .NET application and deploys a web job: –

:: Restore NuGet packages
.paket\paket.bootstrapper.exe
.paket\paket.exe restore

:: Copy static site content over - note the "excludes.txt" which contains file types to ignore....
xcopy src\webhost "%DEPLOYMENT_TEMP%\" /Y /E /Q /EXCLUDE:excludes.txt
IF !ERRORLEVEL! NEQ 0 goto error

:: Deploy an F# script as a continuously running Web Job
xcopy src\Sample.fsx "%DEPLOYMENT_TEMP%\app_data\jobs\continuous\Sample\" /Y
IF !ERRORLEVEL! NEQ 0 goto error

:: Build to the temporary path
cd "%DEPLOYMENT_SOURCE%"
call :ExecuteCmd "%MSBUILD_PATH%" /m /t:Build /p:Configuration=Release;OutputPath="%DEPLOYMENT_TEMP%";UseSharedCompilation=false %SCM_BUILD_ARGS% /v:m
IF !ERRORLEVEL! NEQ 0 goto error
cd ..

:: KuduSync
call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 50 -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
IF !ERRORLEVEL! NEQ 0 goto error

There’s actually a whole host of things here: –

  1. First I’m pulling down my NuGet dependencies with Paket (this could just as easily be NuGet.exe) before moving on to our main build.
  2. xcopy across the website assets. There are some files I don’t want deployed, so I pass an “excludes.txt” file containing the file types to skip as an argument to xcopy. Figuring out the correct xcopy arguments for this was a real pain.
  3. Copy across an .fsx file as a web job. I needed to figure out how web jobs are stored on the app service in order to know the path to build up, of course.
  4. Do some jumping around of folders before doing an MSBuild of my application.
  5. Call Kudu Sync to do the final deploy, passing in the set of folder locations needed for the tool.

Using batch files for a build pipeline probably isn’t the best way to go, though. Managing a set of build steps quickly becomes a pain in a batch file – you have GOTOs and labels everywhere, and you can’t express complex control flow. Imagine you now wanted to run unit tests, perform one set of tasks if they failed and another if they passed – it quickly becomes a nightmare.

Enter FAKE

On the other hand, FAKE is an excellent library and DSL designed to manage a build pipeline. Not only does it have loads of helpers for e.g. file system access, config file rewriting, environment variables and MSBuild, but it allows us to define build pipelines with dependencies – even conditional stages. Finally, because FAKE is just F# and runs on the full .NET framework, you can always break out and just run any .NET code you want directly from within a FAKE script. With FAKE, you can have a single build script for e.g. local builds and CI builds – and now Kudu deployment builds too, through the newly-added Kudu module in FAKE. Let’s see what the above build script looks like in FAKE: –
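
The script itself is embedded as a gist in the original post; here’s a minimal sketch of its shape. The FAKE 4-era helper names (Kudu.stageFolder, Kudu.deploymentTemp, Kudu.kuduSync) and the project paths are my assumptions rather than verbatim from the post: –

// build.fsx - a sketch of a Kudu-aware FAKE build script
#r "packages/FAKE/tools/FakeLib.dll"
open Fake
open Fake.Azure

Target "CopyWebsiteAssets" (fun _ ->
    // stage the raw web assets into %DEPLOYMENT_TEMP%, skipping F# sources
    Kudu.stageFolder "src/webhost" (fun file -> not (file.EndsWith ".fs")))

Target "DeployWebJob" (fun _ ->
    // web jobs live under app_data\jobs\continuous\{name} on the app service
    CopyFile (Kudu.deploymentTemp @@ @"app_data\jobs\continuous\Sample") "src/Sample.fsx")

Target "BuildApp" (fun _ ->
    !! "src/**/*.fsproj"
    |> MSBuildRelease Kudu.deploymentTemp "Build"
    |> ignore)

Target "Deploy" (fun _ ->
    // diff the staged output against the live site and copy the changes across
    Kudu.kuduSync())

"CopyWebsiteAssets"
==> "DeployWebJob"
==> "BuildApp"
==> "Deploy"

RunTargetOrDefault "Deploy"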

You can see here that there are several distinct build steps, which are composed together as dependencies on one another at the very end using the ==> operator. Note that the code above is actually just F# although we’re using a specific DSL with custom operators to set up a “build chain”. So if any of the stages fail, the whole build will fail and we’ll be presented with a summary log (which we can see directly in the Azure portal) of the results of the build. Notice also the lack of environment variables etc. – the Kudu helper module takes care of all of that for us – whilst we don’t need gotos anymore because FAKE handles the build pipeline.

Now our Kudu script is much simpler, because we’re delegating control of the main build orchestration to a language better able to reason about and define program flow: –

:: Restore NuGet packages
.paket\paket.bootstrapper.exe
.paket\paket.exe restore

:: Start main build script
packages\FAKE\tools\FAKE.exe build.fsx

Conclusion

Kudu and Azure App Service are great tools. By plugging FAKE into the mix, we get both a succinct and easy to use scripting experience with the power of the .NET framework and a fantastic language like F# as well.

Visual Studio Team Services and FAKE


What is VSTS?

Visual Studio Team Services (VSTS) is Microsoft’s cloud-based source control / CI build / work item tracking system (with a nice visual task board). It’s a platform that is evolving relatively quickly, with lots of new features being added all the time. It also comes with a number of plans including a free plan which entitles you to an unlimited number of private repositories with up to 5 users (and MSDN users do not count), plus a fixed number of hours for centralized builds.

The catch is that there isn’t (at least, I couldn’t find!) any way to host a completely public repository with unlimited users – obviously a big problem if you want to have an open source project with lots of contributors. But for a private team, it might be a good model for you. You can also use the CI build facilities of VSTS with GitHub repositories – so in this sense you can treat it as a competitor to something like AppVeyor perhaps.

Contrary to common opinion, VSTS is completely compatible with Git as a source control repository. Yes, you can opt to use TFS as a source control model, but (in my opinion) you’d have to be crazy to do this or have a team that are really used to the TFS way of working – I find Git to be a much, much more effective source control system.

Why FAKE?

I wanted to try and see whether it was possible to get VSTS working with FAKE.

One of the best things about FAKE – in addition to the ease of use, flexibility and power you get by creating build tasks directly within F# (and therefore with the full .NET framework behind it) – is that because you are not dependent on hosting a bespoke build server with custom tasks, such as Team City, it’s extremely rare (hopefully never) that a build runs locally but fails to run on the build server.

Rather than relying on e.g. Team City to orchestrate your build, you delegate the entire CI build to FAKE – MSBuild, unit tests, configuration file rewriting and so on. So if the build fails, you don’t have to log into the Team City box and trawl through log files – your FAKE script does all the heavy lifting, so you can run the exact same steps locally.

Putting it all together

So my goal was to write a simple FAKE build script which pulled down any dependencies, performed a build and ran unit tests – all integrated within VSTS. As it turns out, it wasn’t very difficult at all.

Firstly, we hook up the build to source control. In our case, it’s the Git repository of the Team Project, so works straight out of the box, but you can point to another Git repository e.g. GitHub as well. You can also select multiple branches. We then set a trigger to occur on each commit.

[Screenshot: hooking the build trigger up to source control]

Secondly, we have to set up the actual build steps. As we’re delegating to FAKE to perform the whole build + tests, we want to use as few “custom” VSTS tasks as possible. In fact, we actually only need two steps.

  1. Some way to download Paket or Nuget, and then initiate the FAKE build.
  2. Some way of tying the results of the xUnit tests that we’re going to run in FAKE into the VSTS test reports.

Unlike old-school TFS etc., VSTS now has an extensible and rich set of build tasks that you can chain together – no need for Workflow Foundation etc. at all here: –

[Screenshot: the chain of VSTS build tasks]

Notice the “Batch Script” task above – perfect for our needs, as we can use it to perform our first build task to download Paket and then start FAKE.

We can now see what the FAKE script does – this is probably nothing more than what you would do normally with FAKE anyway to clean the file system, perform a build and then run unit tests: –
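
The script is embedded as a gist in the original post; below is a minimal sketch. The paths, target names and the parameters of FAKE’s xUnit2 helper are my assumptions: –

// build.fsx - clean, build and run the xUnit tests
#r "packages/FAKE/tools/FakeLib.dll"
open Fake
open Fake.Testing

Target "Clean" (fun _ -> CleanDir "bin")

Target "Build" (fun _ ->
    !! "src/**/*.fsproj"
    |> MSBuildRelease "bin" "Build"
    |> ignore)

Target "Run Unit Tests" (fun _ ->
    !! "bin/*Tests.dll"
    |> xUnit2 (fun p ->
        // emit an XML report for the VSTS "Publish Test Results" task to pick up
        { p with XmlOutputPath = Some "bin/TestResults.xml" }))

"Clean" ==> "Build" ==> "Run Unit Tests"
RunTargetOrDefault "Run Unit Tests"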

Notice that when we run the unit tests, we also emit the results as an XML file. This is where the second build task (Publish Test Results) comes in – it parses the XML results and ties them into VSTS’ build report.

[Screenshot: the Publish Test Results build step]

So when we next perform a commit, we’ll see a build report that looks something like this: –

[Screenshot: a VSTS build report]

Notice that the chart on the right shows that I’ve run 2 unit tests that were successful – this is the second build task parsing the XUnit output. Of course we can also drill into the different stages if needed to see the output: –

[Screenshot: drilling into the output of an individual build step]

Conclusion

This post isn’t as much about either VSTS or FAKE features per se, as it is about illustrating how both VSTS and FAKE are flexible enough that we can plug the two together. What’s great about this approach is that we’re not locked into VSTS as a build system – we’re just using FAKE and running it centrally – but if we’re using VSTS, we can also benefit from the integration that VSTS offers with Visual Studio and the build system, e.g. creating work items, associating commits to work items and viewing them from VS – whilst still using FAKE for our build.

MBrace, CloudFlows and FSharp.Data – data analysis made easy


In case you’ve not seen it before, MBrace is a simple programming model for scalable cloud data scripting and programming with .NET. It’s written in F#, but has growing support for C# and VB .NET. Over the past year or so, I worked closely with the MBrace team to help get it working smoothly on Microsoft Azure, using features such as Service Bus and Storage to provide an excellent development and deployment experience. As MBrace gears up for a v1 release, the design of the API is looking extremely positive.

I’m going to demonstrate here a simple example that illustrates how easy it is to start working with a large CSV file available on the internet in an MBrace cluster, parsing and querying the data – we’re going to analyse UK house prices over the past year (this file is freely available on the gov.uk website).

I’m going to assume that you have an MBrace cluster up and running – if you don’t, you can either use a local development cluster or download the latest source code and deploy a full cluster onto Azure using the example MBrace Worker Role supplied in the MBrace Azure source code.

Type Providers on MBrace

We’ll start by generating a schema for our data using FSharp.Data and its CSV Type Provider. Usually the type provider can infer all data types and columns but in this case the file does not include headers, so we’ll supply them ourselves. I’m also using a local version of the CSV file which contains a subset of the data (the live dataset even for a single month is > 10MB): –
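
The declaration is embedded as a gist in the original post; it’s essentially a one-liner. The sample file name and the column schema below are illustrative assumptions: –

open FSharp.Data

// No headers in the raw file, so supply the column names and types ourselves.
type HousePrices = CsvProvider< "SampleHousePrices.csv", HasHeaders = false,
                                Schema = "TransactionId (string), Price (int), DateOfTransfer (date), Postcode (string), PropertyType (string)" >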

In that single line, we now have a strongly-typed way to parse CSV data. Now, let’s move onto the MBrace side of things. I want to start with something simple – let’s get the average sale price of a property, by month, and chart it.

A CloudFlow is an MBrace primitive which allows a distributed set of transformations to be chained together, just as you would with the Seq module in F# (or LINQ’s IEnumerable operators for the rest of the .NET world). The difference is that in MBrace, a CloudFlow pipeline is partitioned across the cluster, making full use of the resources available; only when the pipelines are completed in each partition are they aggregated together again.
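
The query itself is embedded as a gist in the original post; here’s a sketch of its shape, assuming an already-connected cluster handle (cluster), a dataUrl value pointing at the CSV file, and the combinator names shown: –

open MBrace.Core
open MBrace.Flow

// Stream the file line-by-line across the cluster, parse each line with the
// type provider, then aggregate the average price per month.
let averagePriceByMonth =
    CloudFlow.OfHttpFileByLine dataUrl
    |> CloudFlow.collect HousePrices.ParseRows        // strongly-typed rows from here on
    |> CloudFlow.groupBy (fun row -> row.DateOfTransfer.Month)
    |> CloudFlow.map (fun (month, rows) -> month, rows |> Seq.averageBy (fun row -> float row.Price))
    |> CloudFlow.toArray
    |> cluster.Run                                    // an array of int * float i.e. month * price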

Also notice that we’re using type providers in tandem with the distributed computation. Once we call the ParseRows function, the next call in the pipeline is working with a strongly-typed object model – so DateOfTransfer is a proper DateTime etc. All dependent assemblies have automatically been shipped with MBrace; MBrace wasn’t explicitly designed to work with FSharp.Data – it just works. So now that we have an array of int * float i.e. month * price, we can easily plot it on a chart: –

[Chart: average sale price by month]

Easy.

Persisted Cloud Flows

Even better, MBrace supports something called Persisted Cloud Flows (known in the Spark world as RDDs). These are flows whose results are partitioned and cached across the cluster, ready to be re-used again and again. This is particularly useful if you have an intermediary result set that you wish to query multiple times. In our case, we might persist the first few lines of the computation (which involves downloading the data from source and parsing with the CSV Type Provider), ready to be used for any number of strongly-typed queries we might have: –
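
The code is again a gist in the original post; here’s a sketch of the idea, re-using the assumed cluster, dataUrl and type provider from above (CloudFlow.persist and StorageLevel.Memory are my assumptions for the caching call): –

// Download and parse once, then persist the partitioned results in memory across the cluster.
let persistedPrices =
    CloudFlow.OfHttpFileByLine dataUrl
    |> CloudFlow.collect HousePrices.ParseRows
    |> CloudFlow.persist StorageLevel.Memory
    |> cluster.Run

// Subsequent strongly-typed queries run against the cached partitions in seconds.
let dearestPurchase =
    persistedPrices
    |> CloudFlow.maxBy (fun row -> row.Price)
    |> cluster.Run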

So notice that the first query takes 45 seconds to execute, which involves downloading the data and parsing it via the CSV type provider. Once we’ve done that, we persist it across the cluster in memory – then we can re-use that persisted flow in all subsequent queries, each of which just takes a few seconds to run.

Conclusion

MBrace is on the cusp of a 1.0 release – it’s ready for you to start using now, and offers a powerful and flexible set of abstractions for distributed computations. As you can see from the above, if you’ve used the collection libraries in F# before, it’s a very smooth transition to make the leap to distributed collection queries. In less than ten lines of code, you can start writing distributed queries against live datasets with the minimum of effort.

Stateless services on Azure Service Fabric in F#


In my previous posts, I discussed the use of the Service Fabric (SF) actor framework (which is loosely based on Orleans) and F#, and how we can use FP features within an actor model, even one designed for OO languages.

Exposing Services with Service Fabric

Ironically, the actor framework within SF is one of its more complex features – you can use SF to host literally any .NET code you want. There are a number of features within SF designed to allow you to rapidly host scalable systems, with support for state replication out of the box. In this post, I want to illustrate the steps needed to host the F#, FP-first web server Suave in Service Fabric. It turns out that there’s really not much code needed at all.

  1. We create an F# executable that is compatible with Service Fabric.
  2. We create a service that inherits from the StatelessService class (we’ll discuss Stateful Services in another post).
  3. We override the CreateCommunicationListener method. This is important – essentially this method’s responsibility is to create an object that can handle incoming traffic from external sources. We also make a note of the port that Suave will be running on.
  4. We configure an endpoint in the Service Fabric configuration for that same port. This tells SF to let inbound traffic in. This is roughly analogous to opening up an endpoint in Cloud Services. It’s also something you should have specified when creating the cluster itself in Azure (if not, you’ll need to manually configure the load balancer to allow traffic through).
  5. In our Main program, we register the service with SF.

The key part is (3), where we implement the functionality that should get called to handle incoming requests. It’s pretty basic really: –
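
The listener is shown in a gist in the original post; here’s a minimal sketch using an F# object expression. The ICommunicationListener members reflect the preview SDK of the time, and the port, address and WebPart are illustrative assumptions: –

open System.Threading.Tasks
open Microsoft.ServiceFabric.Services
open Suave
open Suave.Web
open Suave.Http.Successful

type SuaveService() =
    inherit StatelessService()
    override __.CreateCommunicationListener() =
        { new ICommunicationListener with
            member __.Initialize initParams = ()    // the endpoint port can be read from here
            member __.OpenAsync cancellationToken =
                // start Suave listening, then hand SF the address to publish to clients
                let _, server = startWebServerAsync defaultConfig (OK "Hello from Service Fabric!")
                Async.Start(server, cancellationToken)
                Task.FromResult "http://localhost:8083"
            member __.CloseAsync _ = Task.FromResult true :> Task
            member __.Abort() = () }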

So CreateCommunicationListener() expects an instance of ICommunicationListener that will create the web server for us. Luckily, with F#’s object expressions we don’t even have to declare a formal type – we can simply create the object on the fly. As you can see, all it does is start up Suave using default settings. You might elect to supply the port that it starts on from the endpoint configuration in Service Fabric – this is done in the Initialize method, and is included in the full sample.

Once done, you can configure the scalability of the service in config – if you want three instances, just set the instance attribute to 3 in the ApplicationManifest file of your hosting Service Fabric application. If you want it on every node, you set the attribute to -1 (because we all know that -1 is the universal standard for “absence of a number” – we don’t need no option types ;-). Note that running this locally with multiple instances won’t work, since they all try to run on the same port, but in the real world it’d work fine I’m sure.

As an aside, if you want any arbitrary service that doesn’t necessarily need incoming traffic, e.g. something subscribing to a service bus or writing to a DB, you don’t have to implement anything regarding ICommunicationListener. There’s simply a RunAsync() method inside which you can put any code you want.

So, there you have Suave in Service Fabric with a minimal amount of code. For this you’ll get an auto-load balancing, scalable and automatically healing service. In my next post, I’ll demonstrate StatefulServices and how we can use them to automatically manage state across a cluster of services.

Building Azure Service Fabric Actors with F# – Part 2


In Part 1, I provided an overview of what Service Fabric (SF) is, and provided some step-by-step guidance on how to get up and running with the Service Fabric local installation. In this post, I want to move from the infrastructure to the code, and show how we can use F# with an Actor model designed primarily for C# and VB .NET, whilst still retaining an idiomatic F# feel where possible.

All code for the full sample used as the basis for this series is available here.

Actors in Service Fabric

Firstly, I’ll show you some elided examples of how we modelled some features of my cat as an Actor in Service Fabric. Every cat has some state which is affected by actions it does, and needs to be persisted across calls. In Service Fabric, we call this a “stateful” actor. After every “state-updating” action (in SF terms, this equates to a method call on the actor), SF will automatically persist your state back to disk and automatically replicate to other nodes in the SF cluster (typically at least two others); if your primary node goes down, one of the secondaries will immediately take over and the failed node will be silently replaced in the background. You can also have so-called “read only” actions, which do not modify state but typically return some payload to the caller. You can typically think of these as “getter” methods / properties on a class. You’ll normally have a mix of both state-mutating and read-only methods on a given actor.

Implementing Stateful Actors in F#

Every stateful Actor in SF inherits from the type Actor<T>, where T is the state that needs to be persisted; the state shows up as a member property on the actor, State. Service Fabric will automatically create one of these when starting each actor, and silently persist / load it across calls.

We’ll start by modelling the state on the Actor with a standard OO class in F# – see below. Notice the DataContract and DataMember attributes – these are used by the persistence layer of SF to de/re-hydrate state on an Actor. Personally I’m not particularly fond of these attributes – there are plenty of serialization frameworks out there that seem to work just fine without decorating every single property, so why are we stuck with this old-school approach? Perhaps there’s a way to replace the serialization in SF – I haven’t tried yet.
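
The class is shown in a gist in the original post; a sketch of its shape, where the property names and values are my assumptions: –

open System.Runtime.Serialization

// Mutable, attribute-decorated state for SF's persistence layer.
[<DataContract>]
type CatState() =
    [<DataMember>] member val Hunger = 0 with get, set
    [<DataMember>] member val Happiness = 0 with get, set
    [<DataMember>] member val OwnerHappiness = 0 with get, set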

Anyway, here’s an example method on Cat, called Jump(). It takes in a destination of where the cat is jumping to and, depending on the destination, this affects the cat’s – and the owner’s – Happiness (in a more fully featured model, the owner themselves would probably be an actor with their own state). The cat will also work up an appetite by Jumping(). Hunger can be alleviated by Feeding() the cat.
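
Again, the original post embeds this as a gist; the sketch below assumes the interface, destination type and numbers (remoting serialization details are elided): –

open System.Threading.Tasks
open Microsoft.ServiceFabric.Actors

type Destination = Lap | KitchenWorktop

type ICat =
    inherit IActor
    abstract Jump : destination:Destination -> Task

type Cat() =
    inherit Actor<CatState>()
    interface ICat with
        member this.Jump destination =
            // state is mutated in several places, depending on the destination
            this.State.Hunger <- this.State.Hunger + 5
            match destination with
            | Lap ->
                this.State.Happiness <- this.State.Happiness + 10
            | KitchenWorktop ->
                this.State.Happiness <- this.State.Happiness + 15
                this.State.OwnerHappiness <- this.State.OwnerHappiness - 10
            Task.FromResult true :> Task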

On the one hand, F# works nicely with interfaces – we still don’t have to specify types, as they are inferred from the interface we’re implementing. However, this sample is still somewhat unsatisfactory to me as an F#-first person: I’m used to creating copies of data from other data, not mutating it. I also don’t like this approach of modifying state in several places arbitrarily – I feel uneasy when seeing code like this. It seems very statement oriented, with side effects everywhere – something I struggle to reason about easily. There must be something better!

Use immutable data structures on Actors

As it turns out, there is. Notice that up until now we’ve basically written everything in an OO style, using standard C#/ VB constructs like classes etc. – we’ve not used any F# types. We can actually use many F# features without too much fuss, and they can quickly help us out in our quest to getting back to sane and easy-to-reason-about code.

Firstly, we can change the way we model our state from a class to an F# record. This actually works without any problem once you apply the same WCF-style attribute decoration and add the [<CLIMutable>] attribute – this is necessary because, although records boil down to standard classes, by default there are no public setters on any of the properties, so SF can’t rehydrate state. We can also add in other F#-only features, like units of measure, if we want – as these are a compile-time-only feature, there’s no issue with serialization of them.
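
Here’s a sketch of the record version, again with assumed member names: –

open System.Runtime.Serialization

[<Measure>] type pts

// An immutable record; CLIMutable generates the public setters SF needs to rehydrate state.
[<DataContract; CLIMutable>]
type CatState =
    { [<DataMember>] Hunger : int<pts>
      [<DataMember>] Happiness : int<pts>
      [<DataMember>] OwnerHappiness : int<pts> }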

On their own, using records within SF only works up to a point – we’re forced to make copies of state, rather than mutating the single attributes of the State member multiple times, which is a good thing. However, it still looks undesirable – we’re now just mutating the State member property on the Actor instead! Plus it’s not clear when and where we should replace the contents of the State member within the method – every time? Once at the end of the method call? Something in between?

Adapting functional patterns into Actors

Let’s take a step back and think about the two types of methods I mentioned earlier on – state-updating and read-only calls. The former intends to do some processing and update the State of the actor. The latter typically reads from the State and returns some data to the caller (I’m setting aside things like calling external dependencies, which for simplicity’s sake we can ignore – plus it really doesn’t affect us here, as we would partially apply our functions with dependencies). We can formally specify such actions and implement them with something like this: –
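
The original embeds this as a gist; here’s a sketch, where the signature and module names are my assumptions: –

// The two shapes of actor call, expressed as function signatures.
type UpdateState<'TState, 'TArgs> = 'TArgs -> 'TState -> 'TState
type ReadState<'TState, 'TView> = 'TState -> 'TView

module CatLogic =
    // A single expression generates the new state from the old one - no mutation anywhere.
    let jump destination (state : CatState) =
        match destination with
        | Lap ->
            { state with
                Hunger = state.Hunger + 5<pts>
                Happiness = state.Happiness + 10<pts> }
        | KitchenWorktop ->
            { state with
                Hunger = state.Hunger + 5<pts>
                Happiness = state.Happiness + 15<pts>
                OwnerHappiness = state.OwnerHappiness - 10<pts> }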

Notice how our functions are now much simpler – Jump is made up of a single expression that generates the new State of the Actor, based on the input state and destination – we’re no longer mutating state multiple times, or even once. And because State is an immutable record, it’s impossible to ever modify the supplied input State.

Plugging pure functions into Actors

Now that we’ve formalised how we see our actor methods working, we can re-write our earlier code from the anything-goes, mutate-everywhere C# style to one that is easier to test, easier to reason about and more idiomatic from an FP, F# point of view. You’ll notice that the implementation code above is back in a module – so how do we plug this into our OO Actor model?

There are a few ways, but the easiest one is with the help of a couple of shim functions that tightly control the mutation of the Actor State, whilst delegating control to our purely functional code for business logic. Our core code is kept free from worrying about the mutation of state and is performed in a consistent manner; our SF Actor model simply delegates to them.
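
Here’s a sketch of those shims, assuming the preview SDK’s settable Actor<T>.State property: –

open System.Threading.Tasks
open Microsoft.ServiceFabric.Actors

module ActorAdapters =
    // Apply a pure update function and write the result back to State exactly once.
    let update (actor : Actor<'TState>) updateFn =
        actor.State <- updateFn actor.State
        Task.FromResult true :> Task

    // Apply a read-only projection over the current State.
    let read (actor : Actor<'TState>) readFn =
        Task.FromResult (readFn actor.State)

With these in place, the actor’s Jump method collapses to a one-liner along the lines of ActorAdapters.update this (CatLogic.jump destination).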

A word on Read-Only Service Fabric methods

Another point worth mentioning is Read Only methods in Service Fabric. These are methods through which you, as the developer, tell the SF runtime “I will never amend state in this method – don’t try to persist state at the end of the call”. This is achieved in SF simply by placing the [<Readonly>] attribute on the method. I don’t like this much, for two reasons. Firstly, the attribute differs from the System.ComponentModel [<ReadOnly>] attribute only in the casing of a single character. Use the wrong one accidentally and things will quickly go pop with your actor (believe me – I did it during the creation of the code referenced in this post; the error that you get isn’t helpful either). The other, more dangerous issue is that there is no compile-time safety around the use of the [<Readonly>] attribute. If you decide to start changing state in one of these calls – tough. You won’t get any support from the compiler, nor from the runtime. Your method simply won’t update state and you’ll be left wondering why your application isn’t behaving correctly.

With the “adapt to a functional style” approach, whilst we don’t eliminate the issue completely – you still have to decorate the methods appropriately – we at least get compile-time checking on read-only functions, because they don’t allow us to return state; you therefore can’t accidentally modify the state of an actor. In addition, because we’re now using records, which are themselves immutable, it’s impossible for us to modify the state that was supplied to us.

For a simple example like the one supplied, one could argue that the extra delegation and modules etc. complicate matters compared to e.g. C# / OO. However, once you start writing even mildly complicated business logic, it quickly becomes a tiny cost compared to the simplification you gain through immutability, records etc., as well as the usual other benefits of F#.

Taking it further

You can take this approach even further – in other actor frameworks, rather than adopting the “method-per-action” approach, a more functional approach is to have a single message which is itself a discriminated union containing all the different messages; we then pattern match on this in order to process the message appropriately. We can apply this sort of pattern for state-updating messages, although it isn’t exactly idiomatic SF actor code (I’ve supplied an example in the source code).

Another alternative might be to create a custom Computation Expression (perhaps similar to the Writer monad that Tomas Petricek blogged about many moons ago) in order to make this modification to state even more succinct. Perhaps someone could write one 😉

Conclusion

We’ve seen how we can marry up some features inherent to the F# type system in order to enforce a cleaner way of reasoning about the code that our actors have to implement, through a couple of simple function signatures and some simple adaptors. We’ve also seen how F#, and typical FP paradigms, can be used in a reliable and distributable framework designed for a mutable-first OO consumer.

In part three, I want to illustrate how we can quickly and easily host arbitrary services on top of Service Fabric in F# for just about any code you might want to write, and how we can easily scale it to large volume.

Building Azure Service Fabric Actors with F# – Part 1


This post is the first part of a brief overview of Service Fabric and how we can model Service Fabric Actors in F#. Part 1 will cover the details of how to get up and running in SF, whilst Part 2 will look at the challenges and solutions to modelling stateful actors in an OO-based framework within F#.

What is Service Fabric?

Service Fabric is a new service on Azure (in preview at the time of writing) which is designed to support reliable, scalable (at “hyper scale”) and maintainable distributed applications and services – with automatic support for things like replication of state across nodes, automatic failover & recovery and multi-tenanting services on the same instances. It currently supports both stateful and stateless micro-services and actor model architectures (more on this shortly). The good thing about Service Fabric (SF) from a risk/reward point of view is that it’s not a new technology – it actually underpins a lot of existing Azure services themselves, such as Azure SQL, DocDB and even Cortana, so when Microsoft says it’s a reliable and scalable technology, they’ve been using it for a while now with a lot of big services on Azure. The other nice thing is that whilst it’s still in private preview for running in Azure, you can get access to a locally running SF here. This isn’t an emulator like with Azure Storage – it’s apparently the “full” SF, just running locally. Nice.

Actors on Service Fabric

As mentioned, SF supports an Actor model in both stateful and stateless modes. It’s based on the Orleans codebase, although I was pleasantly surprised to see that there’s actually no C# code generation whatsoever in SF – the only auto-generated bits are some XML configuration files, which I suspect will be pretty much boilerplate for most people and rarely change.

Why would you want to try SF out? Well, simply put, it allows you to focus on the code you write, as opposed to the infrastructure side of things. You spin up an SF cluster (or run the local version), deploy your code to it, and off you go. This is right up my alley, as someone who likes to focus on creating solutions and sometimes has little patience for messing around with infrastructural challenges or difficulties that prevent me from doing what I’m best at.

Getting up and running with Service Fabric

I’ve been using Service Fabric for a little while now, and spent a couple of hours getting it up and running in F#. As it turns out, it’s not too much hassle aside from a few oddities, which I’ll outline here: –

  • Download and install VS2015. Community edition should be fine here. You’ll also need Windows 8 or above.
  • Download and install the SDK.
  • Create a new Service Fabric solution and a Stateful Actor service. This will give you four projects: –
    • A SF hosting project. This has no code in it, but essentially just the manifest for what services get deployed and how to host them.
    • An Actors project. This holds your actor classes and any associated code; it also serves as a bootstrapper that deploys the appropriate services into SF; as such, it’s actually an executable program which does this during Main(). It also holds a couple of XML configuration files that describe the name of the package and each of the services that will be hosted.
    • An Interfaces project. This holds your actor interfaces. I suspect that this project could just as easily be collapsed into the actors one, although I suppose for binary compatibility you might want to keep the two separate so you can update the implementations without redeploying the interfaces to clients.
    • A console test project. This just illustrates how to connect to the Service Fabric and create actors. In the F# world these projects serve zero purpose since we can just create a script file to interact with our code, so I deleted this immediately.
  • Convert to Paket (optional). If you use Paket rather than NuGet for dependency management, change over now. The convert-from-NuGet process works first time; you’ll end up with a simplified packages file of just a single dependency (Microsoft.ServiceFabric.Actors), plus you’ll get all the other benefits of Paket over NuGet.
  • Create F# project equivalents. The two core projects, the Actors and Interfaces projects, can simply be recreated as an F# Console App and Class Library respectively. The only trick is to copy across the PackageRoot configuration folder from the C# Actors project to the equivalent F# one. Once you’ve done this, you can essentially disregard the C# projects.
  • Configure the F# projects. I set both projects to 4.5.1 (as this is what the C# ones default to) – I briefly tried (and failed) to get them up and running in 4.5.2 or 4.6. Also, make sure that both projects target x64 rather than AnyCPU. This is more than just changing the target in the project settings – you must create a Configuration (via Configuration Manager) called x64!
  • Create an interface. This is pretty simple – each actor is represented by an interface that inherits from IActor (a marker interface). Make sure that all arguments in all methods have explicit names! If you don’t do this, your actors will crash on initialisation.
  • Create the implementation. Here’s an example of a Cat actor interface and implementation.

  • Update the Hosting project. Reference the implementation from the Hosting project and update the configuration appropriately.

Luckily, I’ve done all of this in a sample project available here.

Running your project

Once you’ve done all this, you can simply hit F5 (or Publish from the Host project) and watch as your code is launched into the Fabric via the UI.

[Screenshot: the application running in Service Fabric Explorer]

You can then also call into your actors via e.g. an F# script: –
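
The script is a gist in the original post; here’s a minimal sketch, with the application name and the ICat interface from Part 2 as assumptions: –

#r "Microsoft.ServiceFabric.Actors.dll"
open Microsoft.ServiceFabric.Actors

// Resolve (or create) an actor via its proxy and call it like any other object.
let cat = ActorProxy.Create<ICat>(ActorId "Tiddles", "fabric:/CatApplication")
cat.Jump(Lap).Wait()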

I’m looking forward to talking more about the coding side of this in my next post, where we can see how code that is inherently mutable doesn’t always fit idiomatically into F#, and how we can take advantage of F#’s ability to mix and match OO and FP styles to improve readability and understanding of our code without too much effort.