Introducing Type Providers


Note: This article has been excerpted from my upcoming book, Learn F#. Save 37% off with code fccabraham!

Welcome to the world of data! This article will:

  • Gently introduce us to what type providers are
  • Get us up to speed with the most popular type provider, FSharp.Data.

What Are Type Providers?

Type Providers are a language feature first introduced in F# 3.0:

An F# type provider is a component that provides types, properties, and methods for use in your program. Type providers are a significant part of F# 3.0 support for information-rich programming.

https://docs.microsoft.com/en-us/dotnet/articles/fsharp/tutorials/type-providers/index

At first glance, this sounds a bit fluffy – we already know what types, properties and methods are. And what does “information-rich programming” mean? Well, the short answer is to think of Type Providers as T4 Templates on steroids – that is, a form of code generation, but one that lives inside the F# compiler. Confused? Read on.

Understanding Type Providers

Let’s take a holistic view of type providers first, before diving in and working with one to see what all the fuss is about. You’re already familiar with the notion of a compiler that parses C# (or F#) code and builds MSIL from which we can run applications. And if you’ve ever used Entity Framework (particularly the earlier versions) – or old-school SOAP web services in Visual Studio – you’ll also be familiar with code generation tools such as T4 Templates: tools which can generate C# code from another language or data source.

figure1
Figure 1 – Entity Framework Database-First code generation process

Ultimately, T4 Templates and the like, whilst useful, are somewhat awkward to use. For example, you need to hook them into the build system to get them up and running, and they use a custom markup language with C# embedded in them – they’re not great to work with or distribute.

At their most basic, Type Providers are themselves just F# assemblies (that anyone can write) that can be plugged into the F# compiler – and can then be used at edit-time to generate entire type systems for you to work with as you type. In a sense, Type Providers serve a similar purpose to T4 Templates, except they are much more powerful, more extensible, more lightweight to use, and are extremely flexible – they can be used with what I call “live” data sources, as well as offering a gateway not just to data sources but also to other programming languages.

figure2
Figure 2 – A set of F# Type Providers with supported data sources

Unlike T4 Templates, Type Providers can affect type systems without re-building the project, since they run in the background as you write code. There are dozens, if not hundreds of Type Providers out there, from working with simple flat files such as CSV, to SQL, to cloud-based data storage repositories such as Microsoft Azure Storage or Amazon Web Services S3. The term “information-rich programming” refers to the concept of bringing disparate data sources into the F# programming language in an extensible way.

Don’t worry if that sounds a little confusing – we’ll take a look at our first Type Provider in just a second.

Quick Check

  1. What is a Type Provider?
  2. How do type providers differ from T4 templates?
  3. Is the number of type providers fixed?

Working with our first Type Provider

Let’s look at a simple example of a data access challenge – working with some soccer results, except rather than work with an in-memory dataset, we’ll work with a larger, external data source: a CSV file that you can download at https://raw.githubusercontent.com/isaacabraham/learnfsharp/master/data/FootballResults.csv. You need to answer the following question: which three teams won at home the most over the whole season?

Working with CSV files today

Let’s first think about the typical process that you might use to answer this question: –

figure3
Figure 3 – Steps to parse a CSV file in order to perform a calculation on it.

Before we can even begin to perform the calculation, we first need to understand the data. This normally means looking at the source CSV file in something like Excel, and then designing a C# type to “match” the data in the CSV. Then we do all of the usual boilerplate parsing – opening a handle to the file, skipping the header row, splitting on commas, pulling out the correct columns and parsing into the correct data types, etc. Only after doing all of that can you actually start to work with the data and produce something valuable. Most likely, you’ll use a console application to get the results, too. This process is more like typical software engineering – not a great fit when we want to explore some data quickly and easily.
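To make that boilerplate concrete, here’s a rough sketch of the manual approach (written as an F# script for brevity). The column positions and record shape are assumptions made purely for illustration – with the manual approach you’d first have to open the real file and check them yourself.

// Manual CSV parsing – all boilerplate, and brittle if the file layout changes.
open System.IO

type Result =
    { HomeTeam : string
      FullTimeHomeGoals : int
      FullTimeAwayGoals : int }

let results =
    File.ReadAllLines @"..\..\data\FootballResults.csv"
    |> Seq.skip 1                          // skip the header row
    |> Seq.map (fun line ->
        let cols = line.Split(',')         // naive split – breaks on quoted commas
        { HomeTeam = cols.[1]              // assumed column positions
          FullTimeHomeGoals = int cols.[2]
          FullTimeAwayGoals = int cols.[3] })
    |> Seq.toArray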

Introducing FSharp.Data

We could quite happily do the above in F#; at least using the REPL affords us a more “exploratory” style of development. However, it wouldn’t remove the boilerplate of parsing the file – and this is where our first Type Provider comes in – FSharp.Data.

FSharp.Data is an open source, freely distributable NuGet package which is designed to provide generated types when working with data in CSV, JSON, or XML formats. Let’s try it out with our CSV file.

Scripts for the win

At this point, I’m going to advise that you move away from heavyweight solutions and start to work exclusively with standalone scripts – this fits much better with what we’re going to be doing. You’ll notice a build.cmd file in the learnfsharp code repository (https://github.com/isaacabraham/learnfsharp). Run it – it uses Paket to download a number of NuGet packages into the packages folder, which you can reference directly from your scripts. This means we don’t need a project or solution to start coding – we can just create scripts and jump straight in. I’d recommend creating your scripts in the src/code-listings/ folder (or another folder at the same level, e.g. src/learning/) so that the package references shown in the listings here work without needing changes.

Now you try

  1. Create a new standalone script in Visual Studio using File -> New. You don’t need a solution here – remember that a script can work standalone.
  2. Save the newly created file into an appropriate location as described in “Scripts for the win”.
  3. Enter the following code from listing 1:
// Referencing the FSharp.Data assembly
#r @"..\..\packages\FSharp.Data\lib\net40\FSharp.Data.dll"
open FSharp.Data

// Connecting to the CSV file to provide types based on the supplied file
type Football = CsvProvider< @"..\..\data\FootballResults.csv">

// Loading in all data from the supplied CSV file
let data = Football.GetSample().Rows |> Seq.toArray

That’s it. You’ve now parsed the data, converted it into a type that you can consume from F# and loaded it into memory. Don’t believe me? Check this out: –

figure4
Figure 4 – Accessing a Provided Type from FSharp.Data

You now have full intellisense over the dataset. You don’t have to manually parse the data set – that’s been done for you. You also don’t need to “figure out” the types – the Type Provider will scan through the first few rows and infer the types based on the contents of the file! In effect, this means that rather than using a tool such as Excel to “understand” the data, you can now begin to use F# as a tool to both understand and explore your data.
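For example, the columns we’ll use for the chart later on are already available as typed properties on each row – a tiny sketch using the data value bound above:

// Each column has been given a sensible type by the provider – no casting needed.
let firstResult = data.[0]
printfn "%s scored %d at home, conceding %d"
    firstResult.``Home Team``
    firstResult.``Full Time Home Goals``
    firstResult.``Full Time Away Goals``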

Backtick members

You’ll see from the screenshot above, as well as from the code when you try it out yourself, that the fields listed have spaces in them! It turns out that this isn’t actually a type provider feature, but one that’s available throughout F# called backtick members. Just place double backticks (``) at the beginning and end of the member definition and you can put spaces, numbers or other characters in the member definition. Note that Visual Studio doesn’t correctly provide intellisense for these in all cases, e.g. let-bound members on modules, but it works fine on classes and records.
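Here’s a minimal sketch of backtick members outside of type providers – a hand-written record whose field names contain spaces (names invented for illustration):

// Backtick members are a general F# feature, not a type-provider trick.
type TeamRecord =
    { ``Team Name`` : string
      ``Games Won`` : int }

let example = { ``Team Name`` = "My Team"; ``Games Won`` = 10 }
printfn "%s won %d games" example.``Team Name`` example.``Games Won``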

Whilst we’re at it, we’ll also bring down an easy-to-use F#-friendly charting library, XPlot. This library gives us access to charts available in Google Charts as well as Plotly. We’ll use the Google Charts API here, which means adding dependencies to XPlot.GoogleCharts (which also brings down the Google.DataTable.Net.Wrapper package).
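As a rough guide, the extra script references might look something like the following – the exact paths are assumptions, so check the lib folder of each package under packages before copying them:

// Assumed paths – adjust to match the package versions Paket has restored.
#r @"..\..\packages\XPlot.GoogleCharts\lib\net45\XPlot.GoogleCharts.dll"
#r @"..\..\packages\Google.DataTable.Net.Wrapper\lib\Google.DataTable.Net.Wrapper.dll"
open XPlot.GoogleCharts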

  1. Add references to both the XPlot.GoogleCharts and Google.DataTable.Net.Wrapper assemblies, as sketched above. If you’re using standalone scripts, both packages will be in the packages folder after running build.cmd – just use #r to reference the assembly inside one of the lib/net folders.
  2. Open up the XPlot.GoogleCharts namespace.
  3. Execute the following code to calculate the results and plot them as a chart.
data
|> Seq.filter(fun row ->
    row.``Full Time Home Goals`` > row.``Full Time Away Goals``)
|> Seq.countBy(fun row -> row.``Home Team``)   // generates a sequence of tuples (team vs number of wins)
|> Seq.sortByDescending snd
|> Seq.take 10
|> Chart.Column                                // converts the sequence of tuples into an XPlot Column Chart
|> Chart.Show                                  // displays the chart in a browser window
figure5
Figure 5 – Visualising data sourced from the CSV Type Provider

We were able to open up a CSV file we’ve never seen before, explore its schema, perform some operations on it, and then chart it – all in less than 20 lines of code. Not bad! This ability to rapidly work with and explore datasets we’ve never seen, whilst still interacting with the full breadth of .NET libraries out there, gives F# unparalleled abilities for bringing disparate data sources into full-blown applications.

Type Erasure

The vast majority of type providers fall into the category of erasing type providers. The upshot of this is that the types generated by the provider exist only at compile time. At runtime, the types are erased and usually compile down to plain objects; if you try to use reflection over them, you won’t see the fields that you get in the code editor.

One of the downsides is that this makes them extremely difficult (if not impossible) to work with in C#. On the flip side, they are extremely efficient – you can use erasing type providers to create type systems with thousands of types without any runtime overhead, since at runtime they’re just of type Object.
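Here’s a quick way to see erasure in action with the football data from earlier – a hedged sketch, since the exact runtime type name you’ll see is an implementation detail of the provider; the point is that the provided columns aren’t there:

// Reflection sees only the erased runtime type, not the provided columns.
let firstRow = data.[0]
printfn "Runtime type: %s" (firstRow.GetType().FullName)

firstRow.GetType().GetProperties()
|> Array.map (fun p -> p.Name)
|> printfn "Properties visible to reflection: %A"   // ``Home Team`` and friends are absent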

Generative type providers allow for run-time reflection, but are much less commonly used (and from what I understand, much harder to develop).

If you want to know more, download the free first chapter of Learn F# and see this Slideshare presentation. Don’t forget to save 37% with code fccabraham.

Learn F# for the masses


Anyone who reads my blog will have observed that I’ve not posted anything for several months now. In addition to my moving country and trying to build a company in 2016, I’ve also been writing a book.

I’m delighted to share that Learn F# is now available on Manning’s MEAP program – hopefully the book will be content complete within the next couple of months.

The book is designed specifically for C# and VB .NET developers who are already familiar with Visual Studio and the .NET ecosystem to get up to speed with F#. Half the book focuses on core language features, whilst the second half looks at practical use cases of F# in areas such as web programming, data access and interoperability.

The book doesn’t focus on theoretical aspects of functional programming – so no discussion of monads or category theory here – but rather attempts to explain to the reader the core fundamentals of functional programming (at least, in my opinion) and apply them in a practical sense. In this sense I think that the book doesn’t overlap too much with many of the F# books out there – it doesn’t give you a hardcore understanding of the mathematical fundamentals of FP, and it relates many concepts to those that the reader will already be familiar with in C# – but it will give you the confidence to use, explore and learn more about F# alongside what you already know.

I’d like to think it will appeal to those developers that are already on the .NET platform and want to see how they can utilise and benefit from F# in their “day-to-day” without having to throw away everything they’ve learned so far. So you’ll see how to perform data access more easily without resorting to Entity Framework, how to handle errors in a saner manner, how to parse data files, and how to create web APIs, whilst using FP and F#-specific language features directly to solve those problems.

I’ll blog about my experiences of writing a book when it’s done – for now, I hope that this book is well received and a useful addition to the excellent learning materials already available in the F# world.

F#, .NET and the Open Source situation


If you read my (generally sporadic) blog postings, you’ll know that in general I write about either F# and / or Azure, usually from a programmatic point of view. How easy is it to reason about a certain thing? How quickly can we make use of some Azure service from F#? And so on. In this post, as part of the annual F# Advent calendar, I want to switch focus and instead discuss an increasingly important area of software development that I’ve been exposed to as part of my learning and adopting of F# – that of open source software development.

Disclaimer: There have been a number of debates on Twitter and GitHub regarding open source within the context of several Microsoft-led projects over the past few weeks. This post is not a response to those issues; this post was in fact authored in early November, well before any of these debates had been initiated.

The distinction between .NET communities

As someone who has been a .NET developer almost exclusively throughout my career (aside from a brief stint with C++ and the obligatory various web and database languages that you get dragged into), one of the elements of working in F# that you can’t really ignore is the community involvement that permeates just about everything that happens in and around F# – from the development of the language, to key packages, to community meetups and user groups, through to the tooling that is used on both “classic” .NET and Mono / CoreCLR stacks. Before I started working with F#, I had very little experience of open source development or tools. Thankfully, I’ve found the community to be extremely open, friendly, and always happy to help out others, as well as to continually improve the state of software development in F#. If there’s a lack of tooling available for a specific purpose, or a tool that doesn’t quite do what’s needed, the mentality in the F# community is to improve the existing tooling where possible, and where that’s not possible, simply write a new tool from scratch.

This is a very different mentality from the rest of the .NET community, where we are by and large reliant on what Microsoft serves up in terms of tooling, libraries and frameworks. This is not necessarily a failing of either the community or Microsoft – it’s just the way it is, and it’s something that has been fostered over the years by both parties. This model is slowly changing, and there are some popular OSS projects out there – JSON.NET being an example of something that is owned by the open source community and that Microsoft themselves now use. Other examples of open source software within the .NET community include EventStore and Nancy. However, the average .NET developer is generally content to accept the pace of change, and direction, that Microsoft sets. This has historically not necessarily been a bad thing, but with the increasing rate of change of programming languages, frameworks and tools, it’s hard to see that remaining a successful model.

Misunderstanding the F# community

I think that these two different mindsets have (somewhat unfairly) earned the F# community a reputation of being somewhat disruptive and of not “going with the flow”. It’s true that some F# projects (notably type providers) are not compatible with C# and VB .NET. Yet it’s often forgotten (or not even known) that many open source projects that happen to have been written in F# work perfectly well in C# and VB (and often are explicitly designed with that in mind). Some examples include: –

  • Paket – a dependency manager designed to work well with NuGet packages and GitHub repositories
  • Fake – a build automation system using a DSL built in F#
  • FsCheck – a property-based testing framework based on Haskell’s QuickCheck
  • Project Scaffold – a build template with everything needed for successful organisation of code, tools and publishing.

It’s sometimes disappointing to hear well-known people throughout the .NET and Microsoft community misconstrue projects like these as somehow being of use only to F# users.

At the same time, there is an element of smugness and / or superiority that sometimes permeates from some corners of the F# community, which I think isn’t all that helpful. To an extent, though, rather than being an essentially negative trait, this is caused by the enthusiasm of the community and its desire to improve the state of software development within .NET, and perhaps also by the frustration caused by some of the misunderstandings and FUD about F# in the public domain – e.g. it’s only for maths people, it’s too hard to learn, it’s of no use in line-of-business applications, etc.

Defining open source software

I’d like to discuss a little about open source software now. Frankly, I would not consider myself an expert in defining what open source really is, but having now spent a few years in the F# community contributing to a few different projects, I certainly have a better idea than I used to, and feel qualified to at least put down some opinions. I’m taking a leap of faith here that my experience of F# open source projects reflects positively on open source as a whole, because despite any criticisms that can be levelled at the F# community and ecosystem, it really has a fantastic approach to open source, collaborative software development that can be learned from. Here are some points of interest – and misconceptions – that I’ve observed from looking at a number of projects across the .NET community, both inside and outside of the F# community.

GitHub

GitHub is a great site, and has become the de facto repository site for most F# open source projects, as well as for Microsoft’s own attempts to move into the open source world. However, creating a code repository on GitHub (or moving an existing code repository from CodePlex to GitHub) does not instantly make a project “open source”. I’ll say that again, because it’s worth making a point of this: just the act of putting some code on GitHub does not mean it is an open source project. If that were all it took, making a project open source would be trivial. However, I’ve learned that working on truly open source software is much more than simply making the repository public for everyone to look at.

Cross Platform .NET

I also see “open source” being used as a catch-all term for software developed for Mac and Linux (particularly from some Microsoft quarters). This is a mistake. It suggests that software that is developed on (or for) Windows doesn’t need to be open source, or is somehow different in terms of how it is developed. It’s great that Microsoft are moving to a more open model, and adopting CoreCLR for running .NET code cross-platform – but, again, just making software run on Mac and / or Linux does not immediately elevate a project to being successfully “open source”.

Sharing is Caring

But the biggest challenge I see in creating a successful open source project is adopting a truly collaborative, pro-active approach to software development. This means being genuinely interested in external feedback on your project – even if it’s not what you want to hear. It means encouraging people to submit pull requests. It might also mean giving up complete ownership of the project where you don’t agree with every feature proposed by the community, or where you don’t feel comfortable with strangers submitting PRs written in a different coding style to the one you’re used to. It also means getting feedback at as early a stage as possible – ideally at the planning stage of any feature – and accepting help from the community in shaping the design and implementation of that feature. This is not the same as working on a project for several months, releasing it to the public on GitHub, and then getting feedback on it retrospectively.

It’s only through adopting a really open, collaborative approach to software that you can have a cycle of < 48 hours from someone submitting a feature request, to several people giving feedback on the feature, to someone implementing it.

Microsoft, .NET and Open Source

Obviously, Microsoft have taken a big step in the last year or so to try to adopt “open source” development processes. This is to be applauded. But they’re not there yet, and there is some evidence which leads me to believe that it will take them a while to truly get open source – but they are trying. Some of the teams seem to have got the idea of getting feedback early, e.g. the C# compiler team. Ironically, a mature programming language like C# is one type of project that probably needs more control and direction than, say, a library, where the barrier to entry is much lower. Conversely, there are also a number of teams at Microsoft developing frameworks and libraries that are ostensibly open source, yet I’m really not seeing much in the way of a collaborative mentality beyond the superficial steps of putting a repository onto GitHub and asking for feedback after a period of time spent developing in isolation. Releases and features are still managed with a reactive rather than proactive mentality. PRs from non-Microsoft team members are few and far between. Direction in many of these projects is controlled almost entirely by Microsoft. In short, the general open source community isn’t yet empowered to contribute to these projects in the way that I would like to see.

Conclusion

An important element of the distinction between the F# community and the rest of .NET is likely due to the differences in Microsoft’s investment in the F# and C#/VB.NET stacks – the fact that F# has been forced to stand on its own feet has helped it mature extremely quickly. In addition, it should be mentioned that there have been a number of key individuals within the F# community who have helped both grow and nurture the community – I’m not sure how F# would be looking today without them. Ultimately, what we do have today with F# is a growing global community, and an ecosystem of tools and libraries which takes the best of existing .NET packages and combines them with others that harness the power of F# as well. It’s a community which includes the Microsoft Visual F# team as a valued member, rather than an all-controlling entity, and a community which – whilst still growing and evolving – is open, confident and in control of its own destiny.

Thanks to the community for showing me all of this over the past few years, and for allowing me to contribute towards a fun, vibrant and growing ecosystem. If you want to learn about open source projects, and about F# in general, you could do a lot worse than look at some of the up-for-grabs issues of the projects on http://fsprojects.github.io/.

What on earth has happened to NuGet?


After several months away from NuGet, I had to use it again recently in VS2015. I’m completely and utterly gobsmacked at how poor the current experience is. It’s confusing, inconsistent and hard to use. Worse than that, it enables workflows that should never, ever be permitted within a package management system.

First experiences with the NuGet dialog

When I first saw the previews of VS2015, I thought that the non-modal, integrated dialog into VS would be a good thing. Unfortunately, it suffers from not making it clear enough what the workflow is. You’re bombarded with dropdowns that sometimes have only one option in them, search boxes in non-intuitive positions, installation options that you don’t want to see etc.

Let’s start by adding a package to a solution. This is a common task, so it should be as easy as possible. Here I’m adding my old friend Unity Automapper to two projects in a solution (note that I’ve deliberately downgraded to an earlier version of the Automapper): –

nuget1
Why so many options? Why a dropdown for Action that only has one item in it? What does the “Show All” checkbox do – looks like nothing to me at this point. Are there projects hidden from the list? Why?

Working with packages

Next let’s try to do something to the package, e.g. uninstall or update it. You go to Manage Packages for Solution. You don’t see the packages that you’ve already installed – instead, you see all available packages, ordered by (I assume) most popular download. My installed package is nowhere to be seen. How do I find installed packages only? Oh. You have to click “Filter” and then select “Installed”. That’s not what I thought of doing when I selected the menu option. I thought it would just show them to me – it took me a good few seconds to realise what had happened!

Also notice that the default dependency behaviour is “lowest” – I don’t like this. Highest should be the default, or maybe Highest Minor. Nine times out of ten you’ll just have to upgrade them all immediately afterwards anyway. Which brings me nicely onto the subject of upgrading NuGet packages…

Upgrading dependencies

Now that I’ve found my package, I want to upgrade it. I choose to manage packages for the solution, and change the action from Uninstall to Update. It picks the latest version available (good).

nuget2
But then notice in the screenshot above that it allows me to uncheck some of the projects within the solution. What?? Why would you want to explicitly upgrade only some of the projects within the solution? I can maybe think of some corner cases, but generally this is a definite no-no – a recipe for absolute disaster. At best you’ll muddle on with some binding redirects; at worst you’ll get a runtime error at some indeterminate time in the future when you try to call a method that doesn’t exist. What happens if you have a breaking change between the two versions? Which version will get deployed? Can you be sure? Don’t do this. Ever. I can’t even understand why NuGet offers the user the ability to do this.

Even worse, if you choose to update a NuGet dependency at the project level (by e.g. right clicking the references node in VS), VS will upgrade it on that project alone. The other projects that have that dependency will be left untouched. You’ll end up with different versions of a dependency in a solution without you even realising it.

But let’s see what happens if we decide to live dangerously and go ahead anyway. Firstly, NuGet will happily upgrade half our solution and leave the other half in an old state – no warnings of impending doom, nothing. Your code might work, it might not. Maybe it’ll work fine for some weeks, and then you’ll realise that the upgrade had a breaking change – but you only notice this when project A tries to call a method that no longer exists.

The next time you enter the NuGet panel to work with your dependencies – e.g. to uninstall the dependency – you’ll see the following: –

Nuget3
Why is only one of my two projects showing in the list of projects that I want to uninstall from? Because it’s filtering on that specific version of the dependency (1.1.0). I have to choose the version that I want to uninstall. I don’t want that – I just want to uninstall the entire dependency. But no, we have to do it version by version. If we now change the Action to “Update”, the Version dropdown suddenly has a different meaning. It no longer acts as a filter – it now states what version we’ll upgrade to. Instead, you now filter based on the projects checked in the list below. So the meaning of each user control changes based on the action you’re performing. This is not good UI design.

UI Thoughts

In fact, the UI as a whole is really, really confusing. There are also other issues – it’s slow. You can’t upgrade all projects through the UI yet, so either you have to fall back to the Package Manager console, or upgrade each package one by one. Of course, after you do any single update, the UI reverts back to showing “All” packages which involves loading the top results from NuGet. This takes a few seconds. Then you need to change the dropdown to show “Installed” packages, and start all over again.

Conclusion

I’m really worried having seen the new NuGet now. It’s been in VS2015 for a while – and developers are putting up with this? It’s slow. It’s not a well designed UI – even I can tell that. You don’t feel like you know what is actually going to get deployed. The workflows don’t really work. It’s not good.

And it’s not just the UI that concerns me with NuGet. It’s also the fundamental structure underneath it, which is unchanged and needs to be fixed. NuGet should manage dependencies across an entire solution by default and ensure that dependencies across projects are stable. Instead, we have individual packages.config files in each project, each with their own versions of each dependency. This is not what you want! As illustrated above, it’s very possible – and in fact quite likely on a larger project – that you’ll quickly end up with dependencies across projects with different versions. I’ve seen it lots of times. You run the risk of runtime bugs or crashes – and worse still, ones that might only be identified when you go down a specific code path in your app, probably the day after it goes live.

I was hoping that NuGet in VS2015 would start to address these issues, but unfortunately it seems like a large step backwards at the moment.

On Type Inference


This is a comment I recently saw on a headline-grabbing article about Swift: –

I also don’t think that “type inferring” is of great use. If you cannot be bothered to hack in a variable’s data type, maybe you should not develop software in the first place.

I was so infuriated by this comment that I ended up writing this blog post. What infuriated me was the arrogance of the second sentence as much as the ignorance of why type inference can be such a massive productivity gain.

Here are three versions of the same sentence written in three different ways – slightly contrived as usual to make my point…

  • Hello, I’m Java. I have a bag of apples. In this bag of apples there are five apples.
  • Hello, I’m C# 3. I have a bag of apples that has five apples in it.
  • Hello, I’m F#. I have a bag of five apples.

These statements don’t actually translate directly to code, but you get the idea. Why would you need to write out the above sentence in either of the first two ways? Are you learning the language from a school textbook? Or using it in a professional & fluent manner?

C# can declare locally-scoped implicit variables and create implicit arrays. F# can do full Hindley-Milner type inference. Java can’t do much of anything, really. Here are the above sentences sort of written as code (the final example uses the same C#-style syntax to keep things consistent): –
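A rough stand-in for those snippets (names invented; here the final example is written as genuine F# rather than C#-style syntax):

type Apple() = class end

// Java:  ArrayList<Apple> bag = new ArrayList<Apple>();   // "a bag of apples... of apples"
// C# 3:  var bag = new List<Apple>();                     // "a bag of apples"
// F# – the element type of the bag is inferred entirely from its contents:
let bag = List.init 5 (fun _ -> Apple())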

The latter example is more in the F# vein, without braces, or keywords whose usage can be inferred.

The real power of type inference comes from the ability to chain multiple functions together in a succinct manner, and then at some point in the future changing the type of one of those functions and not having to change any type signatures of dependent functions. It’s quite addictive, and once you get used to it it’s quite difficult to go back to an explicitly-typed language.
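For example (a small sketch with invented types): none of the functions below declares a signature, so if the shape of Order changes, only the code that actually touches the changed field needs editing – everything downstream is re-inferred for free.

type Order = { Customer : string; Amount : decimal }

let bigOrders orders = orders |> List.filter (fun o -> o.Amount > 100M)
let customersOf orders = orders |> List.map (fun o -> o.Customer)
let topCustomers orders = orders |> bigOrders |> customersOf |> List.distinct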

Last minute update: It turns out that the individual whom I quoted at the start of this post had confused type inference with dynamic typing. I suspect this won’t be the first or last person to make that mistake.

Pattern Matching in C#?


As the C# 6 previews have come out, I was not surprised to see pattern matching absent from the feature set. It’s a shame, but I can understand why it’s not included – it’s far more powerful than switch / case, but with both in the language it’d probably be difficult to make the two work together without becoming a bit of a mish-mash.

Existing Pattern Matching in C#

One of the things that C# devs often say to me is that they can’t see what pattern matching really gives over switch / case – the answer is the ability not only to branch on conditions but also to implicitly bind the result to a variable at the same time. I then realised that C# actually does already support pattern matching for one specific use case: exception handling. Look at the two versions of a simple try / catch block: –

In the first example, notice how the compiler will automatically set the next line of execution appropriately depending on which exception was raised AND will automatically bind the exception (by this I mean cast and assign it) to the ex variable as appropriate. “Yeah yeah yeah”, you might say – this is basic exception handling. But imagine that we didn’t have this sort of program flow and had to do it all with just if / then statements, as per the second sample – it’s a bit of a mess, with typecasts and whatnot.
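For comparison, F# makes that same branch-and-bind behaviour explicit with type-test patterns in try / with – a hedged sketch (function name and exception choices invented for illustration) rather than a translation of the listings above:

open System
open System.IO

let readConfig path =
    try
        File.ReadAllText path
    with
    | :? FileNotFoundException as ex -> sprintf "Missing file: %s" ex.FileName      // branch AND bind
    | :? UnauthorizedAccessException as ex -> sprintf "No access: %s" ex.Message
    | ex -> sprintf "Something else went wrong: %s" ex.Message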

Pattern Matching over collections

Now, imagine that we wanted to do some conditional logic over an array of numbers: –

  • If the array is 2 elements long and starts with “7”, do something
  • Otherwise, if it’s 2 elements long, do something else
  • Otherwise, do something else

Look at this with conventional if / then statements or with pattern matching (obviously with code that wouldn’t compile in C# but is somewhat similar to what can be achieved in F#): –

This illustrates how you can use pattern matching as a means of drastically simplifying code – even for a simple example like the one above. In F# it’s more effective because of the lightweight syntax, but even here you can see where we’re going.
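For reference, here’s what those three rules look like in genuine F# – a minimal sketch (function and parameter names invented):

// Branch on the shape of the array and bind its contents in a single step.
let describe numbers =
    match numbers with
    | [| 7; other |] -> sprintf "Two elements, starting with 7 and then %d" other
    | [| _; _ |] -> "Two elements long"
    | _ -> "Something else"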

Summary

This sort of pattern matching is intrinsic to F#, where you can match over lists, arrays, types and so on very easily, and it’s another example of how branching and binding can simplify your code.

Trying F# – are you a developer or a mouse?


Since starting to deliver my “Power of F#” talk to user groups and companies (generally well received – I hope), and getting involved in a few Twitter debates on F#, I’ve noticed a few common themes regarding why .NET teams aren’t considering trying out F# to add to their development stack. Part of this is the usual spiel of misinformation about what F# is and is not (“it’s not a general purpose language”), but another part of it comes from a conservatism that really surprised me. That is, either: –

  • There’s no / limited Resharper / CodeRush support for F#. Ergo, I won’t be able to develop “effectively” in it
  • If I start learning it, no-one else in my team will know what I’m doing and so we can’t start using it

Now allow me to attempt to debunk these two statements.

Resharper (or CodeRush) support is a non-issue for me. Personally, I use CodeRush over Resharper, but let’s be honest about what both of these are: tools to speed up development time. Some of the issues that they solve aren’t as big a deal in F# as in C#. Perhaps the one I miss the most is rename symbol, but others like Extract Method aren’t as big a problem in F# due to the extremely lightweight syntax, type inference and the fact that it’s an expression-oriented language. So it’d be nice to have some refactoring support in the tooling, for sure – but it should absolutely not be a barrier to entry.

The “team training” issue is a more serious one to my mind, primarily because it’s about the individual’s perception of learning a new language rather than some arbitrary piece of tooling. Trust me when I say you can be productive in F# in just a few days if you’re coming from C# or VB .NET – particularly if you’ve used LINQ and have a reasonable understanding of it (and its constituent parts in C# 3). Cast your mind back to when people started adopting .NET from, say, VB6. Was there no training issue then? Or from C++? Learning F# is far easier – the whole framework is the same as in C# – it’s just the plumbing that orchestrates your calls to the framework that looks a little different.

There are certainly a number of features in F# which don’t really have direct equivalents in C#, and to get the most out of the language you’ll need to do a bit of learning to understand new concepts – just like, say, moving from C# 2 to C# 3. I would say that F# is in a sense a higher-level superset of C# – you can do pretty much everything you would in C#, albeit in a different, terser syntax, plus a whole load of other bits and pieces which you can opt into as you get more comfortable with the language.

 

As developers, we need to remain open-minded about the art of programming. It’s what allows us to use things like LINQ rather than simple foreach loops. It’s also what allows us to say “yes, using whitespace to indicate scope instead of curly braces isn’t necessarily a bad thing”, or “yes, the keywords ‘let’ and ‘do’ aren’t inherently evil in a programming language”. Keep an open mind and try something new – you might actually be pleasantly surprised!