Learn F# for the masses


Anyone who reads my blog will have observed that I’ve not posted anything for several months now. In addition to moving country and trying to build a company in 2016, I’ve also been writing a book.

I’m delighted to share that Learn F# is now available on Manning’s MEAP program – hopefully the book will be content complete within the next couple of months.

The book is designed specifically to get C# and VB .NET developers who are already familiar with Visual Studio and the .NET ecosystem up to speed with F#. Half the book focuses on core language features, whilst the second half looks at practical use cases of F# in areas such as web programming, data access and interoperability.

The book doesn’t focus on the theoretical aspects of functional programming – so no discussion of monads or category theory here – but rather attempts to explain the core fundamentals of functional programming (at least, in my opinion) and apply them in a practical sense. For that reason I don’t think the book overlaps too much with many of the F# books already out there – it won’t give you a hardcore understanding of the mathematical fundamentals of FP, and it relates many concepts to ones the reader will already be familiar with from C# – but it will give you the confidence to use, explore and learn more about F# alongside what you already know.

I’d like to think it will appeal to those developers who are already on the .NET platform and want to see how they can utilise and benefit from F# in their day-to-day work without having to throw away everything they’ve learned so far. So you’ll see how to perform data access more easily without resorting to Entity Framework, how to handle errors in a saner manner, how to parse data files, and how to create web APIs – all whilst using FP and F#-specific language features to solve those problems.

I’ll blog about my experiences of writing a book when it’s done – for now, I hope that this book is well received and a useful addition to the excellent learning materials already available in the F# world.

Visual Studio Team Services and FAKE


What is VSTS?

Visual Studio Team Services (VSTS) is Microsoft’s cloud-based source control / CI build / work item tracking system (with a nice visual task board). It’s a platform that is evolving relatively quickly, with lots of new features being added all the time. It also comes with a number of plans including a free plan which entitles you to an unlimited number of private repositories with up to 5 users (and MSDN users do not count), plus a fixed number of hours for centralized builds.

The catch is that there’s no way (at least, none that I could find!) to host a completely public repository with unlimited users – obviously a big problem if you want to run an open source project with lots of contributors. But for a private team, it might be a good model for you. You can also use the CI build facilities of VSTS with GitHub repositories – so in this sense you can treat it as a competitor to something like AppVeyor.

Contrary to common opinion, VSTS is completely compatible with Git as a source control repository. Yes, you can opt to use TFS as a source control model, but (in my opinion) you’d have to be crazy to do this or have a team that are really used to the TFS way of working – I find Git to be a much, much more effective source control system.

Why FAKE?

I wanted to try and see whether it was possible to get VSTS working with FAKE.

One of the best things about FAKE – in addition to the ease of use, flexibility and power you get by creating build tasks directly within F# (and therefore with the full .NET framework behind them) – is that because you are not dependent on hosting a bespoke build server with custom tasks, such as TeamCity, it’s extremely rare (hopefully never) that a build runs locally but fails to run on the build server.

Rather than relying on e.g. TeamCity to orchestrate your build, you delegate the entire set of CI build steps – MSBuild, unit tests, configuration file rewriting and so on – to FAKE. So if the build fails, you don’t have to log into the TeamCity box and trawl through log files – your FAKE script does all the heavy lifting, and you can run the exact same steps locally.

Putting it all together

So my goal was to write a simple FAKE build script which pulled down any dependencies, performed a build and ran unit tests – all integrated within VSTS. As it turns out, it wasn’t very difficult at all.

Firstly, we hook up the build to source control. In our case, it’s the Git repository of the Team Project, so it works straight out of the box, but you can point to another Git repository, e.g. on GitHub, as well. You can also select multiple branches. We then set a trigger to occur on each commit.

(Screenshot: the build’s source control and trigger settings.)

Secondly, we have to set up the actual build steps. As we’re delegating to FAKE to perform the whole build + tests, we want to use as few “custom” VSTS tasks as possible. In fact, we actually only need two steps.

  1. Some way to download Paket or NuGet, and then kick off the FAKE build.
  2. Some way of tying the results of the xUnit tests that we run in FAKE into the VSTS test reports.

Unlike old-school TFS etc., VSTS now has an extensible and rich set of build tasks that you can chain together – no need for Workflow Foundation etc. at all here: –

(Screenshot: the list of available VSTS build tasks, including the Batch Script task.)

Notice the “Batch Script” task above – perfect for our needs, as we can use it to perform our first build task to download Paket and then start FAKE.

We can now see what the FAKE script does – it’s probably nothing more than what you would normally do with FAKE anyway: clean the file system, perform a build and then run the unit tests: –
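
A minimal sketch of that kind of script might look like the following – the paths, project globs and target names here are illustrative assumptions rather than the exact script from the repository: –

    // build.fsx – a minimal sketch of a Clean / Build / Test pipeline.
    #r "packages/FAKE/tools/FakeLib.dll"
    open Fake
    open Fake.Testing

    let buildDir = "./build/"

    // Wipe the output directory so every build starts from scratch.
    Target "Clean" (fun _ -> CleanDir buildDir)

    // Compile every project in the solution into the output directory.
    Target "Build" (fun _ ->
        !! "src/**/*.fsproj"
        |> MSBuildRelease buildDir "Build"
        |> Log "Build output: ")

    // Run the xUnit tests, writing the results to an XML file that the
    // VSTS "Publish Test Results" task can pick up afterwards.
    Target "Tests" (fun _ ->
        !! (buildDir + "*Tests*.dll")
        |> xUnit2 (fun p -> { p with XmlOutputPath = Some (buildDir + "TestResults.xml") }))

    "Clean" ==> "Build" ==> "Tests"
    RunTargetOrDefault "Tests"

Because this is just an F# script, running it locally through FAKE.exe executes exactly the same steps as the VSTS build does.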

Notice that when we run the unit tests, we also emit the results as an XML file. This is where the second VSTS build task (Publish Test Results) comes in – it parses the XML results and ties them into VSTS’ build report.

(Screenshot: the Publish Test Results build task configuration.)

So when we next perform a commit, we’ll see a build report that looks something like this: –

(Screenshot: the resulting VSTS build report.)

Notice that the chart on the right shows that I’ve run two unit tests that were successful – this is the second build task parsing the xUnit output. Of course, we can also drill into the different stages if needed to see the output: –

(Screenshot: the detailed output of the individual build steps.)

Conclusion

This post isn’t so much about either VSTS or FAKE features per se as it is about illustrating that both are flexible enough for us to plug the two together. What’s great about this approach is that we’re not locked into VSTS as a build system – we’re just using FAKE and running it centrally – but if we are using VSTS, we also benefit from the integration it offers with e.g. Visual Studio and the build system – creating work items, associating commits with work items, viewing them from VS and so on – whilst still using FAKE for our build.

Lightweight websites with F#


There are several common approaches I’ve seen people take on the .NET platform when writing web-based applications, which I want to review in terms of language and framework choice: –

  • Adopt a conventional MVC application approach. Write HTML that is emitted from the server using e.g. Razor markup + C# / VB .NET, and write your controllers and any back-end logic in C#.
  • As above, but replace your back-end logic with F#. This is a reasonable first step to take, because essentially all your data access and “back-end” processing is performed in the language best suited to it, whilst your C# is relegated to thin controllers and some simple markup logic.
  • Adopt a “SPA”-style approach. By this I mean split your web application into two distinct applications – a self-managing client-side application, typically using Javascript and a framework like Knockout or AngularJS, and a back-end Web API written in F#.
  • Write the entire application in F#. Surely you can’t write websites in F#, can you? Well, actually, there are some (pretty sophisticated) frameworks like WebSharper out there that can do exactly that, compiling your F# into Javascript or Typescript.

I haven’t used WebSharper in depth, so I can’t comment on the effectiveness of writing your client-side code in F# and am not going to talk about the last option today. But I have written Web APIs in F#, and I want to talk about where I think the separation of concerns should lie between client and server side code.

As far as I’m concerned, if you’re a .NET developer writing websites today, then you should be writing as much of the CLR-side code as possible in F#. I am really pleased with the brevity you get from the combination of OWIN, Katana (Microsoft’s OWIN web-hosting framework), Web API and F#. This combination allows you to create Web APIs simply and easily, and when combined with a SPA client-side website it is a compelling architectural offering.

Sudoku in F#

Some months ago, I wrote a Sudoku solver in F# (I think that there’s a gist somewhere with the implementation). I wanted to try to write a website on top of it with a visual board that allowed you to quickly enter a puzzle and get the solution back. So, having borrowed some HTML and CSS from an existing website, I set about doing it. You can see the finished site here and the source code is here.

(Screenshot: the finished Sudoku site.)

Client

  • HTML
  • AngularJS
  • Typescript (no native Javascript please!)

Server

  • F#
  • F#
  • F#

Standard JSON is used to pass data between website and server. On the server side, we use OWIN, Katana and Web API to handle the web “stuff”. This then ties into the real processing with the minimum of effort. This was all done in a single solution and a single F# project.

OWIN with F#

I’m no Angular or Typescript expert, so I’m not going to focus on them – suffice it to say that Typescript is a massive leap over standard Javascript whilst retaining backwards compatibility, and AngularJS is a decent MVC framework that runs in Javascript. What I’m more interested in talking about is how to host and run the entire site through a single F# project. Mark Seemann‘s excellent blog has already discussed creating ASP .NET websites through F#, and there are indeed some templates that you can download for Visual Studio that enable this. However, they still use ASP .NET and all the code-bloat that comes with it. Conversely, using OWIN and Katana, this all goes away. What I like about OWIN is that there’s no code generation, no uber folder hierarchies or anything like that; you have full control over the request / response pipeline, plus you get the flexibility to change hosting mechanisms extremely easily. To start up, all we need to do is download a (fair) few NuGet packages and then create a Startup class with a Configuration method: –
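
As a sketch – assuming the standard OWIN / Web API packages (Microsoft.AspNet.WebApi.Owin and friends), with an illustrative namespace and route template – the Startup class looks something like this: –

    namespace SudokuApp

    open Owin
    open System.Web.Http

    type Startup() =
        member this.Configuration (app : IAppBuilder) =
            let config = new HttpConfiguration()
            // Map requests of the form api/{controller} onto Web API controllers.
            config.Routes.MapHttpRoute("DefaultApi", "api/{controller}") |> ignore
            // Plug Web API into the OWIN pipeline.
            app.UseWebApi config |> ignore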

Once you have that, you can simply create Web API controllers: –
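
Again as a sketch – the Puzzle record and the solver call here are illustrative stand-ins for the real message contracts and solver in the project: –

    namespace SudokuApp

    open System.Web.Http

    // The message contract; CLIMutable lets Web API's JSON
    // serializer hydrate the record.
    [<CLIMutable>]
    type Puzzle = { Cells : int array }

    type SudokuController() =
        inherit ApiController()

        // Handles POST api/sudoku, returning the solved puzzle as JSON.
        member this.Post (puzzle : Puzzle) =
            let solution = puzzle // replace with a call to the real solver
            this.Ok solution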

So: two F# files, a web.config, and you’re good to go from a server-side point of view. Talking of web.config – how do you create an F# web project? Mark Seemann’s blog gives full details on creating Visual Studio web projects that are F#-compliant, but essentially just adding the web “Project Type” GUID (I think it’s 349C5851-65DF-11DA-9384-00065B846F21) to the .fsproj file will do the job.

Combining client and server side assets

(Screenshot: the solution structure of the single F# project.)

Because this is a full .NET web project, you can do all the things that you would normally do in C# web projects, such as serving up static files (perfect for a SPA) like HTML, Javascript and CSS, as well as compiling Typescript files (just add a project import for the Typescript MSBuild target). If you appreciate the extra safety you get from F# over other statically typed .NET languages, you’ll almost certainly want to use Typescript over raw Javascript as well, so this should be a given.

A single project that serves up your web assets and the server-side logic to go with it looks pretty simple – in the screenshot above, the api folder holds my back-end logic: the message contracts between client and server, the actual puzzle solver and the Web API controller.

Client-side assets are few and far between – just a SudokuController.ts to hold the controller logic, plus Index.html and a stylesheet for the presentation layer. It’s important to note that with a SPA framework like AngularJS, you serve static HTML and Javascript; the Javascript then essentially bootstraps the application, modifying the HTML dynamically, requesting JSON from the Web API and occasionally fetching more static HTML. You never generate HTML on the server as you would with something like Razor.

In addition, as it’s a “normal” website, with Visual F# 3.1.2 you can deploy this easily to Azure websites – either manually publishing out to Azure from VS, or through Azure’s excellent source control integration with e.g. GitHub or BitBucket webhooks. It’s never been easier to get a CI deployment of a website out.

More flexibility with Web API

Another important thing about OWIN is that it separates the hosting element of the website from the actual project structure. So, after talking about all this nice website project integration, there’s actually nothing to stop you creating a standard F# library project and then hosting it through either the OWIN WebHost console application (available over NuGet), an empty website, or an Azure worker role via the OWIN host. All this can be done without making any changes to your configuration class or controllers.
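
For example, self-hosting from a console application is just a few lines – a sketch assuming the Microsoft.Owin.SelfHost package, the Startup class from earlier, and an illustrative URL: –

    module Program

    open Microsoft.Owin.Hosting

    [<EntryPoint>]
    let main _ =
        // The same Startup class and controllers run unchanged under this host.
        use server = WebApp.Start<SudokuApp.Startup> "http://localhost:8080"
        printfn "Listening on http://localhost:8080 - press Enter to quit."
        System.Console.ReadLine() |> ignore
        0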

Conclusion

A common misconception around F# is that it’s only good as a “computation engine”, where you give it a number and it gives you back another number, or perhaps as a “data processing engine” that reads some data from a flat file or a web service and does something with it. It’s certainly good at both of those – however, there is very little reason why you can’t also use it for full-featured Web APIs using OWIN (where there’s no code generation to miss out on from e.g. VS/C# project templates), and, with a minimum of effort, even as a full website host for a SPA that consumes that same Web API.

In my next post I want to replace the use of Typescript with F# using FunScript, to illustrate how you can have a fully-managed, end-to-end solution for both client and server in F#.

Better git integration for VS 2012


git seems to be everywhere these days, doesn’t it? Everyone is using it, and, looking for any excuse to blog, I wanted to share both my early experiences with it and the new Microsoft git plugin for VS2012.

Initial thoughts on git

Pretty much like everyone else’s, I imagine. Branching is much easier than with the other source control systems I’ve used – I’m talking TFS, SVN (and I suppose SourceSafe as well). There are many reasons for this, I think – the biggest one being that when you pull down code, you pull down the entire repository, i.e. all of its history. This means that you can do things like rollbacks, check-ins, branches and merges all locally – and when you’re happy with your changes as an entire piece of work, you can “push” the lot in one go to your remote repository (remote repository = TFS / SVN-style central source control server, although with git it doesn’t quite work like that).

It also encourages more frequent check-ins, since you’re working locally, so part of the “mental block” of checking in centrally is removed. You can perform multiple local check-ins and then, before pushing to the remote repository, squash those many check-ins into a single one.

Because branches are much easier to work with in git, you may find yourself using things like feature branches more often. Oh, and there is no support for a “single check-out” mode like you have with TFS – hopefully those days are behind all of us!

git also performs quickly – check-ins happen pretty much instantly, as they are local, and you can switch back to an older revision of the code with a single command – no complicated rolling back and so on. In fact, it’s so easy to do that you might be surprised by it at first – you roll back or switch branches, and VS instantly reports that files have been changed and updates itself.

This is all very nice, although I have also struggled with some aspects of git. Firstly, it blends concepts like merging and checking in, so there’s a slight learning curve there, as well as introducing the idea of “rebasing” – essentially replaying the check-ins of one branch on top of another so that they appear as a single set of ordered check-ins. Secondly, I’ve had one or two occasions where I’ve somehow completely trashed my local repository, forcing me to clean it out and “start over”. Once I lost a couple of hours’ work – not much fun.

Overall, having used git, I must say that I do like the features it offers and the possibilities for helping teams of developers improve their day-to-day processes. It’s lightweight to set up, powerful, and fully embraces the “offline” mode of working, rather than the “if-your-network-connection-is-slow-then-VS-will-run-like-a-dog” way that TFS operates, which I nowadays find very frustrating.

Tooling

This is where things go a little awry. On the Windows platform, git has several options for managing your code: –

  • Command Line. This is how you talk to git under the bonnet. Many developers use this for their source control; personally I prefer something a little more accessible and easy to learn, but you can obviously do anything from here. There are several different variants – command line, PowerShell etc. – but to me they are all basically the same thing.
  • GitHub for Windows. This is GitHub’s own git source control front end. It’s a fine tool for basic operations, i.e. pushing and pulling from remote repositories, checking in, rolling back etc. It also offers branching and merging, but if there are any conflicts you’ll just get a “something went wrong!” sort of message. It worked fine for me on single-person projects, but for anything more you might struggle.
  • Git Extensions. This is a suite of tools, including some Visual Studio integration points, as well as a GUI front end over the command line like GitHub for Windows – except this one actually supports merge conflicts (via diff tools like KDiff3 etc.). It has some decent docs and support, so it’s well worth checking out.
  • Git Source Control Provider. This is a free, third-party VS source control provider that integrates pretty well with VS. It doesn’t support branching (at least, I couldn’t find it), so you’ll need another tool for that – but it does have context menu options in Solution Explorer to help you out.
  • Visual Studio Tools for Git. This is still in preview, but it’s another VS source control provider, so it integrates with Solution Explorer etc. It also allows branching, integrates with the built-in VS2012 merge tool, and has decent support for viewing history. Somewhat annoyingly, it won’t automatically mark added files for check-in – you have to do that explicitly.

There are simply too many options here for a complete newbie to know which one does what and when to use one or the other. Only the last one comes with a built-in diff tool (although Git Extensions does offer to install KDiff3, I believe). What you really want is a one-stop shop for git – or at most a couple of installs: one for the core git libraries and another for the UI plugin.

Having used all of these over the past few weeks, I’m still searching for the “sweet spot” tool. I think any VS dev using git on a daily basis will want VS Tools for Git, as it makes 80% of what you do a doddle – pulling the latest changes, checking in locally, pushing to remotes, branching and merging – all directly in VS. However, you’ll still probably want Git Extensions for other, less commonly used tasks. And underneath all of that sits the command line tools.

Conclusion

In practical terms, I initially struggled to do some fairly basic operations, like resolving merge conflicts, simply because I couldn’t figure out how to wire up a diff tool. Eventually, after faffing around with Git Extensions and installing a couple of diff tools, I did manage it. Thankfully, VS Tools for Git now makes that easier.

I still think part of the work will be for devs who are experienced in TFS and SVN to come around to a different way of doing source control, but in order for that to happen, the tools need to be more streamlined and accessible. Those two source control systems have mature UIs – git just needs a bit more work on this front to lower the barrier to entry even more.

A brief word on namespacing for framework classes


I’ve spent a lot of time over the past few years working on multi-developer projects, and it’s incredibly important that you make the pit of success as large as possible for the other developers. Obviously there will always be a learning curve, which hopefully can be alleviated through pairing and / or decent developer documentation, whether that’s test suites or a wiki. But a large part of making a framework discoverable is choosing its namespaces correctly. I’m talking here about the dreaded “.Core” or “.Framework” areas e.g.

Company.App.Core.Services

  • ServiceBase
  • WcfHostHelper

Company.App.Framework.Logging

  • LoggingFacade
  • Log4NetWriter

Company.App.Common.DataAccess

  • RepositoryBase
  • ConnectionFactory
  • CommandBuilder

etc. etc.

What purpose does this extra “Core” serve? Absolutely none! The worst part is that it only serves to obfuscate the most common parts of your system instead of making them as easy to find as possible. And yet I see it time and time again, on one project after another.

The problem with Framework namespaces

Why is this a problem? Imagine you’re a developer working on some part of the system – perhaps you’re coding a service that lives in a namespace like Company.App.Services, e.g. Company.App.Services.CustomerService. Why should you, as a consumer of the framework, have to know to add a using statement (or similar) for Company.App.Core.Services in order to use ServiceBase? The answer is: you shouldn’t!

Intellisense should be able to present you with the common types as soon as you tell it what you are working on, be it a service, or a repository (which I hate – more on that in another post), or whatever else. And how do you tell Intellisense what you are working on? By the namespace you’re in. Your core types should live in the same logical namespace as the most likely consumers of those types: ServiceBase should live in Company.App.Services, because this is where your actual services live.
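
As a sketch of what that looks like in practice – the types here are illustrative, and the same idea applies equally in C#: –

    namespace Company.App.Services

    // The framework base type lives alongside the services that consume it...
    [<AbstractClass>]
    type ServiceBase() =
        abstract member Name : string

    // ...so a service author in this namespace picks up ServiceBase with no
    // extra open / using statement required.
    type CustomerService() =
        inherit ServiceBase()
        override this.Name = "Customers"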

There’s a subtle difference between physical deployment – where slowly-changing framework classes should probably live outside of the day-to-day changing business code (particularly when writing modular, pluggable code) – and logical namespaces, where many types can happily live in one namespace across many physical assemblies.

Assembly naming

There’s also an oft-repeated mantra that says your assemblies should be named after the namespaces they contain. This is fine in principle, but for your framework assemblies it makes no sense. I recommend that when you start writing common helper classes, core interfaces etc. in your framework assemblies, you make the default namespace of those projects as high up as it can be, e.g. Company.App, and then make folders in the project for each area that your framework covers. The names of your assemblies do not need to follow the assembly-follows-namespace convention.

Conclusion

Framework classes should be carefully distributed across the entire namespace hierarchy of your system, not bundled up into one uber-namespace or stuffed underneath .Framework. Framework types relating to services should live in the same namespace that your team writes their services in; types relating to UI should live in the same namespace that your team writes their views in.

Editing XAML files in Visual Studio 2012


Finally, after three attempts, Microsoft have gotten a decent XAML editor into Visual Studio. Well done!

I’ve been doing some Windows Phone development recently, and using VS2012 professional for it. I also have Blend installed – I’ve blogged in the past about the positive impact it can have on a developer’s productivity. That still applies today, but the built-in designer / editor in VS2012 is much, much closer to Blend.

For a start, it actually performs quite well, with a markup editor that is reasonably responsive, so you don’t have to resort to editing in plain XML. You also get the ability to modify templates directly in the designer, rather than having to go to Blend or write them by hand, and you get decent property editing – colours, brushes, styles – directly in VS; very similar to the Blend property pages, actually.

In fact, I like it so much that for a few days now I’ve done without Blend and been happily doing MVVM-style development without really missing it that much. Not that I’ve been doing anything fancy at all – just some views with bindings, templates and styles really – but that’s what most of us probably do anyway most of the time.

So, if you’ve been put off XAML in the past because of the VS experience, give it another go – it’s definitely workable now.

Top Visual Studio 2010 Extensions


It’s been a while since I did one of these… here’s my current set of top extensions for VS2010: –

NuGet Package Manager

The easiest way to bring in dependencies on third-party packages, bar none. Microsoft now use it as their primary way of shipping new drops of EF. I use it for my Unity Automapper. Everyone uses it. :)

Power Commands for VS2010

A lot of useful additions to VS, including removing and sorting the using statements in a file on save, opening a VS command prompt pointing at a solution folder, copying references from one project to another, and editing project files in one click.

Notify Property Weaver

If you’re a XAML developer using the MVVM pattern, this is an absolute godsend: it removes basically all the boilerplate INotifyPropertyChanged code that you normally have to write, by rewriting the IL as a post-build task to raise the PropertyChanged event where required. Fantastic. It’s actually just a dll at the end of the day that you have to add into your MSBuild process for each project, but this extension adds a menu item with a handy dialog to configure it just how you like.
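
For context, here’s a sketch of the kind of boilerplate it eliminates – written in F# here, though the same pattern applies to C# view models: –

    open System.ComponentModel

    // A hand-rolled view model: every property setter must raise
    // PropertyChanged itself. This is exactly what the weaver automates.
    type CustomerViewModel() =
        let propertyChanged = Event<PropertyChangedEventHandler, PropertyChangedEventArgs>()
        let mutable name = ""

        interface INotifyPropertyChanged with
            [<CLIEvent>]
            member this.PropertyChanged = propertyChanged.Publish

        member this.Name
            with get () = name
            and set value =
                name <- value
                propertyChanged.Trigger(this, PropertyChangedEventArgs "Name")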

Productivity Power Tools

Most of this is going to be incorporated into the next version of Visual Studio. It includes lots of handy tools, like the Solution Navigator (basically a fusion of Solution Explorer and Class View, plus search etc.), improved tab management and more.

VS Color Output

Actually makes the VS output window useful! You supply regular expressions in the tool’s settings and mark them as e.g. “warning” or “error”; the tool then parses each line of the output window and colours it appropriately.

GhostDoc

A great way to quickly write comments on your classes and methods in a standard format.

CodeRush

In my opinion, the greatest productivity add-on for Visual Studio. If you’re the sort of person who likes to be able to knock up code quickly and navigate through it with the minimum of effort, this is for you. There’s also a free version, CodeRush Xpress, available through the Extension Manager.