Scaffolding a modern web application with ASP.NET Core

This post is a walkthrough of putting together a new application with the latest web technologies. Perfect for starting a new project! It’s incredibly easy with the new ASP.NET Core SPA (single-page application) templates.

The stack in use is as follows:

  • Visual Studio 2017 or Visual Studio Code (optional, but nice for IntelliSense, debugging, etc.)
  • TypeScript (extends JavaScript with features like classes and types)
  • React (client library for rendering your UI) and Redux (for handling state, allowing debug features like time travel)
  • Webpack (for bundling/minifying your scripts, and hot module replacement)
  • ASP.NET Core – letting us write our server in lovely C#! (Notably, server pre-rendering is available via the Microsoft.AspNetCore.SpaServices package, which calls into Node to execute scripts on the server)

Firstly you need to install the .NET Core SDK. At the time of writing, the current version is 1.1. You can either install Visual Studio (if you are licensed for that) or just the command line tools. You can later install Visual Studio Code, the free cross-platform Visual Studio, if you like.

Next you need to install Node. If you’ve already got it, update it to the latest version. I found that I had an old version and was getting bizarre errors until I realised I had to uninstall and reinstall. Node might seem like an odd requirement, since it’s a web server in itself, and aren’t we using IIS? It’s needed for two reasons: we can take advantage of its extensive library of packages, and ASP.NET Core can use Node silently in the background to execute JavaScript on the server, pre-rendering the page. This works when you’re using React or Angular 2.

Now comes the fun part. We will use the ASP.NET Core SPA (single-page application) Yeoman generator to produce a ready-to-go, pre-configured application based on the client library we want. So, install Yeoman along with the generator:

npm install -g yo generator-aspnetcore-spa

Now create a folder for your application, cd into it, and run the generator:

cd c:\appdirectory
yo aspnetcore-spa

You’ll be given a choice about which framework to use in your project:

In my case I picked React with Redux. It then asks whether you want the ‘project.json’ or ‘csproj’ project format; csproj is the one to go with if you’re using Visual Studio 2017. Type a project name and you’re done.

Once the process completes you’ll have a project set up and ready to go:

Before running, you will want to switch to development mode. This allows dev features like hot module replacement to work. On Windows, set the environment variable in your console:

set ASPNETCORE_ENVIRONMENT=Development
To run the project, you have two options. Either open the csproj in Visual Studio 2017 and hit F5 to start debugging, or back in your console, run:

dotnet run

Either way the server will start on port 5000, so browse to http://localhost:5000. There’s a little basic application for you to muck about with:

Click ‘Counter’ and there’s a button that increments a variable:

Now open ClientApp/components/Counter.tsx. This is the TypeScript React component for the counter. We’ll test out hot module replacement by editing that component to add another element to the page:

Webpack detects the file change, pushes the updated module to the client, and Redux preserves the state while the page updates:

Another major win when using a state manager like Redux is Time Travel. Time Travel is when the state of your app is recorded throughout the lifetime of the app, and you can use a timeline to move the state of the app backwards and forwards in time.

To get Time Travel working, you can use the Redux Dev Tools. The dev tools come in two formats: a JavaScript library that you can add to your app, or a Chrome browser extension. Check out this blog post for an explanation of it. Either way, the dev tools look like this:

To try it out, click the ‘Increment’ button a few times. The tools are already recording the state changes, and you’ll see the ‘INCREMENT_COUNT’ action appearing along with a timestamp.

Make sure the slider is shown by clicking the slider button:

And then just move the slider around to travel through time. You can also click on an individual action in the list on the left to jump to the state at that time.

Replacing an expired S2S (high-trust) certificate in SharePoint 2013

In SharePoint 2013, configuring your environment for high-trust apps involves a few manual steps. Part of this process is configuring a trusted token issuer in the form of a certificate, which is then used to create app tokens.

Then comes the day when your certificate expires. But don’t panic; it is fairly simple to replace your certificate. Of course, the ideal scenario is to complete this *before* the certificate expires, so, set a reminder for next time!

Firstly, create a new certificate. You may need to request this from your organisation, but a self-signed certificate is fine for development environments (which requires that you allow OAuth over HTTP by setting AllowOAuthOverHttp to true). Either way, you need both the CER and PFX files for your certificate. Copy the .CER file to your SharePoint system. At this point, let’s check the details of your existing root authorities by opening the SharePoint command prompt and running Get-SPTrustedRootAuthority. All of your trusted root authorities will be listed. Scroll down the list until you find your expired certificate:


The expired certificate above is called ‘s2s’. You can delete that one with the following command:

Remove-SPTrustedRootAuthority -Identity s2s


You’ll receive a confirmation message; press Y and enter. If you run Get-SPTrustedRootAuthority again you’ll see it’s gone. The next step is to remove your old token issuer. Run the following command to get a list of your existing token issuers:



Note the name of your expired token issuer, and delete it by its name, pressing Y to confirm:

Remove-SPTrustedSecurityTokenIssuer -Identity "Custodian App"


Finally, it’s time to add your new certificate as both a trusted root authority and a token issuer. First, register the new trusted root authority:

$path = "C:\certs\s2s-certificate.cer"
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($path)
New-SPTrustedRootAuthority -Name "S2S Certificate" -Certificate $certificate


And register the token issuer. Note that the Issuer ID, the GUID below, is specific to your app and must be lower case. If you have one token issuer per app, your Issuer ID is the same as your Client ID.

$realm = Get-SPAuthenticationRealm
$issuerId = "4d25b859-5092-4306-8e7e-82fac0633413"
$fullIssuerId = $issuerId + '@' + $realm
New-SPTrustedSecurityTokenIssuer -Name "Custodian Cert" -Certificate $certificate -RegisteredIssuerName $fullIssuerId -IsTrustBroker


Now, configure your provider-hosted app with the new certificate. In Repstor custodian, you need to modify the web.config to include either the path and password of your certificate PFX file or, preferably, the serial number. Your changes will be picked up within 24 hours, or immediately if you do an iisreset. Also, you may need to clear any existing user access tokens (based on the expired certificate) from your app cache if you have one.

And set a reminder for next year :-)

ASP.Net WebApi Error: “The controller for path … was not found or does not implement IController.”

I have an ASP.NET MVC 4 project and I’ve added a Web API controller to it. Nothing fancy, no custom configuration, nothing. It doesn’t work out of the box. I get a 404 HTTP error and this error stack in my IIS logs:

[HttpException]: The controller for path '/api/controllername/5' was not found or does not implement IController.
   at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType)
   at System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName)
   at System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory)
   at System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state)
   at System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContext httpContext, AsyncCallback callback, Object state)
   at System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData)
   at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

Turns out to be a pretty non-obvious fix. The error says the framework is checking whether the controller implements IController. Normal MVC controllers implement IController, but Web API controllers do not, so the framework is trying to match our URL to a standard MVC controller.

Take a look in the RouteConfig file and you’ll see something like this:

routes.MapRoute(
	name: "Default",
	url: "{controller}/{action}/{id}",
	defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

Does that match your URL? Yeah, it probably does. Your URL (if you’re following the normal Web API pattern) starts with ‘/api/’, so the router is trying to find a standard controller called ‘api’.

Open up the file Global.asax.cs. It’ll look something like this:

protected void Application_Start()
{
	AreaRegistration.RegisterAllAreas();
	FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
	RouteConfig.RegisterRoutes(RouteTable.Routes);
	WebApiConfig.Register(GlobalConfiguration.Configuration);
}

You will now notice that the RouteConfig (for normal controllers) is registered in the pipeline before the WebApiConfig routes. This means that the standard controllers take precedence, and because your Web API URL matches the pattern defined in RouteConfig, the framework is attempting to use it.

The fix is just to move the WebApiConfig registration further up the pipeline, before RouteConfig:

protected void Application_Start()
{
	AreaRegistration.RegisterAllAreas();
	WebApiConfig.Register(GlobalConfiguration.Configuration);
	FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
	RouteConfig.RegisterRoutes(RouteTable.Routes);
}

“Unknown User” error when creating SharePoint 2013 MVC Apps

Do you get this error?

Unknown User

Unable to determine your identity. Please try again by launching the app installed on your site.

Well, it’s quite possibly nothing to do with users, identities, or anything else related to security. This is the default text in the Views/Shared/Error.cshtml file in the web project that’s generated for you when you create a new SharePoint MVC project.

So, this error is shown whenever there’s any unhandled exception in your app… and that could be anything, really. So you’ve got to start debugging it. That’s easy if you’re debugging locally: simply turn on break on exceptions by going to Debug -> Exceptions and ticking Common Language Runtime Exceptions in the Thrown column:

Now, if you’re developing an Autohosted app for SharePoint 2013, it’ll be running in Azure. And if you’re unfortunate enough to be seeing the “Unknown User” page in Azure but not locally, then you need to disable that error page and make sure you can see the full exception detail.

To do that, browse to your Views/Shared/Error.cshtml file and remove it from your project. If you’re not using source control, then you’ll probably want to back it up somewhere else to restore it later – this is only for debugging.

Once you’ve got rid of it, you need to switch on exceptions from remote machines. Open your project’s main Web.config (in your web project root) and find the <system.web> tag. Add in the tag <customErrors mode="Off"/>. It should look something like this:

<system.web>
  <customErrors mode="Off" />
  ...
</system.web>

Now when you run your project and reproduce the error, you’ll see a stack trace which should put you on the right track.


Editing a Google Spreadsheet using PHP and CURL

I wanted to edit a Google spreadsheet, and in particular, change the title of a worksheet – although the same concepts would apply to any kind of spreadsheet edit. I am using PHP and since there’s no client API for this, I am building up the HTTP requests using CURL.

This was a nightmare to figure out. The documentation is sparse, to say the least, when it comes to making the raw requests from PHP. That said, it’s important you read it all first and don’t just take my word for things.

Scroll down for the full-blown version including debugging and explanatory comments.

If you’re here via Google, I won’t stand in the way of your copying and pasting; here’s the code:

$accessToken = "your access token";
$editUrl = "...?access_token=$accessToken"; //The worksheet's edit URL, taken from the worksheets feed
$entry = "<?xml version='1.0' encoding='UTF-8'?><entry>... snip ... </entry>"; //This is the entry element for the worksheet, containing any required changes.

//CURL uploads from a file handle, so write the entry to an in-memory 'file'
$fh = fopen('php://temp','rw+');
fwrite($fh, $entry);

$handle = curl_init($editUrl);
if ($handle) {
	$curlOptArr = array(
		CURLOPT_PUT => TRUE, //It's a PUT request.
		CURLOPT_INFILESIZE => strlen($entry), //Size of the uploaded content
		CURLOPT_INFILE => $fh, //Uploaded content
		CURLOPT_SSL_VERIFYPEER => FALSE, //Req'd if the google SSL certificate isn't installed
		CURLOPT_HTTPHEADER => Array("content-type: application/atom+xml")); //Include a content-type header!
	curl_setopt_array($handle, $curlOptArr);
	$ret = curl_exec($handle);
	curl_close($handle);
}
fclose($fh);

OK, so let’s break that down, and include the full example that I used to debug the whole thing and work out exactly what CURL was doing.

Firstly, since CURL accepts a file to upload, and we’ve only got an in-memory string, we need to use PHP’s ‘temp’ file access to emulate a file handle:

$fh = fopen('php://temp','rw+');
fwrite( $fh, $entry);

Create another file handle to accept debug logging:

$debugOutput = fopen('php://temp', 'rw+');

Then set up the CURL options with all of the debugging shenanigans:

$curlOptArr = array(
	CURLOPT_PUT => TRUE,
	CURLOPT_INFILESIZE => strlen($entry),
	CURLOPT_INFILE => $fh,
	CURLOPT_VERBOSE => TRUE, //Ensure lots of debug output
	CURLOPT_STDERR => $debugOutput, //Writes loads of stuff to the debug file
	CURLOPT_HTTPHEADER => Array("content-type: application/atom+xml")
);

Now create the CURL object, submit the request and output all the lovely debug info.

$handle = curl_init ($editUrl);
curl_setopt_array($handle, $curlOptArr);
$ret = curl_exec($handle);
$errRet = curl_error($handle);
print("Result: $ret<br>");
print("Error: $errRet<br>");

$verboseLog = stream_get_contents($debugOutput);
echo "Verbose information:\n<pre>", htmlspecialchars($verboseLog), "</pre>";

Don’t forget to close the handles: curl_close($handle), then fclose() on both file handles.


Fun is Justification Enough

Today I watched a TED talk by a Korean author called Young-ha Kim, in which he discussed the artist inside all of us. He mentioned concepts like a father playing with his children’s toys and finishing the Lego castle long after the child had become bored. Most of us have suppressed our artistic interests in favour of all the serious stuff in our lives… if it doesn’t make money then it isn’t worthwhile.

He also talked about the concept of the “artistic devil” – the notion that when you have an idea, you’ll pause and think about it for long enough that doubts start to creep in. The artistic devil is the voice inside our heads that provides the hundreds of reasons not to do something: there are more important things to do, it’s a lame idea, people will laugh at you, EastEnders is on.

Although he was talking about art and specifically creative writing, there are so many parallels with software development that it’s unreal. It struck me straight away and held true throughout the talk.

We can earn a good living in this industry but we have so many opportunities to be creative. Indeed, developers are often very creative and get involved in open source and other types of community projects. But it’s hard to take those first steps…putting yourself in the public spotlight and subjecting yourself to scrutiny is difficult, as though it’s going to be a massive weakness to be “wrong”. There’ll always be people around to criticise. I am always coming up with project ideas but almost every time I’ll sit on it and think about it and come up with so many reasons to not go through with it. But if it’s a bit crap, who cares? The point is to do it because it’s fun.

I was writing a Stack Overflow question today – actually the first one I’ve composed. I typed it out, thought about it, created a JSFiddle, thought about it some more, reworded it, and couldn’t help but think that I shouldn’t need to ask for help.

But actually, even engaging with a community like Stack Overflow can be a creative endeavour. Thinking carefully about how to form a question takes skill, and the process can be fun. And of course, if I am stuck with a problem, someone else will be too.

As Young-ha Kim points out, we’re all born artists. It’s obvious when you see your kids drawing on the walls with their crayons or building a sandcastle that will inevitably be washed away. There doesn’t need to be a point.

I’m going to start work on a little project of my own. It’s got nothing to do with work, it’s not going to make me rich, it might be bad art. It’s taken me a long time to realize that as long as I enjoy sitting writing the code, that’s the only justification I need. Anything else is a bonus.

The slow death of bookmarklets

The Content Security Policy specification, a technology to prevent cross-site scripting attacks, has advanced from Working Draft to Candidate Recommendation. That’s a good thing, but it unfortunately has the side effect that bookmarklets are going to stop executing on any web page that implements it.

What’s a bookmarklet?

It’s a small piece of JavaScript embedded in a link. That link is then added to your browser’s bookmarks, and when it’s clicked, the script is executed. A bookmarklet always takes the form:

javascript:(function(){ /* your code here */ })();

The code can, if it wants, load a script from any other site into the current page’s DOM and execute that instead.

One bookmarklet I use is Instapaper which submits the current page to your ‘read later’ list. And there are loads of bookmarklets to assist web designers.

What’s Content Security Policy?

A W3C specification – call it part of HTML5 if you want to. It’s a collection of new HTTP headers that a page can include to indicate a list of places from which JavaScript should be trusted. Any script which does not appear on that whitelist will not be executed, which means the site is well protected against XSS attacks (for users with supporting browsers).

For example, if I’ve got a bit of custom form validation code, then the current domain will need to be whitelisted, and if I’m running Google Analytics, I’ll trust Google too. To trust both locations, the appropriate header would look like this:

Content-Security-Policy: script-src 'self' https://www.google-analytics.com

But CSP does other things!

If you include a Content Security Policy header in your page, you’re also saying that the browser should adhere to a few additional security rules:

  • Inline scripts (inside <script> tags in the page) are banned, to prevent injection attacks
  • ‘eval’ is ignored, and that includes its use within setTimeout/setInterval
  • The javascript: link format is ignored.

That last one is important, because that’s what bookmarklets use. Additionally, if the bookmarklet loads an external script to run, that won’t work either.

Current browser support

Firefox 4 and Chrome 16 – although at the moment they use the prefixed X-Content-Security-Policy and X-WebKit-CSP headers respectively.

Current web uses

Twitter claim they’ve rolled it out on their mobile site but, looking at the headers, I can’t see any evidence of it. I found this site, which is sending the X-Content-Security-Policy header (the Firefox one), and I can confirm my Instapaper bookmarklet is definitely dead there.

This post focuses on the JavaScript side of the CSP specification, but it can also apply to other types of resource (fonts, images, etc). Have a look at the HTML5 Rocks page for more info!