Visual Studio 2017 has been out for a little while now, and there are some good things and some not-so-good things. Overall, though, it’s probably one of the nicest steps forward Microsoft has made.
The Good: Docker support in Visual Studio 2017
One of the more exciting prospects of VS 2017 is that it offers full support for .NET Core. Even better, that includes support for Docker: you can launch your code in your target containers, work with a nice editor for Docker artifacts, and, best of all, debug inside them!
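To give a sense of how little ceremony that takes, here’s a sketch of a Dockerfile for a published .NET Core app. The image tag and the project name `MyApp` are my own illustrative assumptions, not anything Visual Studio generates verbatim:

```dockerfile
# Sketch: run a pre-published .NET Core app in a container.
# Assumes `dotnet publish -o out` has already produced ./out/MyApp.dll.
FROM microsoft/dotnet:1.1-runtime
WORKDIR /app
COPY ./out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Visual Studio’s tooling scaffolds something along these lines for you and wires the debugger up to the running container.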
The Bad: RIP project.json
I have been following .NET Core since its preview days. One of the biggest pluses in my book was how Microsoft decoupled the platform from the heavy-handed, overprescribed IDE. Gone were arcane MSBuild files full of compilation details, replaced by a simple, JSON-based declaration file that elegantly described the entirety of your code project in a human-readable, human-editable format.
Well, that shoe has dropped with VS2017: it turns out rumors of .csproj’s demise were greatly exaggerated. I had enjoyed writing .NET Core applications in a simple text editor. I will mourn that short-lived, bygone era.
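For contrast, here is roughly what the two formats look like for a bare console app. The version numbers and target framework are illustrative, not prescriptive. First, the retired project.json:

```json
{
  "version": "1.0.0",
  "buildOptions": { "emitEntryPoint": true },
  "dependencies": {},
  "frameworks": { "netcoreapp1.0": {} }
}
```

And its new SDK-style .csproj equivalent:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>
</Project>
```

To be fair, the new .csproj is far leaner than the old arcane MSBuild files; it just isn’t JSON.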
The Ugly: Changes made in VS2015 to the right-click menu inside the text editor are still there
Mainly, I’m talking about this nonsense…
It used to be that you could hover over a selected item and get to your “Using…” or other code-generation options. In Visual Studio 2015, they decided “Quick Actions” meant “adding a keystroke or mouse click to your workflow.” I am grateful that a keyboard shortcut has since been added, but still…
In spite of a couple of odd decisions with regards to usability, this is another big improvement made by Microsoft in the tools they offer developers.
There are usually two schools of thought when it comes to dependency injection — inject dependencies into constructor parameters that set private fields, or inject dependencies into property setters. Here’s the rule of thumb when deciding which to use:
If the dependency is optional or has a sensible default, inject it via a property setter. If it is required, inject it via the constructor.
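The rule above can be sketched in a few lines of C#. The interfaces and class names here are hypothetical, purely for illustration:

```csharp
using System;

// Hypothetical interfaces, for illustration only.
public interface IStore { void Save(string data); }
public interface ILogger { void Log(string message); }
public class NullLogger : ILogger { public void Log(string message) { } }

public class OrderProcessor
{
    private readonly IStore _store; // required: no sensible default exists

    // Required dependency: constructor injection.
    public OrderProcessor(IStore store)
    {
        if (store == null) throw new ArgumentNullException(nameof(store));
        _store = store;
    }

    // Optional dependency: property injection, with a safe default.
    public ILogger Logger { get; set; } = new NullLogger();

    public void Process(string order)
    {
        Logger.Log("Processing " + order);
        _store.Save(order);
    }
}
```

The payoff is that the constructor only ever needs one overload; the optional pieces never multiply into a ladder of constructor signatures.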
I probably could have Googled this wisdom (it’s not like I’m the first to think of it) rather than learning it the hard way after many classes full of ludicrous constructor overloads. But so it goes.
Microsoft’s ASP.NET stack has always been a powerful application framework, but it long suffered from a dependency on IIS to operate. That was always a shame, because having an HTTP listener inside a stateful application can be a very handy interface for configuration or control. For a long time, the only ways to make that happen were to implement your own HTTP stack, use a third-party one, or, even worse, lean on the background worker available in ASP.NET applications.

Enter the Open Web Interface for .NET (OWIN). The purpose of the OWIN project was, borrowing a page from the Rack/Rails playbook, to abstract the entire ASP.NET stack away from the web server. By open-sourcing the ASP.NET stack and providing a standard, open interface through OWIN, any web server can be made to host an application.
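As a concrete sketch of what that buys you, here is a minimal self-hosted endpoint using Katana (the `Microsoft.Owin.SelfHost` package); the URL and port are arbitrary choices of mine:

```csharp
using System;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Respond to every request with a plain-text message.
        app.Run(context =>
        {
            context.Response.ContentType = "text/plain";
            return context.Response.WriteAsync("Hello from inside the service");
        });
    }
}

public class Program
{
    public static void Main()
    {
        // Host an HTTP endpoint inside any process -- no IIS required.
        using (WebApp.Start<Startup>("http://localhost:9000"))
        {
            Console.WriteLine("Listening on http://localhost:9000");
            Console.ReadLine();
        }
    }
}
```

Drop that into a Windows service or console app, and you have the kind of configuration/control listener the old IIS dependency made so awkward.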
About a year ago, I was diagnosing a throughput issue we were having with a Windows service that consumed a REST API. The application in question would pull down data from the REST API, perform some work on that data, and then publish the result back to the API. These were small operations, so the application was set to run dozens of threads concurrently. We started seeing massive bottlenecks in this application: there was high latency in connecting to the API, and it compounded as we boosted concurrency. Yet the application was consuming almost no CPU, and the API itself was barely being worked.
I wrote an article several months ago with an overview of how a dependency injection framework can demystify the process of standing up and configuring a new application. Not only can you eliminate writing out copious custom configuration code, you can also decouple configuration from your core application logic (if you’re really a glutton for punishment, you can find the tripe I wrote here).
I recently solved a problem where I wanted to store modules for an application outside of the normal configuration space, and hoped to leverage Unity to do it.
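The shape of the solution looks roughly like the sketch below: point `ConfigurationManager` at an arbitrary file and feed its `unity` section to a container. The file path, section name, and class name are my own assumptions for illustration, not the post’s actual code:

```csharp
using System.Configuration;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;

public static class ModuleLoader
{
    public static IUnityContainer LoadFrom(string configPath)
    {
        // Open an arbitrary config file, outside the app's own App.config.
        var map = new ExeConfigurationFileMap { ExeConfigFilename = configPath };
        var config = ConfigurationManager.OpenMappedExeConfiguration(
            map, ConfigurationUserLevel.None);

        // Pull the <unity> section and register everything it declares.
        var section = (UnityConfigurationSection)config.GetSection("unity");
        var container = new UnityContainer();
        container.LoadConfiguration(section);
        return container;
    }
}
```

Because the registrations live in a standalone file, modules can be swapped by editing that file, with no rebuild and no changes to the application’s primary configuration.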
It seems as though Microsoft has been evolving to embrace open source rather than fight it for many years now (sorry I’m late to the party, I was on vacation last week. Sue me). Their more recent tools have all shipped with source on CodePlex under a very permissive license, and opening up the ASP.NET stack was pretty huge. It seems only natural that the entire CLR be migrated as well. I’m excited to see how this translates to cross-platform availability (no knock on Mono intended).
As this article says, they still have a way to go, but it’s a really refreshing direction they’re going in.
Starting with Team Foundation Server 2010, the engine running build definitions went through an overhaul. Gone is the MSBuild-based build script. Here to stay (presumably) is a Windows Workflow (WF)/XAML-backed definition.
When I first encountered this change, my reaction was an irrational one. I had put a lot of effort into learning the intricacies of MSBuild, finding extensions to do what I wanted to do, writing custom build actions, etc. I was damn proud that I had morphed the twisted, arcane TFSBuild.proj into something that would filter out files not meant for deployment, make application configurations out of build output, upload NuGet packages, and deploy the build to a continuous integration environment (including configurations for that environment). It was an unholy mess of item groups, cross products, and generally trying to make a declarative language do imperative things. But it was mine. I was the king of that dung heap, and was not going to let it go. Luckily, I saw the error of my ways, so consider this a high-level primer on what led me to that decision and why it may be good for you as well. I promise to write more detailed posts on the specifics of some of these things in the future.
My organization moves and transforms a lot of data, in a rather interesting problem space. As a result, we have a great number of complex service applications doing a variety of different tasks. The status quo was to recreate all of the plumbing that went into building new applications, such as creating worker threads, hooking them up to Windows service control mechanisms, implementing start/stop procedures, etc. These constructs always required some awareness of the specifics of the logic doing the real work, as that made for more convenient deployment.