C# Profiling on Linux

With dotnet working rather flawlessly on Linux, sooner or later one stumbles upon the need to profile a dotnet application there.

Microsoft provides a handy command-line tool for this: perfcollect. It records dotnet performance data, either for a pre-specified duration or until stopped manually, and generates a report from it.
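Going by the Microsoft documentation, a typical session might look roughly like this; the environment variable and option names below are from memory and should be checked against the current docs:

```shell
# One-time setup: fetch the script and let it install perf and the LTTng helpers.
curl -OL https://aka.ms/perfcollect
chmod +x perfcollect
sudo ./perfcollect install

# The profiled app must run with perf maps enabled, e.g.:
#   export DOTNET_PerfMapEnabled=1   # COMPlus_PerfMapEnabled=1 on older runtimes

# Record a trace into sampleTrace.trace.zip; stop the recording with Ctrl+C.
sudo ./perfcollect collect sampleTrace
```

The resulting trace can then be opened in PerfView on a Windows machine for analysis.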

Microsoft provides documentation for perfcollect, and pav from Dots and Brackets wrote a post about profiling .NET Core apps on Linux.

OAuth2: Authentication vs. Authorisation

Scott Brady makes the point that OAuth2 is not an authentication scheme, but an authorisation, or better yet, a delegation mechanism. He points out that tokens merely grant validated access to some resource: usually a user's data, but not necessarily; a token may only indicate that an application has routine access to, say, write to a log file.

He proposes using OpenID Connect, which is built on top of OAuth2, as the actual authentication mechanism.

Semantic Versioning for Clients

Semantic Versioning is a great way to denote the changes in your software that other software must pay attention to. This is especially true for libraries, as their sole purpose is to be used in other software.

However, how do you deal with software that is not consumed by other software, but by users? (Arguably, still a large part of existing and used software.) There isn’t really a public-facing interface to version, and even if you consider the user interface to be one, your users likely don’t care.

Brett Uglow proposes a nice way to use semantic versioning for end-user applications: consider the system administrator to be your interface consumer. Now your environment requirements become your public API:

  • Major version increments denote new requirements, such as different hardware or system configuration (e.g. firewall rules), or breaking changes like dropping support for a file format.
  • Minor increments are backwards-compatible: The new version can be installed over the old one and will continue to work.
  • Patch versions provide bug fixes, without any new features.

While this is just one possibility to interpret semantic versions for client software, it is at least a useful one: It provides additional information to people that have to work with the software, without confusing end-users.

Low-effort legacy code refactoring

Llewellyn Falco shows in a video on YouTube how to apply three techniques to improve a piece of legacy code, using an example from the Gilded Rose kata:

  • Combination Testing, which gets you to full test coverage simply by inspecting which paths are not yet taken,
  • Code Coverage as Guidance, which helps you find the bits that you have not yet considered, and
  • Provable Refactorings like automatic code transformations, which help you to be sure that you don’t break the code.
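The combination-testing idea can be sketched as follows. This is a minimal, self-contained illustration, not the kata's real rules: the Item class and the update rule below are simplified stand-ins, and in practice the printed log would be approved once and then compared verbatim after every refactoring step.

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in for the kata's Item class.
class Item { public string Name = ""; public int SellIn; public int Quality; }

class CombinationTest
{
    // Simplified stand-in rule for a normal item: quality drops by 1
    // (by 2 once past the sell date), and never goes below zero.
    static void Update(Item it)
    {
        it.SellIn--;
        it.Quality = Math.Max(0, it.Quality - (it.SellIn < 0 ? 2 : 1));
    }

    static void Main()
    {
        // Cross all interesting input values and record every outcome.
        var log = new List<string>();
        foreach (var sellIn in new[] { -1, 0, 1, 5 })
        foreach (var quality in new[] { 0, 1, 2, 50 })
        {
            var item = new Item { Name = "normal", SellIn = sellIn, Quality = quality };
            Update(item);
            log.Add($"{sellIn},{quality} => {item.SellIn},{item.Quality}");
        }
        Console.WriteLine(string.Join("\n", log));
    }
}
```

Any behavioural change during refactoring shows up as a diff in the logged combinations, without having to write a single hand-crafted assertion.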

I have to admit that I am not yet fully convinced of this approach; however, it surely is a very useful first way to get at a (larger) legacy code base.

I have used a similar, albeit less formal, approach on a larger legacy algorithm that I rewrote, and found it useful to inspect the existing behaviour with unit tests. Sadly, I did not scaffold the infrastructure to keep those unit tests around, as the development environment was not very unit-test-friendly. The approach did make the rewrite manageable, but at some point I felt it necessary to add some functional knowledge as well.

Linkdump: People in Software Development

Alistair Cockburn wrote a fascinating article already back in 1999 titled “Characterizing people as non-linear, first-order components in software development”, which is still available through web.archive.org. This is a full-scale scientific paper, so get ready for some reading.

In essence, his message is that people are more important than processes:

The fundamental characteristics of “people” have a first-order effect on software development, not a lower-order effect.

He lists four characteristics of people that are relevant to software development:

1. People are communicating beings, doing best face-to-face, in person, with real-time question and answer.

2. People have trouble acting consistently over time.

3. People are highly variable, varying from day to day and place to place.

4. People generally want to be good citizens, are good at looking around, taking initiative, and doing “whatever is needed” to get the project to work.

Auto-Rebuild for Asp.Net Core

As one is probably used to from JavaScript development: whenever you save a file, a separately started background process immediately rebuilds whatever is required and refreshes a browser window, reloads a service, or the like.

Apparently, this is a feature for ASP.NET as well, and has been for quite some time. With .NET 5 it is even more comfortable, as it can be handled entirely through the command that launches the Kestrel server.

Previously, I used this command to start:

dotnet run --project GoodNight.Service.Api

dotnet supplies the watch command, which provides the auto-rebuilding behaviour. It takes as argument the dotnet command that should be supervised and restarted as required. For running the server, use this command:

dotnet watch --project GoodNight.Service.Api run

Observe that the run command has moved behind the --project flag: watch wants the project setting for itself, and will not launch without it. Luckily, it passes it on to the child command.

Elephant Carpaccio Facilitation

Time for a fun exercise: The Elephant Carpaccio Facilitation! This exercise practices breaking down user stories in… smaller user stories, and even smaller user stories, and then even smaller user stories.

The main task is to take a simple program, in the example a simple receipt calculator, and break it down into as many small requirements as possible. Yes, if your first iteration is more than “an executable program doing nothing”, then you are thinking too big. Seems quite fun to do.

I like how these very small slices still must be user-focused: There is no “technical” slice just for setting up frameworks and stuff. Every slice delivers a (tiny) amount of value to users, and they can still turn out to be very, very small: The exercise allocates 7 minutes of implementation time per slice. That’s quick programming.

Null-checking in C# for classes with operator overloading

So you decided to go type safe and happen to use non-nullable reference types in C#. Nice! Further, you use a library which of course is not designed for this, but it still works, as you add extra null checks for its return values. Everything’s fine, you’re happy.

You do notice that it even denotes some properties as optional, e.g. a CustomClass? sits there somewhere. Of course, you check the property for null before you access it, so you write something like res.Prop != null. Not fearing anything, you compile, and end up with this error message for the res.Prop:

Possible null reference argument for parameter ‘left’ in ‘bool CustomClass.operator !=(CustomClass left, CustomClass right)’

Oh, and also one for the null:

Cannot convert null literal to non-nullable reference type.

What? Well, the res.Prop might be null, of course, that’s why we are doing this after all.

As it turns out, CustomClass has a custom comparison operator. This operator, as shown in the first error message, expects non-nullable CustomClass arguments (they don’t have the ?, right?).

Well. So you can’t compare the object with null by using !=, as that operator just does not allow null values. Luckily, since C# 7.0 the is operator supports pattern matching, and C# 9.0 added the not pattern. So you can replace the test with res.Prop is not null; the comparison operator is not called, and everything is fine.
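The whole situation can be reproduced in a few lines. CustomClass and the Prop property here are illustrative reconstructions of the scenario described above, not code from any particular library:

```csharp
#nullable enable
using System;

// A class with overloaded equality operators taking non-nullable parameters,
// mirroring the library class from the story above.
class CustomClass
{
    public static bool operator ==(CustomClass left, CustomClass right)
        => left.Equals(right);
    public static bool operator !=(CustomClass left, CustomClass right)
        => !left.Equals(right);
    public override bool Equals(object? o) => ReferenceEquals(this, o);
    public override int GetHashCode() => 0;
}

// A hypothetical library result type with an optional property.
class Result { public CustomClass? Prop; }

class Demo
{
    static void Main()
    {
        var res = new Result();

        // This line triggers the warnings quoted above, because it binds to
        // the overloaded operator != with non-nullable parameters:
        //   if (res.Prop != null) { ... }

        // Pattern matching bypasses the operator entirely:
        if (res.Prop is not null)
            Console.WriteLine("Prop is set");
        else
            Console.WriteLine("Prop is null");
    }
}
```

Since `is not null` is compiled as a plain reference check, it works regardless of what operators the class defines.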

It’s those fine details that make you love a language, no?