Semantic Versioning for Clients

Semantic Versioning is a great way to denote the changes in your software that other software must pay attention to. This is especially true for libraries, as their sole purpose is to be used in other software.

However, how do you deal with software that is not consumed by other software, but by users? (Arguably, still a large part of existing and used software.) There isn’t really a public-facing interface to version, and even if you consider the user interface to be one, your users likely don’t care.

Brett Uglow proposes a nice solution for using semantic versioning with end-user applications: Consider the system administrator to be your interface consumer. Now your environment requirements become your public API:

  • Major version increments denote new requirements such as different hardware or changed system configuration (e.g. firewall rules), or breaking changes like dropping support for a file format.
  • Minor increments are backwards-compatible: The new version can be installed over the older and will continue to work.
  • Patch versions provide bug fixes, without any new features.

While this is just one possible way to interpret semantic versions for client software, it is at least a useful one: It provides additional information to the people who have to work with the software, without confusing end-users.

Low-effort legacy code refactoring

Llewellyn Falco shows in a video on Youtube how to improve upon a piece of legacy code, using an example from the Gilded Rose Kata. He names three techniques to use:

  • Combination Testing, which gets you to full test coverage just by inspecting which paths are not yet taken (see the sketch after this list),
  • Code Coverage as Guidance, which helps you find the bits that you have not yet considered, and
  • Provable Refactorings like automatic code transformations, which help you to be sure that you don’t break the code.
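
As a rough illustration of the combination-testing idea, here is a hand-rolled sketch (not the tooling used in the video; all names are made up): run the code under test with every combination of inputs and collect the output into one text snapshot, so that later refactorings can be diffed against it.

using System;
using System.Text;

public static class CombinationTesting
{
    // Runs the subject for every combination of the given inputs and records
    // one line per combination; the resulting text is the behavioural snapshot.
    public static string RecordAllCombinations(
        Func<string, int, int, string> subject,
        string[] names, int[] sellIns, int[] qualities)
    {
        var received = new StringBuilder();
        foreach (var name in names)
            foreach (var sellIn in sellIns)
                foreach (var quality in qualities)
                    received.AppendLine(
                        $"[{name}, {sellIn}, {quality}] => {subject(name, sellIn, quality)}");

        // Compare this text against a previously approved snapshot file;
        // any behavioural change in the legacy code shows up as a diff.
        return received.ToString();
    }
}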

I have to admit that I am not yet fully convinced of this approach; however, it surely is a very useful first way to get at a (larger) legacy code base.

I have used a similar, albeit less formal, approach on a larger legacy algorithm that I rewrote, and found it useful to pin down the existing behaviour with unit tests; sadly, I did not scaffold the infrastructure to keep the unit tests around, as the development environment was not very unit-test-friendly. The approach did make the rewrite manageable, but at some point I felt it necessary to add some functional knowledge on top.

Linkdump: People in Software Development

Alistair Cockburn wrote a fascinating article back in 1999 titled “Characterizing people as non-linear, first-order components in software development”, which is still available through web.archive.org. This is a full-scale scientific paper, so get ready for some reading.

In essence, his message is that people are more important than processes:

The fundamental characteristics of “people” have a first-order effect on software development, not a lower-order effect.

He lists four characteristics of people that are relevant to software development:

1. People are communicating beings, doing best face-to-face, in person, with real-time question and answer.

2. People have trouble acting consistently over time.

3. People are highly variable, varying from day to day and place to place.

4. People generally want to be good citizens, are good at looking around, taking initiative, and doing “whatever is needed” to get the project to work.

Auto-Rebuild for Asp.Net Core

You are probably used to this from Javascript development: whenever you save a file, a separately started background process immediately rebuilds as much as required, and then refreshes a browser window, reloads a service, or whatever else is needed.

Apparently, this is a feature for Asp.Net as well, and has been for quite some time. With .Net 5, it is even more convenient, as it can be handled entirely through the command that launches the Kestrel server.

Previously, I used this command to start:

dotnet run --project GoodNight.Service.Api

dotnet supplies the watch command, which provides the auto-rebuilding behaviour. It takes as argument the dotnet command that should be supervised and restarted as required. For running the server, use this command:

dotnet watch --project GoodNight.Service.Api run

Observe that the run command has moved behind the --project flag: watch wants the project setting for itself, and will not launch without it. Luckily, it passes it on to the child command.

Elephant Carpaccio Facilitation

Time for a fun exercise: the Elephant Carpaccio Facilitation! This exercise practices breaking down user stories into… smaller user stories, and even smaller user stories, and then even smaller user stories.

The main task is to take a simple program, in the example a simple receipt calculator, and break it down into as many small requirements as possible. Yes, if your first iteration is more than “an executable program doing nothing”, then you are thinking too big. Seems quite fun to do.

I like how these very small slices still must be user-focused: There is no “technical” slice just for setting up frameworks and the like. Every slice delivers a (tiny) amount of value to users, and they can still turn out to be very, very small: The exercise allocates 7 minutes of implementation time per slice. That’s quick programming.

Null-checking in C# for classes with operator overloading

So you decided to go type safe and happen to use non-nullable reference types in C#. Nice! Further, you use a library which, of course, is not designed for this, but it still works; you add extra type checks for its return values. Everything’s fine, you’re happy.

You do notice that it even denotes some properties as optional, e.g. a CustomClass? sits there somewhere. Of course, you check the property for null before you access it, so you write something like res.Prop != null. Not fearing anything, you compile, and end up with this error message for the res.Prop:

Possible null reference argument for parameter ‘left’ in ‘bool CustomClass.operator !=(CustomClass left, CustomClass right)’

Oh, and also one for the null:

Cannot convert null literal to non-nullable reference type.

What? Well, the res.Prop might be null, of course, that’s why we are doing this after all.

As it turns out, CustomClass has a custom comparison operator. This operator, as shown in the first error message, of course requests CustomClass objects, which may not be null (they don’t have the ?, right?).

Well. So you can’t compare the object with null using !=, as that operator just does not allow null values. Luckily, since C# 7.0 the is operator supports constant patterns such as is null; with C# 9.0 it also supports the not pattern. So you can replace the test with res.Prop is not null, the comparison operator is not called, and everything is fine.
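
To make the situation concrete, here is a minimal sketch of such a class and the check (the names CustomClass, res and Prop follow the example above; everything else is made up):

#nullable enable
using System;

public class CustomClass
{
    // The overloads only accept non-nullable arguments, which is exactly
    // what the compiler complains about when one side might be null.
    public static bool operator ==(CustomClass left, CustomClass right)
        => ReferenceEquals(left, right);
    public static bool operator !=(CustomClass left, CustomClass right)
        => !(left == right);

    public override bool Equals(object? obj) => ReferenceEquals(this, obj);
    public override int GetHashCode() => base.GetHashCode();
}

public class Result
{
    public CustomClass? Prop { get; set; }
}

public static class Usage
{
    public static void Check(Result res)
    {
        // if (res.Prop != null) { ... }  // warning: possible null reference argument for 'left'
        if (res.Prop is not null)         // the pattern match bypasses the operator overload
        {
            Console.WriteLine(res.Prop);  // res.Prop is known to be non-null here
        }
    }
}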

It’s those fine details that make you love a language, no?

Sidenote: Multi-language projects

Many programming teams I know operate in two (natural) languages most of the time: English for most things online, German for internal team discussion and in fact for any non-technical communication. This holds especially true for communication with people who are not part of the team.

Luckily, I found another argument in favour of my point: Gherkin, a language (heh) for writing tests as behaviours, advocates writing the behaviours in the users’ language:

The language you choose for Gherkin should be the same language your users and domain experts use when they talk about the domain. Translating between two languages should be avoided.

They are obviously serious about this: Gherkin scripts can be written in 70 natural languages.

Dependency Injection in Frameworks

Dependency Injection as a means of inversion of control has permeated at least the opinions of most software developers (although there may be a company full of old, stubborn developers somewhere…). However, how much Dependency Injection does one really need?

When I wrote the Scala version of GoodNight using Akka Play, I found it to support an implementation style that does not require a DI framework; in fact, it did not require any dependency injection machinery at all. They call this “Compile Time Dependency Injection”, which is a very fancy term for the fact that you have an entry point where you manually create your objects and stitch them together:

class GoodnightComponents(context: Context) {
  lazy val database = slickApi.[...]
  lazy val silhouette = new SilhouetteProvider([...])
  lazy val authSignUp = new SignUp(database, silhouette, [...])
}

Interestingly, this is in fact still the concept of Dependency Injection: The individual classes are still loosely coupled, and do not create each other, but request each other (or their interfaces) in their constructors.

It turns out, this is also how Mark Seemann suggests you should write a library (in this case for C#), in order to have it compose easily with any kind of DI framework.

Now, on to inject (heh) this into the Asp.Net Core DI container. What fun.

Results: Mark wrote a separate post about how to write DI-friendly frameworks. He suggests an extremely simple approach: For each interface that a framework user can implement, also provide a factory interface. The user knows how to create the concrete types, and through the factory interface, the framework can easily request that creation.
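
A minimal sketch of that idea, with entirely hypothetical type names (not taken from Mark’s post): the framework defines the extension point together with a matching factory interface, and the user implements both, constructing the concrete type however they like.

// Defined by the framework:
public interface IMessageHandler
{
    void Handle(string message);
}

public interface IMessageHandlerFactory
{
    IMessageHandler Create();
}

// Implemented by the framework user, who alone knows how to construct the
// concrete handler (by hand, or by resolving it from their DI container):
public class LoggingHandler : IMessageHandler
{
    public void Handle(string message) => System.Console.WriteLine(message);
}

public class LoggingHandlerFactory : IMessageHandlerFactory
{
    public IMessageHandler Create() => new LoggingHandler();
}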

C#: Non-nullable reference types

As it turns out, C# has support for non-nullable reference types, that is, variables with reference types (typically object types) that may not be assigned null. The following thus gets flagged:

object foo = null;

This requires a compiler switch to be set at the project level to enable a so-called “nullable annotation context”. This tells the compiler to infer the nullable state of all variables, and to warn in any situation where a regular reference variable may become null. A typical example is a declaration without assignment, which would leave the variable with its default value of null. Of course, you can declare a variable as possibly null, just like for value types, by appending ?.
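
A small sketch of what the compiler then flags under an enabled nullable annotation context (the variable names are made up, and the comments paraphrase the warnings rather than quoting them):

string name = null;            // warning: assigning a null literal to a non-nullable reference
string? maybeName = null;      // fine: explicitly declared as nullable

int length = maybeName.Length; // warning: dereference of a possibly null reference
if (maybeName != null)
    length = maybeName.Length; // fine: the compiler tracks the null check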

It’s a pity this is hidden behind a compiler flag: Now, when reading C# code, you cannot really be sure whether it was written with nullability checking in mind. While I see the point of backwards compatibility, having this enforced for everyone everywhere would clear up this possible misunderstanding. In fact, as the checks only yield warnings, not errors, this would not even be a blocker. Of course, you can convert these warnings to errors, if you are so inclined.

You can enable the setting by adding the Nullable setting to all .csproj files:

  <PropertyGroup>
    <Nullable>enable</Nullable>
  </PropertyGroup>

Apparently, they don’t mix well with structs and arrays; I’d recommend sticking to references and lists.
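
The array gap is easy to demonstrate (variable names made up): the compiler does not track individual elements, so an array of non-nullable strings happily starts out full of nulls.

string[] names = new string[3]; // claims to hold non-nullable strings
string first = names[0];        // no warning, yet first is actually null here
int length = first.Length;      // compiles cleanly, but throws a NullReferenceException at runtime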

Also, parameters can receive a rather detailed set of attributes to describe their behaviour with respect to nullability. For example, a method can declare that it will not return null as long as a specific parameter is not null either.
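
The NotNullIfNotNull attribute from System.Diagnostics.CodeAnalysis expresses exactly that contract; here is a small sketch (the method itself is made up):

using System.Diagnostics.CodeAnalysis;

public static class PathUtils
{
    // The return value is declared to be non-null whenever "path" is non-null.
    [return: NotNullIfNotNull("path")]
    public static string? Normalize(string? path)
        => path?.Trim().Replace('\\', '/');
}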

Addendum 1: You can even restrict generic type parameters to be non-null, and this works for reference types as well. The constraint is written as where T : notnull.
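
A short sketch of such a constraint (the class is made up):

using System.Collections.Generic;

// T may be a value type or a reference type, but never a nullable one.
public class Registry<T> where T : notnull
{
    private readonly Dictionary<T, string> entries = new Dictionary<T, string>();

    public void Add(T key, string value) => entries[key] = value;
}

// Registry<string> is fine; Registry<string?> produces a warning
// under the nullable annotation context.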