OpenID Connect, as a technology for authentication, has a set of libraries in many languages. One in particular, a library for TypeScript, is appropriately called OpenYOLO: “You only login once”.
With dotnet working rather flawlessly on Linux, sooner or later one stumbles upon the need to profile a dotnet application.
Microsoft provides a handy command-line tool for this:
perfcollect. This tool records dotnet performance indicators for a manually controlled or pre-specified duration and generates a nice report from the collected data.
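For reference, a typical session might look roughly like this (the trace name is arbitrary, and the exact options may differ between versions; check the script’s own help for details):

```shell
# Download the script and make it executable (one-time setup).
curl -OL https://aka.ms/perfcollect
chmod +x perfcollect

# Install the prerequisites (perf, LTTng) -- also one-time setup.
sudo ./perfcollect install

# Start collecting; stop manually with Ctrl+C when done.
sudo ./perfcollect collect sampleTrace

# Inspect the resulting trace.
./perfcollect view sampleTrace.trace.zip
```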
Scott Brady makes the point that OAuth2 is not an authentication scheme, but an authorisation, or better yet, a delegation mechanism. He points out that tokens just provide validated access to some resource: usually a user’s data, but not necessarily; a token may even merely indicate that an application has routine access to, e.g., write to a log file.
He proposes to use OpenID Connect as the actual authentication mechanism built upon OAuth2.
What does it mean to be a Software Engineer? Or, for that matter, a Software Architect?
Brian Webb gives a possible answer on the title distinction, classifying programmers, engineers and architects. The separation into these three groups might seem useful; as usual, more investigation is necessary.
Semantic Versioning is a great way to denote the changes in your software that other software must pay attention to. This is especially true for libraries, as their sole purpose is to be used in other software.
However, how do you deal with software that is not consumed by other software, but by users? (Arguably, still a large part of existing and used software.) There isn’t really a public-facing interface to version, and even if you consider the user interface to be one, your users likely don’t care.
Brett Uglow proposes a nice solution for using semantic versioning for end-user applications: Consider the system administrator to be your interface consumer. Now your environment requirements become your public API:
- Major version increments denote new requirements such as different hardware or system configuration (e.g. firewall rules), or breaking changes like dropping support for a file format.
- Minor increments are backwards-compatible: The new version can be installed over the older one and will continue to work.
- Patch versions provide bug fixes, without any new features.
While this is just one possibility to interpret semantic versions for client software, it is at least a useful one: It provides additional information to people that have to work with the software, without confusing end-users.
- Combination Testing, which gets you to full test coverage just by inspecting which paths are not taken,
- Code Coverage as Guidance, which helps you find the bits that you have not yet considered, and
- Provable Refactorings like automatic code transformations, which help you to be sure that you don’t break the code.
I have to admit that I am not yet fully convinced of this approach; however, it surely is a very useful first way to get at a (larger) legacy code base.
I have used a similar, albeit less formal, approach for a larger legacy algorithm that I rewrote, and found it useful to inspect the existing behaviour with unit tests. Sadly, I did not scaffold the infrastructure to keep the unit tests around, as the development environment was not very unit-test-friendly. The approach did make the rewrite manageable, but at some point it feels necessary to add some functional knowledge to it.
Alistair Cockburn wrote a fascinating article back in 1999, titled “Characterizing people as non-linear, first-order components in software development”, which is still available through web.archive.org. It is a full-scale scientific paper, so get ready for some reading.
In essence, his message is that people are more important than processes:
The fundamental characteristics of “people” have a first-order effect on software development, not a lower-order effect.
He lists four characteristics of people that are relevant to software development:
1. People are communicating beings, doing best face-to-face, in person, with real-time question and answer.
2. People have trouble acting consistently over time.
3. People are highly variable, varying from day to day and place to place.
4. People generally want to be good citizens, are good at looking around, taking initiative, and doing “whatever is needed” to get the project to work.
Apparently, auto-rebuilding on file changes is a feature for Asp.Net as well, and has been for quite some time. With .Net 5, it is even more comfortable, as it can be handled fully through the command that launches the Kestrel server.
Previously, I used this command to start:
dotnet run --project GoodNight.Service.Api
Now there is the `dotnet watch` command, which provides the auto-rebuilding behaviour. It takes as argument the `dotnet` command that should be supervised and restarted as required. For running the server, use this command:
dotnet watch --project GoodNight.Service.Api run
Observe that the `run` command has moved behind the `--project` flag: `watch` wants the project setting for itself and will not launch without it. Luckily, it passes the setting on to the child command.
Time for a fun exercise: The Elephant Carpaccio Facilitation! This exercise practices breaking down user stories into… smaller user stories, and even smaller user stories, and then even smaller user stories.
The main task is to take a simple program, in the example a simple receipt calculator, and break it down into as many small requirements as possible. Yes, if your first iteration is more than “an executable program doing nothing”, then you are thinking too big. Seems quite fun to do.
I like how these very small slices still must be user-focused: There is no “technical” slice just for setting up frameworks and the like. Every slice delivers a (tiny) amount of value to users, and slices can still turn out to be very, very small: The exercise allocates 7 minutes of implementation time per slice. That’s quick programming.
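To make the idea concrete, here is a sketch of what a very early slice of such a receipt calculator might look like. The exercise does not prescribe any particular code; the I/O format and all names here are invented purely for illustration:

```csharp
// A hypothetical early slice of the receipt calculator:
// it only multiplies quantity by unit price -- no discounts,
// no taxes, no input validation yet. Later slices add those.
using System;

class Receipt
{
    static void Main()
    {
        Console.Write("Quantity: ");
        var quantity = int.Parse(Console.ReadLine() ?? "0");

        Console.Write("Unit price: ");
        var price = decimal.Parse(Console.ReadLine() ?? "0");

        Console.WriteLine($"Total: {quantity * price}");
    }
}
```

Even this fits comfortably into a 7-minute slot, which is the point of the exercise.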
So you decided to go type safe and happen to use non-nullable reference types in C#. Nice! Further, you use a library which, of course, is not designed for this; it still works, though, once you add extra type checks for its return values. Everything’s fine, you’re happy.
You do notice that it even denotes some properties as optional; e.g. a `CustomClass?` sits there somewhere. Of course, you check the property for null before you access it, so you write something like `res.Prop != null`. Not fearing anything, you compile, and end up with this error message:
Possible null reference argument for parameter ‘left’ in ‘bool CustomClass.operator !=(CustomClass left, CustomClass right)’
Oh, and also with this one:
Cannot convert null literal to non-nullable reference type.
What? Well, `res.Prop` might be null, of course; that’s why we are doing this check in the first place.
As it turns out, `CustomClass` has a custom comparison operator. This operator, as shown in the first error message, of course expects `CustomClass` objects, which may not be null (they are not declared with the `?` suffix).
Well. So you can’t compare the object with null using `!=`, as that operator just does not allow `null` values. Luckily, ever since C# 7.0, the `is` operator supports pattern matching (including `is null`); with C# 9.0 it also gained the `not` pattern. So you can replace the test with `res.Prop is not null`; the comparison operator is not called, and everything is fine.
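Pieced together, the situation might look roughly like this. `CustomClass` and `res.Prop` come from the error messages above; everything else is invented for illustration:

```csharp
#nullable enable
using System;

public class CustomClass
{
    public string Name { get; }
    public CustomClass(string name) => Name = name;

    // A custom comparison operator with non-nullable parameters --
    // this is what triggers the compiler warnings when one operand
    // is a nullable reference or the null literal.
    public static bool operator ==(CustomClass left, CustomClass right)
        => left.Name == right.Name;
    public static bool operator !=(CustomClass left, CustomClass right)
        => !(left == right);

    public override bool Equals(object? obj)
        => obj is CustomClass other && Name == other.Name;
    public override int GetHashCode() => Name.GetHashCode();
}

public class Result
{
    public CustomClass? Prop { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var res = new Result();

        // if (res.Prop != null) ...
        //   -> warning CS8604: Possible null reference argument
        //      for parameter 'left', plus CS8625 for the literal.

        // 'is not null' bypasses the custom operator entirely:
        if (res.Prop is not null)
            Console.WriteLine(res.Prop.Name);
        else
            Console.WriteLine("Prop is null");  // prints this here
    }
}
```

The pattern match tests the reference itself, so no user-defined operator gets a say in it.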
It’s those fine details that make you love a language, no?