Ten Usability Heuristics

Jakob Nielsen writes about the 10 Usability Heuristics for UI Design. These are rules of thumb that most often (but not always) hold true:

  1. System status should always be visible.
  2. The system should use the same words as the users.
  3. All actions should have easy “I did not want to do that” functions like undo.
  4. The system should adhere to standards and be consistent in its presentation.
  5. The system should avoid erroneous states with constraints and useful defaults.
  6. Make actions, elements and options visible to replace recall with recognition.
  7. Frequent actions should be personalisable.
  8. Reduce the interface to required information; form follows function.
  9. Support users in detecting, diagnosing and recovering from errors.
  10. Provide useful, searchable help.

The heuristics have persisted since 1996, back when interface design was mostly clunky and grey, and even several revisions have kept them mostly intact. There seems to be something to them if they survive that long.

Patterns 42, from Arc42

The team behind arc42 (the architecture documentation template) collects a set of patterns on a separate subsite: https://patterns.arc42.org/home/

The site seems to be somewhat current, with most content on the corresponding GitHub repository dating from about 2018.

The patterns mostly cover large-scale system design, but include some of the better-known GoF patterns as well. Patterns include:

  • MVC/MVVM
  • Hexagonal Architecture
  • MicroServices
  • Anti-Corruption-Layers
  • Design Patterns from GoF
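To sketch what the hexagonal (ports and adapters) style means in practice, here is a minimal, hypothetical C# example; all names are invented for illustration and do not come from the arc42 site. The domain core defines a port as an interface, and adapters on the outside implement it.

```csharp
using System.Collections.Generic;

// Hypothetical sketch of hexagonal architecture; all names are invented.

public sealed record Story(string Name);

// The port: an interface owned by the domain core, describing what
// the core needs from the outside world.
public interface IStoryRepository
{
  Story? FindByName(string name);
}

// The domain core depends only on the port, never on a concrete adapter.
public sealed class StoryService
{
  private readonly IStoryRepository repository;

  public StoryService(IStoryRepository repository)
    => this.repository = repository;

  public bool Exists(string name)
    => repository.FindByName(name) is not null;
}

// An adapter implements the port on the outside: backed by a database
// in production, or, as here, by a dictionary for tests.
public sealed class InMemoryStoryRepository : IStoryRepository
{
  private readonly Dictionary<string, Story> stories = new();

  public void Add(Story story) => stories[story.Name] = story;

  public Story? FindByName(string name)
    => stories.TryGetValue(name, out var story) ? story : null;
}
```

The point of the shape: swapping the database adapter for the in-memory one requires no change to StoryService, which keeps the domain core testable in isolation.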

Remote Working Guide at GitLab

The folks at GitLab have been working fully remotely not just since the current pandemic, but ever since they started. They have gathered a lot of experience with remote work, and have created a quite voluminous Remote Playbook (which you can download there) as well as an equally detailed guide to working all-remote.

They distill their philosophy into a Remote Manifesto, similar to the Agile one:

  1. Hire people from everywhere
  2. Enable flexible working hours
  3. Prefer written documentation to oral
    • This is a little surprising to me. While I do know that documentation is obviously important, verbal communication currently feels like the more natural, and thus easier, way to me.
  4. Prefer written processes to guided training
    • Similar to above.
  5. Public sharing of information
  6. Let anyone edit any document
  7. Communicate asynchronously rather than over synchronous channels
  8. Results over work time
  9. Prefer formal communication channels
    • Curious about this as well, especially how they define the difference between formal and informal channels. Does it suffice to have a company Slack server?

Guidelines for Code Reviews at GitLab

The GitLab blog writes about code reviews:

https://about.gitlab.com/blog/2020/06/08/better-code-reviews/

Important tools for a code review:

  • Self reviews: Before assigning a PR, slip into the reviewer role yourself.
  • Checklist for the author:
    • Re-read every line of code
    • Test the code locally
    • Write a test for every change
    • Write a clear description and update it after every round of feedback
    • At least one screenshot per PR
    • Answer potential questions in advance
  • Conventional comments: Express emotion/intention in comments. See https://conventionalcomments.org/
  • Patch files: Attach patches to comments to make suggestions easier
  • Fairness: As both author and reviewer:
    • Be resilient, reliable, fair and respectful.
    • Always look for ways to be fair to everyone.
    • As the author: (among other things) try to anticipate questions about the PR, and explain odd parts up front
    • As the reviewer: watch out for “unconscious bias”: every “should” or “must” needs a justification (a link to documentation or a convention), otherwise it is just a personal preference!
    • If you disagree: ask why, instead of demanding a different solution.
  • Follow-up: Move larger remarks/discussions into separate PRs or meetings.
    • Make sure these actually take place!
  • The art of the GIF: Use GIFs to make PRs more emotional and human.
  • Small iterations: Break PRs into smaller parts. The smaller, the easier to review!
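As an example of the conventional-comments format from the linked site: a comment is prefixed with a label and optional decorations, following the pattern `<label> [decorations]: <subject>`. The label ("suggestion") and decoration ("non-blocking") below are from the site's vocabulary; the wording of the comment itself is invented.

```text
suggestion (non-blocking): Extract this condition into a named helper.

The double negation here took me a moment to parse; a well-named
method would make the intent obvious.
```

This way the reviewer's intent and severity are explicit, instead of the author having to guess whether a remark blocks the merge.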

BDD Testing for .NET

So I decided to add tests to my current project. I had behaviour-driven tests (BDD) in the previous Scala-based implementation, so I wanted to see whether I could get something similar for .NET.

I found Xunit.Gherkin.Quick, which uses Gherkin files to specify test cases (which it calls features). These are natural-language-style scripts that translate to a set of function calls implemented in an accompanying test class. We’ll see how this works out.

Thus, I wrote the following feature file:

Feature: StoryNames
  As the client
  I can generate Url Names for Stories,
  so that I can use those in Urls.

  Scenario: Generate simple Url Name
    Given a scenario named "Helms Schlund"
    When I generate the urlname
    Then the urlname should be helms-schlund

And added this implementation class:

using Xunit;
using Xunit.Gherkin.Quick;

namespace GoodNight.Service.Domain.Test
{
  [FeatureFile("StoryNames.feature")]
  public sealed class StoryNames : Feature
  {
    private Domain.Story.Story story;

    private string urlname;

    [Given(@"a scenario named ""(.*)""")]
    public void CreateScenarioNamed(string name)
    {
      story = new Domain.Story.Story(name);
    }

    [When(@"I generate the urlname")]
    public void GenerateTheUrlname()
    {
      urlname = story.Urlname;
    }
  
    [Then(@"the urlname should be (.*)")]
    public void TheUrlnameShouldBe(string expected)
    {
      Assert.Equal(expected, this.urlname);
    }
  }
}

The test runner finds the implementation class, which references the feature file through the FeatureFile attribute. The individual steps of the feature then run through the matching methods on the class.
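For context, the Urlname property under test might look roughly like this. This is a hypothetical sketch, as the actual Domain.Story.Story implementation is not shown here; the slug rule is just one plausible choice.

```csharp
using System.Text.RegularExpressions;

namespace GoodNight.Service.Domain.Story
{
  public sealed class Story
  {
    public string Name { get; }

    public Story(string name)
    {
      Name = name;
    }

    // Hypothetical slug generation: lowercase the name and collapse
    // runs of characters outside a-z and 0-9 into single hyphens,
    // so "Helms Schlund" becomes "helms-schlund".
    public string Urlname
      => Regex.Replace(Name.ToLowerInvariant(), "[^a-z0-9]+", "-")
          .Trim('-');
  }
}
```

With some implementation of this shape, the scenario above passes its Then step.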

I’m rather curious whether this works out for larger test bases. The feature files do seem to promise a better overview of existing test cases.

Addendum 1: One advantage is the reduced amount of repetition. It is much simpler to repeat “Given an input of “[…]” When parsed by the parser” than to repeat the bit of code that stores the input, creates and calls the parser, and checks the result for null. I’d wager the savings grow with larger code examples.
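On that repetition point, Gherkin also offers Scenario Outlines, which, if I read the Xunit.Gherkin.Quick documentation correctly, are supported as well: the same steps run once per row of an Examples table. A sketch against the step definitions above (the second row is a made-up input):

```gherkin
Feature: StoryNames

  Scenario Outline: Generate Url Names
    Given a scenario named "<name>"
    When I generate the urlname
    Then the urlname should be <expected>

    Examples:
      | name          | expected      |
      | Helms Schlund | helms-schlund |
      | Der Wanderer  | der-wanderer  |
```

Each row binds the placeholders in angle brackets, so new cases become one table line instead of a whole new scenario.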