Work

Learning to love the Capability Maturity Model

I had a job where the management were enamoured of the Capability Maturity Model (CMM) and all future planning had to be mapped onto the stages of the maturity model. I didn’t enjoy the exercise very much because, in addition to the five documented stages, there was generally an unspoken sixth: stagnation and decay, as the “continually improving” part of the Optimising stage was, in my experience, usually forgotten.

Instead, budgets for ongoing maintenance and iteration were cut to the bone so that the greatest amount of money could be extracted from the customers paying for the product.

Some government departments I have had dealings with had a similar approach: they would budget capital investment for the initial development of software or services and then allocate nothing for their upkeep except fixed costs such as on-premises hosting for 20 years (because why would you want to do anything other than run your own racks?).

This meant that five years into this allegedly ongoing-cost-free paradise, services were breaking down, no budget was available to address security problems, none of the original development team were around to discuss the issues, and the bit rot of the codebase was making a rewrite the only feasible response, undercutting the entire budgetary argument for amortisation.

A helpful model misapplied

So generally I’ve not had a good experience with people who use the model. And that’s a shame because recently I’ve been appreciating it more and more. If you bring an Agile mindset to the application of the CMM, seeing it as a way of describing the lifecycle of a digital product within a wider cycle of renewal and a growing understanding of your problem space, then it is a very powerful tool.

In particular, some product delivery practices make assumptions about the underlying maturity of the business process. Let’s take one of the classics: the product owner or subject matter expert. Both Scrum and Domain-Driven Design assume that there is someone who understands how the business is meant to work and can explain it clearly in a way that can be modelled or turned into clear requirements.

However, this can only be true at Level 2 (Repeatable) at the earliest, and generally the assumption of a lot of Agile delivery methods is that the business is at Level 4 (Managed). Any time a method asks for clear requirements or the ability to quantify the value returned through metrics, you are in the later stages of the maturity model.

Lean Startup is one of the few that actually addresses the problems and uncertainty of a Level 1 (Initial) business. It focuses on learning and on trying to lay down foundations that are demonstrated to be consistent and repeatable. In the past I’ve heard a lot of argument about the failings of the Minimum Viable Product and the need for a Minimum Loveable, Minimum Marketable or some other more developed concept of Product. Often the people who make these arguments seem confused about where they are in terms of business maturity.

The Loveable Product often tries to jump to Level 3 (Defined), enshrining a particular view of the business or process based on the initial results. Sometimes this works, but it is just as likely to get you to a dangerous cul-de-sac where the product is too tailored to a small initial audience and needs to be reworked if it is to meet the needs of the larger potential target audience.

John Cutler talks about making bets in product strategy, and this seems a much more accurate way to describe product delivery in the early maturity levels. Committing more effort without validation is a bigger bet; often in an early-stage business you can’t do that much validation, so if you want to manage risk it has to be through the size of the commitment you’re making.

Go-to-market phases are tough partly because they explicitly exist in these low levels of capability maturity: often you as an organisation and your customers are trying to put together a way of working with few historic touchpoints to reference. It’s natural that this situation is going to be a bit chaotic and ad hoc. That’s why techniques that focus on generating understanding and learning are so valuable at this stage.

The rewards of maturity

Even techniques like Key Performance Indicators are highly dependent on the underlying maturity. When people talk about the need to instrument a business process, they often have an unspoken assumption that one exists and just needs to be translated into a digital product strategy of some kind. That assumption can often be badly wrong, and it turns out the first task is actually traditional business analysis to standardise what should be happening, and only then instrumenting it.

In small businesses in particular there is often no process other than the mental models of a few key staff members. The key task is to try and surface those mental models (which might be very successful and profitable; immature doesn’t mean not valuable) into external artefacts that are robust enough to go through continuous improvement processes.

A lot of businesses jump into Objectives and Key Results (OKRs), and as an alignment tool they can be really powerful, but if you are not at that Level 4 (Managed) stage then the Key Results often boil down to activities completed rather than outcomes. In fairness, at Level 5 (Optimising) the two can often be the same: Intel’s original OKRs seem very prescriptive compared to what I’ve encountered in most businesses, but Intel had a level of insight into what was required to deliver their product that most businesses don’t.

If you do get to that Level 5 (Optimising) space then you can start to apply a lot of buzzy processes with great results. You can genuinely be data-driven, you can do multivariate testing, you can apply RICE, you can drive KPIs with confidence that small gains are sustainable and real.

Before you’re there, though, you need to look at how to split your efforts between maturing processes, enabling consistency and digital product delivery, rather than doing product delivery alone.

Things that work across maturity stages

Some basic techniques work at every stage of maturity: continual improvement (particularly as expressed through methods like total quality management), basic business intelligence that quantifies what is happening (without necessarily being able to analyse or compare it), and creating focus.

However, until you get to Level 2 (Repeatable), the value of most techniques based on value return or performance improvement is going to be almost impossible to assess. To some extent the value of a digital product at Level 1 (Initial) is to offer a formal definition of a process and subject it to analysis and revision. Expressing a process in code and seeing what doesn’t work in the real world is a modelling exercise in itself (but sadly a potentially expensive one).

Learning to love the model

The CMM is a valuable way of understanding a business, and used as a tool for insight rather than cost-saving it can help you judge whether certain Agile techniques are going to work or not. It also helps you recognise when you should be relying more on your understanding and expertise than on data.

But please see it as a circle rather than a purely linear progression. As soon as your technology or business context changes, you may be experiencing a disruptive change that means rethinking your processes rather than patching and adapting your current ones. Make sure to reassess your maturity against your actual outputs.

And please always challenge people who argue that product or process maturity is an excuse to strip away the capacity to continually optimise because that simply isn’t a valid implementation of the model.

Work

March 2024 month notes

Dependabot under the hood

I spent a lot more time this month than I was expecting with one of my favourite tools, GitHub’s Dependabot. It started when I noticed that some of our projects were not getting the security updates that others were. I know it is possible for updates to be suspended on projects that neglect their updates for too long (I should really archive some of my old projects), but checking the project settings confirmed that everything was set up correctly and there was nothing that needed enabling.

Digging in, I wondered how you are meant to view what Dependabot is doing. You might think it is implemented as an Action or something similar, but in fact you access the information through the Insights tab.

Once I found it, though, I discovered that the jobs had indeed been failing silently (I’m still not sure if there’s a way to get alerted about this) because we had upgraded our Node version to 20 but had the option engine-strict switched on. It turns out that Dependabot runs on its own images and those were running Node 18. It may seem tempting to insist that your CI uses the same version as your production app, but in the case of CI actions there’s no need to be that strict; after all, they are just performing actions in your repository management that aren’t going to hit your build chain directly.

Some old dependencies also caused problems in trying to reconcile their target version, the package.json Node engine and the runtime Node version. Fortunately these just highlighted some dependency cruft and deprecated projects that we needed to cut out of the project.

It took a surprising amount of time to work through the emergent issues but it was gratifying to see the dependency bundles flowing again.

Rust

I started doing the Rustlings tutorial again after maybe a year in which I’d forgotten about it (having spent more time with Typescript recently). This is a brilliant structured tutorial of bite-sized introductions to various Rust concepts. Rust isn’t that complicated as a language (apart from its memory management), but I’ve found that the need to have everything right for the code to compile means you tend to need dedicated time to learn it, and it is easy to hit some hard walls that can be discouraging.

Rustlings allows you to focus on just one concept and scaffolds all the rest of the code for you, so you’re not battling a general lack of understanding of the language structure and can just focus on one thing like data structures or library code.

Replacing JSX

Whatever the merits of JSX, it introduces a lot of complexity and magic into your frontend tooling, and I’ve seen a lot of recommendations that it simply isn’t necessary now that tagged template literals are available. I came back to an old Preact project this month that I had built with Parcel. The installation had a load of associated security alerts, so on a whim I tried it with ViteJS, which mostly worked except for the JSX compilation.

Sensing a yak to shave, I started to look at adding in the required JSX plugin but then decided to see if I really needed it. The Preact website mentioned htm as an alternative that has no dependencies. It took me a few hours to understand and convert my code, and I can’t help but feel that eliminating a dependency like this is generally just a good idea.

The weirdest thing about htm is how faithful it is to the JSX structure. I was expecting something a bit more, well, HTML-ly, but props and components work pretty much exactly as they do in JSX.
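As a minimal sketch of what the conversion looks like (assuming a Preact project with htm installed; the Greeting component is hypothetical), you bind htm to Preact’s createElement and swap JSX for a tagged template literal:

```javascript
import { h, render } from 'preact';
import htm from 'htm';

// Bind htm to Preact's createElement to get a JSX-like `html` tag.
const html = htm.bind(h);

// Props and children work much as they do in JSX.
function Greeting({ name }) {
  return html`<p class="greeting">Hello, ${name}!</p>`;
}

render(html`<${Greeting} name="world" />`, document.body);
```

The template is parsed at runtime, so there is no build step involved (htm can also be precompiled if that overhead matters).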

Postgres news

A Postgres contributor found a backdoor targeting SSH that had required an extensive amount of social engineering to put in place. If you read his analysis of how he discovered it, it seems improbable that it would ever have been found. Some people have said this is a counterpoint to “many eyes make bugs shallow”, but the real problem seems to be how we should be maintaining mature open-source projects that are essentially “done” and just need care and oversight rather than investment. Without wanting to centralise open source, it feels like foundations actually do a good job here by allowing these kinds of projects to be brought together and have consistent oversight and change management applied to them.

I read the announcement of pgroll, which claims to distil best practice for Postgres migrations regarding locks, interim compatibility and continuous deployment. That all sounds great, but the custom definition format made me feel that I wanted to understand it a little better, and, as above, who is going to maintain this if it is a single company’s tool?

Postgres was also compiled to WASM and made available as an in-memory database in the browser, which feels a bit crazy but is also awesome for things like testing. It is also a reminder of how WebAssembly opens up the horizons of what browsers can do.

Hamstack

Another year, another stack. I felt Hamstack was tongue in cheek, but the rediscovery of hypermedia does feel real. There’s always going to be a wedge of React developers, just like there will be Spring developers, Angular developers or developers of anything else that had a hot moment at some point in tech history. However, it feels like there is more space to explore web-native solutions now than there was in the late 2010s.

This article also introduced me to the delightful term “modulith”, which perfectly describes the pattern that I think most software teams should follow until they hit the problems that lead to other solution designs.

Programming

Redis: not one fork but two

Redis made a license change (see Hashicorp before them) and, as day follows night, forks duly appeared. Excitingly, this time there are two alternatives to choose from: Valkey, which seems to have more corporate support, and Redict, which is more independent and is being championed by the person behind SourceHut, who is more than a bit of a Marmite figure.

It was also interesting to see that both projects opted for an “io” domain despite the ethical issues associated with it (a balanced summary if you’re unfamiliar). It is a shame that the “dev” domain hasn’t proved a bit more popular.

Work

2023: Year in review

2023 felt like a very chaotic year, with big changes in what investors were looking for, layoffs that often felt one step away from panic, a push from businesses to return to the office (often without thinking through what that would look like) and a re-evaluation of the technical truisms of the last decade. So much happened that I think that’s why it’s taken so long to process: it feels like lots of mini-years packed into one.

A few themes for the year…

Typescript/Javascript

So I think 2023 might be the year of Peak React, and of Facebook frontend in general. With Yarn finally quiet-quitting and a confused React roadmap that can’t seem to pose a meaningful answer to its critics, we’re finally getting to a place where we can start to reconsider what frontend development should look like.

The core Node/NPM combination seems to have responded to the challenges better than the alternative runtimes and also seems to be sorting out its community governance at a better clip.

Of course, while we might have got to the point that not everyone should be copying Facebook, we do seem to have a major problem with getting too excited about tooling provided by companies backed by VC money with unclear goals and benefits. If developers had genuinely learned anything then they might be more critical of Vercel and Bun.

I tried Deno and I quite liked it; I’d be happy to use it. But if you’re deploying Javascript to NodeJS servers then Typescript is a complex type hinter that transpiles to a convention increasingly out of step with vanilla Javascript. The trick of using JSDoc’s ts-check seems like it could provide the checking benefits of Typescript, along with the Intellisense experience in VSCode that developers love, but without the need to actually transpile between languages and all the pain that brings.
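As a minimal sketch of the approach (the function is hypothetical), you keep plain .js files and let the Typescript checker validate them from JSDoc annotations:

```javascript
// @ts-check

/**
 * Calculate a gross price from a net price.
 * @param {number} net - the net price
 * @param {number} [rate] - the tax rate, defaulting to 20%
 * @returns {number}
 */
function gross(net, rate = 0.2) {
  return net * (1 + rate);
}

// The checker flags this call: a string is not assignable to a number.
gross('10');
```

The same annotations drive VSCode’s Intellisense, and you can enforce them in CI with `tsc --allowJs --checkJs --noEmit` without any transpilation step.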

It’s also good news that Javascript is evolving and moving forwards. Things seem to have significantly improved in terms of practical development for server-side Javascript this year, and the competition in the ecosystem is actually driving improvement in the core, which is very healthy for a language community.

Ever improving web standards

I attended State of the Browser again this year and was struck by how much has improved: the adoption of new standards like Web Components, incremental improvements in CSS so that more and more functionality is now better achieved with standards-based approaches, and how many historic hacks are now counter-productive.

It is easy to get used to the ubiquity of things like Grid or the enhanced Flexbox model, but these are huge achievements, and the work going on to allow slot use in both your own templates and the default HTML elements is really impressive and thoughtful.

Maybe the darker side of this was the steady erosion of browser choice, but even here the Open Web Advocacy group has been doing excellent, often thankless work to keep Google and Apple accountable and pushing to provide greater choice to consumers in both the UK and EU.

Overall I feel very optimistic that people understand the value of the open web and that the work going on in its foundations is better than ever.

Go

The aphorism about chess, that the game is easy to learn but hard to master, applies equally well to Go in my view. It is easy to start writing code and the breadth of the language is comparatively small. However, the lack of batteries included means that you are often left having to implement relatively straightforward things like sets yourself, or having to navigate what the approved third parties are for the codebase you’re working on.

The fact that everyone builds their web services from very low-level primitives and then each shop has their own conventions about middleware and cross-cutting concerns is really wearisome if you are used to language communities with more mature conventions.

The type system is also really anaemic; it feels barely there. A million types of int and float, string and “thing”. Some of the actual type signatures in the codebases have felt like “takes a thing and a thing and returns a thing”. Structs are basically the same as their C counterparts, except there’s a more explicit syntax around pointers and references.

I have concerns that the language doesn’t have good community leadership and guidance; it still looks to Google, and Google do not feel like good stewards of the project. The fact that Google is funding Rust for its critical work (such as Android’s operating layer) and hasn’t managed to retire C++ from its blessed languages is not a good look.

That said, most projects that might have been done in Java are probably going to be easier and quicker in Go, and most of the teams I know that have made the transition seem to have been pretty effective compared to the classic Spring web app.

It is also an easier language to work with than C, so it’s not all bad.

The economy

I’m not sure the economy is necessarily in that bad a shape, particularly compared to 2008 or 2001, but what is definitely true is that we had gotten very used to near-zero interest rates and we did not adapt to 5% interest rates very well at all.

It feels like a whole bunch of common-place practices are in the process of being re-evaluated. Can’t get by without your Borg clone? Maybe you can get by with FTP-ing the PHP files to the server.

Salaries were under pressure due to the layoffs, but inflation was in double digits, so people’s ability to take a pay cut wasn’t huge. I think the net result is that fewer people are now responsible for a lot more than they were, and organisations with limited capacity tend to be more fragile when situations change. There’s the old saw about being just one sick day from disaster, and it will be interesting to see whether outages become more frequent and more accepted as a trade-off for the associated cost savings.

Smaller teams and smaller budgets are the things that feel like they are most profoundly going to reshape the development world in the next five years. Historically there’s been a bit of an attitude of “more with less” but I feel that this time it is about setting realistic goals for the capacity you have but trying to have more certainty about achieving them.

Month notes

I started experimenting with month notes in 2023. I first saw week notes be really effective when I was working in government, but it was really hard to write them at a small company where lots of things were commercially sensitive. It is still a bit of a balance to try and focus on things that you’re personally learning rather than on work, when often the two can easily be conflated, but I think it’s been worth the effort.

If nothing else, the act of noting things down as they seem relevant, followed by the separate act of distillation, helps you reflect on the month and what you’ve been doing and why.

Web Applications

Alternative Mastodon frontends

Mastodon servers provide a CORS-enabled API that allows people to develop completely local alternative frontends that you can freely try with your existing account.

This means that you actually have a lot of options if you don’t like the default Mastodon web experience (which I feel is true of quite a few people). I’ve highlighted a few that I’ve been using in this post.

With these frontends you sign in using OAuth, but the token is stored locally, so you may need to authenticate multiple times across different devices, and you can simply clear local storage to stop using the frontend; no server-side accounts should be involved.
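As a minimal sketch of why this works (the instance URL and storage key here are hypothetical), each frontend is just a static page calling the instance’s REST API from the browser with a bearer token:

```javascript
// A token obtained via the OAuth flow and kept only in the browser.
const instance = 'https://mastodon.example';
const token = localStorage.getItem('accessToken');

// CORS headers on the API allow a purely client-side app to call it directly.
const response = await fetch(`${instance}/api/v1/timelines/home`, {
  headers: { Authorization: `Bearer ${token}` },
});
const statuses = await response.json();
console.log(statuses.map((status) => status.account.acct));
```

Clearing local storage removes the token, which is all that ties the frontend to your account.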

Pinafore

Pinafore (Github) has been one of my favourite interfaces, being very simple and clear with a very pure central column.

However, active development has sadly been discontinued, although it still works pretty well in practice and I continue to prefer it for posting. It’s worth reading the article to see how stressful it can be to maintain open-source projects and also how easy it is to end up in a dead end when choosing frontend technologies.

Phanpy

Phanpy (Github) does a really good job of rendering threads and also periodically highlights posts based on Boosts in the timeline, allowing you to pick up on conversations that you might otherwise have missed.

I’m not sure I’m getting the best out of it currently, but I have started using it more on the weekends to try and catch up on accounts I don’t post on that frequently.

Phanpy seems to have a lot of positive buzz, but it hasn’t been an immediate hit for me and I can’t quite articulate why that is. It definitely makes it easier to follow conversations between people you’re following, but there is maybe something in the post layout of the alternatives that I prefer.

Elk

Elk (Github) is a kind of eternal alpha that I’ve dipped in and out of a little bit. It has a clearer design, from my perspective, than the default Mastodon experience, but it really shines with images: it does a much better job of displaying pictures in the timeline, getting heights right and highlighting multiple pictures in a post.

It’s definitely my preferred way of looking at nature and travel photography posts.

Work

February 2024 month notes

Postgres

Cool thing of the month is pg-mem, a NodeJS in-memory database with a Postgres-compatible API. It makes it easy to create very complete integration or unit tests covering both statement testing and object definitions. So far everything that has worked with pg-mem has also worked flawlessly against both Docker-ised Postgres instances and CloudSQL Postgres.

The library readme says that containers for testing are overkill and it has delivered on that claim for me. Highly recommended.
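As a minimal sketch of the kind of test this enables (the table and assertions are hypothetical), everything runs in-process with no container:

```javascript
import { strict as assert } from 'node:assert';
import { test } from 'node:test';
import { newDb } from 'pg-mem';

test('users can be inserted and queried', () => {
  // Each newDb() call creates a fresh in-memory Postgres-compatible database.
  const db = newDb();
  db.public.none(`CREATE TABLE users (id serial PRIMARY KEY, name text NOT NULL)`);
  db.public.none(`INSERT INTO users (name) VALUES ('alice')`);

  const rows = db.public.many(`SELECT name FROM users`);
  assert.equal(rows[0].name, 'alice');
});
```

pg-mem can also hand you adapters that mimic common clients, so application code under test doesn’t need to know it isn’t talking to a real server.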

Less good has been my adventures in CloudSQL’s IAM world. A set of overlapping work requirements means that the conventional practice of using roles and superuser permissions is effectively impossible, so I’ve been diving deeper than I ever expected to go into the world of Postgres’s permission model.

My least favourite discovery this month has been that it is possible to successfully grant a set of permissions to a set of users without any errors (admittedly via a Terraform module; I need to check whether Postgres complains directly about this) and yet still have those permissions denied by the permission system.

The heart of the problem seems to be that the owner of the database objects defines the superset of permissions that can be accessed by other users, but you can happily grant other users permissions outside that superset without error; the problem only appears when they try to use the permission.

The error thrown was reported on a table providing a foreign key constraint, so more than a few hours were spent wondering why the user could read that table but then get permission denied on it. The answer seems to be that the insert into the child table is what causes the violation: validating the constraint against the referenced table is the step that trips the permission system.

I’m not sure any of this knowledge will ever be useful again because this setup is so atypical. I might try and write a DevTo article to provide something for a future me to Google but I’m not quite sure how to phrase it to match the query.

Eager initialisation

I learnt something very strange about the Javascript test data generation library FakerJS this month, but it is just a specific example of libraries that don’t make an effort to lazy-load their functionality. I’ve come across this issue in Python, where it affected start times in on-demand code; in Java, where the assumption that initialisation is a one-time cost meant that multiple deployments a day stopped the price ever being amortised; and now I’ve encountered it in Javascript.

My takeaway is that it is important to [set aggressive timeouts](https://nodejs.org/api/cli.html#--test-timeout) on your testing suite rather than take the default of no timeouts. This only surfaced because some fairly trivial tests using the Faker data couldn’t run in under a second, which seemed very odd behaviour.

Setting timeouts also helps surface broken asynchronous testing and makes it less tedious to wait for the test suite to fail or hang.
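As a minimal sketch with the built-in runner (the one-second budget and the test body are arbitrary choices), the timeout can be set per test or for the whole suite on the command line:

```javascript
import { test } from 'node:test';
import { faker } from '@faker-js/faker';

// Without a timeout this test would just be mysteriously slow if the library
// does expensive eager initialisation; with one, it fails fast and visibly.
test('generates a plausible user name', { timeout: 1000 }, () => {
  const name = faker.person.fullName();
  if (name.length === 0) throw new Error('expected a generated name');
});
```

The equivalent blanket setting is `node --test --test-timeout=1000`, which applies the same budget to every test in the run.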

Television

Are hackers technology cynics?

My wife and I recently watched the confused A Murder at the End of the World, which talks a lot about hacking while not always being clear about what that means in the show (perhaps just using computers). One of the characters is openly delighted by augmented reality, robotic construction systems and AI assistants, and is surprisingly okay with pervasive surveillance. My wife asked whether hackers would genuinely be so excited by technology when the downsides are clear even to a non-technical person. She expected a hacker to be much more cynical about emergent technology.

Generally I’ve found that people who work in technology are very excited and optimistic about it. There is a general positive sentiment towards new things and a general willingness to overlook the problems that come with them. As a simple example, while we’ve started to talk about sustainability on the web as a community, we’re nowhere near ready to talk about the massive inefficiency and power consumption of most conventional Machine Learning and AI techniques.

Another interesting example is climate change where most technologists and engineers believe that a technological solution to the problem will be invented, even if they personally have no idea how that might come about.

There are technologists who are more sceptical, though, and it is often through the efforts of these often marginalised but determined individuals that I’ve been made aware of problems in current and proposed systems. These people rarely think that technological progress or scientific advances are bad. It is rather that they recognise history indicates not every invention is benign, and that one cannot suspend critical thinking and give “progress” a free pass.

Beyond these archetypes, though, there also seems to be a more profound divide between those technologists with empathy and those who think of themselves as having some higher insight into technology than most. If you think that you might suffer at the hands of a technology’s defects, as non-white people do with facial recognition, then you are much more likely to be critical in your assessment of it.

If you think the problems with a technology can be blamed on people not being smart enough to understand it (such as cryptocurrency), then you judge the new development by how it affects you rather than society as a whole.

Take robots: a technologist is unlikely to be impacted by the consequences of more advanced automation and therefore will happily share videos of dancing robots that are intended for military or policing purposes. Those robots are never going to replace a technologist’s job, and they are unlikely to hunt down and kill them. Their perception of the impact versus the benefit is going to be wildly different.

Overall then I think that the show was probably right in its depiction of technologists as being delighted by emergent technology and blind to or even surprised by the negative consequences of its adoption. The lesson to take is that maybe we should cherish our cynics more.

Programming

How to call instance methods by name on a class in Typescript

I recently wanted to parameterise a test so that the method under test was itself a parameter.

This is easy in Javascript:

```javascript
const myClass = new MyClass();

['methodA', 'methodB'].forEach((methodName) => myClass[methodName]());
```

But when you try this naively in Typescript it fails with a message that the class cannot be indexed by the type string.

The method interfaces of the class actually form a type that any index needs to satisfy, and this led me to the keyof operator, which produces this type.

As I was working on a test I didn’t need a strict type check, so I could simply declare my string as keyof MyClass and this resolved the type complaint.
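A minimal sketch of the pattern (MyClass and its methods are hypothetical):

```typescript
class MyClass {
  methodA(): void { console.log('A'); }
  methodB(): void { console.log('B'); }
}

const myClass = new MyClass();

// keyof MyClass is the union 'methodA' | 'methodB', so indexing type-checks.
const methodNames: (keyof MyClass)[] = ['methodA', 'methodB'];
methodNames.forEach((methodName) => myClass[methodName]());
```

Typing the array rather than casting each string keeps the check honest: misspelling a method name is still a compile-time error.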

If the code was actually in the production paths then I would be warier of simply casting and would probably try to avoid dynamic programming altogether, because it feels like working around the type checking that I wanted from Typescript in the first place.

I’m not sure how I expected this to work, but I was kind of expecting the type-checker to be able to use the class definition to make the check, rather than a more generic reflection mechanism that works for objects too, at the cost of requiring more annotation of your intent.

Work

January 2024 month notes

Water CSS

I started giving this minimal element template a go after years of using various versions of Bootstrap. It is substantially lighter in terms of the components it offers, with the navigation bar probably being the one component that I definitely miss. The basic forms and typography are proving fine for prototyping basic applications though.

Node test runner

Node now has a default test runner and testing framework. I’ve been eager to give it a go as I’ve heard that it is both fast and lightweight, avoiding the need to select and include libraries for testing, mocking and assertions. I got the chance to introduce it in a project that didn’t have any tests, and I thought it was pretty good, although its default text output felt a little unusual and the alternative dot notation might be a bit more familiar.

It’s interesting to see that the basic unit of testing is the assertion, something it shares with Go. It also doesn’t support parameterised tests, which again is like Go, which has a pattern of table-driven tests implemented with for loops, except that Go allows more control over the dynamic test case naming.
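A minimal sketch of the runner with a table-driven loop in the Go style (the function under test is hypothetical):

```javascript
import { strict as assert } from 'node:assert';
import { test } from 'node:test';

// A hypothetical function under test.
const slugify = (title) => title.toLowerCase().replaceAll(' ', '-');

// Table-driven cases driven by a plain for loop, as in Go.
for (const [input, expected] of [
  ['Hello World', 'hello-world'],
  ['Node Test Runner', 'node-test-runner'],
]) {
  test(`slugify(${input})`, () => {
    assert.equal(slugify(input), expected);
  });
}
```

Run with `node --test`; no third-party test, mocking or assertion dependencies are involved.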

I’d previously moved to the Ava library and I’m not sure there is a good reason not to use the built-in alternative.

Flask blueprints

In my personal projects I’ve tended to use quite a few cut-and-paste modules, and over the years they tend to drift and get out of sync, so I’ve been making a conscious effort to learn about and start adopting Flask Blueprints. Ultimately I want to try and turn these into personal module dependencies that I can update once and use in all the projects. For the moment, though, it is interesting how the Blueprint format is pushing me to do some things better, like logging (to understand what is happening in the blueprint), and to structure the different areas of the application so that they are quite close to Django apps. Various pieces of functionality are now starting to be associated with a URL prefix, which makes it a bit easier to create middleware that is registered as part of the Blueprint rather than relying on imports and decorators.

Web components

I’ve been making a bit of progress with learning about web components. I realised that I was trying to do too much initially, which is why they were proving complicated. Breaking things down a bit has helped, with an initial focus on event listeners within the component. I’m also not bringing in external libraries at the moment but have got as far as breaking things up into [ESM modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), which has mostly worked out so far.
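A minimal sketch of that starting point (the tag name and behaviour are hypothetical): a custom element that wires up its own event listener in connectedCallback, shipped as an ESM module.

```javascript
// counter-button.js - an ESM module defining a small custom element.
class CounterButton extends HTMLElement {
  #count = 0;

  connectedCallback() {
    this.innerHTML = '<button>Clicked 0 times</button>';
    // Listen on the host element; clicks on the inner button bubble up to it.
    this.addEventListener('click', () => {
      this.#count += 1;
      this.querySelector('button').textContent = `Clicked ${this.#count} times`;
    });
  }
}

customElements.define('counter-button', CounterButton);
```

Loaded with `<script type="module" src="counter-button.js"></script>`, the page can then just use `<counter-button></counter-button>` with no build step.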

Programming, Work

December 2023 month notes

Web Components

I really want to try and understand these better, as I think they offer a standards-based, no-build solution for components combined with a better way of dropping lightweight vanilla JS interactivity into a page where I might have used AlpineJS before now.

I’m still at the basic learning stage, but I’ve been hopping around the Lean Web Club tutorials to get a sense of the basics. One of the things that is already interesting is that Web Components wrap their child HTML in quite a clear and scoped way, so you can use them quite easily to mix server-rendered content with runtime dynamic content. I haven’t found an elegant way to do that with other frameworks.
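A minimal sketch of that mixing (the element name is hypothetical): the server renders ordinary HTML inside the tag and the component enhances it at runtime, leaving the server-rendered text as the fallback.

```javascript
// Server renders: <posted-ago datetime="2023-12-01T09:00:00Z">1 December 2023</posted-ago>
class PostedAgo extends HTMLElement {
  connectedCallback() {
    const timestamp = Date.parse(this.getAttribute('datetime') ?? '');
    if (!Number.isNaN(timestamp)) {
      const days = Math.round((Date.now() - timestamp) / 86_400_000);
      // Replace the server-rendered child text only when we can improve on it.
      this.textContent = `${days} days ago`;
    }
  }
}

customElements.define('posted-ago', PostedAgo);
```

If the script never runs, the server-rendered date still displays, which is exactly the graceful mixing of server and runtime content described above.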

Scoping and Shaping

I attended an online course by John Cutler which was a pretty good introduction to the idea of enabling constraints. Most times I attend courses and classes to learn something, but every now and then it feels good to calibrate on what seems obvious and easy and to understand other people’s struggles with what seems like basic stuff.

A few takeaways: being a good stakeholder is an underrated skill, and being clear about the boundaries of what you’re willing to accept is important in allowing teams working on problems to be successful. If someone says they can’t work with your constraints then it’s not a good fit; if no-one can work with your constraints then you either need to do the work yourself or give up on it.

The most insightful piece of the meeting for me came around the psychology of leaders in the new economy, where profits are more important than growth and experimentation. John’s theory is that this pressure makes it harder for executive teams to sign off on decisions or to give teams a lot of leeway in approaching the problem. To provide meaningful feedback to executing teams, senior stakeholders feel they need more information and understanding about the decisions they are making, and the more hierarchical an organisation, the more information needs to go up the chain before decisions can come back down.

Before zero interest rates there used to be a principle that it wasn’t worth discussing something that wouldn’t make back the cost of discussing it. Maybe rather than doing more with less we should be trying to get back to simply not doing things unless they offer a strong and obvious return.

How I learned to love JS classes

I have never really liked or seen the point in Javascript’s class functionality. Javascript is still a prototype-based language, so the class syntax is basically complex syntactic sugar. React’s class-based implementation was complex in terms of how the class lifecycle and scope interacted with the component equivalents, so I was glad to see it replaced by stateless components. However, classes are pretty much the only way you can work with Web Components, so I’ve been doing a lot more with them recently than previously.

I’ve also been dropping them into work projects, although this raises some interesting questions when you’re using Typescript, as the difference between a class and an interface is quite blurry there. Presumably classes should either have static elements or encapsulate behaviour to make the inheritance meaningful; otherwise it’s simply an interface that the implementing class needs to provide.
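A minimal sketch of that blurriness (the names are hypothetical): Typescript checks classes structurally, so a behaviour-free class and an interface are interchangeable.

```typescript
interface Point {
  x: number;
  y: number;
}

// A behaviour-free class: structurally identical to the interface above.
class PointClass {
  constructor(public x: number, public y: number) {}
}

// Both directions type-check, because only the shape matters.
const a: Point = new PointClass(1, 2);
const b: PointClass = { x: 3, y: 4 };
```

Once the class gains private state or real behaviour, the plain object stops being assignable and the class earns its keep.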
