akeno tech radar
Discover the tools and technologies that shape how we build at akeno.
Last update: January 2026
Docker Compose instead of being cloud-native
While our tech stack is in many respects fairly standard and matches what many other web-based applications use (e.g. React, Nest.js, Tailwind CSS, PostgreSQL), the sensitive nature of our customers’ data mandates that our product runs within each customer’s intranet … and therefore in whatever environment the customer already has. Instead of being just one more tenant within a shared cloud-based solution, each customer site gets its own isolated instance of our product. So by necessity, our product has to be agnostic to AWS, Azure, Google Cloud, and the like.
So far, we’re able to use what is likely the simplest solution possible: Docker Compose. Our customers provide us with one or more VMs and can choose whether the databases are managed by us or hosted on pre-existing database servers in their infrastructure. However, we're also in the process of creating Helm charts as a template for customers who want to manage everything themselves.
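A customer deployment can be sketched roughly like this (service names, image references, and the bundled PostgreSQL container are illustrative, not our actual manifest):

```yaml
services:
  frontend:
    image: registry.example.com/akeno/frontend:stable  # hypothetical registry/tag
    ports:
      - "443:443"
  api:
    image: registry.example.com/akeno/api:stable  # hypothetical registry/tag
    environment:
      # Points either at the bundled "db" service below or at a
      # pre-existing database server in the customer's infrastructure.
      DATABASE_URL: postgres://akeno:${DB_PASSWORD}@db:5432/akeno
    depends_on:
      - db
  db:
    # Only part of the deployment when the customer wants us to manage
    # the database; otherwise this service is omitted entirely.
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

The same compose file works on any VM with a container runtime, which is what keeps the product agnostic to the customer's choice of (or absence of) a cloud provider.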
Not being able to use cloud-native functions (e.g. Lambda, RDS, CloudTrail, SQS) might seem like a big drawback at first. But we find that not having the added complexity of a cloud environment and all its services allows engineers to focus on things like database modeling and great UX rather than pure infrastructure topics.
Scalability in data vs scalability in users
While we handle large amounts of data and perform complex calculations on them, the number of users per customer site is relatively small compared to, for example, a publicly accessible e-commerce site. And since we have dedicated systems per site, we can avoid many of the scaling headaches and orchestration problems other companies constantly deal with. As a result, scaling is only relevant for us in the number of customer sites (i.e. how easy the product is to set up and maintain) and the amount of data (i.e. how responsive our application is while showing complex data, how long calculations take), not in the number of users.
We therefore simply don’t have the typical scaling challenges cloud-based SaaS products have. We do not need to optimize render performance for SEO, we’re not exposed to denial-of-service attacks, system loads do not suddenly spike, databases do not run out of cache or storage space, and so on. However, we always need to be able to visualize and process large amounts of complex data without slowing our users down. Being able to make quick, informed decisions is key for our users, as the consequences of those decisions can quickly add up to millions of euros generated or saved.
Reliability & trust
One of the key concerns for us is preserving the high level of trust we have earned with our customers. Automated testing of course plays a key role here, and we follow a strategy of end-to-end and API-level integration tests to ensure correct behavior, rather than excessive component and unit testing (which tends to focus on irrelevant implementation details).
A nice side effect of our "story-based e2e test" approach is that the tests automatically serve as documentation for onboarding new engineers and as an easy way to get the local development environment into a specific configuration for developing new features. For example, it is often quicker to run a specific e2e test that generates certain data than to find or manually construct test data for feature development.
We also found that reliability is often a question of avoiding unnecessary complexity in the first place. If you don’t need a Kafka cluster, you don’t need a Kafka cluster … even if it is fun to have one. We usually opt for tried-and-tested frameworks rather than going for the bleeding edge. This is combined with a healthy level of pragmatism that keeps us focused on continuously improving our product for our users rather than spending time over-optimizing pipeline speeds, build systems, or page load times that are already satisfactory.
Hasura as GraphQL middleware
From an early stage, akeno has built upon Hasura to accelerate our development. It essentially acts as a no-code backend-for-frontend or API gateway: our frontend always talks to Hasura, and Hasura then either talks directly to our PostgreSQL database or, for some more specialised things Hasura can't do, to one of our NestJS-based REST services. The result is that about 80% of the typical backend code one would find in a web-based application simply doesn't exist in our system.
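To make the "no-code backend" point concrete, here is the kind of query our frontend might send to Hasura (the table and column names are made up for illustration; they are not our actual schema). Hasura generates the query, its filters, and nested relationships directly from the PostgreSQL schema, with no handwritten resolver code:

```graphql
# Auto-generated by Hasura from the database schema --
# no backend code exists for this endpoint.
query PlannedProductionOrders {
  production_order(
    where: { status: { _eq: "planned" } }
    order_by: { due_date: asc }
  ) {
    id
    due_date
    order_lines {
      material
      quantity
    }
  }
}
```

Hasura resolves this against PostgreSQL directly; only for operations outside its capabilities does the request get forwarded to one of our NestJS-based REST services.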
Recently, we have been moving more and more logic out of the frontend and into REST services in our backend. With GraphQL in general and Hasura in particular, it is all too easy to just expose data and do business logic in the frontend. Especially when it comes to data aggregation and calculations, though, it is often both simpler and more performant to do these on the SQL/DB level. Another use case is heavy concurrency: it is tricky to design access patterns built on top of GraphQL that preserve data integrity, and DB transactions are a much more natural fit. As a result, our use of Kysely to access the DB directly, without going via Hasura, increases steadily. At the same time, we are defining more custom types and queries/mutations via Hasura Actions, thus modeling our domain and its operations more explicitly.
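Both motivations can be illustrated with plain SQL (in our codebase this would be expressed through Kysely's query builder; the table and column names here are invented for the example):

```sql
-- Aggregation at the DB level: one statement instead of shipping raw
-- rows through GraphQL and summing them in the frontend.
SELECT
  machine_id,
  date_trunc('day', started_at) AS day,
  SUM(output_kg)                AS total_output
FROM production_runs
GROUP BY machine_id, day;

-- Concurrency-sensitive writes wrapped in a transaction: both updates
-- succeed or fail atomically, which is awkward to guarantee through a
-- generic GraphQL mutation layer.
BEGIN;
UPDATE inventory
SET    quantity = quantity - 10
WHERE  material = 'resin' AND quantity >= 10;

UPDATE reservations
SET    state = 'allocated'
WHERE  id = 42;
COMMIT;
```

The point is not that GraphQL cannot express such operations, but that the database already provides aggregation and transactional integrity natively, so going through an extra abstraction layer buys nothing here.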