[Interview] Tackling Complex Architecture: Do’s and Don’ts


What approach should be used when working on complex architecture? How do you build software that is easy to support and scale, and what are the mistakes to avoid?

I had the opportunity to sit down with Sergiy Kukunin, a full-stack developer at Spotlight Labs with 10+ years of experience, and talk to him about these issues in greater detail.

The first question is, what should good software look like?

Let’s first decide what the software is, and what it looks like from a non-programmer perspective. We are used to thinking about software as a tool for solving business tasks and problems.

Imagine yourself as an entrepreneur who needs to solve such a task. You have two options: software that is available right now and solves your problem correctly but is entirely unmaintainable, or a tool that doesn't quite fit your case but is easy to change as needed. In many cases the second option is better: the primary value of software is that it is soft, that you can change it. Even in the gaming industry there are patches and fixes, though a game is usually perceived as a fully finished product.

There are also important facts to consider about programming. The task itself is:

  • complicated — solving a complex business task requires a lot of mental effort,
  • risky — there are a lot of stories of companies developing software and ending up with something that fails to solve the problem,
  • expensive — a lot of money can be spent on such projects.

That’s why we should care about what we write. So the primary goal for any programmer is to create maintainable and efficient software.

What requirements should a piece of software meet for us to be able to call it “good”?

There are several such aspects, including:

Maintainability, i.e., the ability to make changes in the software without using up a lot of resources.

Rigidity: this indicates how many places we need to touch to make a change. Say we have a Rails application where the database, the Active Record model with its validations, the strong params in the controller, and the form are all mapped one-to-one (it is the simplest setup). Now imagine the amount of work needed to rename a field in the database: we have to write a migration, go down to the controller to update the strong params, and touch the UI to change the displayed field name. This is rigidity.
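A minimal plain-Ruby sketch of this problem (all names here are hypothetical, standing in for the Rails layers described above): the field name is repeated in the “database” record, the strong-params-style whitelist, and the view, so renaming it means touching all three places.

```ruby
# "Database" layer: a record keyed by the column name.
record = { username: "alice" }

# "Controller" layer: a strong-params-style whitelist repeats the same name.
def permit(params, allowed)
  params.select { |key, _| allowed.include?(key) }
end

# "View" layer: the displayed label repeats the name a third time.
def render(record)
  "Username: #{record[:username]}"
end

params = { username: "bob", admin: true }
record.merge!(permit(params, [:username]))
puts render(record) # => "Username: bob"
```

Renaming `:username` to `:login` forces edits in all three layers at once, which is exactly the rigidity being described.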

Another essential factor is fragility: the software tends to break in many places every time it is changed. The Rails framework is fragile by nature, so you need to test everything multiple times to keep fragility at an acceptable level.

Immobility, i.e., the inability to reuse software across projects or between parts of the same project. Many Ruby gems serve as an example of immobility: they often depend heavily on Rails, which prevents using the same code in, say, Sinatra.
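A small illustrative sketch (hypothetical helper names): a method that leans on an ActiveSupport-only API is immobile, since calling it outside Rails raises a `NoMethodError`, while a plain-Ruby equivalent can move to Sinatra or anywhere else.

```ruby
# Immobile: String#parameterize comes from ActiveSupport,
# so this raises NoMethodError outside a Rails environment.
def immobile_slug(title)
  title.parameterize
end

# Mobile: depends only on the Ruby standard library, reusable anywhere.
def portable_slug(title)
  title.downcase.strip.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
end

puts portable_slug("Hello, World!") # => "hello-world"
```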

Viscosity: this comes in two forms, viscosity of the design and viscosity of the environment. In Ruby, for example, the viscosity of the design is high: I don't always want to work on a better architecture, because sometimes it is simpler to just monkey-patch something and move on. It is so easy to do this, and still end up with working software, that programmers are tempted not to think about better solutions. The viscosity of the environment is about the limitations the platform imposes on the programmer, like time-consuming tests that end up being run less often than needed.
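A sketch of that design viscosity (the example is hypothetical, not from the interview): instead of designing a dedicated money or formatting object, it is cheaper in the moment to monkey-patch a core class, and Ruby makes that effortless.

```ruby
# The viscous shortcut: reopen a core class rather than build a proper
# Money/Price object. It works, and it was cheap to write, which is
# exactly why the better design never gets built.
class Integer
  def to_price
    format("$%.2f", self / 100.0) # treat the integer as cents
  end
end

puts 1999.to_price # => "$19.99"
```

The patch ships working software today while quietly raising the cost of every future change, which is the trade-off viscosity describes.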

The fifth criterion is independent deployability: the ability to deliver a small change without triggering a whole new release cycle. If you need to run QA, staging, or user acceptance tests to ship a minor change, that is bad. Deployability is also about letting different teams work on the same project; you need to split it somehow.

And the last one is understandability, which means how easy it is to understand the code without extra effort.

And what do you do to meet these criteria?

Hierarchy of abstractions is key. Every application has multiple aspects (and team members working on them), including UI, content, infrastructure optimizations, business rules, and interactions like shipping. But each part of the application should have only one reason to change: if you need to change content, you should be able to change only the content, without touching the UI or business rules. The brain has a limited ability to load context, so a good application shouldn't contain parts that require too broad a context to reason about.

In practice, there are examples of such separation. In a Ruby app you can have semantic HTML, MVC, a localization file, and a repository pattern.

If we need to update the content, then we go to the localization file, and that's it. If we need to change something in the way the database stores data, we can do it in the repository. This approach reduces rigidity, fragility, and viscosity, allows independent deployability, and increases understandability.
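The separation above can be sketched in plain Ruby (all names here are hypothetical): content lives in a locale hash, storage details hide behind a repository object, so each concern changes in exactly one place.

```ruby
# "Localization file": all user-facing copy lives here and only here.
LOCALE = { greeting: "Welcome, %{name}!" }.freeze

# Repository pattern: callers never know how users are stored.
class UserRepository
  def initialize(store = {})
    @store = store # swap for SQL, Redis, etc. without touching callers
  end

  def save(id, name)
    @store[id] = name
  end

  def find(id)
    @store[id]
  end
end

repo = UserRepository.new
repo.save(1, "Ada")
puts format(LOCALE[:greeting], name: repo.find(1)) # => "Welcome, Ada!"
```

Changing the greeting touches only `LOCALE`; changing the storage backend touches only `UserRepository`, which is the one-reason-to-change property described above.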

So, if we follow these guidelines, are we safe and will we get good software?

Don’t get too excited. The main idea is to apply best practices at the level where they are applicable. Until you actually face the problems a guideline solves, implementing it may be a bad idea.

For example, we were talking about how using abstractions is good. But the wrong abstraction can cause higher rigidity and viscosity, and worse understandability. More abstractions also require more integration testing. In other words, you can end up with too much overhead by trying to follow all the best practices.

For example, a couple of years ago I was working on a project where we needed to build an app in Ruby. I was the only developer, at least at the beginning. It looked like a great opportunity: for the first time in my life, I could choose any technology I wanted and try every possible approach to building the architecture. That was inspiring! I dug into best practices, articles by Edsger W. Dijkstra, and so on, and tried to implement those ideas.

But in the end we had a huge overhead; the code might have been done “right,” but it was still too much hard work to achieve results that way. A change that typically takes a day took us a week. The irony was that we were a single team, so there was no reason behind our architecture other than “it was the true way.”

We had to roll lots of newly introduced things back.

What advice can you give, and what is your current approach to building complex infrastructure which will help to avoid such situations?

The number one piece of advice is to be pragmatic, not a purist. You don’t need to follow every hyped approach until you actually face the problems it solves; otherwise, it may make everything worse. A rule of thumb is to split the application according to the teams that work on it. A single team is able to change any aspect of an application, so the typical Rails approach is usually sufficient.

When you try something new, take care of consistency. If your software consists of layers built with different approaches over its lifetime, you will eventually end up with a complete mess that is hard to deal with.

Also, take care of knowledge sharing. When introducing new approaches to software development, make sure you are not the only person who knows what is going on; any change should be aligned with the team. And finally, always be ready to kill an approach. If you’ve tried something and it doesn’t work, the best decision is often to kill it and move on. Set goals before introducing a new approach, so you can avoid the “wait a little, it will work, you’ll see” bias.

Don’t rush to implement everything from the beginning; it is safer to be iterative. It’s harder to roll back a wrong approach than to start from scratch.
