Striving for better code

Recently we refactored our Android application’s navigation system – we replaced fragment transactions with the Android Jetpack Navigation component. We already had good experience with migrating from the Model View Presenter architecture to the Model View ViewModel architecture (using Android Jetpack Architecture components), during which we had to convert our existing presenters and views into view models and views with binding expressions. There was a risk of losing time on unplanned work, so before deciding on the migration we carefully estimated the risks related to the refactoring. The main question we faced was: will we see a benefit from the refactoring before the app’s release? (We already saw the long-term benefit, but were not sure about the immediate gain.) Our estimates showed that even with the refactoring we would gain in development speed. And indeed, within a month the invested time started to pay off: we implemented new use cases faster, even while spending additional time on refactoring the existing ones.

So we started to wonder whether switching to the new navigation system was a worthwhile endeavor. While estimating the refactoring time and the immediate benefits, I started to think about why we were considering the idea at all. What exactly drives us to make such decisions as going back to already written code and refactoring it? I decided to put some thoughts about the motivation behind the desire for better code into this blog post.

I have already written about clean architecture and design, and in this post I will return to the topic from a different angle. Let me first mention that, in my experience, clean code is a very opinionated term, because it is defined not by strict rules but by feelings – if the given code is easy to read, understand, extend, test and maintain, it must be clean code. Different developers achieve this feeling through different coding approaches, which is why there will always be disputes about the best language, framework or paradigm. Quite often different developers perceive the same code differently – one may be convinced that the code is clean because from his perspective it is easy to understand and extend, while at the same time another developer will claim that a different approach would lead to better code.

Therefore, from my point of view, it is pointless to argue about coding approaches alone; languages, frameworks and paradigms should be discussed together with the measurable results these different approaches yield. Only then can we talk about clean code.

One such measurable indicator that we looked at, while we were considering the migration to the new Jetpack components, was overall development speed.

Here I want to emphasize the word overall, because initially coding in a dirty way is much faster than structuring the code; only after a while does well-written code start to shine, and only then do the terms testable, extendable and maintainable really start to make sense. Actually, I don’t see anything wrong with writing one big chunk of spaghetti code if all the developers who work on it now, and will work on it in the future, can understand, test and extend it as fast as very well structured code. But this is never the case, unless the developer is some kind of genius – and even then you would need to make sure that every developer in the team is a genius of the same level. That is why abstraction levels were raised: so that we don’t have to write mathematical expressions and well-known business logic in machine code. That is why different paradigms were invented: so that we can grasp the problems we are coding faster and more easily. And that is why patterns and best practices are used: so that we can solve problems faster and every developer can understand the solution.

On the other hand, the computer or phone that runs our software doesn’t care about the code we wrote, because it doesn’t use our code but the code produced by the compiler or interpreter. Very bad code (from the developer’s point of view) can be converted into the same machine code that is produced from very well written code. Unfortunately, in most cases the only people who care about code quality are developers. No one else really cares about it – stakeholders, managers and users care only about the end result. As long as the application meets the requirements, it is good enough to use. But for us developers, well-written code can greatly shorten development time and bring peace of mind.

But how come we get so much code that doesn’t follow clean code principles? One obvious reason is lack of experience. At some point we developers have all written poor code, simply because we didn’t know how to write it better; over time we learned and reached a certain level of mastery. Shouldn’t we now write only clean code? Unfortunately no, and there are several reasons for it. The one we learned while migrating to the Jetpack components is this: no matter how good the code is now, there is always room for improvement – we just don’t know yet how to do it.

FitRadar application versioning

In the last post, I wrote about the FitRadar application’s release process and what kind of releases we are planning to have. Since every release has a unique version number, I think it is a good time to talk about release versioning schemes in this post.

In our app we are planning to use the Semantic Versioning scheme, which follows the pattern Major.Minor.Patch, where

  • the Major number is increased by 1 when breaking changes are introduced that are incompatible with the previous Major version (at the same time the Minor and Patch numbers are reset to zero)
  • the Minor number is increased by 1 when new functionality is introduced that is backward compatible (at the same time the Patch number is reset to zero)
  • the Patch number is increased by 1 when a bug or a set of bugs is fixed in a backward-compatible way
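To illustrate with hypothetical numbers: starting from version 1.4.2, a backward-compatible bug fix yields 1.4.3, a backward-compatible new feature yields 1.5.0, and a breaking change yields 2.0.0.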

But before we can start to apply the Semantic Versioning scheme, we still have to reach the phase when the application is stable and ready for production – that will be version 1.0.0. Until then, while we are still in the development phase, we are using a slightly different versioning scheme that follows the pattern 0.x.y.

Our versioning scheme during the development phase is tightly coupled with the application development life cycle. At the very beginning of the project we decided to adopt some aspects of Scrum project management, particularly sprints (iterations). The length of a sprint is such that our team can deliver at least one use case from the user requirement list. Usually, but not always, after every sprint we release an application version with an increased x. Every release can contain one or more use cases. During a sprint, besides a new user story, we have to fix bugs or make improvements in the previous user stories. When a bug is fixed or an improvement made, we release a new version with an increased y. It means that our application started with version 0.1.0 when the first use case was implemented.

Once we started to release our application to Google Play Store tracks, we extended the application version with an additional label denoting the track, like this: 0.x.y-track_name.
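As a side note, this is roughly how such a version string could be wired up in a Gradle build file – a minimal sketch, where the concrete numbers and the suffix are hypothetical:

    // app/build.gradle.kts – numbers and suffix are illustrative only
    android {
        defaultConfig {
            versionName = "0.12.3"   // development-phase scheme: 0.x.y
            versionCode = 12003      // the Play Store only requires this to keep growing
        }
        buildTypes {
            getByName("debug") {
                // produces e.g. "0.12.3-internal" for the internal test track
                versionNameSuffix = "-internal"
            }
        }
    }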

As you can see, this versioning scheme is not quite suitable for production, where each release should be stable and finalized. But during the development phase – when each use case is iterated several times and after each iteration we should have an application that we can install, demonstrate and test – this kind of versioning scheme serves us very well.

http://fitradar.me/


FitRadar application’s way to Google Play Store

Finally we have reached the point where our application has taken such a form that we can start presenting it to the stakeholders and handing it out to our testers. This seems like a good time to talk about our application’s release process. Although we still have quite a long way to go before we release the application to the general public, at this stage we have already released it for testers on the Google Play Store. To make sure the application arrives to the general audience stable, bug-free and with an excellent user experience, we follow the well-known software release cycle described here. It means that with every release we will open our application to an ever wider public. Using Google Play Store release tracks as application distribution channels helps us follow software release cycle best practices. The Google Play Store offers the following release tracks:

  • Internal test track
  • Closed track (closed alpha)
  • Open track (open beta)
  • Production track

In each of these release tracks we can roll out several versions of our application, and one can start from any of the tracks. Sometimes pre-alpha versions are sent to testers directly by e-mail or distributed over private channels, with the roll-out to the Google Play Store starting only in the alpha or beta phase; but since the Google Play Store has a special track (the internal test track) for pre-alpha versions, we decided to roll out our application as soon as we had a working build with enough features. The main advantage of distributing the application over the Google Play Store is the ease of installation and update, which we really missed when we had to download the application’s latest version manually.

At this phase only our company’s QA (Quality Assurance) people have access to the application on the Google Play Store. Our QA team worked together with the developers and the business idea authors at the application’s design stage and set up the application acceptance requirements. Now that they have a real app in their hands, they can make sure the application not only works but works according to the previously set requirements. Once our testers have made sure that the application is stable, bug-free and works according to the requirements, we will make it available to a small group of trusted people outside our team. For this purpose the closed alpha release will be used.

Entering this phase, we don’t expect all features to be present in the app, but a user should be able to perform the main actions and fulfill most of the use cases. The main purpose of this release is to get quick feedback from real users who see the application for the first time, and to find out whether there is anything our team has missed at earlier stages of development. A dedicated e-mail box will serve as the communication channel at this stage. If there are no complaints from the trusted users – such as bugs or crashes – within a few days, we will move to the next release: the beta release. At this stage we expect our application to have all the features we are planning to put in the production version, and this will be the first time we open the application to the general public. The purpose of the beta release will still be to gather feedback from users, but this time the audience will be much bigger and more diverse, and a dedicated bug tracking system will be used as the main communication channel. After that we will be ready to put our application on the production track.

Visit our website and join the mailing list:

http://fitradar.me/

Code generation

One of the technologies that helps us follow the Inversion of Control (IoC) principle in our project is the Dagger framework (https://google.github.io/dagger/). It allows us to implement Dependency Injection – one form of IoC. Most Dependency Injection frameworks rely on reflection to scan the code for annotations, and this is how the early Android DI frameworks, like Guice, were implemented. Contrary to back-end solutions, where the extra time and memory required by a DI framework is usually not a problem – dependencies are resolved and created at application start time rather than at request time, so the user experience is not degraded – on mobile devices this extra time and memory can be significant and noticeably impact the user experience. Therefore Dagger, the DI framework recommended for Android (https://developer.android.com/topic/performance/memory), takes a different approach: it resolves dependencies at compile time by generating extra classes. Code generation is something most of our team members are skeptical about, since generated code is rarely of the same quality as code written by a seasoned developer. So we decided to take a closer look at code generation and whether it should concern us.
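To see what that generated code buys us, here is a minimal, self-contained Dagger sketch (assuming the Dagger compiler is on the annotation-processor path; the class names are made up for the illustration and are not from the FitRadar code base):

    import dagger.Component
    import javax.inject.Inject

    // A dependency with an @Inject constructor, so Dagger knows how to build it.
    class SessionRepository @Inject constructor() {
        fun sessions(): List<String> = emptyList()
    }

    // A consumer that receives its dependency instead of creating it.
    class SessionViewModel @Inject constructor(
        private val repository: SessionRepository
    ) {
        fun load(): List<String> = repository.sessions()
    }

    @Component
    interface AppComponent {
        fun sessionViewModel(): SessionViewModel
    }

    fun main() {
        // DaggerAppComponent is generated by Dagger at compile time,
        // so the whole object graph is wired without any runtime reflection.
        val viewModel = DaggerAppComponent.create().sessionViewModel()
        println(viewModel.load())
    }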

The whole history of programming languages is about raising the abstraction level and generating the lower-level code. It started with assembly languages, when developers no longer had to write programs in binary code (zeros and ones) but were able to use human-readable keywords; an assembler was used to convert the assembly code into machine language. Since the conversion was quite straightforward, developers didn’t worry much about the resulting machine code, only about the assembly code. It was a different story with the next generation of programming languages – Fortran, Cobol, Pascal, C – known as high-level programming languages. The main advantage of high-level languages over low-level languages is that they are easier to read, write and maintain. But the translation of such a language into machine code (done by a compiler) is much more complex, and therefore the generated machine code was initially not as optimal as in the case of assembly. After a while, though, compilers improved, and most developers stopped bothering with the compiled code and focused solely on the high-level code.

The main conclusion we drew from the history of programming languages is this: as long as the generated code is not part of the code base that developers have to read, extend and maintain, and it does not impose a performance penalty, there is no risk in using generated code. On the other hand, the benefit it brings is significant – it saves a lot of time on boilerplate code and allows us to focus on the application’s main logic.

An abstraction layer can always help

In my last article about good software architecture I wrote that in order to have a good architecture we should apply object-oriented programming principles (some of them known as the SOLID principles) and design patterns. Let’s discuss some of these principles today, starting with the Dependency Inversion Principle, and see how it shapes the design and contributes to good architecture. According to Robert C. Martin, the principle states:

A. High level modules should not depend upon low level modules. Both should depend upon abstractions.

B. Abstractions should not depend upon details. Details should depend upon abstractions.

In other words, a class’s dependencies should be expressed as interfaces, not implementations. This simple principle leads to a significant benefit in design – loose coupling. And loose coupling in turn leads us to:

  • extensible code. Do you remember the traits of poor design from my previous article? When you reach the point where you don’t know where to put your new code, because there is no place where it would fit well, that may be a sign that the code is not extensible. Think about this principle next time you are puzzled – maybe Dependency Inversion is the way to make your architecture extendable.
  • testability – dependencies can easily be swapped with mocks, and the class unit tested
  • parallel development – the only thing you have to take care of now is the interface definitions between modules, and work on the modules can be done in parallel.
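To make the principle concrete, here is a minimal Kotlin sketch (the class names are illustrative, not taken from our code base):

    // The high-level class depends only on an abstraction.
    interface UserRepository {
        fun findUserName(id: String): String?
    }

    // Low-level detail: one possible implementation of the abstraction.
    class RoomUserRepository : UserRepository {
        override fun findUserName(id: String): String? = null // query the database here
    }

    // The view model never names the concrete class, so the implementation
    // can be swapped (for example with a mock in a unit test) at any time.
    class UserProfileViewModel(private val repository: UserRepository) {
        fun title(id: String): String = repository.findUserName(id) ?: "unknown"
    }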

As you can see, even by applying this one OO principle we can improve the design in several areas. But don’t rush to provide an interface for every object. So when is it better not to abstract an implementation behind an interface? Most of the time we write code by extending an existing framework (in our case Android), using the framework’s classes and methods and calling third-party libraries. In order to abstract away these kinds of classes we would need to take the extra step of wrapping them in our own classes and only then hide our class behind an interface. In this case the only benefit we would gain is testability, but in the case of Android we can already easily mock framework classes with the help of Mockito. Another case is models – view models, DTOs, database models – and static methods (once you make a method static, you are counting on the fact that its signature won’t change). The thing all these cases have in common is that you are not planning to swap the existing implementation, or you can’t. And so there is no need for abstraction.
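For instance, here is a sketch of mocking a framework interface with Mockito in a plain unit test – the preference key and value are made up for the illustration:

    import android.content.SharedPreferences
    import org.junit.Test
    import org.mockito.Mockito.mock
    import org.mockito.Mockito.`when`

    class SettingsTest {
        @Test
        fun readsUserIdFromPreferences() {
            // SharedPreferences is a framework interface, so plain Mockito
            // can mock it without a wrapper class of our own.
            val prefs = mock(SharedPreferences::class.java)
            `when`(prefs.getString("user_id", null)).thenReturn("42")

            // The code under test can now run against the mock.
            assert(prefs.getString("user_id", null) == "42")
        }
    }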

Once we make the necessary dependencies abstract, we still need to provide implementations for them, but in a way that keeps the module unaware of the implementation – remember, the module sees only abstractions! This is where another design principle comes in handy: Inversion of Control (IoC), sometimes known as the Hollywood Principle. The general philosophy is that control is handed over. For example, in a plugin framework you are expected to override some callback methods. Take Android’s Activity class and its lifecycle methods: the class has no control over when the methods get called – the Android framework decides when to call them. The control is inverted. Dependency Injection (DI) is another example of IoC: a class doesn’t create its dependencies but instead gets them from someone else. By using DI we can also follow one more principle, the Single Responsibility Principle (SRP):

1) object creation and lifetime management are delegated to the DI container; these responsibilities are taken away from the module, and we are one step closer to the SRP;

2) if there are too many arguments in a constructor, it could be a signal that we are violating the SRP. In this case we should consider grouping dependencies and hiding them behind a facade, as the sketch below shows.
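A short Kotlin sketch of that second point, with made-up class names:

    // Three related dependencies (all names are made up for this illustration).
    class Analytics { fun track(event: String) = println("track: $event") }
    class CrashReporter { fun report(t: Throwable) = println("report: $t") }
    class Logger { fun log(message: String) = println("log: $message") }

    // The facade groups the related monitoring concerns behind one object.
    class Monitoring(
        val analytics: Analytics,
        val crashReporter: CrashReporter,
        val logger: Logger
    )

    // Instead of three constructor arguments the class now receives one,
    // which keeps the constructor short and the responsibilities focused.
    class BookingViewModel(private val monitoring: Monitoring) {
        fun onBookingConfirmed() {
            monitoring.logger.log("booking confirmed")
            monitoring.analytics.track("booking_confirmed")
        }
    }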