FitRadar application versioning

In the last post, I wrote about the FitRadar application's release process and the kinds of releases we are planning to have. Since every release has a unique version number, this seems like a good time to talk about versioning schemes.

In our app we are planning to use the Semantic Versioning scheme, which follows the pattern Major.Minor.Patch, where

  • Major number is increased by 1 when breaking changes are introduced that are incompatible with the previous Major version
  • Minor number is increased by 1 when new, backward-compatible functionality is introduced (at the same time the Patch number is reset to zero)
  • Patch number is increased by 1 when a backward-compatible bug fix, or set of fixes, is released

But before we can start to apply the Semantic Versioning scheme, we have to reach the point where the application is stable and ready for production; that will be version 1.0.0. Until then, while we are still in the development phase, we use a slightly different versioning scheme that follows the pattern 0.x.y.

Our versioning scheme during the development phase is tightly coupled with the application development life cycle. At the very beginning of the project, we decided to adopt some Scrum project management practices, in particular sprints (iterations). The length of a sprint is chosen so that our team can deliver at least one use case from the user requirement list. Usually, but not always, after every sprint we release an application with the x number increased. Every release can contain one or more use cases. During a sprint, besides a new user story, we have to fix bugs or make improvements in previous user stories. When a bug is fixed or an improvement made, we release a new version with the y number increased. This means our application started with version 0.1.0 when the first use case was implemented.

Once we started to release our application to Google Play Store tracks, we extended the application version with an additional label denoting the track, like this: 0.x.y-track_name.
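As an illustration (a hypothetical helper, not code from the FitRadar app), parsing and comparing such version strings in Kotlin could look like this:

```kotlin
// Hypothetical helper for illustration; not part of the FitRadar code base.
data class AppVersion(
    val major: Int, val minor: Int, val patch: Int, val track: String? = null
) : Comparable<AppVersion> {

    // Only the numeric part decides precedence; the track label is metadata.
    override fun compareTo(other: AppVersion): Int =
        compareValuesBy(this, other, { it.major }, { it.minor }, { it.patch })

    companion object {
        // Parses strings like "0.12.3" or "0.12.3-beta".
        fun parse(text: String): AppVersion {
            val (numbers, track) = text.split("-", limit = 2)
                .let { it[0] to it.getOrNull(1) }
            val (major, minor, patch) = numbers.split(".").map(String::toInt)
            return AppVersion(major, minor, patch, track)
        }
    }
}

fun main() {
    val dev = AppVersion.parse("0.12.3-beta")
    println(dev)  // AppVersion(major=0, minor=12, patch=3, track=beta)
    check(AppVersion.parse("1.0.0") > dev)
}
```

Note that the track label is deliberately ignored when comparing versions: it only tells us through which distribution channel a given build was rolled out.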

As you can see, this versioning scheme is not quite suitable for production, where each release should be stable and finalized. But during the development phase, when each use case is iterated several times and after each iteration we need an application we can install, demonstrate, and test, this kind of versioning scheme serves us very well.


FitRadar application’s way to Google Play Store

Finally we have reached the point where our application has taken a form we can present to stakeholders and hand over to our testers. This seems like a good time to talk about our application's release process. Although we still have quite a long way to go before we release the application to the general public, at this stage we have already released it to testers on the Google Play Store. To make sure the application arrives to the general audience stable, bug-free, and with an excellent user experience, we follow the well-known software release cycle described here. It means that with every release we open our application to a wider public. Using Google Play Store release tracks as application distribution channels helps us follow software release cycle best practices. The Google Play Store offers the following release tracks:

  • Internal test track
  • Closed track (closed alpha)
  • Open track (open beta)
  • Production track

In each of these release tracks we can roll out several versions of our application, and one can start from any of them. Sometimes it is useful to send pre-alpha versions to testers directly by e-mail or distribute them over private channels, and start rolling out to the Google Play Store only in the alpha or beta phase. But since the Google Play Store has a special track (the internal test track) for pre-alpha versions, we decided to roll out our application as soon as we had a working build with enough features. The main advantage of distributing the application through the Google Play Store is the ease of installation and updates, something we really missed when we had to download the latest version manually.

At this phase, only our company's QA (Quality Assurance) people have access to the application on the Google Play Store. Our QA team worked together with the developers and the business idea authors at the application's design stage and set up the acceptance requirements. Now that they have a real app in their hands, they can make sure the application not only works but works according to the previously set requirements. Once our testers confirm that the application is stable, bug-free, and meets the requirements, we will make it available to a small group of trusted people outside our team. For this purpose the closed alpha release will be used. Entering this phase, we don't expect all features to be present in the app, but a user should be able to perform the main actions and fulfill most of the use cases. The main purpose of this release is to get quick feedback from real users who see the application for the first time, and to find anything our team has missed in earlier stages of development. A dedicated e-mail box will serve as the communication channel at this stage. When there are no complaints (bugs, crashes) from the trusted users within a few days, we will move to the next release: the beta release. At this stage we expect our application to have all the features we are planning to put into the production version, and it will be the first time we open the application to the general public. The purpose of the beta release will still be to gather user feedback, but this time the audience will be much bigger and more diverse, so a dedicated bug tracking system will be the main communication channel. After that we will be ready to put our application on the production track.

Code generation

One of the technologies that helps us follow the Inversion of Control (IoC) principle in our project is the Dagger framework. It allows us to implement Dependency Injection, one of the forms of IoC. Most Dependency Injection frameworks rely on reflection to scan the code for annotations, and this is how early Android DI frameworks like Guice were implemented. Contrary to back-end solutions, where the extra time and memory required by DI frameworks is usually not a problem (dependencies are resolved and created at application start time, not during a request, so user experience is not degraded), on mobile devices this extra time and memory can be significant and noticeably impact user experience. Therefore Dagger, the DI framework recommended for Android, takes a different approach: it resolves dependencies at compile time by generating extra classes. Code generation is something most of our team members view skeptically, since generated code is rarely of the same quality as code written by a seasoned developer. So we decided to take a closer look at code generation and whether it should be a concern.
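To give an idea of what "resolving dependencies at compile time" means, here is a simplified, hand-written illustration; the class names are hypothetical and real Dagger-generated code is more elaborate, but the principle is the same: a plain factory wires the object graph with ordinary constructor calls, so no reflection is needed at runtime.

```kotlin
// Hypothetical classes for illustration; not actual Dagger output.
class HttpClient
class UserRepository(val client: HttpClient)

// Roughly what a compile-time DI framework generates instead of using
// reflection: a plain factory that wires the dependency graph by hand.
class UserRepositoryFactory {
    fun create(): UserRepository = UserRepository(HttpClient())
}

fun main() {
    val repo = UserRepositoryFactory().create()
    println(repo.client::class.simpleName)  // HttpClient
}
```

Because the wiring is ordinary code, the compiler verifies the whole graph, and a missing dependency becomes a build error rather than a runtime crash.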

The whole history of programming languages is about raising the abstraction level and generating the lower-level code. It started with assembly languages, when developers no longer had to write programs in binary code (zeros and ones) but could use human-readable keywords, and an assembler converted the assembly code into machine language. Since the conversion was quite straightforward, developers didn't worry much about the resulting machine code, mostly about the assembly code. It was a different story with the next generation of programming languages, like Fortran, Cobol, Pascal, and C, known as high-level programming languages. The main advantage of high-level languages over low-level ones is that they are easier to read, write, and maintain. But the process of translating such a language into machine code (done by a compiler) is much more complex, and therefore the generated machine code was not as optimal as in the case of assembly language. After a while, though, compilers improved, and most developers stopped bothering with the compiled code and focused solely on the high-level code.

The main conclusion we drew from the history of programming languages is that as long as the generated code is not part of the code base that developers have to read, extend, and maintain, and it does not impose a performance penalty, there is little risk in using generated code. On the other hand, the benefit it brings is significant: it saves a lot of time on boilerplate code and lets us focus on the application's main logic.

An abstraction layer can always help

In my last article about good software architecture, I wrote that in order to have a good architecture we should apply Object Oriented programming principles (some of them known as the SOLID principles) and design patterns. Let's discuss some of these principles today, starting with the Dependency Inversion Principle, and see how it shapes the design and contributes to good architecture. According to Robert C. Martin, the principle states:

A. High level modules should not depend upon low level modules. Both should depend upon abstractions.

B. Abstractions should not depend upon details. Details should depend upon abstractions.

In other words, a class's dependencies should be expressed as interfaces, not implementations. This simple principle leads to a significant design benefit: loose coupling. And loose coupling, in turn, leads us to

  • extensible code. Do you remember the traits of poor design from my previous article? When you reach the point where you don't know where to put your new code, because there is no place it would fit well, that could be a sign that the code is not extensible. Think about this principle next time you are puzzled; maybe dependency inversion is the way to make your architecture extensible.
  • testability – dependencies can easily be swapped with mocks and thus unit tested
  • parallel development – the only thing you have to take care of now is the interface definitions between modules, and work on the modules can be done in parallel.
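As a minimal sketch of the principle (the class names are ours for illustration, not from the FitRadar code base), a high-level class depends only on an interface and receives the implementation from outside, which makes it trivial to substitute a fake in tests:

```kotlin
// The abstraction both sides depend on.
interface SessionStore {
    fun save(sessionId: String)
}

// Low-level detail: one possible implementation.
class InMemorySessionStore : SessionStore {
    val saved = mutableListOf<String>()
    override fun save(sessionId: String) { saved += sessionId }
}

// High-level module: knows only the interface, not the implementation.
class BookingService(private val store: SessionStore) {
    fun book(sessionId: String): Boolean {
        store.save(sessionId)
        return true
    }
}

fun main() {
    val store = InMemorySessionStore()
    val service = BookingService(store)  // any SessionStore will do
    check(service.book("yoga-101"))
    check(store.saved == listOf("yoga-101"))
}
```

Swapping `InMemorySessionStore` for a database-backed implementation, or for a mock in a unit test, requires no change to `BookingService` at all.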

As you can see, even by applying this one OO principle we can improve the design in several areas. But don't rush to provide an interface for every object. So when is it better not to abstract an implementation behind an interface? Most of the time we write code by extending an existing framework (in our case Android), using its classes and methods and calling third-party libraries. To abstract these classes away, we would need the extra step of wrapping them in our own classes and only then hiding our class behind an interface. In this case the only benefit we would gain is testability, but with Android, Mockito already lets us mock framework classes easily. Another case is models (ViewModels, DTOs, database models) and static methods (once you make a method static, you are counting on the fact that its signature won't change). The one thing all these cases have in common is that you are not planning to, or cannot, swap the existing implementation, so there is no need for the abstraction.

Once we make the necessary dependencies abstract, we still need to provide implementations for them, but in a way that keeps the module unaware of them. Remember, the module sees only abstractions! And here another design principle comes in handy: Inversion of Control (IoC), sometimes known as the Hollywood Principle. The general philosophy is that control is handed over. For example, in a plugin framework you are expected to override some callback methods. Take Android's Activity class and its lifecycle methods: the class has no control over when the methods get called; the Android framework decides that. The control is inverted. Dependency Injection (DI) is another example of IoC: a class doesn't create its dependencies but instead gets them from someone else. By using DI we can also follow the Single Responsibility Principle (SRP):

1) object creation and lifetime management are delegated to the DI container, so these responsibilities are taken away from the module and we are one step closer to the SRP

2) if there are too many arguments in the constructor, it can be a signal that we are violating the SRP. In this case, we should consider grouping dependencies and hiding them behind a Facade.
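As a sketch of the second point (the names are illustrative, not from our code base), a constructor that accumulates related dependencies can be slimmed down by grouping them behind a facade:

```kotlin
// Illustrative interfaces; in a real app a DI container would provide them.
interface EventApi { fun fetch(): List<String> }
interface EventCache { fun store(events: List<String>) }

// Facade grouping related dependencies behind one entry point.
class EventRepository(
    private val api: EventApi,
    private val cache: EventCache
) {
    fun refresh(): List<String> = api.fetch().also { cache.store(it) }
}

// The consumer now takes one dependency instead of several.
class EventListViewModel(private val repository: EventRepository) {
    fun load(): List<String> = repository.refresh()
}

fun main() {
    val api = object : EventApi { override fun fetch() = listOf("run", "yoga") }
    val stored = mutableListOf<String>()
    val cache = object : EventCache {
        override fun store(events: List<String>) { stored += events }
    }
    val vm = EventListViewModel(EventRepository(api, cache))
    check(vm.load() == listOf("run", "yoga"))
    check(stored == listOf("run", "yoga"))
}
```

The view model's constructor now reflects a single responsibility (presenting events) instead of enumerating every data-layer collaborator.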

Our way to good software architecture

In this article, I want to share how we arrived at the architecture we are using now in our Android application and why we consider it well suited for our project. From my point of view, it is easier to describe what makes a software design poor than to list the features that make it good. And to know what poor software design is, one must have some experience with projects where the results were not very satisfying but the lessons learned were valuable. Therefore let me share some things I learned that stand in the way of good software architecture:

  • it is hard to add new functionality or features:

    • there is no code to reuse for functionality that shares common traits with already implemented functionality and you are forced to copy existing code to implement a new feature

    • the only way to implement a feature is to rewrite existing code to fit your needs

    • it is easier to use a hack to implement a feature rather than find a place for the feature in the existing project structure

    • when you add new code, you break existing functionality

    • it is hard for a team to work simultaneously on the same codebase

    • it is hard to read the code, even if only one developer works on code

  • it is hard to test added functionality:

    • you can’t isolate software units for tests

    • it is easier to test the application manually as a whole than to write unit and integration tests for a separate piece of software

As a result, writing and testing code becomes more time-consuming and more expensive.

Once we start to notice any of these things, it is time to think about improving or introducing architecture into our software. Architecture is probably not a big deal if you are building a 1000-line app or writing a proof-of-concept app, but since our project is neither of those, it was clear from the very beginning that we needed a solid design for our application. I have already written about the technologies we decided to use and how we arrived at those particular conclusions. Now that we are close to the finish, we can look back and see whether our choices paid off. But first let me make one thing clear: it is not possible to produce a perfect design on the first try unless you have already developed an application with very similar user requirements, and even then you should revise the technologies used. That was one of the first lessons we learned about the app's architecture in our project. Once we noticed that adding new features and testing them became much harder than for the ones we implemented in the past, we started to investigate the existing design. And when we found a better way to organize our application, we adjusted the architecture accordingly. Some of the biggest changes we made were switching from the Model View Presenter (MVP) pattern to Model View ViewModel (MVVM) and upgrading Dagger to the latest version. The other lesson we learned was that there may be cases where there isn't only one good solution. That was the case when we were designing the database and data access layer in our Android app. We had several options:

  • Room
  • Squidb
  • Sqlbrite

We were not sure which would fit our design best, and even now we think any of them would work well in our application.

At the time we started to design our application, Android did not yet have Architecture Components or the Guide to App Architecture, so we had to start somewhere else, but eventually we arrived at almost the same architecture model as the one suggested there.

Contrary to ASP.NET Core (which we use for our back end), where a developer can choose a project template in Visual Studio and start working with well-established patterns just by extending the template, in the Android world we had to create the project structure ourselves by choosing the most appropriate technologies and architectural patterns. Nowadays you can start with Architecture Components and the recommended app architecture, since it will most likely become the most common way to build complex applications on Android. But this is just a reference model, the backbone where we can start to add our first classes. The next step was to extend this backbone in such a way that we could keep avoiding the pitfalls and shortcomings of poor design (the things I mentioned in the beginning). And this is where classic Object Oriented Programming skills come into play. First, to make it easy to extend our application with new features, we applied the SOLID principles, where S stands for the Single Responsibility Principle, O for the Open-Closed Principle, L for the Liskov Substitution Principle, I for the Interface Segregation Principle, and D for the Dependency Inversion Principle. Then, when we noticed technologies that allowed us to follow these principles, we included them in our project. For example, Dagger was one such library, allowing us to follow the Dependency Inversion Principle, and the Android data binding library was another, making it easier to follow the Single Responsibility Principle. Once we started to apply the SOLID principles, we were able to identify standard design patterns in our application code, and so we began to organize parts of the code around those design patterns.

Making a good architecture for a complex application is not an easy task. It requires experience, a good understanding of Object Oriented programming, and knowledge of the technologies of the platform you are developing for.

The way to data binding in Android

Today I want to share our experience on the way to the MVVM (Model View ViewModel) pattern in our Android project.

When the data binding library was introduced at Google I/O in 2015, it looked like a very promising technology and one long awaited in our team. We have developers who in the past built Windows applications on the .NET framework using the MVVM pattern. They loved that pattern, and there were good reasons for it:

  • MVVM gives a clean separation between the UI and the rest of the application. It is still possible to bring business logic into the UI through powerful data binding expressions, but now the framework is on the developer's side in keeping things separate; in most cases it is up to the developers to produce clean code.

  • it improves testability – more code is moved from the UI into easily testable ViewModel classes

  • it allows working independently on the UI and business parts, and the delegation of responsibilities becomes much clearer

But until 2015, MVVM was mostly available to Windows application developers, because it turns out that to implement the MVVM pattern as we know it now, we need a UI markup language and data binding expressions. Without these two components, the most common patterns developers were left with were MVP (Model-View-Presenter) and MVC (Model-View-Controller). In the web world things progressed faster, since from the very beginning the main way to create a UI was a markup language (HTML); some time later the dynamic web emerged, and along with it various engines that let developers merge data with markup. But in the desktop and mobile application worlds, markup languages and data binding emerged only recently. First Microsoft introduced the WPF (Windows Presentation Foundation) subsystem with XAML as part of the .NET framework, and later Android came out with its own markup language. Around the same time WPF was announced, Microsoft architects Ken Cooper and Ted Peters introduced the MVVM pattern on their blog. And so the MVVM era started in the Windows world, but Android developers had to wait a little longer before they could start to use MVVM.

So when Android data binding and the Architecture Components were released, we decided to give them a try. Until then we had already been striving to separate the UI from the rest of the application, using the MVP pattern in combination with Butter Knife. The first thing we noticed was that we no longer had to write so much boilerplate code to keep the UI and business logic separate. Instead of a Presenter, a View interface, and an Activity implementing the View, we now wrote a very thin Activity and a ViewModel and made slight changes in the layout files. So some user stories we could write faster. But it was not always the case. When we had to work with more complex views, like the map view, we could not bind the data directly in the layout file because of the lack of bindings, so data binding was done in the Activity or Fragment. From our perspective it was still a step forward, because most of the views we used had bindings, and when they didn't, we could implement simple Binding Adapters. Essentially, we used data binding in the layout markup where it was easy, and left the UI processing code in the Activity or Fragment where finding a way to implement the binding in the layout markup would have taken a lot of time. So for our project MVVM worked out very well, but we should admit there are projects where it will not bring much advantage. Therefore, before you decide on MVVM and data binding, spend some time investigating the views you are going to use in the project and the bindings available for them.
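To illustrate the shape of the pattern, here is a framework-free sketch (in a real Android app the ViewModel would extend androidx.lifecycle.ViewModel and expose LiveData or StateFlow, and the binding would be declared in layout markup; the class and property names here are hypothetical):

```kotlin
// Framework-free sketch of MVVM; a real Android app would use
// androidx.lifecycle.ViewModel and LiveData/StateFlow instead.
class ProfileViewModel {
    private val observers = mutableListOf<(String) -> Unit>()

    var userName: String = ""
        private set(value) {
            field = value
            observers.forEach { it(value) }  // notify the bound view
        }

    fun observeUserName(observer: (String) -> Unit) { observers += observer }

    // Presentation logic lives in the ViewModel, where it is easy to unit test.
    fun onUserLoaded(first: String, last: String) {
        userName = "$first $last"
    }
}

fun main() {
    val vm = ProfileViewModel()
    var rendered = ""
    // The "view" only subscribes and renders; it holds no business logic.
    vm.observeUserName { rendered = it }
    vm.onUserLoaded("Jane", "Doe")
    check(rendered == "Jane Doe")
}
```

The point the pattern makes is visible even in this toy version: `onUserLoaded` can be unit tested without any UI, and the view is reduced to a subscription plus rendering.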

Choosing technologies while designing FitRadar app

When the idea of the FitRadar mobile app was clear and the first user requirements draft created, our team started to think about the software design. One of the first questions that came up was: which platforms should the FitRadar app target? After analyzing the market, we came to the conclusion that the first version should be available on Android and iOS devices. Naturally, the first idea was to go for a cross-platform solution using the mobile phone's WebView and implement the app with HTML5, CSS, and JavaScript. We explored this approach and discovered that it would not allow us to cover all the user stories, so we rejected it. We then turned to the Cordova mobile development framework and Xamarin, and very quickly realized some potential pitfalls that could stop or slow down the growth of our customer base:

  • right now the user requirements are mostly set by the business owners and testers, not so much by potential users. But when the app acquires its first customers, their feedback will play a big role in defining the next version's feature set. Even if a cross-platform solution could satisfy our needs now, in the future we could run into a situation where it cannot provide the features, performance, or design that users are asking for, which in turn would lead the project to stagnation.
  • our app is not just an attachment to the main business; it drives the business, and therefore it should meet the highest user expectations.
  • native apps usually have greater visibility in app stores and often get better user ratings and recommendations, which is essential for good App Store Optimization (ASO) positioning, mostly because designing and building an app for multiple platforms with a platform-specific user experience is very difficult

Once we agreed on the native approach, we needed to decide which platform to start with and how to share the work done for one platform with the other. We explored different approaches and technologies that would allow us to reuse the artifacts to some extent. As of now, there is no magic technology that converts an Android app into an iOS app or vice versa, so the approach we chose was to separate, as much as possible, the platform-dependent code (like the UI) from the platform-independent code (view models, business models and logic, and data access code). In this way we hoped that converting the code base from one language to another could be done faster. After exploring and comparing several approaches and technologies, a few stood out as ones that work on both platforms:

1) Clean Architecture

2) MVVM and data binding

3) RxAndroid / RxSwift

The main idea was to make the platform-specific code as lean as possible and the shared code as similar as possible on both platforms. This would allow us to convert the shared code from one platform's language to the other quickly and easily. It turned out that the solution structure, app layers, and the idea of separating common code from platform-specific code were similar to those used by Xamarin. The idea that we could speed up development just by using one common language, in this case C#, was very attractive, and therefore we decided to revisit our decision regarding the native approach. We talked to other software development teams that have experience delivering apps with Xamarin, and here is what they shared with us:

  • it works – you can create a cross-platform app from one code base, but whether it saves time and money really depends on the project, because
  • the code you share between platforms is only about 50-70% of all the code; the rest is platform-specific and has to be written for each platform separately
  • the development time for platform-specific features not available out of the box increases, since not all the nice and useful third-party libraries have a Xamarin binding/port
  • developers have much less help at their disposal to tackle problems faced during development; the Xamarin community is much smaller than the Android or iOS ones, which means solving a problem can take more time
  • deploying the app, even to a real device, is much slower compared with native app deployment, which again means a slower development process

And once again we came to the same conclusion: the cross-platform benefits in our project would not outweigh the potential risks.