Keeping code growth under control

The other day I had a discussion with one of our team members about how to keep classes readable and avoid ending up with huge files containing thousands of lines and hundreds of methods. It turns out that this question deals with the very basics of Object Oriented programming, and I decided to give my view on some of the OOP principles that help me keep code growth under control and make my code comply with good design principles. One of the first principles that comes to my mind is the Single Responsibility Principle (SRP), which states: "Each software module should have one and only one reason to change". Frankly, for a long time I had a hard time applying this description to daily code, so I came up with my own steps, derived from other OOP principles, that help me follow it.

Starting Point

Since lately I have been developing web services, websites, and mobile applications, my starting point is one of the widely acknowledged architectural patterns, such as MVC, MVP or MVVM. It gives me a good initial project structure and a set of files where I can start adding code. By now these patterns are well established, and I really suggest going with one of them unless you are developing a very simple application and have no plans to evolve the project. And if you know the project will be something more than delegating CRUD requests to the database, then it is worth starting right away with a layered project structure, where the Model in MVC, MVP or MVVM is organized into Business Models, Services and Repositories (a small sketch follows below), and maybe even considering the whole Domain Driven Design approach. But how to estimate the starting architecture for an application is a topic for another article.
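
To make the layering concrete, here is a minimal, hypothetical sketch (all names are invented for illustration, not taken from a real project) of how a request could flow through such layers: the controller delegates to a service, which applies the business rule and talks to a repository.

class UserRepository {
    String findNameById(long id) {
        // in a real project this would query the database
        return "user-" + id;
    }
}

class UserService {
    private final UserRepository repository = new UserRepository();

    String greetingFor(long id) {
        // the business rule lives in the service, not in the controller
        return "Hello, " + repository.findNameById(id) + "!";
    }
}

class UserController {
    private final UserService service = new UserService();

    // the "action" the framework calls for an incoming request
    String getGreeting(long id) {
        return service.greetingFor(id);
    }
}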

Single responsibility principle for methods

Next, I start to fill in the provided methods (controller actions in the case of web services, or an Activity's lifecycle methods in the case of an Android application) and observe how they evolve. These methods are where I first apply the Single Responsibility Principle. Once I have methods that have only one reason to change, I switch my focus to classes. Here are some rules I follow to achieve SRP in my methods:

  • DRY (don’t repeat yourself). If I discover that several methods share a common piece of code, I extract the common code into a separate method and make it reusable for the other methods. I think this is one of the first clean code principles most developers learn. And since this principle is so fundamental, many IDEs include method extraction as part of their refactoring tool set.
  • I check whether there are common variables that more than one method operates on. If there are such variables, I make them private class-level fields. I repeat this step every time a new method is extracted. And if several classes end up with common fields or extracted methods, then it is time for a new base class.
  • I make sure the methods I extract do what their names suggest. If a method’s name contains one verb, I make sure the method is either a command that changes the state of the object or a query that returns data. And if the method name contains more than one verb, it is obvious that the method is doing more than one thing. Sometimes that is acceptable, for example when I write logs alongside the method’s basic logic. If the tools or frameworks allow handling such side behavior of a method, I extract it as an aspect.
  • I respect the levels of abstraction and try to keep the statements of a method on the same abstraction level. One sign that I might be violating this rule is a long loop or if body: the statements in a long body most likely belong to a lower abstraction level than the statements outside the loop or the if. But sometimes a high-level method has to call a single line of lower-level code, and then I leave it as is.
  • I keep the number of method arguments short. If I need more than two arguments, then maybe it is time for a new class: instead of several primitive types, I should be passing one object as the method argument (see the sketch after this list).
  • And finally, I use cohesion level descriptions to match my methods against the different cohesion types and make sure I am avoiding procedural, logical, and coincidental cohesion.
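
To illustrate the DRY rule and the argument-count rule, here is a minimal, hypothetical sketch (all names are invented for the example): two methods share a piece of formatting code, which gets extracted into one place, and the repeated argument list is replaced with a small parameter class.

// Before: both methods duplicate the title formatting and carry three arguments.
class ReportPrinter {
    void printHeader(String title, String author, String date) {
        System.out.println("== " + title.trim().toUpperCase() + " ==");
        System.out.println(author + ", " + date);
    }

    void printFooter(String title, String author, String date) {
        System.out.println("-- " + title.trim().toUpperCase() + " --");
        System.out.println(author + ", " + date);
    }
}

// After: the shared code lives in extracted methods and the arguments travel together.
class ReportInfo {
    final String title;
    final String author;
    final String date;

    ReportInfo(String title, String author, String date) {
        this.title = title;
        this.author = author;
        this.date = date;
    }
}

class CleanReportPrinter {
    void printHeader(ReportInfo info) {
        printLine("== " + normalizedTitle(info) + " ==", info);
    }

    void printFooter(ReportInfo info) {
        printLine("-- " + normalizedTitle(info) + " --", info);
    }

    // extracted method: the only place where title normalization can change
    private String normalizedTitle(ReportInfo info) {
        return info.title.trim().toUpperCase();
    }

    private void printLine(String line, ReportInfo info) {
        System.out.println(line);
        System.out.println(info.author + ", " + info.date);
    }
}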

Single responsibility principle for classes

As the code evolves, I start to have more and more methods and fields in the classes provided by the initial architecture. Now I check whether it is time to split my classes. For a long time I had a hard time choosing the right class for a method. One of the reasons was the examples I was reading in books and articles on the Internet: they focused mainly on the names of methods and classes and on estimating the relationships between methods by those names. But soon I discovered that using only names can lead to subjective decisions. Although many OOP principles are subjective from my point of view anyway (in the case of SRP someone could argue that keeping all methods in one class is more convenient than creating a hierarchy of classes, and that one should use an IDE for navigation between the methods instead of the file system navigator), I wanted to use something measurable that would allow me to estimate how tight the relationship between a method and its containing class is. One such metric I found is cohesion.

Low cohesion means the methods inside a class are independent of each other; high cohesion means the methods in the class are strongly related. But how can we express this relationship in numbers? It turns out there are several cohesion metrics that give a developer insight into the relationships between methods. And sometimes, by distributing methods to classes according to the cohesion level among them, one can discover new classes one didn’t even think about before. The metric I use the most is the “Lack of Cohesion of Methods” (LCOM) metric: for each field in a class, count the number of methods that reference it, sum these counts up, divide the sum by the count of methods times the count of fields, and subtract the result from one, like this: 1 – (NumberOfMethodsReferencingFields / (NumberOfMethods * NumberOfFields)). The metric ranges from 0 to 1, where 0 means high cohesion and 1 means low cohesion.
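
For a quick worked example (the numbers are invented for illustration): suppose a class has 2 fields and 4 methods, and field A is referenced by 3 methods while field B is referenced by 1. The sum of method-field references is then 4, so LCOM = 1 – (4 / (4 × 2)) = 0.5. Half of the possible method-field relationships are missing, which is a hint that the class might be hiding two smaller, more cohesive classes.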

Following these rules, I was able to achieve the Single Responsibility principle in my modules.

P.S. Visit our website and join the mailing list! Our app is coming soon: http://fitradar.me/

Another programming language!

During the last year, while searching for and exploring code examples of Android’s new technologies like the Jetpack architecture components, our team noticed that more and more code on GitHub is posted only in Kotlin, the new programming language for the Android platform. That raised a question in our team: is this new kid on the block something that can speed up our app development even more, and should we start to prepare for another transition? As usual, we started to explore what benefits the new language can bring. But for me all this recalled my own experience with programming languages and how I have switched between different languages.

My first encounter with programming happened in high school, and the first language I learned was BASIC. The friendship with BASIC lasted only a year: when I started to study at university I switched to Pascal. Yes, that was quite a long time ago. At that time most of the software I produced dealt with mathematics and took the form of command line utilities. I didn’t see much difference between those two languages, except that in Pascal I now had to declare the types of my variables. The course at university was organised so that students started with a high-level language and gradually switched to C/C++. Up to that point I had been switching from simpler to more complex languages, and it didn’t seem that switching to a more complex language would ease the programming process; on the contrary, compared with BASIC, C/C++ made it harder to write software that solved mathematical problems. And that was the first time I learned about programming domains: C/C++ gave me the ability to write software for almost any domain, but at the same time, for some tasks, higher-level languages were preferable. After C/C++ followed SQL and Java, then C# and JavaScript. When I switched from C++ to Java, the same task could take 10-25% less coding: operations with text were now easier, and all the pointers were gone. When I learned C# (that was the time when Microsoft released C# version 2.0, while Sun had Java 1.5), I noticed that in cases where callbacks and events were used, I could type 5-10% less code in C#. But this increase in productivity applied only in certain programming domains, like Windows applications. In other areas, like embedded systems or operating systems, C was still the main option, and the rise of new languages didn’t really affect those fields. The main conclusion I drew for myself was that when a new language emerges, most likely it speeds up development only in a certain domain.

And what about Kotlin? JetBrains, the creators of Kotlin, claim that replacing Java with Kotlin will allow developers to be more productive. After playing with Kotlin a little bit and converting some existing Android classes into Kotlin, we saw that this could really be the case. Then we looked at the official Kotlin documentation where Kotlin is compared with Java, and although it has some features we really miss in Java, like data classes, null-safety and extensions, we didn’t really see anything groundbreaking. And we were about to put Kotlin aside for this project, since:

  • our team doesn’t yet feel comfortable enough with Kotlin to produce code that is easily readable and follows best practices; mastering a language requires time
  • the productivity we would gain would not outweigh the time we would spend on learning the new language

We had assumed that if we wanted to switch to the new language, we would have to convert the whole application code base and keep coding in Kotlin, but then we noticed a tutorial saying that we can mix Java and Kotlin in one project. We saw this as a good opportunity to write some of the files in Kotlin, and among the first candidates we chose entities and DTOs: they require learning very little new syntax, and the classes become much smaller thanks to getting rid of getters and setters (see the sketch below). We decided to spend some time learning Kotlin, and whenever we learn a feature that really reduces the number of code lines and brings something valuable compared with Java, we will write that piece of code in Kotlin. This way we hope to keep the same development speed while learning the new language, and at some point start to reduce development time compared with Java. It worked well when I switched from C/C++ to Java, and now it looks like it is time to switch from Java to Kotlin.
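
For context, here is the kind of Java boilerplate we mean (a hypothetical DTO invented for the example; the Kotlin one-liner shown in the comment replaces the whole class):

// Hypothetical example of the getter/setter boilerplate a Kotlin data class removes.
public class SportEventDto {
    private String id;
    private String title;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    // equals(), hashCode() and toString() would typically be added here as well;
    // in Kotlin the equivalent is a single line:
    // data class SportEventDto(var id: String?, var title: String?)
}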

http://fitradar.me/

Migrating to the new navigation system

In my last article I mentioned that we migrated to Android’s new navigation system, which is part of the Android Jetpack components. This time I want to share some implementation details. Usually in my posts I don’t really like to share actual code, only concepts, since there are plenty of implementation examples for any particular concept and I don’t want to repeat what others have already written; I prefer to explain the reasoning behind one decision or another. But while migrating to the new navigation system our team faced a few challenges that could not be resolved just by reading the available documentation and copying code examples, so I think some of the implementation details we came up with might help someone who is striving to master the new navigation system.

Problem

After analyzing our Fragments and Activities we came to the conclusion that we should have several Activities in our application, each serving as a host for a set of Fragments. The features we used to group the fragments were:

  • common master layout
  • common UI logic
  • common use case

So we started to explore the documentation and look for code examples that would help us understand how to implement navigation from one navigation graph to another graph hosted in a different Activity. Very quickly we realized that the documentation and the examples available on the Internet are heavily focused on single-Activity applications and navigation between fragments hosted in one navigation graph, although the official Android documentation mentions that the new navigation system supports Activities as destinations. The biggest challenge was to find a way to assign the start destination of a navigation graph at runtime, depending on various parameters. In our app a user should be able to create a sport event and later edit or delete it. We decided to put all fragments related to a sport event in one navigation graph. But the start destination in this graph depends on whether the user wants to create a new sport event, edit a previously created sport event, or see another user’s sport event. So we could not assign the start destination at design time in the navigation graph; we had to do it at runtime. Since we could not find enough information on how to do it, we started to consider other solutions.

Solution

After spending hours reading the documentation and exploring examples, we started to think that maybe we should use only one Activity, put all the UI logic from the other Activities into it, and show/hide the parts of the UI which are needed/not needed for the active fragment. But very soon it became clear that this universal hosting Activity was getting very big and hard to read, and that we had to add extra logic just for hiding and showing views. The hosting Activity became the opposite of what we were striving for: smaller and more manageable classes. So we dropped this idea and started to consider moving the views from Activities into Fragments, so that Fragments would contain all the necessary views and UI logic, and the hosting Activity would be left with only a NavHostFragment. That would mean that views like BottomNavigationView and Toolbar would now be located in several Fragments. We could of course use the include tag, but that still would not remove the need to repeat the same include tags in several fragments. So we didn’t really like this idea either, since it looked like we were violating the DRY (Don’t Repeat Yourself) principle. And we got back to the idea of several Activities. This time, in order to find a solution, we started to dig into the navigation component’s source code, and after a while we came up with the following implementation, which allows assigning a start destination to a navigation graph at runtime:


private void initNavGraph() {
    // find the NavHostFragment declared in the Activity's layout
    NavHostFragment navHostFragment = (NavHostFragment) getSupportFragmentManager()
            .findFragmentById(R.id.sport_activity_nav_fragment);
    // inflate the navigation graph manually instead of declaring it in XML
    NavInflater inflater = navHostFragment.getNavController().getNavInflater();
    NavGraph graph = inflater.inflate(R.navigation.nav_sport_event);
    // read the extras the hosting Activity was started with
    String sportEventId = getIntent().getStringExtra(EXTRA_SPORT_EVENT_ID);
    long calendarEventId = getIntent().getLongExtra(CalendarContract.Events._ID, -1);
    if (calendarEventId == -1 && sportEventId == null) {
        // no event id was passed in: the user is creating a new sport event
        graph.setStartDestination(R.id.editMySportEventFragment);
    } else {
        // an existing event id was passed in: show that event
        graph.setStartDestination(R.id.bookSportEventFragment2);
    }
    this.mNavController = navHostFragment.getNavController();
    // forward the Intent extras as the default arguments of the graph
    graph.addDefaultArguments(getIntent().getExtras());
    // setting the graph also navigates to the chosen start destination
    this.mNavController.setGraph(graph);
}

The above method creates a new navigation graph, sets the needed start destination, and passes on the arguments we set in the Intent when starting the hosting Activity. This way we can pass the needed arguments further to the start destination. Finally, we set this new navigation graph, with the defined start destination and arguments, on the navigation controller.
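
For completeness, here is a hedged sketch of a call site (SportEventActivity is an invented name; EXTRA_SPORT_EVENT_ID is the constant read in the method above). Because the Intent carries a sport event id, initNavGraph() will pick bookSportEventFragment2 as the start destination:

// hypothetical call site: opening the sport event screen for an existing event
Intent intent = new Intent(context, SportEventActivity.class);
intent.putExtra(EXTRA_SPORT_EVENT_ID, sportEventId);
context.startActivity(intent);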

Striving for better code

Recently we refactored our Android application’s navigation system: we replaced fragment transactions with the Android Jetpack navigation component. We already had good experience with migrating from the Model View Presenter architecture to the Model View ViewModel architecture (using the Android Jetpack Architecture components). During that refactoring we had to convert our existing presenters and views into view models and views with binding expressions. There was a risk of losing time doing unplanned work, so before making the decision to migrate we carefully estimated the risks related to refactoring. The main question we were facing was: will we see a benefit of the refactoring before the app’s release? (We already saw the benefit in the long term, but were not sure about the immediate gain.) Calculations showed that even with the refactoring we would gain in development speed. And indeed, it turned out that within a month our invested time started to pay off, and we began implementing new use cases faster even while spending additional time on refactoring the existing ones.

So we started to wonder whether switching to the new navigation system was a worthwhile endeavor. While estimating the time for refactoring and the immediate benefits, I started to think about why we considered this idea at all. What exactly drives us to make such decisions as going back to already written code and refactoring it? So I decided to put down some thoughts in this blog about the motivation behind the desire for better code.

I have already written about clean architecture and design. In this blog post I will return to this topic, but from a different angle. Let me first mention that in my experience clean code is a very opinionated term, because it is defined not by strict rules but rather by feelings: if the given code is easy to read, perceive, extend, test and maintain, it must be clean code. Different developers can achieve this feeling by different coding approaches. Therefore there will always be disputes about the best language, framework or paradigm. And quite often different developers perceive the same code differently: one might be convinced that the code is clean because from his perspective it is easy to understand and extend, while at the same time another developer will claim that a different approach would lead to better code.

Therefore, from my point of view, it is pointless to argue only about approaches to coding; languages, frameworks and paradigms should be discussed together with the measurable results these different approaches yield. Only then can we talk about clean code.

And one such measurable indicator that we looked at, while considering the migration to the new Jetpack components, was overall development speed.

And here I want to emphasize the word overall, because initially coding in a dirty way is much faster than structuring the code; only after a while does well written code start to shine, and only then do the terms testable, extendable and maintainable really start to make sense. Actually, I don’t see anything wrong with writing one big chunk of spaghetti code if all the developers who work on it now, and will work on it in the future, can understand, test and extend it as fast as very well structured code. But this is never the case, unless the developer is some kind of genius, and even then you need to make sure that every developer in the team is the same level of genius.

That is why abstraction levels were raised, so that we don’t have to write mathematical expressions and well known business logic in machine code. That is why different paradigms were invented, so that we can grasp the problems we are coding faster and more easily. And that is why patterns and best practices are used, so that we can solve problems faster and every developer can understand the solution.

On the other hand, the computer or the phone that runs our software doesn’t care about the code we wrote, because it doesn’t use our code but the code produced by the compiler or interpreter. Very bad code (from the developer’s point of view) can be converted into the same machine code that is produced from very well written code. Unfortunately, in most cases the only people who care about code quality are developers. No one else really cares about it: stakeholders, managers and users care only about the end result. As long as the application meets the requirements, it is good enough to use. But for us developers, well written code can greatly shorten development time and bring peace of mind.

But how come we get so much code which doesn’t follow clean code principles? One obvious reason is lack of experience. At some point in time all of us developers have written poor code, just because we didn’t know how to write it better, but over time we learned and reached a certain level of mastery. Shouldn’t we now write only clean code? Unfortunately, no, and there are several reasons for it. The one we learned while migrating to the Jetpack components is: no matter how good the code is now, there is always room for improvement; we just don’t know yet how to achieve it.

FitRadar application versioning

In the last post, I wrote about the FitRadar application’s release process and what kinds of releases we are planning to have. And since every release has a unique version number, I think it is a good time to talk about release versioning schemes in this post.

In our app we are planning to use the Semantic Versioning scheme that follows the pattern Major.Minor.Patch, where

  • the Major number will be increased by 1 when breaking changes are introduced that are incompatible with the previous Major version (at the same time the Minor and Patch numbers will be reset to zero)
  • the Minor number will be increased by 1 when new functionality is introduced that is backward compatible (at the same time the Patch number will be reset to zero)
  • the Patch number will be increased by 1 when a bug or a set of bugs is fixed in a backward compatible way

But before we can start to apply the Semantic Versioning scheme, we still have to reach the phase when the application is stable and ready for production; that will be version 1.0.0. Until then, while we are still in the development phase, we are using a slightly different versioning scheme that follows the pattern 0.x.y.

Our versioning scheme during the development phase is tightly coupled with the application development life-cycle. At the very beginning of our project, we decided to adopt some Scrum project management aspects, particularly sprints or iterations. The length of a sprint is such that our team can deliver at least one use case from the user requirement list. Mostly, but not always, after every sprint we release an application version with a new major number x. Every application release can contain one or more use cases. During a sprint, besides a new user story, we have to fix bugs or make improvements in the previous user story. When a bug is fixed or an improvement made, we release a new version with an increased minor number y. Our application thus started with version 0.1.0 when the first use case was implemented.

Once we started to release our application to Google Play Store tracks, we extended the application version with an additional label denoting the track, like this: 0.x.y-track_name.
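
For illustration, a hypothetical sequence of development-phase versions could look like this:

  • 0.1.0 – the first use case implemented
  • 0.1.1 – a bug fixed in the first use case
  • 0.2.0 – the second use case implemented
  • 0.2.0-internal – the same release rolled out to the internal test track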

As you can see, this versioning scheme is not quite suitable for production, where each release should be stable and finalized. But during the development phase, when each use case is iterated several times and after each iteration we should have an application that we can install, demonstrate and test, this kind of versioning scheme serves us very well.

http://fitradar.me/

 

FitRadar application’s way to Google Play Store

Finally we have reached the point where our application has taken such a form that we can start to present it to the stakeholders and give it out to our testers. So this could be a good time to talk about our application’s release process. Although we still have quite a long way to go before we release the application to the general public, at this stage we have already released it to testers on the Google Play Store. To make sure the application arrives to the general audience stable, bug-free and with an excellent user experience, we follow the well known software release cycle described here. It means that with every release we will open our application to a wider public. And using the Google Play Store release tracks as application distribution channels helps us follow software release cycle best practices. These are the release tracks on the Google Play Store:

  • Internal test track
  • Closed track (closed alpha)
  • Open track (open beta)
  • Production track

On each of these release tracks we can roll out several versions of our application, and one can start from any of the tracks. Sometimes it is useful to send pre-alpha versions to testers directly by e-mail or to distribute them over private channels, and to start rolling out the application to the Google Play Store only in the alpha or beta phase; but since the Google Play Store has a special track (the internal test track) for pre-alpha versions, we decided to roll out our application there as soon as we had a working application with enough features. The main advantage of distributing an application over the Google Play Store is the ease of installation and updates, which we really missed when we had to download the application’s latest version manually.

At this phase only our company’s QA (Quality Assurance) people have access to the application on the Google Play Store. Our QA team worked together with the developers and the business idea authors at the application design stage and set up the application acceptance requirements. And now that they have a real app in their hands, they can make sure the application is not only working but works according to the previously set requirements. Once our testers have made sure that our application is stable, bug-free and works according to the requirements, we will make it available to a small group of trusted people outside our team. For this purpose the closed alpha release will be used. Entering this phase we don’t expect all features to be present in the app, but a user should be able to perform the main actions and fulfill most of the use cases. The main purpose of this release is to get quick feedback from real users who see the application for the first time, and to find out whether there is anything our team has missed at earlier stages of application development. A dedicated email box will serve as the communication channel at this stage.

When there are no complaints, such as bugs or crashes, from the trusted users within a few days, we will move to the next release: the beta release. At this stage we expect our application to have all the features we are planning to put in the production version, and this will be the first time we open our application to the general public. The purpose of the beta release will still be gathering feedback from users, but this time the audience will be much bigger and more diverse, and a dedicated bug tracking system will be used as the main communication channel. After that we will be ready to put our application on the production track.

Visit our website and join the mailing list:

http://fitradar.me/

Code generation

One of the technologies that helps us follow the Inversion of Control (IoC) principle in our project is the Dagger framework (https://google.github.io/dagger/). It allows us to implement Dependency Injection, one of the forms of IoC. Most Dependency Injection frameworks rely on reflection, which is used to scan the code for annotations, and this is the way the early Android DI frameworks like Guice were implemented. Contrary to back-end solutions, where the extra time and memory required by a DI framework is usually not a problem (dependencies are resolved and created at application start time, not during request time, so the user experience is not degraded), on mobile devices this extra time and memory might be crucial and can significantly impact the user experience. Therefore Dagger, which is the recommended DI framework for Android (https://developer.android.com/topic/performance/memory), takes a different approach: it resolves dependencies at compile time by generating extra classes. Code generation is something most of our team members are skeptical about, since generated code is very rarely of the same quality as code written by a seasoned developer. So we decided to take a closer look at the area of code generation and decide whether it should concern us.
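
As a minimal sketch of what this looks like in practice (the class names are invented for the example, not taken from our code base): from the annotated interface below, Dagger generates a DaggerAppComponent class at compile time that wires the whole object graph, so no reflection is needed at runtime.

import javax.inject.Inject;
import javax.inject.Singleton;

import dagger.Component;
import dagger.Module;
import dagger.Provides;

// a dependency we want injected
class EventRepository {
    private final String baseUrl;
    EventRepository(String baseUrl) { this.baseUrl = baseUrl; }
}

// a class that receives its dependency through the constructor
class EventService {
    final EventRepository repository;

    @Inject
    EventService(EventRepository repository) { this.repository = repository; }
}

// tells Dagger how to build the repository
@Module
class AppModule {
    @Provides
    @Singleton
    EventRepository provideRepository() {
        return new EventRepository("https://example.com/api");
    }
}

// from this interface Dagger generates DaggerAppComponent at compile time
@Singleton
@Component(modules = AppModule.class)
interface AppComponent {
    EventService eventService();
}

// usage: EventService service = DaggerAppComponent.create().eventService();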

The whole history of programming languages is about raising the abstraction level and generating the lower level code. It started with assembly languages, when developers no longer had to write programs in binary code (zeros and ones) but were able to use human readable keywords, and an assembler was used to convert the assembly code into machine language. Since the conversion was quite straightforward, developers didn’t worry much about the resulting machine code but mostly about the assembly code. A different story was with the next generation of programming languages like Fortran, Cobol, Pascal and C, known as high-level programming languages. The main advantage of high-level languages over low-level languages is that they are easier to read, write, and maintain. But the process of translating such a language into machine code (done by a compiler) is much more complex, and therefore the generated machine code was not as optimal as in the case of assembly language. After a while, though, the compilers improved, and most developers stopped bothering with the compiled code and focused solely on the high-level programming language code.

The main conclusion we drew from the history of programming languages is that as long as the generated code is not part of the code base that developers have to read, extend and maintain, and it does not impose a performance penalty, there is no risk in using generated code. On the other hand, the benefit it brings is significant: it saves a lot of time on boilerplate code and allows us to focus on the application’s main logic.

Abstraction layer can always help

In my last article about good software architecture I wrote that in order to have a good architecture we should apply Object Oriented programming principles (some of them known as the SOLID principles) and design patterns. Let’s discuss some of these principles today, starting with the Dependency Inversion Principle, and see how it shapes the design and contributes to good architecture. According to Robert C. Martin, the principle states:

A. High level modules should not depend upon low level modules. Both should depend upon abstractions.

B. Abstractions should not depend upon details. Details should depend upon abstractions.

In other words, a class’s dependencies should be expressed as interfaces, not implementations. This simple principle leads to a significant benefit in design: loose coupling. And loose coupling in turn leads us to the following benefits (a minimal sketch of the principle follows the list):

  • extensible code. Do you remember the traits of poor design from my previous article? When you come to the point where you don’t know where to put your new code, because there is no place it would fit well, that could be a sign that the code is not extensible. Think about this principle next time you are puzzled; maybe Dependency Inversion is the way to make your architecture extensible.
  • testability – dependencies can easily be swapped with mocks, and units can thus be tested in isolation
  • parallel development – the only thing you have to take care of up front is the interface definitions between modules, and then work on the modules can proceed in parallel.
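
Here is a minimal sketch of the principle (the names are invented for illustration): the high-level class depends only on an interface, and the concrete implementation can be swapped, with a mock in a unit test or with different storage later.

// the abstraction the high-level module depends on
interface EventStore {
    void save(String eventId);
}

// a low-level detail; it depends on the abstraction, not the other way around
class SqliteEventStore implements EventStore {
    @Override
    public void save(String eventId) {
        // write to the local database...
    }
}

// high-level module: it sees only the interface, never the concrete class
class EventBooking {
    private final EventStore store;

    EventBooking(EventStore store) { // the dependency is injected
        this.store = store;
    }

    void book(String eventId) {
        store.save(eventId);
    }
}

// in production: new EventBooking(new SqliteEventStore());
// in a unit test: new EventBooking(inMemoryFake);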

As you can see, even by applying this one OO principle we can improve the design in several areas. But don’t rush to provide an interface for every object. So when is it better not to abstract an implementation behind an interface? Most of the time we write code by extending an existing framework (in our case Android), using the classes and methods of the framework and calling third party libraries. In order to abstract away these kinds of classes, we would need to take the extra step of wrapping them in our own classes and only then abstracting our class behind an interface. In this case the only benefit we would gain is testability, but in the case of Android, with the help of Mockito, we can already easily mock framework classes. Another case is models (ViewModels, DTOs, database models) and static methods (once you make a method static, you are counting on the fact that it won’t change its signature). The one thing all those cases have in common is that you are not planning to, or you can’t, swap the existing implementation, and so there is no need for abstraction.

Once we make the necessary dependencies abstract, we still need to provide implementations for them, but in a way that the module is not aware of the implementation. Remember, the module sees only abstractions! And here another design principle comes in handy: Inversion of Control (IoC), sometimes known as the Hollywood Principle. The general philosophy is that control is handed over. For example, in a plugin framework you are expected to override some callback methods. Take Android’s Activity class and its lifecycle methods: the class doesn’t have control over when the methods get called, the Android framework decides when to call them. The control is inverted. Dependency Injection (DI) is another example of IoC: the class doesn’t create its dependencies but instead gets them from someone else. By using DI we can follow one more principle, the Single Responsibility Principle (SRP):

1) object creation and its lifetime management are delegated to the DI container, and thus these responsibilities are taken away from the module, bringing us one step closer to SRP;

2) if there are too many arguments in the constructor, it could be a signal that we are violating SRP. In that case we should consider grouping dependencies and hiding them behind a Facade, as shown in the sketch below.
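
As a hedged illustration of the second point (all names invented for the example): three closely related constructor dependencies are grouped behind one facade, which shrinks the constructor and makes the remaining responsibility easier to see.

interface Validator { boolean isValid(String eventId); }
interface Notifier { void notifyFollowers(String eventId); }
interface EventStorage { void save(String eventId); }

// the facade groups collaborators that always work together
class EventPipeline {
    private final Validator validator;
    private final EventStorage storage;
    private final Notifier notifier;

    EventPipeline(Validator validator, EventStorage storage, Notifier notifier) {
        this.validator = validator;
        this.storage = storage;
        this.notifier = notifier;
    }

    void publish(String eventId) {
        if (validator.isValid(eventId)) {
            storage.save(eventId);
            notifier.notifyFollowers(eventId);
        }
    }
}

// before: EventPublisher(Validator v, EventStorage s, Notifier n, ...) — too many arguments
// after: one cohesive dependency instead of several
class EventPublisher {
    private final EventPipeline pipeline;

    EventPublisher(EventPipeline pipeline) { this.pipeline = pipeline; }
}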

Our way to good software architecture

In this article I want to share the experience of how we arrived at the architecture we are using now in our Android application and why we consider it well suited for our project. From my point of view, it is easier to describe what makes a software design poor than to list the features that make it good. And to know what poor software design is, one must have some experience with projects where the results were not very satisfying but the lessons learned were valuable. So let me share some of the things I learned that stand in the way of good software architecture:

  • hard to add new functionality or feature:

    • there is no code to reuse for functionality that shares common traits with already implemented functionality and you are forced to copy existing code to implement a new feature

    • the only way to implement a feature is to rewrite existing code to fit your needs

    • it is easier to use a hack to implement a feature rather than find a place for the feature in the existing project structure

    • when you add a new code, you break existing functionality

    • it is hard for a team to work simultaneously on the same codebase

    • it is hard to read the code, even if only one developer works on code

  • hard to test added functionality:

    • you can’t isolate software units for tests

    • it is easier to test the application manually as a whole than to write unit and integration tests for separate pieces of software

As a result, writing and testing code becomes more time consuming and more expensive.

Once we start to notice one of these things, it is time to start thinking about improving or introducing architecture in our software. The architecture is probably not a big deal if you are building a 1000-line app or writing a proof of concept app, but since our project is neither of those cases, it was clear from the very beginning that we needed a solid design for our application. I have already written about the technologies we decided to use and how we arrived at those particular conclusions. And now, when we are close to the finish, we can look back and see whether our choices paid off. But first let me make one thing clear: it is not possible to make a perfect design the first time, unless you have already developed an application with very similar user requirements, and even then you should revise the technologies used. That was one of the first lessons we learned about the app’s architecture in our project. Once we noticed that adding new features and testing them had become much harder than for the ones we implemented in the past, we started to investigate the existing design. And when we found a better way to organize our application, we adjusted the architecture accordingly. Some of the biggest changes we made in our application were when we switched from the Model View Presenter (MVP) pattern to Model View ViewModel (MVVM) and upgraded Dagger to the latest version. The other lesson we learned was that there might be cases where there isn’t only one good solution. That was the case when we were designing the database and data access layer in our Android app. We had several options:

  • Room
  • Squidb
  • Sqlbrite

We were not sure which one would fit best in our design, and even now we think any of them would work well in our application.

At the time we started to design our application, Android didn’t yet have the Architecture Components or the Guide to App Architecture, so we had to start somewhere else, but eventually we arrived at almost the same architecture model as suggested here: https://developer.android.com/jetpack/docs/guide

Contrary to ASP.NET Core (we use it for our back end), where a developer can choose a project template in Visual Studio and start to work with well established patterns just by extending the template, in the Android world we had to create the project structure ourselves by choosing the most appropriate technologies and architectural patterns. Nowadays you can start with the Architecture Components and the recommended app architecture (https://developer.android.com/jetpack/docs/guide), since most likely it will become the most common way to build complex applications on Android. But this is just a reference model, the backbone or base where we can start to add our first classes. The next step was to extend this backbone in such a way that we could keep avoiding the pitfalls and shortcomings of poor design (the things I mentioned in the beginning). And this is where the classic Object Oriented Programming skills come into play. First, to make it easy to extend our application with new features, we applied the SOLID principles, where S stands for the Single-responsibility principle, O for the Open-closed principle, L for the Liskov substitution principle, I for the Interface segregation principle, and D for the Dependency Inversion principle. Then, whenever we noticed technologies that allowed us to follow these principles, we included them in our project. For example, Dagger was one such library that allowed us to follow the Dependency Inversion Principle, and the Android data binding library was another that made it easier to follow the Single responsibility principle. Once we started to apply the SOLID principles, we were able to identify the standard design patterns in our application code, and so we started to organize parts of the code around those design patterns (https://www.geeksforgeeks.org/software-design-patterns/).

Making a good architecture for a complex application is not an easy task. It requires experience, a good understanding of Object Oriented programming, and knowledge of the technologies on the platform you are developing for.

The way to data binding in Android

Today I want to share our experience on the way to the MVVM (Model View ViewModel) pattern in our Android project.

When the data binding library was introduced at Google I/O in 2015, it looked like a very promising technology and one much awaited in our team. We have developers who in the past built Windows applications on the .NET framework using the MVVM pattern. They loved that pattern, and there were good reasons for it:

  • MVVM gives a clean separation between the UI and the rest of the application. It is still possible to bring business logic into the UI because of the powerful data binding expressions, but now the framework is on the developer’s side in keeping things separate, and in most cases it is only up to the developers to produce clean code.

  • it improves testability: more code is moved out of the UI into easily testable ViewModel classes

  • it allows the team to work independently on the UI and business parts, and the delegation of responsibilities becomes much clearer

But until 2015, MVVM was available mostly to Windows application developers, because it turns out that in order to implement the MVVM pattern as we know it now, we need a UI markup language and data binding expressions. Without these two components, the most common patterns developers were left with were MVP (Model–view–presenter) and MVC (Model–view–controller). In the web world things progressed faster, since from the very beginning the main way to create UI was a markup language (HTML); some time later the dynamic web emerged, and along the way different engines appeared that allowed developers to merge data with the markup. But in the desktop and mobile application world, markup languages and data bindings emerged only recently. First, Microsoft introduced the WPF (Windows Presentation Foundation) subsystem with XAML as part of the .NET framework, and later Android came out with its own markup language. About the same time WPF was announced, Microsoft architects Ken Cooper and Ted Peters announced the MVVM pattern on their blog. And so the MVVM era started in the Windows world, but Android developers had to wait a little bit before they could start to use the MVVM pattern.

So when Android data binding (https://developer.android.com/topic/libraries/data-binding/index.html) and the Architecture components (https://developer.android.com/topic/libraries/architecture/adding-components.html) were released, we decided to give them a try. Until then we were already striving to separate the UI from the rest of the application, and therefore we were using the MVP pattern in combination with Butter Knife (http://jakewharton.github.io/butterknife/). The first thing we noticed is that we no longer had to write so much boilerplate code to keep the UI and business logic separate. Instead of a Presenter, a View interface and an Activity that implements the View, we now wrote a very thin Activity and a ViewModel and made slight changes in the layout files. So some of the user stories we could write faster. But it was not always the case. When we had to work with more complex views, like the map view, we could not bind the data directly in the layout file because of the lack of bindings, and so the data binding was done in the Activity or Fragment. But from our perspective it was still a step forward, because most of the views we used had bindings, and when they didn’t, we could implement simple Binding Adapters (see the sketch below). Essentially, we used data binding in the layout markup where it was easy to use, and left the UI processing code in the Activity or Fragment where finding a way to implement a binding for the layout markup would have taken a lot of time. So for our project MVVM worked out very well, but we should admit that there are projects where it will not bring much advantage. Therefore, before you make a decision about using MVVM and data binding, spend some time investigating the views you are going to use in the project and the bindings available for them.
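
For reference, here is a minimal sketch of such a custom Binding Adapter (the attribute name and the formatting logic are invented for the example, and we assume the AndroidX data binding artifacts): it teaches the data binding library how to handle an attribute that has no built-in binding, so a layout can write app:durationMinutes="@{viewModel.duration}" on a TextView.

import android.widget.TextView;

import androidx.databinding.BindingAdapter;

public class Bindings {
    // called by the generated binding class whenever the bound value changes
    @BindingAdapter("durationMinutes")
    public static void setDurationMinutes(TextView view, int minutes) {
        int hours = minutes / 60;
        int rest = minutes % 60;
        view.setText(hours > 0 ? hours + " h " + rest + " min" : rest + " min");
    }
}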