Migrating to the new navigation system

In my last article I mentioned that we migrated to Android's new navigation system, which is part of the Android Jetpack components. This time I want to share some implementation details. Usually in my posts I prefer to share concepts rather than actual code, since there are plenty of implementation examples for any particular concept and I don't want to repeat what others have already written; I would rather explain the reasoning behind one decision or another. But while migrating to the new navigation system our team faced a few challenges that could not be resolved just by reading the available documentation and copying code examples, so I think some of the implementation details we came up with might help someone who is striving to master the new navigation system.

Problem

After analyzing our Fragments and Activities we came to the conclusion that our application should have several Activities, each serving as a host for a set of Fragments. The features we used to group the Fragments were:

  • common master layout
  • common UI logic
  • common use case

So we started to explore the documentation and look for code examples that would help us understand how to navigate from one navigation graph to another graph hosted in a different Activity. Very quickly we realized that the documentation and the examples available on the Internet are heavily focused on single-Activity applications and on navigation between Fragments hosted in one navigation graph, although the official Android documentation mentions that the new navigation system supports Activities as destinations. The biggest challenge was to find a way to assign the start destination of a navigation graph at runtime, depending on various parameters. In our app a user should be able to create a sport event and later edit or delete it. We decided to put all Fragments related to a sport event in one navigation graph. But the start destination in this graph depends on whether the user wants to create a new sport event, edit a previously created sport event, or view another user's sport event. So we could not assign the start destination at design time in the navigation graph; we had to do it at runtime. Since we could not find enough information on how to do that, we started to consider other solutions.

Solution

After spending hours reading documentation and exploring examples, we started to think that maybe we should use only one Activity, put all the UI logic from the other Activities into it, and show or hide the parts of the UI that are needed or not needed for the active Fragment. But very soon it became clear that this universal hosting Activity was getting very big and hard to read, and we had to add extra logic just for hiding and showing views. The hosting Activity became the opposite of what we were striving for: smaller and more manageable classes. So we dropped this idea and started to consider moving the views from the Activities into the Fragments, so that the Fragments would contain all the necessary views and UI logic and the hosting Activity would be left with only a NavHostFragment. That would mean that views like BottomNavigationView and Toolbar would now live in several Fragments. We could of course use the include tag, but that still would not remove the need to repeat the same include tags in several Fragments. So we didn't really like this idea either, since it looked like a violation of the DRY (Don't Repeat Yourself) principle. And we got back to the idea of several Activities. This time, in order to find a solution, we started to dig into the navigation component's source code, and after a while we came up with the following implementation, which allows us to assign a start destination to a navigation graph at runtime:


private void initNavGraph() {
    // Find the NavHostFragment declared in the hosting Activity's layout
    NavHostFragment navHostFragment = (NavHostFragment) getSupportFragmentManager()
            .findFragmentById(R.id.sport_activity_nav_fragment);
    // Inflate the navigation graph manually instead of letting NavHostFragment do it,
    // so that the start destination can be chosen at runtime
    NavInflater inflater = navHostFragment.getNavController().getNavInflater();
    NavGraph graph = inflater.inflate(R.navigation.nav_sport_event);
    String sportEventId = getIntent().getStringExtra(EXTRA_SPORT_EVENT_ID);
    long calendarEventId = getIntent().getLongExtra(CalendarContract.Events._ID, -1);
    if (calendarEventId == -1 && sportEventId == null) {
        // No event id was passed: the user is creating a new sport event
        graph.setStartDestination(R.id.editMySportEventFragment);
    } else {
        // An existing event id was passed: open that sport event
        graph.setStartDestination(R.id.bookSportEventFragment2);
    }
    this.mNavController = navHostFragment.getNavController();
    // Forward the Intent extras as default arguments of the graph,
    // so the start destination receives them
    graph.addDefaultArguments(getIntent().getExtras());
    this.mNavController.setGraph(graph);
}

The method above inflates a new navigation graph, sets the required start destination, and passes on the arguments we put into the Intent when starting the hosting Activity. This way the needed arguments can be forwarded to the start destination. Finally, we set this new navigation graph, with its start destination and arguments, on the navigation controller.
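For completeness, here is a minimal sketch of how the hosting Activity could be started with those extras. The Activity name SportEventActivity and the helper method are hypothetical; EXTRA_SPORT_EVENT_ID is the same extra key that initNavGraph() reads from the Intent:

// Sketch only: SportEventActivity is a hypothetical name for the hosting Activity.
void openSportEvent(Context context, String sportEventId) {
    Intent intent = new Intent(context, SportEventActivity.class);
    // With an existing event id the graph starts at bookSportEventFragment2;
    // without any extras it starts at editMySportEventFragment (create a new event).
    intent.putExtra(EXTRA_SPORT_EVENT_ID, sportEventId);
    context.startActivity(intent);
}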

Striving for better code

Recently we refactored our Android application's navigation system: we replaced Fragment transactions with the Android Jetpack navigation component. We already had good experience with migrating from a Model View Presenter architecture to a Model View ViewModel architecture (using the Android Jetpack Architecture components). During that refactoring we had to convert our existing presenters and views into ViewModels and views with binding expressions. There was a risk of losing time on an unplanned activity, so before deciding to migrate we carefully estimated the risks related to the refactoring. The main question we faced was: will we see the benefit of the refactoring before the app's release? (We already saw the benefit in the long term, but were not sure about the immediate gain.) Calculations showed that even with the refactoring we would gain development speed. And indeed, within a month the invested time started to pay off, and we implemented new use cases faster even while spending additional time on refactoring the existing ones.

So we started to wonder whether switching to the new navigation system was a worthwhile endeavor. While estimating the time needed for the refactoring and the immediate benefits, I started to think about why we considered this idea at all. What exactly drives us to make such decisions as going back to already written code and refactoring it? So I decided to put down some thoughts in this blog about the motivation behind the desire for better code.

I have already written about clean architecture and design. In this blog I will return to this topic, but from a different angle. Let me first mention that in my experience clean code is a very opinionated term, because it is defined not so much by strict rules as by feelings: if the given code is easy to read, understand, extend, test and maintain, it must be clean code. And different developers achieve this feeling through different coding approaches. Therefore there will always be disputes about the best language, framework or paradigm. Quite often different developers perceive the same code differently: one may be convinced that the code is clean because from his perspective it is easy to understand and extend, while at the same time another developer will claim that a different approach would lead to better code.

Therefore, from my point of view, it is pointless to argue about coding approaches alone; languages, frameworks and paradigms should be discussed together with the measurable results these different approaches yield. Only then can we talk about clean code.

One such measurable indicator that we looked at, while considering the migration to the new Jetpack components, was overall development speed.

And here I want to emphasize the word overall, because initially coding in a dirty way is much faster than structuring the code; only after a while does well-written code start to shine, and only then do the terms testable, extendable and maintainable really start to make sense. Actually I don't see anything wrong with writing one big chunk of spaghetti code if every developer who works on it now, or will work on it in the future, can understand, test and extend it as fast as very well structured code. But this is never the case, unless the developer is some kind of genius, and even then you would need to make sure that every developer on the team is a genius of the same level. That is why abstraction levels were raised, so that we don't have to write mathematical expressions and well-known business logic in machine code. That is why different paradigms were invented, so that we can grasp the problems we are coding faster and more easily. And that is why patterns and best practices are used, so that we can solve problems faster and every developer can understand the solution.

On the other hand, the computer or the phone that runs our software doesn't care about the code we wrote, because it doesn't use our code but the code produced by a compiler or interpreter. Very bad code (from the developer's point of view) can be converted into the same machine code that is produced from very well written code. Unfortunately, in most cases the only people who care about code quality are developers. No one else really cares about it: stakeholders, managers and users care only about the end result. As long as the application meets the requirements, it is good enough to use. But for us developers, well-written code can greatly shorten development time and bring peace of mind.

But how come we end up with so much code that doesn't follow clean code principles? One obvious reason is lack of experience. At some point in time all of us developers have written poor code, simply because we didn't know how to write it better; over time we learned and reached a certain level of mastery. Shouldn't we now write only clean code? Unfortunately no, and there are several reasons for it. The one we learned while migrating to the Jetpack components is: no matter how good the code is now, there is always room for improvement, we just don't know yet how to achieve it.

FitRadar application versioning

In the last post I wrote about the FitRadar application's release process and what kinds of releases we are planning to have. And since every release has a unique version number, I think it is a good time to talk about release versioning schemes in this post.

In our app we are planning to use the Semantic Versioning scheme, which follows the pattern Major.Minor.Patch, where

  • the Major number will be increased by 1 when breaking changes are introduced that are incompatible with the previous Major version
  • the Minor number will be increased by 1 when new functionality is introduced that is backward compatible (at the same time the Patch number is reset to zero)
  • the Patch number will be increased by 1 when a bug or a set of bugs is fixed in a backward-compatible way

But before we can start to apply the Semantic Versioning scheme we still have to reach the phase when the application is stable and ready for production, and that will be version 1.0.0. Until then, while we are still in the development phase, we are using a slightly different versioning scheme that follows the pattern 0.x.y.

Our versioning scheme during the development phase is tightly coupled with the application development life-cycle. At the very beginning of our project we decided to adopt some aspects of Scrum project management, particularly sprints (iterations). The length of a sprint is such that our team can deliver at least one use case from the user requirement list. Mostly, but not always, after every sprint we release the application with a new major number x. Every application release can contain one or more use cases. During a sprint, besides a new user story, we have to fix bugs or make improvements in the previous user story. When a bug is fixed or an improvement made, we release a new version with an increased minor number y. It means that our application started with version 0.1.0 when the first use case was implemented.

Once we started to release our application to Google Play Store tracks, we extended the application version with an additional label denoting the track, like this: 0.x.y-track_name.

As you can see, this versioning scheme is not quite suitable for production, where each release should be stable and finalized. But during the development phase, when each use case is iterated several times and after each iteration we should have an application that we can install, demonstrate and test, this kind of versioning scheme serves us very well.

http://fitradar.me/


FitRadar application’s way to Google Play Store

Finally we have reached the point where our application has taken such a form that we can start to present it to the stakeholders and give it out to our testers. So this is a good time to talk about our application's release process. Although we still have quite a long way to go before we release the application to the general public, at this stage we have already released it to testers on the Google Play Store. To make sure that the application that reaches the general audience is stable, bug-free and offers an excellent user experience, we follow the well-known software release cycle described here. It means that with every release we will open our application to a wider public. Using Google Play Store release tracks as application distribution channels helps us follow software release cycle best practices. The following release tracks are available on the Google Play Store:

  • Internal test track
  • Closed track (Alpha closed)
  • Open track (Beta open)
  • Production track

In each of these release tracks we can roll out several versions of our application. One can start from any of the release tracks. Sometimes it is useful to send pre-alpha versions to testers directly by e-mail or distribute them over private distribution channels, and only in the alpha or beta phase start rolling out the application to the Google Play Store. But since the Google Play Store has a special track (the internal test track) for pre-alpha versions, we decided to roll out our application as soon as we had a working build with enough features. The main advantage of distributing the application over the Google Play Store is the ease of installation and update, which we really missed when we had to download the application's latest version manually.

At this phase only our company's QA (Quality Assurance) people have access to the application on the Google Play Store. Our QA team worked together with the developers and the business idea authors at the application's design stage and set up the application acceptance requirements. Now that they have a real app in their hands, they can make sure the application is not only working, but working according to the previously set requirements.

Once our testers have made sure that the application is stable, bug-free and works according to the requirements, we will make it available to a small group of trusted people outside our team. For this purpose the closed alpha release will be used. Entering this phase we don't expect all features to be present in the app, but a user should be able to perform the main actions and fulfill most of the use cases. The main purpose of this release is to get quick feedback from real users who see the application for the first time, and to find out whether there is anything our team has missed at the earlier stages of application development. A dedicated e-mail box will be used as the communication channel at this stage. When there are no complaints, such as bugs or crashes, from the trusted users within a few days, we will move on to the next release: the beta release. At this stage we expect our application to have all the features we are planning to put into the production version. And this will be the first time we open our application to the general public. The purpose of the beta release will still be to gather feedback from users, but this time the audience will be much bigger and more diverse. A dedicated bug tracking system will be used as the main communication channel. After that we will be ready to put our application on the production track.

Visit our website and join the mailing list:

http://fitradar.me/

Code generation

One of the technologies that helps us follow the Inversion of Control (IoC) principle in our project is the Dagger framework (https://google.github.io/dagger/). It allows us to implement Dependency Injection, one of the forms of IoC. Most Dependency Injection frameworks rely on reflection, which is used to scan the code for annotations, and this is how the early Android DI frameworks like Guice were implemented. In back-end solutions the extra time and memory required by a DI framework is usually not a problem, because dependencies are resolved and created at application start time rather than during request time, so the user experience is not degraded; on mobile devices, however, this extra time and memory might be crucial and can significantly impact the user experience. Therefore Dagger, which is the recommended DI framework (https://developer.android.com/topic/performance/memory), takes a different approach: it resolves dependencies at compile time by generating extra classes. Code generation is something most of our team members are skeptical about, since generated code is rarely of the same quality as code written by a seasoned developer. So we decided to take a closer look at the area of code generation and whether it should be our concern.
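To make the compile-time approach a bit more concrete, below is a minimal Dagger 2 sketch; the types AnalyticsTracker, AppModule and AppComponent are illustrative and not taken from our code base. Dagger processes these annotations during compilation and generates an implementation class named DaggerAppComponent, so no reflection is needed at runtime:

import javax.inject.Singleton;
import dagger.Component;
import dagger.Module;
import dagger.Provides;

// Illustrative types only, not from the FitRadar code base.
class AnalyticsTracker {
    void trackScreen(String name) { /* ... */ }
}

@Module
class AppModule {
    // Tells Dagger how to construct an AnalyticsTracker when one is requested
    @Provides
    @Singleton
    AnalyticsTracker provideAnalyticsTracker() {
        return new AnalyticsTracker();
    }
}

@Singleton
@Component(modules = AppModule.class)
interface AppComponent {
    AnalyticsTracker analyticsTracker();
}

// Usage: DaggerAppComponent is the class Dagger generates at compile time.
// AnalyticsTracker tracker = DaggerAppComponent.create().analyticsTracker();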

The whole history of programming languages is about raising the abstraction level and generating the lower-level code. It started with assembly languages, when developers no longer had to write programs in binary code (zeros and ones) but could use human-readable keywords, and an assembler was used to convert the assembly code into machine language. Since the conversion was quite straightforward, developers didn't worry much about the resulting machine code but mostly about the assembly code. It was a different story with the next generation of programming languages such as Fortran, Cobol, Pascal and C, known as high-level programming languages. The main advantage of high-level languages over low-level languages is that they are easier to read, write and maintain. But the process of translating such a language into machine code (done by a compiler) is much more complex, and therefore the generated machine code was initially not as optimal as in the case of assembly language. After a while, however, compilers improved, and most developers stopped bothering with the compiled code and focused solely on the high-level programming language code.

The main conclusion we drew from the history of programming languages is that as long as the generated code is not part of the code base that developers have to read, extend and maintain, and it does not impose a performance penalty, there is no risk in using generated code. On the other hand, the benefit it brings is significant: it saves a lot of time on boilerplate code and allows us to focus on the application's main logic.