Bringing order to files

One of the main paradigms we followed during FitRadar application development was the Object Oriented Programming (OOP) paradigm. The main objectives of OOP are:

  • to organize the code so that data and the functions that operate on that data stick together in one entity (a class),
  • to extract reusable parts into separate entities (classes, interfaces).

As we followed OOP principles and patterns, our code base evolved into many small files, each representing one or sometimes several entities. Each file contained clean, well organized code that was easy to maintain. On the one hand we reduced the size of the files and thus improved navigation within a file, but on the other hand we increased the number of files. And the more files we produced, the harder it became to navigate between them. Very quickly it became clear that we needed a new way to organize our code-base files, so that anyone could quickly find the file they need. Since there are several ways to organize files into packages, and source code packaging really depends on the project, in this article I want to share our team's experience of finding an approach that helped us locate files quicker in our code base.

The goal of organizing files into packages is to allow a developer, or anyone else working with the source code, to easily find a needed data type. To achieve this we had to introduce particular principles for organizing files within packages, so that once a person learns these principles it is a breeze to find the necessary type. When we thought about it, we came to the conclusion that these principles should act like a search algorithm, but for humans.

The basic partition of our source code into separate projects was predefined by Clean Architecture. It gave us a basic understanding of where to put files at the high level. In our first attempt we tried to put data types of the same kind under the same package. For example, all the repository definitions were kept in the package com.fitradarlab.fitradar.domain.repository, all the Retrofit endpoint definitions in the package com.fitradarlab.fitradar.data.net.endpoints, and so on. This approach, suggested by Clean Architecture, worked well in the data and domain projects, but when we tried to apply it to the UI project it didn't really help us. The reason was the way we worked with the UI part of the project. Our work was organized around use cases, and to implement a use case on the UI level we had to work simultaneously on an Activity, a Fragment, a ViewModel, a Dagger dependency module, a layout and navigation. All these types were located in different packages, and under each package there were already quite a few other files, so it was hard to find a needed file fast.

To mitigate the problem we first tried to keep all the files of a use case open, but we quickly realized that the more files we open in Android Studio, the less we see of a file name in a tab, because the tabs shrink. So even on our big screens we could have only 5-7 files open, and in many cases we needed more than that. It took us a while to notice that the files we were trying to keep open belonged to one use case, but once we realized that, it became clear that we needed to put those files under the same package. Once this discovery was made, the new packaging structure for the UI project was born. We completely refactored the UI project by introducing packages that resemble the names of our use cases. Thanks to the Android Studio refactoring tools it took only a few hours, and after that we really felt comfortable with the new packaging structure. Now we didn't have to keep a bunch of files open, because all the files we needed to work with were visible under a single package in the Project window.

New UI project package structure
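For illustration, a use-case-oriented UI package layout might look roughly like this (the package and class names below are made up to show the idea, not our exact ones):

com.fitradarlab.fitradar.ui.sporteventtimeline
    SportEventTimelineFragment.java
    SportEventTimelineViewModel.java
    SportEventTimelineModule.java      // Dagger module for the use case
com.fitradarlab.fitradar.ui.bookevent
    BookSportEventFragment.java
    BookSportEventViewModel.java
    BookSportEventModule.java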

But there was still one problem left – the resource files. Contrary to source code, where a developer can create a hierarchy of packages, the resources have only several predefined folders, and the most used resource types, like layouts and drawables, usually have a long list of files. Once again we applied the use case approach and came up with the following naming convention for our layout files: the layout file name starts with the type – fragment, activity, row or view – then mimics the name of the package, and ends with a unique name of the layout. For example, the layout for our timeline page is named fragment_sport_event_timeline.xml. Unfortunately we still haven't found a good naming strategy for drawables and other shared resources that are not bound to a particular use case, but already now, with these new naming conventions, we see a noticeable improvement in our source code maintenance.
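Under this convention a layout folder contains names like these (all but the first are hypothetical examples of the pattern):

fragment_sport_event_timeline.xml      // type: fragment, package: sporteventtimeline
fragment_book_sport_event.xml
activity_sport_event.xml
row_sport_event_timeline_item.xml
view_sport_event_rating.xml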

Visit our website and join the mailing list. Our app is coming soon:

http://fitradar.me/

Collaboration between developers and non-developers in FitRadar

Initially, when we worked on the idea of FitRadar, several people were involved – among them a sports club manager, a designer, and a developer. Everyone put their ideas on the table, and after several brainstorming meetings we came up with the initial requirements for our future product. Later, everyone who participated in creating the user requirements became, in one way or another, a tester. The user requirement authors knew best how the app should work, so they were the ones who could verify the application's design and functionality.

Now that we have a tangible product, the people who laid down the requirements want to make sure their ideas work properly in the application. Some of the features are possible to test via the User Interface, like design and navigation. In this case a screen serves as a medium of communication between developers and testers: people who are testing the application can take screenshots and explain the problem to a developer in a demonstrable way. But how do we make sure the business logic implemented in the application behaves according to the requirements? Developers implemented it according to their interpretation of the requirements. Although we used UML diagrams, tables and pictures, many times the only way to describe the business logic was human language. And as we know, human language is quite ambiguous and fills the gaps with a lot of assumptions, which inevitably leads to misinterpretation. One way for a non-developer to test the application's business logic is to try to invoke it from the UI and see if it works correctly, but developers and QA know that this way we cover just a small portion of the test cases. So there must be a better way for user requirement authors – be it a QA engineer, a Business Analyst or any other team member – to make sure the business logic they proposed is functioning as expected.

I have already written about the test strategy in our team and the well known test pyramid. The lower level tests are written by developers, and the only time they need input from other team members is when they are interpreting the requirements. Once the requirements are clear, they can write the code and the accompanying unit tests. Even UI tests can be written with little help from non-developers, but the part where user requirement authors start to play an important role is functional tests. In the case of our app the UI is separated from the rest of the application and most of the business logic happens on the back end. So the functional tests in our team include back-end API tests and mobile application API tests.

The way we helped the non-developers in our team understand the implemented business logic was by using some principles of Domain Driven Design (DDD) and Behavior Driven Development (BDD). I should say that not everything Eric Evans described in his book "Domain-Driven Design: Tackling Complexity in the Heart of Software" was applicable to our solution, but some aspects really helped us establish a good way of collaboration between team members, and today I want to share those aspects of DDD:

  • Ubiquitous Language, the term introduced by Eric Evans, helped our developers and the rest of the team speak the same language. The main idea of the Ubiquitous Language is to create a common vocabulary of a given business domain that both domain experts and developers speak. It was quite easy to establish, since both developers and non-developers were present during the creation of the user requirements, and the developers knew the business domain quite well. We saw most of the value of the common language when developers had to read their implemented logic back to non-developers.
  • The next step, once the vocabulary of the app was created, was to create API methods that non-developers are able to understand and follow. To achieve this, the API methods were written at such an abstraction level that only the domain vocabulary was used – all the technical details were hidden in the underlying classes and methods. Even logical operations were converted into methods that expressed a domain concept.
  • Once the API methods were in place, the business logic authors could participate in API reviews. But even then there might be a flaw in the requirements themselves, so we needed to write tests expressed in the domain language. For this purpose we used some aspects of Behavior Driven Development; in particular, we used the Cucumber language (the Gherkin plugin in Android Studio and the SpecFlow extension in Visual Studio) for writing functional and UI tests. Since the Cucumber language is intended to be used not only by developers but by regular humans as well, everyone who contributed to our application's user requirements was able to add tests too (see the sketch after this list).
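To give an idea of what such a test looks like, here is a minimal sketch of a Cucumber scenario with its Java step definitions. The feature, the step wording and the in-test booking logic are invented for illustration – in a real test the steps would call our domain API – while the annotations come from the Cucumber-JVM binding:

// booking.feature (hypothetical scenario written in Gherkin):
//
//   Feature: Sport event booking
//     Scenario: A user books a sport event with free spots
//       Given a sport event with 3 free spots
//       When the user books the sport event
//       Then the booking is confirmed
//       And the sport event has 2 free spots

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class BookingSteps {

    private int freeSpots;
    private boolean confirmed;

    @Given("a sport event with {int} free spots")
    public void aSportEventWithFreeSpots(int spots) {
        freeSpots = spots;
    }

    @When("the user books the sport event")
    public void theUserBooksTheSportEvent() {
        // Stands in for a call to the domain API, e.g. sportEvent.book()
        if (freeSpots > 0) {
            freeSpots--;
            confirmed = true;
        }
    }

    @Then("the booking is confirmed")
    public void theBookingIsConfirmed() {
        assertTrue(confirmed);
    }

    @Then("the sport event has {int} free spots")
    public void theSportEventHasFreeSpots(int spots) {
        assertEquals(spots, freeSpots);
    }
}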

Visit our website and join the mailing list. Our app is coming soon: http://fitradar.me/

Logging in our application

Any application, to some extent, uses a logging system, and we are no exception. We log a lot of information in our Android and iOS applications, as well as in our back-end server application. But what do logs give us? After all, they do not contribute to the application functionality. They might be a part of the user requirements, but why would anyone want to invest money and time in a feature that is not used by a user? Let's explore the purpose of logs in software development and the practices developers apply.

Here are some of the cases where we are using logging in our applications:

  • during development, to make it easier and faster to spot a bug. Besides the tools that help us catch bugs early, namely:

    • @NonNull and other annotations, together with the code inspection tool Lint
    • unit and integration tests, where we try to cover most of the code execution paths and integration points (I have already written about how we approached testing in our project)

    we use logs that show us the trace of actions and values that led to a particular bug. But before we started to use logs we had to answer the following questions:

    • where to put log statements
    • what information to log

    Since a lot of our code is covered by unit tests, there is little reason to log our own code for debugging and test purposes. The places where we don't have control over the values we are processing are:

    • framework lifecycle methods, like onCreate, onViewCreated, onResume and onPause in an Android application; if our code uses the arguments provided by a lifecycle method, we log these values and assert the expected value when possible
    • click listeners and other event handlers
    • third party library methods and their returned results; for example, for network calls we are using Retrofit, and we always log the network requests and responses in the debug build variant; another example is calls to the database – since we are using Room in the Android application, we want to see the SQL code sent to the database

    Once we log enough information, we can spot a bug just by going through the logs, skipping the process of restarting the app with an attached debugger and stepping through the code. And we don't have to worry much about performance either, since the Android logging system takes care of debug log statements and skips them at runtime. (A sketch of the Retrofit logging setup follows this list.)

  • in production:
    • to monitor: on the back-end server the logs are used to monitor the normal business logic functionality. We log every request – if the request is successful we log it as info, and if the request fails we log an error. When an error is reported on the back-end service, the gathered logs are of great value; very often they allow us to spot the problem without further investigation.

    • we log fatal errors that caused the application to stop. On the back-end server we catch exceptions as early as possible, and when an exception is caught it is logged and an email is sent to an administrator. We don't experience many such exceptions on the back-end part, because the only thing that can lead to one is a malformed request rather than the back-end code itself, and since the request is formed by a third party library it is very rare that it is malformed. A different story is uncaught exceptions on the client – the Android or iOS app. It is not possible to catch all the exceptions, so they leak and cause the app to crash. To log these kinds of exceptions we use Fabric Crashlytics (Firebase); these logs later help us a lot in finding the cause of a crash. (A sketch of this setup also follows this list.)
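To illustrate the two kinds of client-side logging mentioned above, here are two minimal sketches. The first shows how request and response logging can be wired into Retrofit for the debug build variant only, using OkHttp's HttpLoggingInterceptor; the base URL is a placeholder, and BuildConfig.DEBUG is the flag Android generates for debug builds:

import okhttp3.OkHttpClient;
import okhttp3.logging.HttpLoggingInterceptor;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;

public final class ApiClientFactory {

    public static Retrofit create() {
        OkHttpClient.Builder clientBuilder = new OkHttpClient.Builder();

        // Log full request and response bodies, but only in the debug build variant
        if (BuildConfig.DEBUG) {
            HttpLoggingInterceptor logging = new HttpLoggingInterceptor();
            logging.setLevel(HttpLoggingInterceptor.Level.BODY);
            clientBuilder.addInterceptor(logging);
        }

        return new Retrofit.Builder()
                .baseUrl("https://api.example.com/")   // placeholder URL
                .client(clientBuilder.build())
                .addConverterFactory(GsonConverterFactory.create())
                .build();
    }
}

The second is a tiny helper around the Fabric-era Crashlytics API mentioned above; the helper class itself is hypothetical. Uncaught exceptions are reported automatically once Crashlytics is initialized, so the helper only adds breadcrumbs and records caught, non-fatal exceptions:

import com.crashlytics.android.Crashlytics;

public final class CrashReporting {

    // Attach a breadcrumb message that will be sent together with the next crash report
    public static void leaveBreadcrumb(String message) {
        Crashlytics.log(message);
    }

    // Record a caught exception so it shows up in the console as a non-fatal issue
    public static void reportNonFatal(Throwable throwable) {
        Crashlytics.logException(throwable);
    }
}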

Visit our website and join the mailing list! Our app is coming soon:

http://fitradar.me/

Testing strategy for our application

In this article it's time to talk about the way we are testing our application and the testing strategy we laid down in the application design phase. Past experience showed us that one of the outcomes of good design is an easily testable application. Therefore, from the very beginning, we designed our testing strategy along with the application architecture.

The goal of testing an application is to achieve flawless operation. The simplest way to test an application is to do manual tests, but these are definitely not the most effective. Along with the evolution of software development, test approaches evolved as well. We already knew, and had applied in our other projects, such tests as unit tests, integration tests, functional tests and regression tests. But since our team doesn't have a person who both knows testing techniques and can develop and design an application, we developers decided to explore different testing strategies together with our QA, who mostly does the tests manually. So we started to investigate other teams' experience and the information in books and on the web, and began to lay down the foundation of our own test strategy that would fit our needs.

The thing we decided to start with was the "Test Pyramid", the term Mike Cohn came up with in his book Succeeding with Agile. The pyramid gave us some understanding of how to start organizing the tests.

Unit Tests. Taking into account our own experience and the information we had gathered, it was clear that unit tests would be the foundation of our test suite. To write unit tests easily and, most importantly, quickly, one should be able to isolate different pieces of software. In the case of OOP, the paradigm we are following, we should be able to easily isolate classes. In my opinion this is one of the core principles of successful unit tests. But to do that we must have a design that allows us to isolate the classes, and that is why dependency injection became a very important technique in our software – I even dedicated a whole blog article to it. To make this principle easier to implement in our Android application we chose the Dagger 2 framework. Once we wrote a use case implementation and the accompanying unit tests, we asked our QA to test it. Very soon we discovered that despite quite good unit test coverage, QA was still finding bugs. And as you can guess, the errors occurred in the code that interacted with a framework, in our case the Android framework. Now it became obvious that we needed integration tests to make sure that our code's calls to third party libraries and the framework work flawlessly.
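To show what this isolation means in practice, here is a minimal, self-contained sketch using JUnit and Mockito (the class names are invented for illustration): because the collaborator is injected through the constructor, the test can substitute a mock without involving Dagger at all.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class GreetingServiceTest {

    // The collaborator is an interface, so it is trivial to replace in a test
    interface UserRepository {
        String findUserName(String userId);
    }

    // The class under test receives its dependency through the constructor
    static class GreetingService {
        private final UserRepository repository;

        GreetingService(UserRepository repository) {
            this.repository = repository;
        }

        String greet(String userId) {
            return "Hello, " + repository.findUserName(userId) + "!";
        }
    }

    @Test
    public void greet_usesTheNameFromTheRepository() {
        UserRepository mockRepository = mock(UserRepository.class);
        when(mockRepository.findUserName("42")).thenReturn("Anna");

        GreetingService service = new GreetingService(mockRepository);

        assertEquals("Hello, Anna!", service.greet("42"));
    }
}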

Integration Tests. According to the Test Pyramid there should be fewer of these than unit tests, and indeed we already had tests covering our own code logic, so in these tests we decided to test only the connection points between our code and third party libraries. For example, in our application we fetch data from the server and cache it in a database. In our Android application we are using the Room persistence library, and there is no real way to test DAO interfaces without performing the operations on a database, so we had to write tests that involve the Room library. Another example is our code that interacts with our back-end server via the Retrofit library; again, we had to write tests that involve Retrofit. Although these tests were written in the same manner as unit tests, using the JUnit test framework, they covered more than a single class or method and were considered a separate type of tests.
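As an illustration of the Room side, here is a sketch of a DAO test against an in-memory database. The database class, the DAO and the entity are hypothetical; Room.inMemoryDatabaseBuilder and the AndroidX test runner are the real APIs:

import android.content.Context;
import androidx.room.Room;
import androidx.test.core.app.ApplicationProvider;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.assertEquals;

@RunWith(AndroidJUnit4.class)
public class SportEventDaoTest {

    private AppDatabase db;        // hypothetical Room database class
    private SportEventDao dao;     // hypothetical DAO under test

    @Before
    public void createDb() {
        Context context = ApplicationProvider.getApplicationContext();
        // An in-memory database lives only for the duration of the test
        db = Room.inMemoryDatabaseBuilder(context, AppDatabase.class).build();
        dao = db.sportEventDao();
    }

    @After
    public void closeDb() {
        db.close();
    }

    @Test
    public void insertedEventCanBeReadBack() {
        SportEvent event = new SportEvent("id-1", "Morning run");  // hypothetical entity
        dao.insert(event);

        SportEvent loaded = dao.findById("id-1");
        assertEquals("Morning run", loaded.getTitle());
    }
}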

UI tests. The main reason we decided to write UI tests is that we wanted to fully automate the test process and make it part of the Continuous Integration (CI) process. For Android we chose the Espresso framework, and as the first tests we wrote navigation tests that made sure we can navigate to every page in our application. Then we wrote tests only for the errors that QA reported, so compared with the rest of the tests these make up only a small portion of all tests.
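A minimal sketch of such a navigation test with Espresso (the activity and the view ids are hypothetical placeholders):

import androidx.test.ext.junit.rules.ActivityScenarioRule;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

@RunWith(AndroidJUnit4.class)
public class NavigationTest {

    @Rule
    public ActivityScenarioRule<MainActivity> activityRule =
            new ActivityScenarioRule<>(MainActivity.class);

    @Test
    public void tappingProfileTab_showsProfilePage() {
        // Navigate to the profile page and verify that it is displayed
        onView(withId(R.id.nav_profile)).perform(click());
        onView(withId(R.id.profile_container)).check(matches(isDisplayed()));
    }
}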

http://fitradar.me/

Joining client with the back end service


Sooner or later any Android, iOS or Windows application wants to get outside of the local hardware box and start communicating with the outside world. In the case of our application we knew from the very beginning that we were going to have back-end web services in one form or another. At the beginning of the project, after the user requirements were set and the architecture of the solution was designed, it seemed quite straightforward to implement our back-end services. Below is the architecture we decided to use for our FitRadar system.

But in the process of implementation we realized that quite often we had to adjust our back-end web services API. So in this article I want to share some experience we gained while joining our client applications with the back-end web services.

Initial approach

First we divided the whole system into 4 sub-projects:

  • Android application
  • iOS application
  • back-end web services application
  • the project website application

and created 3 teams. We thought the client application teams and the web services team would be able to work independently once the APIs and data contracts of the web services were defined. After we received the UI designs, we started to model the web services API and the data contract between the applications. And when the data contract was defined, the back-end team started to work on the web service implementation, while the application teams started to work on the UI (User Interface) part.

We estimated that by the time the application teams completed one use case's UI, the back-end team would have implemented the web services for that use case to the degree that they could start to receive and send data. The first use case we picked was user profile creation. The interaction between the client application and the back-end services went smoothly: the web service received the user profile data and stored it in the database. So we took the next use case – displaying full user profile information to the profile owner and limited information to other application users.

The problem

As we started, we thought it would go fast and smoothly, since all we needed was to fetch already stored data from our web service. But soon we discovered that the data coming from the web service was not complete. We needed to display some statistics about the user's activity. Since at the early stage of development and user requirement gathering we knew very little about how the statistics would be calculated, we assumed that some of them we would be able to calculate in the client application and some would be calculated on the back-end server. It turned out that some statistics we were not able to calculate in the client application, and we needed to fetch more data from the server; on the other hand, to calculate statistics on the back-end server we needed more input data from the user during profile creation. So we had to change the data contract for the user profile creation and user profile fetching endpoints. And when it was time to move to the next use case – sport event creation – we decided to call a meeting to find a better approach.

Different approach

After some brainstorming we decided that both teams should work simultaneously on one or two use cases. When we had to work on a use case that shows some data, we kept an eye on the use case that sends this data to the back-end web service. It means that we always started with the use case that fetches data from the web service and displays it to the user. The application team started with the UI, which was unambiguous, since we had the UI designs and we knew quite well what we wanted to see in the UI. In the development process, the View Model was created along with the UI layout. This View Model clearly defined what data we needed in order to meet the design and functional requirements. Now we could see which business layer entities could provide the required data. And once the entities were defined for the use case, we could tell which endpoints we needed and, more importantly, what data these endpoints should provide. So at this point the data contract for fetching data was quite stable (it had some minor changes later, but mostly because we decided to introduce changes in the UI design). The back-end team could now check whether they could provide the required data from the data they would gather. If something was missing on the server, the back-end team adjusted the data contract for incoming data accordingly.

To make this process smoother and allow the teams to work as independently as possible, the application team used mock objects that returned the required data instead of making real network calls. The back-end team did the same for incoming data (a minimal sketch is shown below).
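A sketch of the idea (the repository contract and the canned data are hypothetical): the UI is developed against the interface, and the mock is swapped for the real network-backed implementation once the endpoint is ready.

import java.util.Arrays;
import java.util.List;

// Contract the UI layer depends on
interface UserProfileRepository {
    UserProfile getUserProfile(String userId);
}

// Mock used while the real endpoint is not ready yet: it returns
// canned data shaped exactly like the agreed data contract
class MockUserProfileRepository implements UserProfileRepository {
    @Override
    public UserProfile getUserProfile(String userId) {
        return new UserProfile(userId, "Anna", Arrays.asList("running", "yoga"));
    }
}

class UserProfile {
    final String id;
    final String name;
    final List<String> interests;

    UserProfile(String id, String name, List<String> interests) {
        this.id = id;
        this.name = name;
        this.interests = interests;
    }
}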

Another thing the back-end team had to keep in mind was whether we should really be the ones providing the requested data. For example, after analyzing the user profile use case it quickly became clear that we should delegate the storage of the user's profile picture to a third party service like Cloud Storage.

Conclusion

The one thing we learned is that the more complex a project is, the more difficult it is to come up with precise requirements. We were familiar with the agile development model and applied it extensively on a per-project basis, but we thought that the interface between the client and the back end was clear from the initial requirements. We were wrong: the web service interface was changed in several iterations, because we discovered more requirements for the client application. So the conclusion we made is that for bigger projects we have to apply the iterative development approach to the whole system; we can't really make the sub-projects independent from each other. We had to review the initial web service interface after each implemented use case and see if we needed to make changes there. So our final web service API and data contracts after several use cases looked quite different from what we designed in the beginning.

http://fitradar.me/

Keeping the code growth under control

The other day I had a discussion with one of our team members about how to keep classes readable and not end up with huge files containing thousands of lines and hundreds of methods. It turns out this question deals with the very basics of Object Oriented Programming, and I decided to give my view on some of the OOP principles that help me keep code growth maintainable and keep my code in line with good design principles. One of the first principles that comes to mind is the Single Responsibility Principle (SRP), which states: "Each software module should have one and only one reason to change". Frankly, for a long time I had a hard time applying this description to daily code, and therefore I came up with my own steps, derived from other OOP principles, that help me follow it.

Starting Point

Since lately I have been developing either web services, websites or mobile applications, my starting point is one of the widely acknowledged architectural patterns (MVC, MVP or MVVM). It gives me a good starting point with a project structure and files where I can start to add code. By now these patterns are well established, and I really suggest going with one of them unless you are developing a very simple application and have no plans to evolve the project. And if you know the project will be something more than delegating CRUD requests to the database, then it is worth starting right away with a layered project structure, where the Model in MVC, MVP or MVVM is organized into Business Models, Services and Repositories – and maybe even consider the whole Domain Driven Design approach. But how to estimate the starting architecture for an application is a topic for another article.

Single responsibility principle for methods

Next I start to fill in the provided methods (actions in controllers in the case of web services, or an Activity's lifecycle methods in the case of an Android application) and observe how my starting methods evolve. These methods are where I start to apply the Single Responsibility Principle for methods. Once I have methods that have only one reason to change, I switch my focus to classes. Here are some rules I follow to achieve SRP in my methods:

  • DRY (don't repeat yourself). If I discover that several methods share a common piece of code, I extract the common code into a separate method and make it reusable for the other methods. I think this is one of the first principles of clean code most developers learn. And since this principle is so fundamental, many IDEs include method extraction in their refactoring tool set.
  • I check whether there are common variables that more than one method operates on. If there are such variables, I make them class level private fields. I repeat this step every time a new method is extracted. And if several classes have common fields or extracted methods, then it is time for a new base class.
  • I make sure the methods I extract are doing what their names suggest. If a method's name contains one verb, I make sure the method is either a command that changes the state of the object or a query that returns data. And if the method name contains more than one verb, it is obvious that the method is doing more than one thing. Sometimes that is acceptable, for example when I write logs along with the method's basic logic; if tools or frameworks allow handling such side behavior, I extract it as an aspect.
  • I respect the levels of abstraction and try to keep the statements of a method on the same abstraction level. One sign that I might be violating this rule is a long loop or if body: the statements in long bodies most likely belong to a lower abstraction level than the statements outside the loop or if. But sometimes in a high-level method I have to call a single line of lower abstraction code, and then I leave it.
  • I keep the number of method arguments short. If I need more than 2 arguments, then maybe it is time for a new class, and instead of several primitive types I should be passing a class as the method argument (see the sketch after this list).
  • And finally, I use cohesion level descriptions to match my methods against different cohesion types and make sure I am avoiding Procedural, Logical and Coincidental Cohesion.
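A minimal sketch of the argument rule from the list above (the names are invented for illustration):

public class ParameterObjectExample {

    // Before: loose primitive arguments that always travel together
    void scheduleSportEvent(String title, double latitude, double longitude, long startTimeMillis) {
        // ...
    }

    // After: the related arguments become a small class of their own
    static class Location {
        final double latitude;
        final double longitude;

        Location(double latitude, double longitude) {
            this.latitude = latitude;
            this.longitude = longitude;
        }
    }

    void scheduleSportEvent(String title, Location location, long startTimeMillis) {
        // ...
    }
}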

Single responsibility principle for classes

As the code evolves, I start to have more and more methods and fields in the classes provided by the initial architecture. Now I check whether it is time to split my classes. For a long time I had a hard time choosing the right class for a method, and one of the reasons was the examples I was reading about in books and articles on the Internet. Those examples focused mainly on the names of methods and classes and on how to estimate the relationships between methods by their names. But soon I discovered that using only names can lead to subjective decisions. Although many OOP principles are subjective from my point of view anyway (in the case of SRP someone could argue that keeping all methods in one class is more convenient than creating a hierarchy of classes, and that one should use an IDE for navigation between the methods instead of the file system navigator), I wanted to use something measurable that would allow me to estimate how tight the relationship between a method and its containing class is.

One such metric I found is cohesion. Low cohesion means the methods inside a class are independent of each other; high cohesion means the methods in the class are strongly related. But how can we express this relationship in numbers? It turns out there are several cohesion metrics that give a developer insight into the relationships between methods. And sometimes, by distributing methods to classes according to the cohesion level among them, one can discover new classes one didn't even think about before. The metric I am using the most is the "Lack of Cohesion of Methods" (LCOM) metric: for each field in a class it counts the number of methods that reference that field, sums these counts up, divides the sum by the count of methods times the count of fields, and subtracts the result from one, like this: LCOM = 1 – (SumOfMethodFieldReferences / (NumberOfMethods * NumberOfFields)). The metric ranges from 0 to 1, where 0 means high cohesion and 1 low cohesion.
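A small worked example of the LCOM computation described above (the class is invented for illustration):

public class ProfileStats {                 // 2 fields, 4 methods
    private int steps;
    private int calories;

    void addSteps(int s)  { steps += s; }                 // references: steps
    int getSteps()        { return steps; }               // references: steps
    int getCalories()     { return calories; }            // references: calories
    void reset()          { steps = 0; calories = 0; }    // references: steps, calories
}

// Field references summed over fields: steps -> 3 methods, calories -> 2 methods, sum = 5
// LCOM = 1 - (5 / (4 methods * 2 fields)) = 1 - 5/8 = 0.375
// The result is closer to 0 than to 1, so the class is reasonably cohesive.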

Following these rules, I have been able to keep my modules in line with the Single Responsibility Principle.

P.S. Visit our website and join the mailing list! Our app is coming soon: http://fitradar.me/

Another programming language!

During the last year, while searching and exploring code examples of Android's new technologies like the Jetpack architecture components, our team noticed that more and more code on GitHub is posted only in Kotlin – the new programming language for the Android platform. That raised a question in our team: is this new kid on the block something that can speed up our app development even more, and should we start to prepare for another transition? As usual, we started to explore what benefits the new language could bring. For me, all this recalled my own experience with programming languages and how I was switching between them.

My first encounter with programming happened in high school, and the first language I learned was BASIC. The friendship with BASIC lasted only a year: when I started to study at university I switched to Pascal. Yes, that was quite a long time ago. At that time most of the software I was producing dealt with mathematics and consisted of command line utilities. I didn't see much difference between those two languages, except that in Pascal I had to declare the types of variables. The course at university was organized so that students started with some high level language and gradually switched to C/C++. So far I had been switching from simple to more complex languages, and at that point it didn't seem that switching to a more complex language would ease the programming process; on the contrary, C/C++ compared with BASIC made it harder to write software that solves mathematical problems. That was the first time I learned about the programming domain: C/C++ gave me the ability to write software for almost any domain, but at the same time, for some tasks, higher level languages were preferable. After C/C++ followed SQL and Java, then C# and JavaScript. When I switched from C++ to Java, the same task could take 10-25% less coding: operations with text were easier and all the pointers were gone. When I learned C# – that was the time when Microsoft released C# version 2.0 while Sun had Java 1.5 – I noticed that in cases where callbacks and events were used, in C# I could type 5-10% less code. But this increase in productivity appeared only in certain programming domains, like Windows applications. In other areas, like embedded systems or operating systems, C was still the main option, and the arrival of new languages didn't really affect those fields. The main conclusion I made for myself was that if a new language emerges, then most likely it speeds up development only in a certain domain.

And what about Kotlin? JetBrains, the creators of Kotlin, claim that replacing Java with Kotlin will allow developers to be more productive. After playing with Kotlin a little and converting some existing Android classes into Kotlin, we saw that this could really be the case. Then we looked at the official Kotlin documentation, where Kotlin is compared with Java, and although it has some features we really miss in Java, like data classes, null-safety and extensions, we didn't really see anything groundbreaking. And we were about to put Kotlin aside for this project, since:

  • our team doesn't yet feel so comfortable with Kotlin that we could produce code that is easy to read and follows best practices – mastering the language requires time;
  • the productivity we would gain would not outweigh the time we would spend learning the new language.

We assumed that if we wanted to switch to the new language we would have to convert the whole application code base and keep coding in Kotlin, but then we noticed a tutorial saying that we can mix Java and Kotlin in one project. We saw this as a good opportunity to write some of the files in Kotlin. As the first candidates for Kotlin we chose entities and DTOs: they require learning very little new syntax, and the classes can be made much smaller thanks to getting rid of getters and setters (see the sketch below). We decided to spend some time learning Kotlin, and whenever we learn a feature that really reduces the number of code lines and brings something valuable compared with Java, we will write that piece of code in Kotlin. In this way we hope to keep the same development speed while learning the new language, and at some point start to reduce development time compared with Java. It worked well back when I switched from C/C++ to Java, and now it looks like it is time to switch from Java to Kotlin.
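A sketch of the kind of reduction we mean, with a hypothetical DTO. The equivalent Java class would need a constructor, getters, setters, equals(), hashCode() and toString() – several dozen lines of boilerplate – while the Kotlin data class generates all of that from a single declaration:

data class UserProfileDto(
    val id: String,
    val displayName: String,
    val interests: List<String> = emptyList()
)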

http://fitradar.me/

Migrating to the new navigation system

In my last article I mentioned that we migrated to Android's new navigation system, which is part of the Android Jetpack components. This time I want to share some implementation details. Usually in my posts I don't like to share actual code, only concepts, since there are plenty of implementation examples for any particular concept and I don't want to repeat what others have already written; I prefer to explain the reasoning behind one or another decision. But while migrating to the new navigation system our team faced a few challenges that were not possible to resolve just by reading the available documentation and copying code examples, so I think some implementation details we came up with might help someone who is striving to master the new navigation system.

Problem

After analyzing all our Fragments and Activities, we came to the conclusion that we should have several activities in our application, each serving as a host for a set of Fragments. The features we used to group the fragments were:

  • common master layout
  • common UI logic
  • common use case

So we started to explore the documentation and look for code examples that would help us understand how to implement navigation from one navigation graph to another graph hosted in a different activity. Very quickly we realized that the documentation and the examples available on the Internet are heavily focused on single-Activity applications and navigation between fragments hosted in one navigation graph, although the official Android documentation mentions that the new navigation system supports Activities as destinations. The biggest challenge was to find a way to assign the start destination of a navigation graph at runtime, depending on various parameters. In our app a user should be able to create a sport event and later edit or delete it. We decided to put all fragments related to sport events in one navigation graph. But the start destination in this graph depends on whether the user wants to create a new sport event, edit a previously created sport event, or see another user's sport event. So we could not assign the start destination at design time in the navigation graph; we had to do it at runtime. Since we could not find enough information on how to do it, we started to consider other solutions.

Solution

After spending hours reading documentation and exploring examples, we started to think that maybe we should use only one Activity, put all the UI logic from the other Activities into it, and show/hide the parts of the UI which are needed or not needed for the active fragment. But very soon it became clear that this universal hosting activity was getting very big and hard to read, and we had to add extra logic for hiding and showing views. The hosting activity became the opposite of what we were striving for – smaller, more manageable classes. So we dropped this idea and started to consider moving the views from the Activities into the Fragments, so that the Fragments would contain all the necessary views and UI logic and the hosting Activity would be left with only a NavHostFragment. That would mean that views like BottomNavigationView and Toolbar would now be located in several Fragments. We could of course use the include tag, but that still would not remove the need to repeat the same include tags in several fragments. We didn't really like this idea either, since it looked like we were violating the DRY (Don't Repeat Yourself) principle. So we got back to the idea of several activities. This time, in order to find a solution, we started to dig into the navigation component's source code, and after a while we came up with the following implementation, which allows assigning a start destination to a navigation graph at runtime:


private void initNavGraph() {
    NavHostFragment navHostFragment = (NavHostFragment) getSupportFragmentManager()
            .findFragmentById(R.id.sport_activity_nav_fragment);
    // Inflate the graph manually instead of declaring it in the layout,
    // so that the start destination can still be changed before it is set
    NavInflater inflater = navHostFragment.getNavController().getNavInflater();
    NavGraph graph = inflater.inflate(R.navigation.nav_sport_event);
    String sportEventId = getIntent().getStringExtra(EXTRA_SPORT_EVENT_ID);
    long calendarEventId = getIntent().getLongExtra(CalendarContract.Events._ID, -1);
    // No event id in the Intent: the user is creating a new sport event;
    // otherwise an existing event is opened for booking
    if (calendarEventId == -1 && sportEventId == null) {
        graph.setStartDestination(R.id.editMySportEventFragment);
    } else {
        graph.setStartDestination(R.id.bookSportEventFragment2);
    }
    this.mNavController = navHostFragment.getNavController();
    // Forward the Intent extras as default arguments to the start destination
    graph.addDefaultArguments(getIntent().getExtras());
    this.mNavController.setGraph(graph);
}

The method above creates a new navigation graph, sets the needed start destination, and passes along the arguments we set in the Intent when starting the hosting Activity. This way we can pass the needed arguments further to the start destination. Finally, we set this new navigation graph, with its start destination and arguments, on the navigation controller.

Striving for better code

Recently we refactored our Android application's navigation system – we replaced fragment transactions with the Android Jetpack navigation component. We already had good experience with migrating from the Model View Presenter architecture to the Model View ViewModel architecture (using the Android Jetpack Architecture components), during which we converted our existing presenters and views into view models and views with binding expressions. There was a risk of losing time doing unplanned work, therefore before deciding to migrate we carefully estimated the risks related to the refactoring. The main question we faced was: will we see a benefit from the refactoring before the app's release? (We already saw the benefit in the long term, but were not sure about the immediate gain.) Calculations showed that even with the refactoring we would gain in development speed. And indeed, it turned out that within a month our invested time started to pay off, and we began implementing new use cases faster even while spending additional time refactoring the existing ones.

So we started to wonder whether switching to the new navigation system was a worthwhile endeavor. While estimating the time for refactoring and the immediate benefits, I started to think about why we considered this idea at all. What exactly drives us when making such decisions as going back to already written code to refactor it? So I decided to put down some thoughts in this blog about the motivation behind the desire for better code.

I have already written about clean architecture and design, and in this post I will return to the topic from a different angle. Let me first mention that, in my experience, clean code is a very opinionated term, because it is defined not by strict rules but rather by feelings: if the given code is easy to read, perceive, extend, test and maintain, it must be clean code. Different developers achieve this feeling through different coding approaches; therefore there will always be disputes about the best language, framework or paradigm. And quite often different developers perceive the same code differently – one might be convinced that the code is clean because from his perspective it is easy to understand and extend, while at the same time another developer will claim that a different approach would lead to better code.

Therefore, from my point of view, it is pointless to argue only about coding approaches; languages, frameworks and paradigms should be discussed together with the measurable results these different approaches yield. Only then can we talk about clean code.

One such measurable indicator that we looked at, while considering the migration to the new Jetpack components, was overall development speed.

And here I want to emphasize the word overall, because initially coding in a dirty way is much faster than structuring the code; only after a while does well written code start to shine, and only then do the terms testable, extendable and maintainable really start to make sense. Actually, I don't see anything wrong in writing one big chunk of spaghetti code if all the developers who work now and will work on such code in the future can understand it, test it and extend it as fast as very well structured code. But this is never the case, unless the developer is some kind of genius, and even then you need to make sure that every developer in the team is a genius of the same level. That is why abstraction levels were raised, so that we don't have to write mathematical expressions and well known business logic in machine code. That is why different paradigms were invented, so that we can grasp the problems we are coding faster and more easily. And that is why patterns and best practices are used, so that we can solve problems faster and every developer can understand the solution.

On the other hand, the computer or the phone that runs our software doesn't care about the code we wrote, because it doesn't use our code but the code produced by a compiler or interpreter. Very bad code (from the developer's point of view) can be converted into the same machine code that is produced from very well written code. Unfortunately, in most cases the only people who care about code quality are developers. No one else really cares about it – stakeholders, managers and users care only about the end result. As long as the application meets the requirements, it is good enough to use. But for us developers, well written code can greatly shorten development time and bring peace of mind.

But how come we get so much code that doesn't follow the clean code principles? One obvious reason is lack of experience. At some point in time all of us developers have written poor code, just because we didn't know how to write it better; over time we learned and reached a certain level of mastery. Shouldn't we now write only clean code? Unfortunately no, and there are several reasons for it. The one we learned while migrating to the Jetpack components is: no matter how good the code is now, there is always room for improvement – we just don't know yet how to do it.

FitRadar application versioning

In the last post I wrote about the FitRadar application's release process and the kinds of releases we are planning to have. Since every release has a unique version number, I think it is a good time to talk about versioning schemes in this post.

In our app we are planning to use the Semantic Versioning scheme, which follows the pattern Major.Minor.Patch, where

  • the Major number is increased by 1 when breaking changes are introduced that are incompatible with the previous Major version (at the same time the Minor and Patch numbers are reset to zero)
  • the Minor number is increased by 1 when new functionality is introduced that is backward compatible (at the same time the Patch number is reset to zero)
  • the Patch number is increased by 1 when a bug or set of bugs is fixed in a backward compatible way

But before we can start to apply the Semantic Versioning scheme we still have to reach the phase when the application is stable and ready for production – that will be version 1.0.0. Until then, while we are still in the development phase, we are using a slightly different versioning scheme that follows the pattern 0.x.y.

Our versioning scheme during the development phase is tightly coupled with the application development life cycle. At the very beginning of the project we decided to adopt some aspects of Scrum project management, particularly sprints (iterations). The length of a sprint is such that our team can deliver at least one use case from the user requirement list. Mostly, but not always, after every sprint we release an application version with a new x number. Every application release can contain one or more use cases. During a sprint, besides a new user story, we have to fix bugs or make improvements in the previous user story. When a bug is fixed or an improvement made, we release a new version with an increased y number. It means our application started with version 0.1.0, when the first use case was implemented.

Once we started to release our application to the Google Play Store tracks, we extended the application version with an additional label denoting the track, like this: 0.x.y-track_name.
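A hypothetical progression under this scheme might look like this:

0.4.0        sprint 4 delivers a new use case
0.4.1        a bug in that use case is fixed
0.4.2        a small improvement is released
0.4.2-alpha  the same build is released to the Play Store alpha track
0.5.0        sprint 5 delivers the next use case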

As you can see, this versioning scheme is not quite suitable for production, where each release should be stable and finalized. But during the development phase, when each use case is iterated several times and after each iteration we should have an application that we can install, demonstrate and test, this kind of versioning scheme serves us very well.

http://fitradar.me/