Making sure data is valid and business rules are kept


Our Fitradar back-end solution is built on Clean Architecture as described in this article and explained in this conference talk. One part of the back-end solution is incoming request validation, and to implement it as part of the MediatR pipeline we use the FluentValidation library, which replaces the built-in ASP.NET Core validation system. At the beginning of the project it seemed that all we had to do was create a custom validator derived from AbstractValidator and perform the validation of an Application Model in a single class. That is how we started to implement the validation, and since the library has quite advanced capabilities that allow implementing complex validation rules, for a while we didn't notice any problems. Then we realized that not all validation rules are simple incoming Data Model validation: some of the rules expressed our business logic rules and restrictions. So we started to think about splitting the rules across the layers, and the following is the multi-step validation solution we came up with:

  • Application level validation. At this level the incoming JSON data model fields are validated. The field types are validated automatically by the ASP.NET Core built-in model binding, but the other validation rules are executed by FluentValidation (a minimal sketch follows these rules), including:
    • required field validation
    • text field length validation
    • numeric field range validation
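A minimal sketch of such an application level validator; the command and its properties are illustrative assumptions:

using FluentValidation;

public sealed class CreateSportEventValidator : AbstractValidator<CreateSportEventCommand>
{
    public CreateSportEventValidator()
    {
        RuleFor(x => x.Title).NotEmpty().MaximumLength(100);  // required field + text length
        RuleFor(x => x.Description).MaximumLength(2000);      // text field length
        RuleFor(x => x.MaxSeats).InclusiveBetween(1, 500);    // numeric field range
    }
}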
  • Domain level validation. At the domain level we apply the Domain Driven Design approach, and therefore instead of FluentValidation we decided to use the Specification pattern. We liked the idea that we could move the validation logic outside the Domain Entity into a separate Validator. By doing that we were able to bind the validation logic to incoming commands instead of the Domain Entity. For example, we have an entity SportEvent and several commands that call methods inside that entity, such as Update Sport Event and Cancel Sport Event. Because different commands have different business requirements for the same Entity property, it would be hard to implement them with FluentValidation. For example, the property BookedSeats in the case of the Update Sport Event command should be 0, but in the case of the Cancel Sport Event command it is allowed that someone has already booked the particular Sport Event. Below is what the Cancel Sport Event validator looks like:
public sealed class EventCancellationValidator : EntityValidatorBase
{
    public EventCancellationValidator()
    {
        AddValidation("CancellationTimeValidation",
            new ValidationRule<SportEventInstance>(
                new Specification<SportEventInstance>(sportEvent =>
                    sportEvent.StartTime > DateTime.UtcNow),
                ValidationErrorCodes.CANCELLATION_TOO_LATE,
                "It is not possible to cancel the Sport Event after it has started",
                "SportEventStartTime"));
    }
}
  • Database level validation. For database access we use Entity Framework Core, and to make sure the database models are in tune with the database constraints we use the built-in property attributes and classes implementing the IEntityTypeConfiguration interface, which allows us to define database constraints like Primary Key, Foreign Key and Unique Key. A minimal configuration sketch follows:
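A minimal sketch of such a configuration class; the entity shape and member names are illustrative assumptions:

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public class SportEventConfiguration : IEntityTypeConfiguration<SportEvent>
{
    public void Configure(EntityTypeBuilder<SportEvent> builder)
    {
        builder.HasKey(e => e.Id);                                         // primary key
        builder.Property(e => e.Title).HasMaxLength(100).IsRequired();
        builder.HasIndex(e => new { e.PlaceId, e.StartTime }).IsUnique();  // unique key
        builder.HasOne(e => e.Place)                                       // foreign key
               .WithMany()
               .HasForeignKey(e => e.PlaceId);
    }
}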

P.S. Please follow our Facebook page for more!

Performance or Readability


In our Fitradar back-end solution we make heavy use of a database to implement the necessary use cases. Almost every call to our back-end API includes calls to the database. To make it easier to work with the database from C# code we use Entity Framework Core. The library allows us to quickly and easily save C# objects in the database and fetch persisted C# objects from it. In many use cases EF Core is used just for these basic operations, saving and fetching, while the business logic is isolated in other classes. Nevertheless, for some use cases we discovered that it is more straightforward to execute the business logic directly in the database in the form of stored procedures or user functions. In this article I wanted to share some of the experience we gained while working on complex database operations and the solution we arrived at.

One of the main reasons we started to consider moving some parts from the C# code to the database was the user experience. We noticed that some operations in the mobile application were taking a considerable amount of time and might degrade the overall user experience, so we started to look at how to improve the performance of those slow operations. Some of the causes of the long response times were multiple calls to the database and unoptimized database operations. Striving for clean code in our back-end solution, we had tried to avoid stored procedures, user functions and triggers: introducing SQL code that can't be mapped to the C# code within a single use case would mean introducing another codebase. So far EF Core had allowed us to abstract away from SQL, and we were very happy to have all the logic in C# code. Besides one more language and codebase, the following issues were holding us back from using SQL directly:

  • Adding SQL code to DbContext in the form of strings would take away Intellisense for SQL, and that would leave us dealing not only with logic errors but with syntax errors at runtime as well.
  • Refactoring with Visual Studio built-in tools would become more complex.
  • If we needed to debug a method that uses a stored procedure or fires a SQL trigger, we would have to debug the C# and SQL code separately.
  • With two codebases the readability of a single use case degrades: to understand the full logic we have to switch between C# and SQL.
  • ASP.NET Core and SQL Server have different logging systems, so it is harder to keep a unified log format.
  • A more complex deployment process.

But at the same time, for complex persistence and database query operations, we would get:

  • Significant gain in performance
  • Terse code, since SQL is specifically designed to work with relational databases, and many operations can be expressed in much shorter code compared with C# and LINQ, without losing readability.

The main question we had to answer in order to decide whether to move some of the logic to stored procedures, triggers and user functions was: “Is the gain in performance worth it?” For some use cases we could see the improvements even with the test data, and for others we predicted that the changes we were planning to introduce would start to make a difference once a certain amount of data was stored in the database. So we decided to move some of our logic from C# to SQL code, but before that we wanted to mitigate some of the above-mentioned problems we had discovered during the analysis. To do that we decided to create a separate SQL Server Database Project instead of keeping the SQL code in the EF Core DbContext class, and by doing that we solved the problems with structuring the code, Intellisense and SQL code maintenance.
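With the SQL code living in the Database Project, calling it from EF Core stays a one-liner. A hedged sketch, assuming a stored procedure named dbo.CancelUnpaidBookings:

// The procedure name and parameter are illustrative assumptions.
var affectedRows = await _dbContext.Database.ExecuteSqlRawAsync(
    "EXEC dbo.CancelUnpaidBookings @sportEventInstanceId = {0}",
    sportEventInstanceId);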

P.S. Visit our website: https://www.fitradar.me/ and join the waiting list. We launch very soon!

Testing all links in the chain

Our Fitradar solution uses various third-party services, and therefore the Integration tests in our solution became a very important part next to the Unit tests. In our back-end application we try to follow the ideal Test Pyramid

Ideal Test Pyramid

but for some Use Cases it makes more sense to run all the involved services rather than replace them with Mock objects, mostly because there is no complex logic involved in the particular Use Case; it is rather straightforward, without conditionals and loops. On the other hand, the Integration Tests give us more assurance that the Use Case is robust and works as expected. So for some Use Cases we started to prefer Integration tests over Unit tests, and our test pyramid became a little distorted. The following parts of our back-end application are covered by Integration Tests:

  • ASP.NET Core Model Binding. Once we discovered that the JSON produced for some C# objects was not exactly what we expected, and that some incoming JSON was not parsed as we expected, we started to verify that incoming and outgoing data are transformed as we expect.
  • ASP.NET Core and third-party library configurations defined in the Startup file. The configurations can be tested only at runtime, and therefore to test them we have to run the test web server and make requests against it (a minimal sketch follows this list). Some of the configurations we are testing are the following:
    • Dependency injection
    • AddMediatR and FluentValidation configuration
    • ASP.NET Core Authentication and Authorization integration with Firebase Authentication
  • Entity Framework produced SQL code, especially for complex queries; Entity Framework save and update logic for complex types, where at runtime some nested objects might have a different EntityState than the parent object; and any complex logic interacting with Entity Framework.
  • Integration with Azure Services like Service Bus and Azure Web Functions
  • Integration with Stripe
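A minimal sketch of such a Startup configuration test, using the ASP.NET Core test web host (the Microsoft.AspNetCore.Mvc.Testing package); the endpoint and the expected status code are illustrative assumptions:

using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class StartupConfigurationTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly WebApplicationFactory<Startup> _factory;

    public StartupConfigurationTests(WebApplicationFactory<Startup> factory)
        => _factory = factory;

    [Fact]
    public async Task UnauthenticatedRequest_IsRejected()
    {
        var client = _factory.CreateClient();  // client for the in-memory TestServer

        var response = await client.GetAsync("/api/sport-events");

        // Fails if the Authentication/Authorization setup in Startup is broken.
        Assert.Equal(HttpStatusCode.Unauthorized, response.StatusCode);
    }
}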

The crucial factor that made our team write more Integration Tests is the new ASP.NET Core test web host and test server client. It allowed us to quickly set up the test server with our back-end application and run requests against our API. By using the test host we were able to test most of the configurations in the Startup file. It was harder to test the parts of our back end that lived in Azure Functions, but after some trial and error we were able to create a test web host for the Azure Functions as well and issue the calls that trigger them. Here is how we test the sport event cancellation Use Case that is triggered by a message from a Service Bus queue:

        [Fact]
        public async Task TestUnpaidEventCancelled()
        {
            var resolver = new RandomNameResolver();

            IHost host = new HostBuilder()
                .ConfigureWebJobs()
                .ConfigureDefaultTestHost<CancelSportEventFunction>(webJobsBuilder => {
                    webJobsBuilder.AddServiceBus();
                })
                .ConfigureServices(services => {
                    services.AddSingleton<INameResolver>(resolver);
                })
                .Build();

            using (host)
            {
                //Arrange
                await host.StartAsync();
                _testFixture.SeedSportEvent();
                await _testFixture.BookSportEventInstance();
                _testFixture.AddCommentToSportEventInstance();

                //Act
                await _testFixture.SendMessageAsync("CanceledEvents",
                    _testFixture.CanceledSportEventInstance.Id.ToString());
                await _testFixture.WaitUntilSportEventInstanceIsCancelled();

                //Assert
                var calendarEvents = _testFixture.GetCalendarEvents();
                Assert.Empty(calendarEvents);
                var order = _testFixture.GetCancelledSportEventOrder();
                Assert.NotNull(order);
                var msg = _testFixture.GetVisitorsInboxMessage();
                Assert.NotNull(msg);
                Assert.Equal(MessageSource.CANCEL_EVENT, msg.Source);
                var statistics = _testFixture.GetVisitorsInboxStatistics();
                Assert.NotNull(statistics);
                Assert.Equal(1, statistics.NumberOfCancelMessages);
            }
        }

P.S. Visit our website: https://www.fitradar.me/ and join the waiting list. We launch very soon!

Our journey towards serverless

Recently we were working on a ticket booking and booking cancellation feature in our Fitradar solution. In our solution the booking process consists of two steps: the place reservation in a sport event and the payment. In case the user for some reason didn't make a payment, the system should execute one more step and cancel the reservation. The process becomes non-trivial when each step is done in a separate environment: the reservation is made on the back end, but the payment is made on the client's device. The most challenging part was to figure out how to cancel the reservation in cases when the payment was not made or failed.

So we started to look for possible solutions. Initially we considered Background tasks with hosted services, but we didn't like the fact that the background task would run in the same process as our main back-end service and we would have to make sure the process doesn't shut down after the request is handled. Since we were using the Azure Cloud to host our back-end services, the next thing we looked at was the WebJob, which seemed a very good solution for our use case until we learned about Azure Functions. After some investigation we came to the conclusion that Azure Service Bus with message deferral and Azure Functions would be the perfect solution. The idea was to send a message to the Service Bus after the user has made a reservation. The Service Bus would deliver the message to our Subscription with a 15 minute delay, and our Azure Function would cancel the unpaid booking. In order to receive the scheduled Service Bus message we needed to use an Azure Function with a Service Bus trigger.
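A hedged sketch of both ends of that flow, using the Microsoft.Azure.ServiceBus client and the Service Bus trigger binding; the queue, connection and member names are illustrative assumptions:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BookingCancellation
{
    // Sending side: schedule the message to surface 15 minutes after the reservation.
    public static async Task ScheduleCancellationCheckAsync(
        string connectionString, Guid sportEventInstanceId)
    {
        var queueClient = new QueueClient(connectionString, "CanceledEvents");
        var message = new Message(Encoding.UTF8.GetBytes(sportEventInstanceId.ToString()));
        await queueClient.ScheduleMessageAsync(message, DateTimeOffset.UtcNow.AddMinutes(15));
    }

    // Receiving side: the function fires only once the scheduled message becomes visible.
    [FunctionName("CancelSportEventFunction")]
    public static void Run(
        [ServiceBusTrigger("CanceledEvents", Connection = "ServiceBusConnection")]
        string sportEventInstanceId,
        ILogger log)
    {
        log.LogInformation($"Cancelling unpaid booking for {sportEventInstanceId}");
        // ... look up the booking and cancel it if it is still unpaid
    }
}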

Once we learned more about Azure Functions, we decided to use them for other tasks as well that should run outside the usual request-response pattern. One such task in our solution is the User Rating calculation, which should run on a regular basis every Sunday night. This requirement was a perfect job for a timer triggered Azure Function:
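A minimal sketch of such a function; the NCRONTAB expression "0 0 2 * * 0" fires at 02:00 every Sunday, and all names are illustrative assumptions:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class UserRatingFunction
{
    [FunctionName("CalculateUserRatings")]
    public static void Run([TimerTrigger("0 0 2 * * 0")] TimerInfo timer, ILogger log)
    {
        log.LogInformation($"User rating calculation started at {DateTime.UtcNow}");
        // ... recalculate the weekly user ratings here
    }
}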

We started to work on our serverless functions. After a function was scaffolded we started to add our business logic. Since we already had a solid CQRS architecture in place for the main back-end service, where for each use case we create a separate Command that uses the rest of our infrastructure, we wanted to add another Command for the reservation cancellation use case, but we faced some .NET Core compatibility issues. Our back-end runtime was targeting .NET Core 2.2, but our Azure Functions runtime was targeting .NET Core 3.1. Although we could have just downgraded our Functions app runtime, the documentation strongly encouraged the use of version 3.1, because Function apps that target ~2.0 no longer receive security and other maintenance updates. So we started to migrate our back-end app to .NET Core 3.1.

We followed the official documentation but still struggled to make our third-party libraries work again. The biggest challenge was to find a way to make FluentValidation, MediatR and Swagger work together. Once we updated the FluentValidation registration in the Startup file according to the provided documentation, it switched off the MediatR Pipeline Behaviors. And once we found a way to make the Pipeline Behaviors work, it switched off the FluentValidation rules on the Swagger UI page. Stack Overflow and Google couldn't help us, so we started to experiment with the available settings for the FluentValidation and MediatR registration. After several hours of trial and error we found the settings that made all three libraries work together nicely. Here is the piece of code that made our day:

services.AddMvcCore().AddFluentValidation(fv =>
{
    // Register all validators from the assembly that contains PlaceValidator.
    fv.RegisterValidatorsFromAssemblyContaining<PlaceValidator>();
    // Skip the built-in MVC validation once FluentValidation has run.
    fv.RunDefaultMvcValidationAfterFluentValidationExecutes = false;
    // Leave the actual validation to the MediatR pipeline behavior.
    fv.AutomaticValidationEnabled = false;
});

Now our Fitradar back-end can really benefit from all the functionality provided by ASP.NET Core and Azure Functions.

P.S. Visit our website: https://www.fitradar.me/ and join the mailing list. Our app is coming soon.

Being unique is not always a good thing

The other day I was reviewing a colleague's code. It was back-end code written in C#. I noticed that he had introduced a class very similar to a structure I was quite sure existed in the .NET base library. I found that structure in the Microsoft documentation, and we agreed that it is better to use the .NET type instead of introducing the same type under a different name. After that occasion I recalled several other cases where I saw unique solutions to problems that could have been solved by a well-known pattern or practice. My gut feeling had told me then that it is not a good idea to introduce your own solution to a problem that is already solved by someone else, but I had never really thought about what problems such a decision can introduce into a project. So in this article I wanted to share some thoughts regarding widely accepted and unique solutions in software development.

First, let's make it clear that every best practice, framework, algorithm or technology was a unique solution in its time. So when exactly does one or another solution become widely accepted?

  • There are solutions in computer science that can be measured, like algorithms. To estimate the cost of running an algorithm, Big O notation was introduced. When someone comes up with a new algorithm for a common problem, for example sorting, we can simply compare how fast each algorithm solves the problem using Big O and choose the one that meets our criteria. And if the new algorithm beats the old one in performance on the same data, it becomes the new standard.
  • But most technologies, frameworks, patterns and best practices cannot be compared by introducing measurable parameters. Think about programming languages or frameworks. There is no way you can make an unambiguous claim about which language or framework is better. So how do we find out what is the best language or framework for a given problem? Let me share a story from my youth that might give some insight into how a new solution becomes widely accepted. I remember a story that was shared by my Assembler class professor at university. He started his computer science career when software was written on punch cards, and he became very good at writing machine code and later Assembler. When the first compiled languages emerged, such as FORTRAN and ALGOL, he was very skeptical about these new languages and the compiling technology in general, because he didn't believe that a compiler's algorithm could produce machine code of the same quality as a skilled Assembler developer. In the fifties and sixties well-written assembly code really mattered, because every byte counted, and his reasoning at that time was valid. But time passed, the hardware and compilers improved, and we see that most of the code even in embedded systems nowadays is written in high-level programming languages. The same story goes for programming paradigms: the business application world started with procedural programming and today has ended up with object oriented programming. In Windows and mobile apps we started with Forms and Activities and today ended up with MVVM. So it looks like for such things as programming languages and frameworks the more or less useful criteria we can use are
    • the amount of time the technology is used,
    • amount of software written using the particular technology
    • number of developers actively using the particular technology

And still, all these criteria are not as reliable as Big O for algorithms.

In the case of algorithms I think it is quite clear that if there already is an algorithm that solves the problem, it is pointless to write your own unless you can produce a better one. In the case of frameworks I would follow the same logic: if there is a well-known framework that can solve the problem, it is much safer to rely on the majority's opinion (although the majority can be wrong) rather than on your own single point of view. Even if the framework or pattern is rejected in the near future, at this moment:

  • it is much better tested than your own solution, and therefore there is a much higher chance of a flaw or even a bug in your solution than in the widely accepted one
  • it will be much easier for other developers to work with a widely accepted solution than with yours, because there is far more documentation and all sorts of information about the widely accepted solution than you can ever produce. And this will become crucial when your solution starts to live its own life

Visit our website: https://www.fitradar.me/ and join the mailing list. Our app is coming soon.

P.S. We are hiring: http://blog.fitradar.me/we-are-hiring/

Let’s keep the distances

In my last article I mentioned that we had cases when we needed to split our back-end application across several processes, and as a result we gained more flexibility when we needed to scale the application. This time I want to keep sharing insights on why we try to keep some parts of the application loosely coupled, and to talk about loose coupling at design time, when the whole application might run in a single process. According to one definition, a "loosely coupled system is one in which each of its components has, or makes use of, little or no knowledge of the definitions of other separate components". In the object oriented paradigm that we are using to create our Fitradar application, most of the code is spread among various classes, and therefore the smallest component we consider is a class. And since classes are just containers for data and functions, we usually see loose coupling as a way to call a function of another class with as little information as possible.

The simplest way to call a method on another object is to create that object from a class (in Java and C# we do that with the new operator) and call the needed method. But it turns out there are many cases when it is undesirable to know the class name of an object, or even a specific method name, at design time, and therefore there are several ways to reduce the knowledge of the class we are going to use and the method we are going to call:

  • We can replace a class with an interface or an abstract class and in that way reduce the information about the class to the subset of methods that the other classes are interested in. In object oriented programming this is known as Polymorphism and is one of the OOP cornerstones. It is really hard to overstate how important this OOP feature is; I think back in the day it was one of the selling points of OOP that pushed out procedural programming. The biggest gain of using Polymorphism is flexibility: we are no longer bound to a single code execution flow, because every function that appears in the execution flow via an interface or abstract class can be replaced with another implementation through configuration or dynamically at runtime. This in turn opens up a whole bunch of new possibilities, and the one I like the most is Unit testing. Nowadays it is hard to imagine how much work it would take to isolate a piece of code and test it without interfaces that can be mocked (see the sketch after this list).
  • Sometimes we are interested in only one method of the other class, we don't want to be bound to a single class that implements this method, and we want to be able to replace that method's implementation. In order to reduce the knowledge about the other class to a single method's signature we can use delegates in C#, functional interfaces in Java and Kotlin, or function pointers in C/C++. Not all OOP languages provide such constructs, though; in those cases we have to use the more verbose interfaces or abstract classes. A delegate can be seen as a placeholder for a method. And once we start to focus on a method without its accompanying class, we enter the realm of Functional programming, where a function implementation can be written as a lambda expression (anonymous function) and its return type and arguments can be other functions. The most common areas where such method placeholders are used are events and callbacks. And once LINQ came out, everyone who got to know that library really started to appreciate lambda expressions.
  • In cases where interfaces and anonymous methods are not enough to describe a dependency, for example when we don't want to restrict a method in our execution flow to a single signature and want to use different rules to describe the method or the class that contains it, we can use Reflection. Although Reflection is a very powerful tool that allows us to achieve great flexibility, it is usually not recommended, because it adds overhead at runtime and degrades performance.
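A minimal sketch of the first two techniques; all type and member names are illustrative assumptions:

using System;

// 1. Polymorphism: the caller depends on an interface, not a concrete class.
public interface INotificationSender
{
    void Send(string userId, string text);
}

public class BookingService
{
    private readonly INotificationSender _sender;  // any implementation (or mock) fits here

    public BookingService(INotificationSender sender) => _sender = sender;

    public void ConfirmBooking(string userId) => _sender.Send(userId, "Booking confirmed");
}

// 2. Delegates: the dependency is reduced to a single method signature.
public class PriceCalculator
{
    // Func<decimal, decimal> is a placeholder for any discount rule,
    // e.g. calculator.FinalPrice(20m, price => price * 0.9m);
    public decimal FinalPrice(decimal basePrice, Func<decimal, decimal> applyDiscount)
        => applyDiscount(basePrice);
}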

On a larger scale, where the components are no longer single classes but whole systems, loose coupling can further be achieved by introducing middleware like Queues and Service Buses.

As we can see, making a method call something that is not bound to a single implementation opens up many options for designing and building an application in a very flexible way.

P.S. Visit http://fitradar.me/ and join the mailing list!

Stay Loose

Recently our team finished work on the FitRadar booking system. That was quite a challenging task, and I decided to share some insights and knowledge we gained during the design and development of this system. Already at the early stage of design and requirements gathering it was clear that the ticket booking system on the back end should be independent from the rest of the Fitradar system, because we wanted to introduce Event Sourcing for such entities as Order and Payment. Since these two entities are responsible for charging money, and money is always a very sensitive topic, we wanted to make sure we can always track down the state changes in the booking and payment processes. But to what degree it should be independent was the question we had to answer.

Let's start with the simple case we considered: a one-process application. In this kind of application all communication between objects occurs within one process, which means all the libraries the application needs are loaded into a single process. Usually when we talk about client side applications (Android, iOS or client side JavaScript), most of the time these applications work within one process and rarely leave the boundaries of that process set by the OS. In such a case isolation can be introduced at the code level, for example by implementing a new feature in a new library, but at runtime we are somewhat limited. Since all objects share the same process:

  • a fault in one object might lead to a crash in another object
  • the load of a single object affects the whole process, so we can't assign hardware resources to separate app components. It means you can only scale the whole application, usually by creating a new instance of it. This starts to play a big role when the app is hosted in a cloud environment like Azure, Google Cloud or AWS, where we must pay for the resources we use.
  • it is hard to apply a new technology or framework to a new feature. Usually the whole application is built around a single architectural pattern, like MVC and CRUD or MVC and CQRS, and it is hard to introduce one more pattern in the same application (if we are already using simple Data Access Objects for working with data coming from the web, it will take some time to introduce CQRS with Event Sourcing). It is much simpler to create a new application.

So there is very little we can do about object decoupling at runtime in the single process case, but do we really need to deploy the booking system as a separate application? Maybe we can just isolate our booking system in a separate library and still run it in the same process as the rest of the FitRadar libraries? And so we started to investigate. Very quickly we noticed that our ideas for making the booking system independent from the rest of the application strongly resemble the principles of the Microservices architecture. And since nowadays there is a lot of talk about Microservices, we thought that our project might be a good candidate to move to a Microservices architecture. But just because something is promoted doesn't mean it fits your needs. You usually don't start the development of an application as a distributed system unless it was a requirement from the beginning. Applications usually mature to the state where the Monolithic Architecture no longer satisfies the initial requirements, and a team starts to consider switching to a Microservices Architecture. We wanted to make sure that this was the moment when the new feature required us to switch to the new architecture to make future development easier. Here are the advantages of the Microservices Architecture that really attracted our attention:

  • It gives us a smaller code base to work with. This is a really huge benefit if you consider that at a certain point a team starts to think more and more about how to properly structure the project (how to write Clean Code) so that any new developer can easily understand and navigate it. Putting the booking system in a separate solution would allow us to reduce the code complexity and, as a result, let developers navigate and add new code faster. If you have ever worked on enterprise scale application development, you know how much time it can take just to figure out where to put your new code.
  • Web API end to end tests and Integration tests can run much faster, because there is no need to load all of the Fitradar system libraries, only those related to the booking system. This advantage really starts to shine when we enter the test phase.
  • It allows us to scale the booking system and the rest of the back-end application independently. The booking system is focused on writing data to the database, while the main back-end application is focused on serving data. We expect thousands of data requests a day while users navigate around the mobile application, but just a few bookings a day.
  • Improved fault isolation

For our project it seemed that we would gain more than we would lose by treating the booking system as a separate application and deploying it in a separate process. So we gave it a try.

Visit our website: http://fitradar.me/ and join the mailing list. Our app is coming soon!

Mobile app development versus back-end app development

One day I was listening to an interview with a person who runs software development courses; he was answering questions about software development in different fields and what a full stack developer means. Since our Fitradar system covers several development fields, I decided to share my experience of working in one field or another and how easy it is to jump from one domain to another.

I remember back in the day when I was still a student at university, several classes focused on business modeling and implementing a system according to the model. As I later found out, this was a common approach to enterprise application design and development. But my first job related to software development was in a company that produced routers and their software. At that time I was writing small scripts to test different router configuration setups; I was not directly involved in the router software development, but I wanted to become a part of the development team. So I started to explore the router operating system, which was based on the Linux kernel, as in many other embedded systems. I soon discovered that many principles I had learned in programming classes are not applied in embedded systems, and instead a big emphasis is put on performance. For example, instead of throwing exceptions, error codes were used to improve performance; in mobile and enterprise applications that would be considered a bad practice. Another odd thing for me was to see that there were no unit tests, only integration or end to end tests, which again in the enterprise application world would be considered very bad. At that moment it hit me that the way one or another system is developed depends on the domain field and not that much on the language. So for myself I distinguished the following software development fields:

  • web front-end development
  • mobile application development
  • desktop application development
  • game development
  • embedded systems development
  • enterprise application development

This is by no means a full list of the domain fields where software is developed; these are just the areas I have come across in my developer career. There are many principles that cover all the fields, and that is what every programming class starts with: variables, loops, conditions, functions. But once we need to organize a bigger code base, the principles start to vary. In our Fitradar project we are developing two big applications: a mobile application for Android and iOS and a back-end system that is gradually evolving into a full scale enterprise application. The approaches in some parts of both systems are similar, but some parts have quite different goals. So in this article I wanted to sum up the differences and commonalities in approaching the mobile app and back-end app development in the Fitradar solution. As mentioned earlier, those differences really start to show up when the code base becomes so big that you need extra time to navigate within it. If you don't follow any code organization principles, the time you need to navigate around, understand and modify the code will grow proportionally, and sometimes even exponentially, with the size of the code base. So for big systems we really need to organize our code, and almost all my previous articles about development were dedicated to different approaches to organizing code better.

The one thing I discover again and again is this: although it is important to know principles like object oriented programming, SOLID and design patterns, it is just as important to apply those principles only where they are needed. I remember one web project where the front-end part was developed in Angular but the back end in ASP.NET MVC. Our team took over the project from another company and had to keep extending it with new features. When we worked with the back-end part, it was easy to understand and modify because it followed well established enterprise application best practices. But we really struggled with the front-end part, because the code was organized according to the same principles as the back-end part, and it looked like the developers had ignored many Angular built-in features and principles. Only later did we find out that the developers who mainly worked on the back end had designed the front-end application as well. This approach would have worked fine if the front end had been a very simple application, but it was so big that organizing the front-end code required knowledge of principles specific to user interfaces. From my experience, it takes a developer new to a domain field about a year or more to start producing decent designs, not counting the time needed to master the programming language itself. That is why big web application projects usually have separate front-end and back-end developers, while for smaller web applications the same general programming knowledge might be enough. So let's see where the focus lies in back-end and mobile application development:

  • As you can imagine, even the simplest mobile app has a UI (there are some special background apps, but we will not consider them here), and therefore the emphasis in a mobile application will first of all be on the UI code organization and how to connect the UI with the rest of the application. The UI is the part of the system that takes input from the user and displays information to the user. On the other hand, the back end interacts with the outside world via REST (or maybe GraphQL) web services, where data is received and sent in a well formatted way, and the formatting is usually done by a back-end third-party library. Since a UI can be very complex, we need to consider principles, practices and patterns that are specific to UI development, and that is where pure back-end developers lack knowledge. If no data is used, a mobile app might be limited to just the UI, and for a back-end developer that would mean very little knowledge can be transferred from back-end development. But when a mobile application works with data stored either locally on the mobile device or on a back-end server, an extra layer of data persistence might be required.
  • Data layer development in a mobile app and on the back end might seem very similar. Indeed, if we choose to store data on a mobile device we can pick a database for this purpose and use well-known patterns like Repository and Data Access Object, as we do on the back-end server; that might give the impression that a mobile app developer can develop this part of the system for both the mobile and the back-end application. But my experience shows that this is true only to some extent. On the mobile app the data persistence layer should always be simple, because the hardware resources are very limited and it is unwise to build a large scale database on the mobile device; instead, data is transferred to the back-end server and stored there. A database on a mobile device is often used as a cache store, where data is denormalized and structured purely for UI needs. On the other hand, back-end database design is a big undertaking where data normalization and performance are considered. And this time pure front-end developers might lack the knowledge of complex persistence layer design.
  • As the system's complexity grows, more layers start to emerge on the back end, like the Domain layer and the Event Bus. The inner details of the Domain logic are usually something people don't want to expose to the outside world, and therefore it is implemented only on the back-end server. Domain logic implementation might require a lot of specific design patterns and practices, which again only back-end developers might be aware of.

So at the end of the day we can still apply general software development knowledge across domains as long as the code base stays small and simple. That is why even a high school student can produce decent software in any field as long as that piece of software is small. But once the system grows big, domain expertise starts to become crucial, and that expert knowledge comes only over the years as a result of learning and practice.

Please visit our website: http://fitradar.me/ and join the mailing list! Our app is coming soon.

Repositories and Data Access Objects are still alive

In my previous articles, when talking about Clean Architecture and Domain Driven Design, I mentioned one piece of the domain layer: the repository, or more precisely the repository pattern. The repository pattern is used to persist and retrieve domain models. Although repositories are mostly associated with Domain Driven Design, other architectures may have objects with the same functionality under a different name. In the Fitradar application (front end and back end), the objects that persist data in the database, fetch data from the database and map it to in-memory objects (POCO, Plain Old C# Object; POJO, Plain Old Java Object), and are not part of the Domain layer, are called DAOs (Data Access Objects). DAOs are usually used when no business logic is involved and the application needs to execute simple CRUD operations.

In the Android application we use the Room Persistence library to work with the database, while in our back-end solution we use Entity Framework Core. These two are known as Object-relational mappers, or ORMs. Today I wanted to explore the relationship between Domain Repositories, Data Access Objects and ORMs and share some experience our team has had working with these patterns and libraries.

If you look at the responsibilities of a Repository, a DAO and an ORM, they are very close to each other. Some years ago, when ORM technology was in its infancy, DAOs and Repositories usually worked directly with the low level database access services: in .NET it was the ADO.NET library, and on Android it was the SQLite library. It was quite clear that a DAO or Repository should use these libraries to persist or retrieve in-memory objects and map the data. But nowadays, when the main data access technology in application development has become the ORM, the border between ORM, DAO and Repository has become very blurry, especially for someone who has just switched to an ORM. Working with ORMs in different projects and languages, our team came to the conclusion that in some cases the ORM can fully replace the DAO or Repository. Let's look closer at when it is appropriate to use just an ORM library and when a DAO or Repository should be used in combination with the ORM:

  • If you need to save, update or delete a single flat plain old object, then in most cases the ORM library will do the job for you. Of course EF Core and Room are capable of doing more, but then we really should investigate each case separately.
  • If you need to fetch data from a single table by primary key, then again in most cases you can fully rely on the ORM.
  • If a Domain layer aggregate has a complex entity cluster hierarchy, where different entities might have different persistence states, you will most likely need to write the aggregate persistence logic yourself, either using the ORM or a low level database access library. In our application one such aggregate is the Sport Event, which has a Place, an Organizer and some other entities. The problem we faced when trying to save a Sport Event with the built-in EF Core capabilities was the Entity State that has to be correct before the Save operation. For example, we had cases when we needed to create a Sport Event in a Place that was not yet saved in the database; since both entities had the same Entity State, EF Core handled that case well and was able to create both entity records in the database. But the problems appeared when we wanted to create a Sport Event in a Place that already had other Sport Events: now, before saving the Sport Event aggregate, we had to fetch the Place entity to make sure EF Core set the correct Entity State. And the more entities your aggregate root has, the more fetch operations might be needed. So in this case it seemed obvious to us to put the Entity State synchronization logic in a separate Repository (see the sketch after this list).
  • If the data query logic and the mapping logic that follows it are so complex that making the code readable requires splitting the logic into separate functions, you will most likely move those functions to a separate class, a Data Access Object, to preserve the Single Responsibility principle.
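A hedged sketch of the Entity State synchronization described above; the context, entity and member names are illustrative assumptions:

using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class SportEventRepository
{
    private readonly AppDbContext _context;

    public SportEventRepository(AppDbContext context) => _context = context;

    public async Task AddAsync(SportEvent sportEvent)
    {
        // If the Place is already persisted, mark it Unchanged so EF Core
        // does not try to insert a duplicate record for it.
        var placeExists = await _context.Places
            .AnyAsync(p => p.Id == sportEvent.Place.Id);
        _context.Entry(sportEvent.Place).State =
            placeExists ? EntityState.Unchanged : EntityState.Added;

        _context.SportEvents.Add(sportEvent);
        await _context.SaveChangesAsync();
    }
}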

As you can see, the Repository and the DAO still have their place in modern software architecture. In the case of queries in a CQRS architecture, the query logic might be put in the Application layer, since, contrary to Commands, a Query's only responsibility is data fetching, which duplicates the DAO responsibility. By putting query logic in the Application layer we take a direct dependency on EF Core. That is not a problem in a classical three-layer application, but it does not fit the Clean Architecture principle, where the Application layer should be aware only of the Domain layer and communicate with the outer layers (EF Core belongs to the Infrastructure layer) only through interfaces. In that case we should use a DAO.

Please visit our website and join the mailing list. Our app is coming soon:

http://fitradar.me/

Clean architecture for our back-end

Today I want to continue discussing the architecture we came up with for our back-end service. In the previous article I brought up some arguments for why it was important for us to separate the read and write models, and the main one was that we operate with two different data sets for read and write operations. But I didn't go much into the details of how we arrived at such different models, and this aspect of CQRS is the topic of today's article. Before we even realized we needed to apply the CQRS pattern in our design, we had decided to follow Uncle Bob's Clean Architecture, the same architecture we used in our mobile application development.

The reasoning that brought us to this architecture I laid out in one of my previous articles. Although the overall architecture was the same on both platforms, the implementation details were quite different. The main attraction of this architecture was the possibility to build the server side application around the Domain Model and apply Domain Driven Design. There are several definitions of the Domain Model, and some of them you can find here, but for us in the design process it was more important to understand the pieces that constitute the Domain Model. That allowed us, in the next design steps, to decide whether we needed a separate Domain Model layer or whether entities were enough. Quite often I notice that an architecture does not make a big difference between bare bones domain entities and a full Domain Model. In the first case entities are just a relational database model, and they serve as in-memory tables. In many cases, especially in the ASP.NET world (the framework we are using to build our back-end services), Object Relational Mappers like Entity Framework require separating the entities from the CRUD operations, giving the impression that a full-fledged Domain Model is created, when in fact you end up with what is called an Anemic Domain Model, which is considered an anti-pattern. So, following the advice of Eric Evans in his book Domain-Driven Design: Tackling Complexity in the Heart of Software, we noted for ourselves the following parts that should be present in our Domain Model in order to consider it a separate layer:

  • Entities accompanied with business methods
  • Value objects
  • Repositories
  • Aggregates
  • Bounded Context

By analyzing our model designed for the Android and iOS platforms and for our back-end platform, we realized that only our back-end model meets the Domain Model criteria and deserves a separate layer; on the mobile platforms we ended up with simple entities that are part of the application, or Use Cases, layer.

By modeling entities we replicate real life entity attributes, like a person's name, surname or gender, which are operated on by methods encapsulated in the same entity or in a service. At the end of the modeling process we come up with our business model, which can then be converted into the Entity-Relationship model that is used to build the database. In this way the Domain Model, or the plain entities, are tightly bound to the normalized ER model. And one can live with a single ER model until the moment the displayed information starts to deviate more and more from it, and fetching the data from the database requires more and more complex queries. That is when we started to consider CQRS. But it was not clear how to integrate CQRS into the Clean Architecture layers, so we turned to the mighty Google for someone else's experience and found this wonderful talk that showed us exactly what we were looking for. We ended up extending the Application layer with commands and queries and adding a separate database context for our queries.
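A minimal sketch of how that split looks in the Application layer with MediatR; the DTO, the read side context and all member names are illustrative assumptions:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using MediatR;
using Microsoft.EntityFrameworkCore;

// Write side: a command that goes through the Domain Model.
public class CancelSportEventCommand : IRequest<Unit>
{
    public Guid SportEventInstanceId { get; set; }
}

// Read side: a query served by a separate, read-only database context.
public class GetUpcomingEventsQuery : IRequest<List<SportEventDto>> { }

public class GetUpcomingEventsQueryHandler
    : IRequestHandler<GetUpcomingEventsQuery, List<SportEventDto>>
{
    private readonly ReadDbContext _readContext;  // the query-only context

    public GetUpcomingEventsQueryHandler(ReadDbContext readContext)
        => _readContext = readContext;

    public Task<List<SportEventDto>> Handle(
        GetUpcomingEventsQuery request, CancellationToken cancellationToken)
        => _readContext.UpcomingEvents
            .AsNoTracking()  // reads bypass change tracking entirely
            .ToListAsync(cancellationToken);
}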

Please visit our website and join the mailing list. Our app is coming soon:

http://fitradar.me/