Making sure data is valid and business rules are kept

Our Fitradar back-end solution is built on Clean Architecture as described in this article and explained in this conference talk. One part of the back-end solution is incoming request validation, and in order to implement it as part of the MediatR pipeline we use the FluentValidation library, which replaces the built-in ASP.NET Core validation system. At the beginning of the project it seemed that all we had to do was create a custom validator derived from AbstractValidator and perform the validation of an Application Model in a single class, and that is how we started to implement the validation. Since the library has quite advanced capabilities that allow implementing complex validation rules, for a while we didn't notice any problems, until the moment we realized that not all validation rules are simple incoming Data Model validation. Some of the rules expressed our business logic rules and restrictions. So we started to think about splitting the rules across the layers, and the following is the multi-step validation solution we came up with:

  • Application level validation. At this level the incoming JSON data model fields are validated. The field types are validated automatically by the ASP.NET Core built-in Model binding, but the other validation rules are executed by FluentValidation (a sample validator is sketched below), including:
    • required field validation
    • text field length validation
    • numeric field range validation
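As an illustration, a minimal application-level validator could look like the sketch below; CreateSportEventCommand and its properties are hypothetical names for the example, not the exact types from our solution:

public sealed class CreateSportEventValidator : AbstractValidator<CreateSportEventCommand>
{
    public CreateSportEventValidator()
    {
        // required field validation
        RuleFor(x => x.Title).NotEmpty();
        // text field length validation
        RuleFor(x => x.Description).MaximumLength(1000);
        // numeric field range validation
        RuleFor(x => x.MaxParticipants).InclusiveBetween(1, 100);
    }
}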
  • Domain level validation. At the domain level we apply the Domain Driven Design approach, and therefore instead of FluentValidation we decided to use the Specification pattern. We liked the idea that we can move the validation logic outside the Domain Entity into a separate Validator. By doing that we were able to bind validation logic to incoming commands instead of the Domain Entity. For example, we have an entity SportEvent and several commands that call methods inside that entity, like Update Sport Event and Cancel Sport Event. Because different commands have different business requirements for the same Entity property, it would be hard to implement them with FluentValidation. For example, the property BookedSeats in the case of the Update Sport Event command should be 0, but in the case of the Cancel Sport Event command it is allowed that someone has already booked the particular Sport Event. Below is how the Cancel Sport Event validator looks:
public sealed class EventCancellationValidator : EntityValidatorBase
{
    public EventCancellationValidator()
    {
        AddValidation("CancellationTimeValidation",
            new ValidationRule<SportEventInstance>(
                new Specification<SportEventInstance>(sportEvent =>
                    sportEvent.StartTime > DateTime.UtcNow),
                ValidationErrorCodes.CANCELLATION_TOO_LATE,
                "It is not possible to cancel the Sport Event after it has started",
                "SportEventStartTime"));
    }
}
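The EntityValidatorBase, ValidationRule and Specification types used above are part of our own validation infrastructure rather than a library. As a rough, illustrative sketch of the idea (the real implementation differs in details), they could look like this:

// Illustrative sketch only: wraps a predicate so validation rules can be
// expressed as data and bound to commands instead of the Domain Entity.
public class Specification<T>
{
    private readonly Func<T, bool> _predicate;

    public Specification(Func<T, bool> predicate) => _predicate = predicate;

    public bool IsSatisfiedBy(T entity) => _predicate(entity);
}

// Illustrative sketch: couples a specification with the error reported
// when the specification is not satisfied.
public class ValidationRule<T>
{
    public Specification<T> Specification { get; }
    public string ErrorCode { get; }
    public string ErrorMessage { get; }
    public string PropertyName { get; }

    public ValidationRule(Specification<T> specification, string errorCode,
        string errorMessage, string propertyName)
    {
        Specification = specification;
        ErrorCode = errorCode;
        ErrorMessage = errorMessage;
        PropertyName = propertyName;
    }
}

// Illustrative sketch: collects named rules; the real base class also
// exposes a method that evaluates every rule against an entity.
public abstract class EntityValidatorBase
{
    private readonly Dictionary<string, object> _rules = new Dictionary<string, object>();

    protected void AddValidation(string name, object rule) => _rules.Add(name, rule);
}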
  • Database level validation. For accessing the database we are using Entity Framework Core, and to make sure the database models are in tune with the database constraints we use the built-in property attributes and classes that implement the IEntityTypeConfiguration interface, which allows us to define database constraints like Primary Key, Foreign Key and Unique Key, as sketched below.
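As an illustration, a minimal configuration class for a hypothetical SportEvent model could look like this (the property names are assumptions for the example):

public class SportEventConfiguration : IEntityTypeConfiguration<SportEvent>
{
    public void Configure(EntityTypeBuilder<SportEvent> builder)
    {
        // Primary Key
        builder.HasKey(e => e.Id);
        // Unique Key
        builder.HasIndex(e => e.Name).IsUnique();
        // Foreign Key
        builder.HasOne(e => e.Owner)
            .WithMany(o => o.SportEvents)
            .HasForeignKey(e => e.OwnerId);
    }
}

Such a configuration is picked up in DbContext.OnModelCreating via modelBuilder.ApplyConfiguration(new SportEventConfiguration()).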

P.S. Please follow our Facebook page for more!

Performance or Readability

In our Fitradar back-end solution we make heavy use of a database in order to implement the necessary use cases: almost every call to our back-end API includes calls to the database. To make it easier to work with the database from C# code we use Entity Framework Core. The library allows us to quickly and easily save C# objects to the database and fetch persisted C# objects from it. In many use cases EF Core is used just for these basic operations – saving and fetching – while the business logic is isolated in other classes. Nevertheless, for some use cases we discovered that it is more straightforward to execute the business logic directly in the database in the form of stored procedures or user functions. In this article I want to share some of the experience we gained while working on complex database operations and the solution we arrived at.

One of the main reasons why we started to consider moving some parts from the C# code to the database was the user experience. We noticed that some operations in the mobile application were taking a considerable amount of time and might degrade the overall user experience, so we started to look at how to improve the performance of those slow operations. Some of the causes of the long response times turned out to be multiple calls to the database and unoptimized database operations. Striving for clean code in our back-end solution, we had tried to avoid stored procedures, user functions and triggers: introducing SQL code that can't be mapped to the C# code within a single use case would mean introducing another codebase. So far EF Core had allowed us to abstract away from the SQL code, and we were very happy to have all the logic in C#. Besides one more language and codebase, the following issues were holding us back from using SQL directly:

  • Adding SQL code to DbContext in the form of strings would take away Intellisense for SQL, and that would mean dealing not only with logic errors but with syntax errors as well at runtime.
  • Refactoring with the Visual Studio built-in tools would become more complex.
  • If we needed to debug a method that uses a stored procedure or fires an SQL trigger, we would have to debug the C# and SQL code separately.
  • When working with two codebases, the readability of a single use case degrades – in order to understand the full logic we have to switch between C# and SQL.
  • There are different logging systems for ASP.NET Core and SQL, and therefore it is harder to keep a unified log format.
  • A more complex deployment process.

But at the same time, for complex persistence and database query operations we would get:

  • A significant gain in performance
  • Terse code, since SQL is specifically designed to work with relational databases, and many operations can be expressed in much shorter code compared with C# and LINQ without losing readability.

The main question we had to answer in order to understand whether we needed to move some of the logic to stored procedures, triggers and user functions was: “Is the gain in performance worth it?” For some use cases we could see the improvements even with the test data, and for other use cases we predicted that the changes we were planning to introduce would start to impact performance once a certain amount of data is stored in the database. And so we decided to move some of our logic from C# to SQL code, but before that we really wanted to mitigate some of the above-mentioned problems we discovered during analysis. To do that, we decided to create a separate SQL Server Database Project instead of keeping the SQL code in the EF Core DbContext class, and by doing that we solved the problems with structuring the code, Intellisense and SQL code maintenance.
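With the stored procedures living in the database project, calling them from EF Core remains short. Below is a minimal sketch; the procedure names and parameters are hypothetical, and FromSqlRaw / ExecuteSqlRawAsync are the EF Core 3.x method names:

// Query mapped entities through a stored procedure (hypothetical name).
var upcomingEvents = await _context.SportEvents
    .FromSqlRaw("EXEC dbo.GetUpcomingSportEvents @ownerId = {0}", ownerId)
    .ToListAsync();

// Execute a data-modifying operation that returns no entities.
await _context.Database.ExecuteSqlRawAsync(
    "EXEC dbo.CancelUnpaidBookings @sportEventId = {0}", sportEventId);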

P.S. Visit our website: https://www.fitradar.me/ and join the waiting list. We launch very soon!

Testing all links in the chain

Our Fitradar solution uses various third-party services, and therefore Integration tests have become a very important part of our solution next to the Unit tests. In our back-end application we try to follow the ideal Test Pyramid:

(Image: the ideal Test Pyramid)

but for some Use Cases it makes more sense to run all the involved services rather than replace them with Mock objects, mostly because there is no complex logic involved in the particular Use Case; it is rather straightforward, without conditionals and loops. On the other hand, Integration Tests give us more assurance that the Use Case is robust and works as expected. And so for some Use Cases we started to prefer Integration tests over Unit tests, and our test pyramid became a little bit distorted. The following parts of our back-end application are covered by Integration Tests:

  • ASP.NET Core Model Binding. Once we discovered that the JSON produced for some C# objects was not exactly what we expected, and that some incoming JSON was not parsed as we expected, we started to test and verify that incoming and outgoing data are transformed the way we expect.
  • ASP.NET Core and third-party library configurations defined in the Startup file. These configurations can be tested only at runtime, and therefore to test them we have to run the test web server and make requests against it. Some of the configurations we are testing are the following:
    • Dependency injection
    • AddMediatR and FluentValidation configuration
    • ASP.NET Core Authentication and Authorization integration with Firebase Authentication
  • Entity Framework produced SQL code, especially for complex queries; Entity Framework save and update logic for complex types, where at runtime some nested objects might have a different EntityState than the parent object; and any other complex logic interacting with Entity Framework.
  • Integration with Azure Services like Service Bus and Azure Functions
  • Integration with Stripe

The crucial factor that made our team write more Integration Tests is the new ASP.NET Core test web host and test server client. It allowed us to quickly set up a test server with our back-end application and run requests against our API. By using the test host we were able to test most of the configurations in the Startup file. It was harder to test the parts of our back-end that live in Azure Functions, but after some trial and error we were able to create a test web host for the Azure Functions as well and issue the calls that trigger the Functions. Here is how we test the sport event cancellation Use Case that is triggered by a message from a Service Bus queue:

        [Fact]
        public async Task TestUnpaidEventCancelled()
        {
            var resolver = new RandomNameResolver();

            IHost host = new HostBuilder()
                .ConfigureWebJobs()
                .ConfigureDefaultTestHost<CancelSportEventFunction>(webJobsBuilder => {
                    webJobsBuilder.AddServiceBus();
                })
                .ConfigureServices(services => {
                    services.AddSingleton<INameResolver>(resolver);
                })
                .Build();

            using (host)
            {
                //Arrange
                await host.StartAsync();
                _testFixture.SeedSportEvent();
                await _testFixture.BookSportEventInstance();
                _testFixture.AddCommentToSportEventInstance();

                //Act
                await _testFixture.SendMessageAsync("CanceledEvents",
                    _testFixture.CanceledSportEventInstance.Id.ToString());
                await _testFixture.WaitUntilSportEventInstanceIsCancelled();

                //Assert
                var calendarEvents = _testFixture.GetCalendarEvents();
                Assert.Empty(calendarEvents);
                var order = _testFixture.GetCancelledSportEventOrder();
                Assert.NotNull(order);
                var msg = _testFixture.GetVisitorsInboxMessage();
                Assert.NotNull(msg);
                Assert.Equal(MessageSource.CANCEL_EVENT, msg.Source);
                var statistics = _testFixture.GetVisitorsInboxStatistics();
                Assert.NotNull(statistics);
                Assert.Equal(1, statistics.NumberOfCancelMessages);
            }
        }
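For the plain ASP.NET Core API, the setup is even shorter. Below is a minimal sketch using WebApplicationFactory from the Microsoft.AspNetCore.Mvc.Testing package; our real tests add seeding and authentication, and the route is a hypothetical example:

public class ApiSmokeTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly WebApplicationFactory<Startup> _factory;

    public ApiSmokeTests(WebApplicationFactory<Startup> factory) => _factory = factory;

    [Fact]
    public async Task GetSportEvents_ReturnsSuccess()
    {
        // CreateClient boots an in-memory test server with the real Startup,
        // so DI, MediatR and FluentValidation configuration are all exercised.
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/api/sport-events");

        response.EnsureSuccessStatusCode();
    }
}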

P.S. Visit our website: https://www.fitradar.me/ and join the waiting list. We launch very soon!

Our journey towards serverless

Recently we were working on a ticket booking and booking cancellation feature in our Fitradar solution. In our solution the booking process consists of two steps: reserving a place in a sport event and payment. In case the user for some reason didn't make a payment, the system should execute one more step: cancel the reservation. The process becomes non-trivial when each step is done in a separate environment – the reservation is made on the back-end, but the payment is made on the client's device. The most challenging part was to figure out how to cancel the reservation in cases when the payment was not made or failed.

So we started to look for possible solutions. Initially we considered Background tasks with hosted services, but we didn't like the fact that the background task would run in the same process as our main back-end service, and we would have to make sure the process doesn't shut down after the request is handled. Since we were using the Azure Cloud to host our back-end services, the next thing we looked at was the WebJob, which seemed a very good solution for our use case until we introduced ourselves to Azure Functions. After some investigation we came to the conclusion that Azure Service Bus with message deferral and Azure Functions would be the perfect solution. The idea was to send a message to the Service Bus after the user has made a reservation; the Service Bus would deliver the message to our Subscription with a 15-minute delay, and our Azure Function would cancel the unpaid booking. In order to receive the scheduled Service Bus message we needed to use an Azure Function with a Service Bus trigger.
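A rough sketch of the two sides of this flow, assuming the Microsoft.Azure.ServiceBus client and the CanceledEvents queue that also appears in our tests; the command name is illustrative:

// Back-end side: after the reservation is made, schedule the cancellation
// check to be delivered in 15 minutes (Microsoft.Azure.ServiceBus client).
var message = new Message(Encoding.UTF8.GetBytes(sportEventInstanceId.ToString()))
{
    ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(15)
};
await queueClient.SendAsync(message);

// Function side: the Service Bus trigger fires when the scheduled message
// becomes visible and cancels the booking if it is still unpaid.
[FunctionName("CancelSportEventFunction")]
public async Task Run(
    [ServiceBusTrigger("CanceledEvents", Connection = "ServiceBusConnection")]
    string sportEventInstanceId)
{
    // CancelUnpaidBookingCommand is an illustrative command name.
    await _mediator.Send(new CancelUnpaidBookingCommand(Guid.Parse(sportEventInstanceId)));
}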

Once we learned more about Azure Functions, we decided to use them for other tasks as well that should be done outside the usual request–response pattern. One such task in our solution is the User Rating calculation, which should run on a regular basis every Sunday night. This requirement was a perfect job for a timer-triggered Azure Function.
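A minimal sketch of such a function; the NCRONTAB expression fires at 02:00 every Sunday, and the function and command names are illustrative assumptions:

[FunctionName("CalculateUserRatings")]
public async Task Run(
    // NCRONTAB format: {second} {minute} {hour} {day} {month} {day-of-week};
    // "0 0 2 * * 0" means 02:00 every Sunday.
    [TimerTrigger("0 0 2 * * 0")] TimerInfo timer)
{
    await _mediator.Send(new CalculateUserRatingsCommand());
}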

We started to work on our serverless functions. After a function was scaffolded we started to add our business logic. Since we already had a solid CQRS architecture in place for the main back-end service, where for each use case we create a separate Command that uses the rest of our infrastructure, we wanted to add another Command for the reservation cancellation use case, but we ran into .NET Core compatibility issues. Our back-end runtime was targeting .NET Core 2.2, but our Azure Functions runtime was targeting .NET Core 3.1. Although we could have just downgraded our Functions app runtime, the documentation strongly encouraged using version 3.1, because Function apps that target ~2.0 no longer receive security and other maintenance updates. And so we started to migrate our back-end app to .NET Core 3.1.

We followed the official documentation but still struggled to make our third-party libraries work again. The biggest challenge was to find a way to make FluentValidation, MediatR and Swagger work together. Once we updated the FluentValidation registration in the Startup file according to the provided documentation, it switched off the MediatR Pipeline Behaviors. And once we found a way to make the Pipeline Behaviors work, it switched off the FluentValidation rules on the Swagger UI page. Stack Overflow and Google couldn't help us, so we started to experiment with the available settings for the FluentValidation and MediatR registration. After several hours of trial and error we found the settings that made all three libraries work together nicely and smoothly. Here is the piece of code that made our day:

services.AddMvcCore().AddFluentValidation(fv =>
{
    // Register all validators from the assembly that contains PlaceValidator.
    fv.RegisterValidatorsFromAssemblyContaining<PlaceValidator>();
    // Don't run the built-in MVC validation after FluentValidation has run.
    fv.RunDefaultMvcValidationAfterFluentValidationExecutes = false;
    // Disable automatic validation in the MVC pipeline, so validation runs
    // only in our MediatR pipeline behavior.
    fv.AutomaticValidationEnabled = false;
});

Now our Fitradar back-end can really benefit from all the functionality provided by ASP.NET Core and Azure Functions.

P.S. Visit our website: https://www.fitradar.me/ and join the mailing list. Our app is coming soon.

Being unique is not always a good thing

The other day I was reviewing a colleague's code. It was back-end code written in C#. I noticed that he had introduced a class very similar to a structure I was quite sure existed in the .NET base library. I found that structure in the Microsoft documentation, and we agreed that it is better to use the .NET type instead of introducing the same type under a different name. After that occasion I recalled several other cases where I had seen unique solutions to problems that could have been solved by a well-known pattern or practice. My gut feeling had always told me that it is not a good idea to introduce your own solution to a problem that is already solved by someone else, but I had never really thought about what problems such a decision can introduce into a project. And so in this article I want to share some thoughts regarding widely accepted versus unique solutions in software development.

First, let's make it clear that every best practice, framework, algorithm or technology was a unique solution in its time. So when exactly does one solution or another become widely accepted?

  • There are solutions in computer science that can be measured, like algorithms. In order to estimate the cost of running an algorithm, Big O notation was introduced. When someone comes up with a new algorithm for a common problem, for example sorting, we can simply compare how fast each algorithm solves the problem using Big O and choose the one that meets our criteria; for instance, an O(n log n) sort beats an O(n²) sort on large inputs. And if the new algorithm beats the old one in performance on the same data, then it becomes the new standard.
  • But most technologies, frameworks, patterns and best practices cannot be compared by introducing measurable parameters. Think about programming languages or frameworks: there is no way you can make an unambiguous claim about which language or framework is better. How, then, do we find out what is the best language or framework for a given problem? Let me share a story from my youth that might give some insight into how a new solution becomes widely accepted. I remember a story that was shared by my Assembler class professor at university. He started his computer science career when software was written on punch cards, and he became very good at writing machine code and later Assembler. When the first compiled languages such as FORTRAN and ALGOL emerged, he was very skeptical about these new languages and compiling technology in general, because he didn't believe that a compiler is able to produce machine code of the same quality as a skilled Assembler developer. And in the fifties and sixties well-written assembly code really mattered, because every byte counted, so his reasoning at that time was valid. But as time passed, the hardware and compilers improved, and nowadays most code, even in embedded systems, is written in high-level programming languages. The same story goes for programming paradigms: the business application world started with procedural programming and today has ended up with object-oriented programming. In Windows and mobile apps we started with Forms and Activities and today have ended up with MVVM. So it looks like for such things as programming languages and frameworks the more or less useful criteria we can use are:
    • the amount of time the technology has been in use,
    • the amount of software written using the particular technology,
    • the number of developers actively using the particular technology.

And still, all these criteria are not as reliable as Big O is for algorithms.

In the case of algorithms, I think it is quite clear that if there is already an algorithm that solves the problem, it is pointless to write your own unless you can produce a better one. In the case of frameworks I would follow the same logic: if there is a well-known framework that can solve the problem, it is much safer to rely on the majority's opinion (although the majority can be wrong) rather than on your own single point of view. Even if the framework or pattern is rejected in the near future, at this moment:

  • it is much better tested than your own solution, and therefore there is a much higher chance of a flaw or even a bug in your solution than in the widely accepted one
  • it will be much easier for other developers to work with a widely accepted solution than with yours, because there is much more documentation and all sorts of information about the widely accepted solution than you can ever produce. And this will become crucial when your solution starts to live its own life.

Visit our website: https://www.fitradar.me/ and join the mailing list. Our app is coming soon.

P.S. We are hiring: http://blog.fitradar.me/we-are-hiring/