Friday, August 11, 2017

From If/Else & Switch/Case to IoC

Background


The switch/case statement has been around for as long as I can remember; I was probably first introduced to it in the C language.
Many developers still use this pattern to branch the flow of execution.

While switch/case still has legitimate uses, there are specific scenarios where it can be replaced with more flexible patterns that improve code maintainability, flexibility, and readability.

A well-thought-out migration to these patterns can also bring the code into conformance with the SOLID principles.


Consider the following code sample:


switch (notificationType){
    case NotificationTypes.ByDevice:
        DeviceAlerterData deviceAlerterData = configurationData as DeviceAlerterData;
        if (deviceAlerterData != null){
            alertData.Filters = new DeviceFilter{
                HostNames = new HashSet<string>(deviceAlerterData.DeviceNames),
                FromDate = dateFrom,
                ToDate = dateTo,
                OriginatingUserId = LoggedInUser.UserId
            };
        }
        break;
    case NotificationTypes.ByGroup:
        GroupAlerterData groupAlerterData = configurationData as GroupAlerterData;
        if (groupAlerterData != null){
            alertData.Filters = new DeviceFilter{
                GroupsToAlert = groupAlerterData.GroupIds,
                FromDate = dateFrom,
                ToDate = dateTo,
                OriginatingUserId = LoggedInUser.UserId
            };
        }
        break;
}


The Problem


Now imagine adding another notification type. Keeping the same pattern intact would mean:


  • Duplication of code.
  • Reduced code readability.
  • Increased code length, directly impacting complexity metrics.
  • Difficulty making changes across the pattern, e.g. renaming ToDate to ToAlertDate or adding a new field across all the switch cases.
  • Coupling between many classes.


The Solution


There are various ways to solve these problems. In this article I will discuss the approach I consider best.

Imagine if all that code is reduced to this:


IAlertFilter alertFilter = AlertFilterFactory.CreateFilterFromNotificationType(notificationType, configurationData);

alertData.ApplyFilters(alertFilter);

By doing the above we have already solved the code readability problem and, depending on what else is happening in the class, achieved the S, L, and I parts of SOLID.
Notice that we have also addressed the code-length issue; the code is much cleaner this way.

What does the AlertFilterFactory contain?


public static class AlertFilterFactory{
    public static IAlertFilter CreateFilterFromNotificationType(NotificationTypes notificationType, ConfigurationData configData){
        IAlertFilter alertFilter = null;
        if (notificationType == NotificationTypes.ByDevice)
            alertFilter = new DeviceNotificationFilter(configData);
        else if (notificationType == NotificationTypes.ByGroup)
            alertFilter = new GroupNotificationFilter(configData);
        else
            throw new InvalidOperationException("NotificationType: " + notificationType + " is not supported");
        return alertFilter;
    }
}

In the above example we make use of the D part of SOLID: the calling code now depends on the IAlertFilter abstraction rather than on concrete filter classes. Inverting the dependencies this way gives us the ability to extend functionality while following the O part of SOLID.


The IAlertFilter will look something like this:


public interface IAlertFilter{
    DeviceFilter GetFilter();
}

Then we create a base class:


public abstract class AlertFilterBase
{
    private readonly ConfigurationData data;

    protected AlertFilterBase(ConfigurationData configurationData)
    {
        data = configurationData;
    }

    protected DeviceFilter CreateDefaultFilter()
    {
        var filter = new DeviceFilter
        {
            FromDate = data.DateFrom,
            ToDate = data.DateTo,
            OriginatingUserId = data.LoggedInUserId
        };

        return filter;
    }
}

Then we implement the concrete classes this way:


public class DeviceNotificationFilter : AlertFilterBase, IAlertFilter
{
    public DeviceNotificationFilter(ConfigurationData configurationData) : base(configurationData)
    {
    }

    public DeviceFilter GetFilter()
    {
        var filter = CreateDefaultFilter();

        // Device-specific fields (e.g. HostNames) would be set here.
        return filter;
    }
}

public class GroupNotificationFilter : AlertFilterBase, IAlertFilter
{
    public GroupNotificationFilter(ConfigurationData configurationData) : base(configurationData)
    {
    }

    public DeviceFilter GetFilter()
    {
        var filter = CreateDefaultFilter();

        // Group-specific fields (e.g. GroupsToAlert) would be set here.
        return filter;
    }
}
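For readers outside the .NET world, here is a minimal sketch of the same pattern in TypeScript. The type shapes and field names are illustrative assumptions mirroring the C# example, not part of the original code:

```typescript
// Illustrative data shapes mirroring the article's C# types (assumed names).
interface ConfigurationData {
  dateFrom: string;
  dateTo: string;
  loggedInUserId: number;
  deviceNames?: string[];
  groupIds?: number[];
}

interface DeviceFilter {
  fromDate: string;
  toDate: string;
  originatingUserId: number;
  hostNames?: Set<string>;
  groupsToAlert?: number[];
}

interface AlertFilter {
  getFilter(): DeviceFilter;
}

// The base class builds the fields every filter shares.
abstract class AlertFilterBase {
  constructor(protected data: ConfigurationData) {}

  protected createDefaultFilter(): DeviceFilter {
    return {
      fromDate: this.data.dateFrom,
      toDate: this.data.dateTo,
      originatingUserId: this.data.loggedInUserId,
    };
  }
}

class DeviceNotificationFilter extends AlertFilterBase implements AlertFilter {
  getFilter(): DeviceFilter {
    const filter = this.createDefaultFilter();
    // The device-specific part of the old switch branch lives here.
    filter.hostNames = new Set(this.data.deviceNames ?? []);
    return filter;
  }
}

class GroupNotificationFilter extends AlertFilterBase implements AlertFilter {
  getFilter(): DeviceFilter {
    const filter = this.createDefaultFilter();
    // The group-specific part of the old switch branch lives here.
    filter.groupsToAlert = this.data.groupIds ?? [];
    return filter;
  }
}

function createFilterFromNotificationType(
  notificationType: string,
  config: ConfigurationData
): AlertFilter {
  if (notificationType === "ByDevice") return new DeviceNotificationFilter(config);
  if (notificationType === "ByGroup") return new GroupNotificationFilter(config);
  throw new Error(`NotificationType: ${notificationType} is not supported`);
}
```

Each branch of the original switch is now a class that only adds its own fields on top of the shared defaults, so a shared change (like renaming a date field) happens in exactly one place.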


There we go. Further refactoring can be done based on the real requirements, but this version already allows for scaling and complies with the SOLID principles while solving the problems mentioned above.
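One such further refactoring replaces even the factory's if/else chain with a registration map, so supporting a new notification type becomes a single registration call. A minimal sketch in TypeScript, with illustrative names not taken from the original code:

```typescript
// Minimal filter abstraction for this sketch (assumed shapes).
interface AlertFilter {
  getFilter(): { originatingUserId: number };
}

type ConfigurationData = { loggedInUserId: number };
type FilterFactory = (config: ConfigurationData) => AlertFilter;

// Registry: notification type -> creator function.
const filterRegistry = new Map<string, FilterFactory>();

function registerFilter(notificationType: string, factory: FilterFactory): void {
  filterRegistry.set(notificationType, factory);
}

// The creation code never changes when a new type is added (Open/Closed).
function createFilter(notificationType: string, config: ConfigurationData): AlertFilter {
  const factory = filterRegistry.get(notificationType);
  if (factory === undefined) {
    throw new Error(`NotificationType: ${notificationType} is not supported`);
  }
  return factory(config);
}

// Registering the two existing types; a third type is one more line here.
registerFilter("ByDevice", (c) => ({
  getFilter: () => ({ originatingUserId: c.loggedInUserId }),
}));
registerFilter("ByGroup", (c) => ({
  getFilter: () => ({ originatingUserId: c.loggedInUserId }),
}));
```

The trade-off is that registration has to happen somewhere at startup, which is exactly the job an IoC container normally takes over.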



Code Quality Series - Nested Function Calls and Debuggability

Background


When writing code, developers are normally focused on the problem at hand, like an artist absorbed in his own imagination and view of things while painting. The core difference between art and code, however, is that code has to be debugged later on.

One of the common patterns that causes difficulty during debugging is passing function calls as parameters to other function calls. This includes instantiating objects directly in argument lists.

e.g.


parametersCollection.Append(GroupManager.GetGroupsParamString(new HashSet<string>(AvailableGroups)));



The Problem


Issues with this coding style show up in the following ways:

  • Interactive debuggers normally do not show return values of executed functions in watch windows.
  • Debuggers that allow interactive evaluation of functions will run the function again on each evaluation.
  • Re-evaluation carries a performance penalty.
  • Functions may be non-deterministic due to factors outside the debugger's control, in which case re-evaluation can produce a different value than the one with which the function actually failed.
  • Beyond a different return value, re-evaluation may cause a different code path to be executed altogether.
  • The constructed object cannot be inspected, because no reference to it was kept.


The Solution


The solution to the problem is very simple: hold the values in variables, and pass the variables to the functions instead of nesting the calls.

e.g.


var groupNames = new HashSet<string>(AvailableGroups);

var groupsParameters = GroupManager.GetGroupsParamString(groupNames);



parametersCollection.Append(groupsParameters);



With the above, you now have access to the newly constructed HashSet, which you can evaluate; the parameters and the result of the function call can also be evaluated. The code is easier to read, and proper validation can be applied to the variables before they are passed to the intended function.
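The same refactoring can be sketched in TypeScript. `GroupManager`, `AvailableGroups`, and `parametersCollection` are illustrative stand-ins for the article's types; the point is that every intermediate value gets a name a debugger can inspect, and a validation check can sit between steps:

```typescript
// Illustrative stand-ins for the article's types (assumed names and shapes).
const AvailableGroups: string[] = ["admins", "users", "admins"];

const GroupManager = {
  getGroupsParamString(groups: Set<string>): string {
    return [...groups].join(",");
  },
};

const parametersCollection: string[] = [];

// Each step lands in a named variable, so a debugger (or a log line)
// can inspect it, and we can validate before going further.
const groupNames = new Set(AvailableGroups);
if (groupNames.size === 0) {
  throw new Error("No groups available to build parameters from");
}

const groupsParameters = GroupManager.getGroupsParamString(groupNames);

parametersCollection.push(groupsParameters);
```

Note that the Set also deduplicates the group names, which is visible at a glance once the value has its own variable.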

Thursday, August 10, 2017

Code Quality Series - Use Simple Conditions to Increase Debuggability

Background


A practice commonly adopted by developers is to stuff multiple evaluations into a single assignment or condition.

e.g.
The use of functions in evaluations:


var oldAutoOrder = !order.isManual && helper.isOldOrder(order);

alternatively, something like this:


if (!order.isManual && helper.isOldOrder(order))

Property chaining:


if (time1.filledTime.DayName==="Sunday" || time1.filledTime.DayName==="Saturday")

I have used JavaScript code in examples but the problem stands for any programming language in general.

The Problem


The problem with this style of coding is that when there is an error on such a line and the code needs to be debugged, especially interactively, this style prevents proper debugging for the following reasons:


  • Interactive debuggers normally do not show return values of executed functions in watch windows.
  • Debuggers that allow interactive evaluation of functions will run the function again on each evaluation.
  • Re-evaluation carries a performance penalty.
  • Functions may be non-deterministic due to factors outside the debugger's control, in which case re-evaluation can produce a different value than the one with which the function actually failed.
  • Beyond a different return value, re-evaluation may cause a different code path to be executed altogether.



The second problem is with member (property and function) chaining. This deserves an article of its own, but in the context of conditions we just need to understand that when a chained access fails because an object is null or undefined, good luck figuring out where in the chain the problem occurred. The example above is very simplistic, and the chained form can arguably be deemed more efficient than the solution I am going to propose; however, a code style where the failing property name is not reported as part of the error increases debugging complexity.

The Solution


The solution is very simple: hold the values of functions in variables, and build conditions from the variables instead of calling functions or chaining members inline.

e.g.


var isManualOrder = order.isManual;
var isOldOrder = helper.isOldOrder(order);

var isOldAutoOrder = !isManualOrder && isOldOrder;

if (isOldAutoOrder)


Similarly with member chaining.

If there is no guarantee that a property will never be null, or that a function in the chain will always behave deterministically, the chain should be broken down into simple statements. Individual members can then be evaluated at debugging time, and each member required to reach the logical (read: Boolean) answer can be properly validated.
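A TypeScript sketch of breaking down the day-name chain from the earlier example (the `filledTime`/`dayName` shape is an illustrative assumption):

```typescript
// Assumed shape for the article's time1 object.
interface TimeEntry {
  filledTime?: { dayName?: string };
}

// Chained version for comparison: if filledTime is missing, the error
// only says something was undefined, not which link in the chain broke.
//   if (time1.filledTime.dayName === "Sunday" || ...) { ... }

// Broken-down version: each member gets a name and an explicit check,
// so the error message points at the exact missing link.
function isWeekendEntry(time1: TimeEntry): boolean {
  const filledTime = time1.filledTime;
  if (filledTime === undefined) {
    throw new Error("time1.filledTime is missing");
  }

  const dayName = filledTime.dayName;
  if (dayName === undefined) {
    throw new Error("time1.filledTime.dayName is missing");
  }

  const isSunday = dayName === "Sunday";
  const isSaturday = dayName === "Saturday";
  return isSunday || isSaturday;
}
```

Each intermediate variable can now be watched in a debugger, and a failure names the exact property that broke the chain instead of a bare "cannot read property of undefined".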