Friday, August 11, 2017

From If/Else & Switch/Case to IoC

Background


The switch/case statement has been around for as long as I can remember; I was probably introduced to it in the C language.
Many developers still use this construct to control which branch of code gets executed.

While switch/case still has its uses, there are specific scenarios where it can be replaced with more flexible patterns that allow greater code maintainability, flexibility and readability.

A well thought out migration to these patterns can also result in conformance to the SOLID principles.


Consider the following code sample:


switch (notificationType){
    case NotificationTypes.ByDevice:
        DeviceAlerterData deviceAlerterData = configurationData as DeviceAlerterData;
        if (deviceAlerterData != null){
            alertData.Filters = new DeviceFilter{
                HostNames = new HashSet<string>(deviceAlerterData.DeviceNames),
                FromDate = dateFrom,
                ToDate = dateTo,
                OriginatingUserId = LoggedInUser.UserId
            };
        }
        break;
    case NotificationTypes.ByGroup:
        GroupAlerterData groupAlerterData = configurationData as GroupAlerterData;
        if (groupAlerterData != null){
            alertData.Filters = new DeviceFilter{
                GroupsToAlert = groupAlerterData.GroupIds,
                FromDate = dateFrom,
                ToDate = dateTo,
                OriginatingUserId = LoggedInUser.UserId
            };
        }
        break;
}


The Problem


Now imagine adding another notification type to this code. Keeping the same pattern intact would mean:


  • Duplication of code.
  • Reduced code readability.
  • An increase in code length, directly impacting complexity metrics.
  • Difficulty making changes across the pattern, e.g. renaming ToDate to ToAlertDate or adding a new field across all the switch cases.
  • Coupling between lots of classes.


The Solution


There are various ways to solve these problems. In this article I will discuss the approach I consider best.

Imagine if all that code is reduced to this:


IAlertFilter alertFilter = AlertFilterFactory.CreateFilterFromNotificationType(notificationType, configurationData);

alertData.ApplyFilters(alertFilter);

By doing the above we have already solved the problem of code readability and, depending on what else is happening in the class, achieved the S, L and I parts of the SOLID principles.
Notice that we have also addressed the code length issue; the code is going to be much cleaner in this case.
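To make the refactored call site concrete, here is a minimal sketch of how an ApplyFilters method could consume the abstraction (the AlertData class itself is not shown in the original code, and the IAlertFilter interface is defined further below):

public class AlertData
{
    public DeviceFilter Filters { get; set; }

    public void ApplyFilters(IAlertFilter alertFilter)
    {
        // The alert data no longer cares which concrete filter was built;
        // it simply asks the abstraction for the final DeviceFilter.
        Filters = alertFilter.GetFilter();
    }
}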

What does the AlertFilterFactory contain?


public static class AlertFilterFactory{
    public static IAlertFilter CreateFilterFromNotificationType(NotificationTypes notificationType, ConfigurationData configData){
        IAlertFilter alertFilter = null;
        if (notificationType == NotificationTypes.ByDevice)
            alertFilter = new DeviceNotificationFilter(configData);
        else if (notificationType == NotificationTypes.ByGroup)
            alertFilter = new GroupNotificationFilter(configData);
        else
            throw new InvalidOperationException("NotificationType: " + notificationType + " is not supported");
        return alertFilter;
    }
}

In the above example we make use of the D (Dependency Inversion) part of the SOLID principles: the calling code depends on the IAlertFilter abstraction rather than on concrete filter classes. This gives us the ability to scale functionality while following the O (Open/Closed) part as well.


The IAlertFilter will look something like this:


public interface IAlertFilter{
    DeviceFilter GetFilter();
}

Then we create a base class:


public abstract class AlertFilterBase
{
    readonly ConfigurationData data;

    protected AlertFilterBase(ConfigurationData configurationData)
    {
        data = configurationData;
    }

    public DeviceFilter CreateDefaultFilter()
    {
        var filter = new DeviceFilter
        {
            FromDate = data.DateFrom,
            ToDate = data.DateTo,
            OriginatingUserId = data.LoggedInUserId
        };

        return filter;
    }
}

Then we implement the classes this way.


public class DeviceNotificationFilter : AlertFilterBase, IAlertFilter
{
    readonly DeviceAlerterData deviceAlerterData;

    public DeviceNotificationFilter(ConfigurationData configurationData) : base(configurationData)
    {
        deviceAlerterData = configurationData as DeviceAlerterData;
    }

    public DeviceFilter GetFilter()
    {
        var filter = CreateDefaultFilter();

        if (deviceAlerterData != null)
            filter.HostNames = new HashSet<string>(deviceAlerterData.DeviceNames);

        return filter;
    }
}

public class GroupNotificationFilter : AlertFilterBase, IAlertFilter
{
    readonly GroupAlerterData groupAlerterData;

    public GroupNotificationFilter(ConfigurationData configurationData) : base(configurationData)
    {
        groupAlerterData = configurationData as GroupAlerterData;
    }

    public DeviceFilter GetFilter()
    {
        var filter = CreateDefaultFilter();

        if (groupAlerterData != null)
            filter.GroupsToAlert = groupAlerterData.GroupIds;

        return filter;
    }
}


There we go. Further refactoring can be done based on the real requirements, but this refactoring already allows for scaling and moves the code towards the SOLID principles while solving the problems mentioned above.
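If the remaining if/else chain inside the factory itself becomes a maintenance hot spot, one optional further refactoring is to replace it with a registration map. This is only a sketch, assuming the types shown above:

using System;
using System.Collections.Generic;

public static class AlertFilterFactory
{
    // Each notification type registers how its filter is created; supporting a new
    // type means adding one entry here plus the new filter class.
    private static readonly Dictionary<NotificationTypes, Func<ConfigurationData, IAlertFilter>> creators =
        new Dictionary<NotificationTypes, Func<ConfigurationData, IAlertFilter>>
        {
            { NotificationTypes.ByDevice, config => new DeviceNotificationFilter(config) },
            { NotificationTypes.ByGroup,  config => new GroupNotificationFilter(config) }
        };

    public static IAlertFilter CreateFilterFromNotificationType(NotificationTypes notificationType, ConfigurationData configData)
    {
        Func<ConfigurationData, IAlertFilter> create;
        if (!creators.TryGetValue(notificationType, out create))
            throw new InvalidOperationException("NotificationType: " + notificationType + " is not supported");

        return create(configData);
    }
}

The same idea extends naturally to an IoC container: the container, rather than a hand-rolled dictionary, decides which IAlertFilter implementation to hand back.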



Code Quality Series - Nested Function Calls and Debuggability

Background


When writing code, coders are normally focused on the problem at hand, like an artist who is completely absorbed in his imagination and view of things when painting. The core difference between art and code, however, is having to debug it later on.

One of the common patterns that causes difficulty during debugging is calling functions inside other function calls, i.e. as parameters. This includes instantiating objects as arguments.

e.g.


parametersCollection.Append(GroupManager.GetGroupsParamString(new HashSet<string>(AvailableGroups)));



The Problem


Issues with this coding style show up in the following ways:

  • Interactive debuggers normally do not show the return values of executed functions in watch windows.
  • Debuggers that allow interactive evaluation of functions will run the function again on each evaluation.
  • Re-evaluation has a performance penalty.
  • Functions are often non-deterministic due to factors outside the debugger's control, in which case re-evaluation can produce a different value from the one the function actually failed with.
  • Apart from a different return value, the re-evaluation may result in a different code path being executed altogether.
  • The constructed object cannot be inspected because no reference to it is held.


The Solution


The solution to the problem is very simple: hold the values in variables and pass the variables to the function instead of nesting the calls.

e.g.


var groupNames = new HashSet<string>(AvailableGroups);

var groupsParameters = GroupManager.GetGroupsParamString(groupNames);



parametersCollection.Append(groupsParameters);



With the above, you now have access to the newly constructed HashSet, and both the parameters and the result of the function call can be evaluated. The code is also easier to read, and proper validation can be applied to the variables before the values are passed to the intended function.
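For example (the exact validation rules here are hypothetical; the identifiers are from the snippet above), the extracted variables can now be checked before they ever reach the final call:

var groupNames = new HashSet<string>(AvailableGroups);
if (groupNames.Count == 0)
    throw new InvalidOperationException("No groups are available to build the parameter string.");

var groupsParameters = GroupManager.GetGroupsParamString(groupNames);
if (string.IsNullOrEmpty(groupsParameters))
    throw new InvalidOperationException("GetGroupsParamString returned an empty parameter string.");

parametersCollection.Append(groupsParameters);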

Thursday, August 10, 2017

Code Quality Series - Use Simple Conditions to Increase Debuggability

Background


A practice commonly adopted by developers is to stuff multiple evaluations into a single assignment or condition.

e.g.
The use of functions in evaluations:


var oldAutoOrder = !order.isManual && helper.isOldOrder(order);

alternatively, something like this:


if (!order.isManual && helper.isOldOrder(order))

Property chaining:


if (time1.filledTime.DayName==="Sunday" || time1.filledTime.DayName==="Saturday")

I have used JavaScript code in examples but the problem stands for any programming language in general.

The Problem


The problem with this style of coding is that, when there is an error on this line and the code needs to be debugged, especially interactively, this style does not allow proper debugging for the following reasons:


  • Interactive debuggers normally do not show the return values of executed functions in watch windows.
  • Debuggers that allow interactive evaluation of functions will run the function again on each evaluation.
  • Re-evaluation has a performance penalty.
  • Functions are often non-deterministic due to factors outside the debugger's control, in which case re-evaluation can produce a different value from the one the function actually failed with.
  • Apart from a different return value, the re-evaluation may result in a different code path being executed altogether.



The second problem is with member (property and function) chaining. This deserves an article of its own, but in the context of evaluations we just need to understand that when the condition fails because some object in the chain is null or undefined, good luck figuring out where in the chain the problem occurred. The example above is very simplistic and can arguably be deemed more efficient than the solution I am going to propose; however, a coding style in which the failing property name is not reported as part of the error will increase debugging complexity.

The Solution


The solution is very simple: hold the values of functions in variables and build conditions using the variables instead of calling functions or chaining members.

e.g.


var isManualOrder = order.isManual;
var isOldOrder = helper.isOldOrder(order);

var isOldAutoOrder = !isManualOrder && isOldOrder;

if (isOldAutoOrder)


Similarly, with member chaining.

If there is no guarantee that a property will never be null, or that a function in the chain will always behave deterministically, then the chain should be broken down into simple statements. Individual members can then be evaluated at debugging time, and each member required to correctly arrive at the logical (read: Boolean) answer can be validated properly, as sketched below.
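As a sketch of the same idea in C# (the time1/filledTime types here are hypothetical stand-ins for the JavaScript example above), the chained weekend check could be broken down like this:

var filledTime = time1.filledTime;
if (filledTime == null)
    throw new InvalidOperationException("time1 has no filled time to evaluate.");

var dayName = filledTime.DayName;
var isWeekend = dayName == "Sunday" || dayName == "Saturday";

if (isWeekend)
{
    // handle the weekend case here
}

Each intermediate variable can now be inspected in the debugger, and the explicit null check reports exactly which link in the chain was missing.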



Friday, June 16, 2017

Moodle API .net Wrapper

Background

I was working on a Moodle project where I wanted to add and update content using the Moodle API.
I couldn't find any .net wrappers that would let me talk to Moodle with a strict contract.

The Solution

To solve the issue I downloaded the Moodle API documentation and wrote a piece of code that generated proxy classes for connecting to Moodle's REST API using HttpClient.


The Nitty Gritty

The project (find the download link below) uses the namespace Moodle.API.Wrapper.
It basically consists of two major namespaces, "Moodle.API.Wrapper.Controllers" and "Moodle.API.Wrapper.Models".

The Controllers namespace is further divided into namespaces belonging to each area as organized by the API documentation. Inside these namespaces there are classes, again as organized in the API documentation. The structure is derived from the API function's naming convention, e.g. the API call core_competency_list_competency_frameworks is organized into the Moodle.API.Wrapper.Controllers.Core namespace inside a class called Competency. The class then contains a function called ListCompetencyFrameworks. Note that the function names are converted to C# naming conventions.

The signature of the functions looks like this:


public CompetencyFrameworksModel ListCompetencyFrameworks(CompetencyFrameworksInputModel competencyFrameworksInputModel)
{
    return Post<CompetencyFrameworksModel, CompetencyFrameworksInputModel>("core_competency_list_competency_frameworks", competencyFrameworksInputModel);
}

All the required parameters are converted to model classes, clearly identified by the suffix "InputModel" when the class is to be used as input to the API.

Each controller class inherits from BaseController, which contains the implementation of the Post method. The Post method does all the heavy lifting of converting classes into the REST format and vice versa.
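The wrapper's actual BaseController ships with the download, but as a rough sketch of what a Post<TResult, TInput> helper along these lines could look like, here is one possible shape. This assumes Newtonsoft.Json for deserialization, the standard Moodle REST endpoint path, and a flat input model; the real implementation handles nested models and collections and may differ in its details:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public abstract class BaseController
{
    private string token;
    private string baseUrl;
    private Func<string, Task> writeProgress;

    public void SetupController(string securityToken, string url, Func<string, Task> progressCallback)
    {
        token = securityToken;
        baseUrl = url;
        writeProgress = progressCallback;
    }

    protected TResult Post<TResult, TInput>(string wsFunction, TInput inputModel)
    {
        // Moodle's REST endpoint expects the token, the web service function name
        // and the response format as request parameters.
        var parameters = new Dictionary<string, string>
        {
            { "wstoken", token },
            { "wsfunction", wsFunction },
            { "moodlewsrestformat", "json" }
        };

        // Naive flattening of the input model's public properties; the real wrapper
        // also needs to translate nested models and collections into Moodle's
        // array-style parameter names.
        foreach (var property in typeof(TInput).GetProperties())
        {
            var value = property.GetValue(inputModel);
            if (value != null)
                parameters[property.Name] = value.ToString();
        }

        using (var client = new HttpClient())
        {
            var response = client
                .PostAsync(baseUrl + "/webservice/rest/server.php", new FormUrlEncodedContent(parameters))
                .Result;
            var json = response.Content.ReadAsStringAsync().Result;

            if (writeProgress != null)
                writeProgress(wsFunction + " returned " + json.Length + " characters").Wait();

            return JsonConvert.DeserializeObject<TResult>(json);
        }
    }
}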

Naturally, all the converted models are found inside the Models namespace. To make a call to the API using this wrapper, all you need to do is add a reference to the wrapper and use code like this:


var competencyController = new Moodle.API.Wrapper.Controllers.Core.Competency();

competencyController.SetupController(securityToken, url, WriteProgress);

var frameworks = competencyController.ListCompetencyFrameworks(new Moodle.API.Wrapper.Models.Core.CompetencyFrameworksInputModel {
    context = new Moodle.API.Wrapper.Models.Core.ContextInputModel {
        // assign values here
    }
});

       
The securityToken variable expects the Moodle API token generated through your Moodle Site Administration. The url is the base URL of your API, and the WriteProgress variable can be null, or, if you want progress reported back, an instance of:

public Func<string, Task> WriteProgress;
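If you do want progress reported back, a simple delegate like the following will do (writing to the console here is just an example):

Func<string, Task> WriteProgress = message =>
{
    // Report progress wherever suits you; a console is the simplest sink.
    Console.WriteLine(message);
    return Task.CompletedTask;
};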

Download Link


Click here to download the zip file.
GitHub repo here.

TeamCity - Update app.config after build step

Background

A new testing initiative required that UI tests run nightly against the staging environment, in addition to the UI tests already configured to run on each push to the branches. This requirement posed a challenge: the UI tests now needed to point to the staging URL instead of localhost. Complicating the problem was the fact that VSTest does not allow passing parameters through the command line. There is a way to supply test run parameters through a run settings file, but that was not practical in this case.

Luckily the UI tests were relying on application settings (app.config) to retrieve the base URL. However, the challenge of updating this URL for a specific build configuration inside TeamCity still remained.


The Quick Solution

Include a build step in the configuration, immediately after the compile step, that runs a PowerShell script to update the config file at the path \TargetPathIncludingReleaseConfiguration\ProjectAssembly.dll.config.
The update should replace the existing application setting with the new value.


The Nitty Gritty

The build step added to the TeamCity configuration runs a script named AppSettingReplace.ps1.

The AppSettingReplace.ps1 script is a modified version of the original script found here: http://stackoverflow.com/questions/37201731/change-value-in-app-config-within-teamcity/37204969, provided by Evolve Software Ltd.

The modified source code is below:

param (
    [ValidateNotNullOrEmpty()]
    [string] $ConfigurationFile = $(throw "-ConfigurationFile is mandatory, please provide a value."),
    [ValidateNotNullOrEmpty()]
    [string] $ApplicationSetting = $(throw "-ApplicationSetting is mandatory, please provide a value."),
    [ValidateNotNullOrEmpty()]
    [string] $ApplicationSettingValue = $(throw "-ApplicationSettingValue is mandatory, please provide a value."),
    [ValidateNotNullOrEmpty()]
    [string] $ProjectSettingsNameSpace = $(throw "-ProjectSettingsNameSpace is mandatory, please provide a value.")
)

function Main()
{
    # Sample value for $ProjectSettingsNameSpace = [ProjectNamespace].Properties.Settings
    $CurrentScriptVersion = "1.0"

    Write-Host "================== Config Transform - Version"$CurrentScriptVersion": START =================="

    # Log input variables passed in
    Log-Variables
    Write-Host

    try {
        $xml = [xml](Get-Content($ConfigurationFile))
        $conf = $xml.configuration.applicationSettings[$ProjectSettingsNameSpace]

        $conf.setting | foreach {
            if ($_.name -eq $ApplicationSetting) { $_.value = $ApplicationSettingValue }
        }
        $xml.Save($ConfigurationFile)
    }
    catch [System.Exception] {
        Write-Output $_
        Exit 1
    }

    Write-Host "================== Config Transform - Version"$CurrentScriptVersion": END =================="
}

function Log-Variables
{
    Write-Host "ConfigurationFile: " $ConfigurationFile
    Write-Host "ApplicationSetting: " $ApplicationSetting
    Write-Host "ApplicationSettingValue: " $ApplicationSettingValue
    Write-Host "ProjectSettingsNameSpace: " $ProjectSettingsNameSpace
    Write-Host "Computername:" (gc env:computername)
}

Main
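For completeness, a typical invocation of the script (from the TeamCity PowerShell runner or a local PowerShell prompt) only needs the four parameters declared above; the path and values below are placeholders for your own project:

.\AppSettingReplace.ps1 `
    -ConfigurationFile "C:\BuildAgent\work\MyProject\UITests\bin\Release\UITests.dll.config" `
    -ApplicationSetting "BaseUrl" `
    -ApplicationSettingValue "https://staging.example.com" `
    -ProjectSettingsNameSpace "UITests.Properties.Settings"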

Monday, March 13, 2017

LDAP server is unavailable error de-mystified

Background

Recently, I was presented with an issue when trying to connect to AD LDS (Active Directory Lightweight Directory Services, previously known as ADAM) using C# and System.DirectoryServices.AccountManagement.PrincipalContext. The integration intermittently resulted in an exception with the message "LDAP server is unavailable".

In this blog, I am going to discuss how this problem was tackled.

The Quick Solution

To save you time: please ensure that the AD LDS server machine is accessible by hostname from your connecting client's location.

The Nitty Gritty

After performing lots of Google searches to no avail, I decided to solve this issue the old-fashioned way.

Given the following:
  • The problem only occurred on development machines and not on the TeamCity build agent.
  • The same code had been working and passing tests on the development machine before.
  • The test AD LDS instance was configured on the same TeamCity build agent.
  • The DirectorySearcher was able to connect to the AD LDS instance and return the properties of the username in question.
  • The AD LDS configuration had caused similar issues before; those issues were with user authentication, because the password was not being set properly on the user in question.

The piece of code causing the issue was this line:

var result = context.ValidateCredentials(userId, password, ContextOptions.SimpleBind);

Where context is an instance of PrincipalContext and userId is the distinguished name of the user.
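For reference, a PrincipalContext for AD LDS is constructed with the ApplicationDirectory context type; in the snippet below the server, port, container and credentials are placeholders, not values from the original setup:

var context = new PrincipalContext(
    ContextType.ApplicationDirectory,                        // required for AD LDS
    "adlds-server01:50000",                                  // AD LDS host name (or IP) and port
    "CN=Users,CN=MyPartition,DC=local",                      // application partition / container
    ContextOptions.SimpleBind,
    "CN=ServiceAccount,CN=Users,CN=MyPartition,DC=local",    // connection user
    "servicePassword");                                      // connection password

var result = context.ValidateCredentials(userId, password, ContextOptions.SimpleBind);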

Thinking this might be an issue with the way the PrincipalContext was being instantiated with ContextOptions, I wrote a console application that dynamically generated a list of PrincipalContext constructors with all possible combinations of ContextOptions and all the parameters required to work with the ApplicationDirectory context type (which is a must if you are connecting to AD LDS). I then tried calling ValidateCredentials with each constructor (about 34 calls); all the calls failed with the same error (except in some cases with an Unknown Error, basically due to secure socket binding not being supported).

I then resorted to applying all the ContextOptions combinations to the ValidateCredentials call for each context as well, which resulted in 34x63 calls. All calls also resulted in the same error.

With the peace of mind that this had nothing to do with ContextOptions, having already tested all possible combinations, I had to step my effort up. I decided to decompile the System.DirectoryServices assemblies (three of them) using JetBrains dotPeek and follow the stack trace returned in the exception. I ran my code in debug mode, and when it broke on the exception I did a dry run of the decompiled code by looking at the internal private variables made accessible through Visual Studio's non-public member view.

I noticed that the CredentialValidator class inside the assembly resolved the AD LDS server's local machine name and was trying to connect to the server using that hostname. The development machine was connecting to the AD LDS server over VPN using an IP address, not the hostname, and it was unable to resolve an IP address for that hostname.

The Solution

I added an entry to the "hosts" file and modified the relevant test cases to confirm, before running, that the hostname resolves to the known IP address; if it does not, the tests fail at setup with a detailed error notification. After performing these steps the code started working properly.
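For illustration, the hosts entry looks like the line below (the IP address and machine name are placeholders); it simply lets the client resolve the AD LDS machine name that CredentialValidator insists on using:

10.20.30.40    adlds-server01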

Wishful Thinking

Given that the PrincipalContext is provided an IP address to connect to the server, either connecting by name should be disabled automatically or the error should mention the location it tried to access. This would have instantly allowed the developers to know what the problem was.

Conclusion

There is nothing that cannot be debugged when you have a good decompiler at hand. Apart from that, please ensure that your AD or AD LDS servers are accessible by hostname as well as by IP address. You can update the hosts file in the c:\windows\system32\drivers\etc folder, or alternatively update your DNS server so it can resolve the hostname to the proper IP address.