Monday, 25 September 2017

Make your Delphi applications pop with Font Awesome!

Yes, you heard right: Font Awesome! You can use their icons to make your desktop applications pop. Nowadays I use it to make my websites look nicer without having to worry about finding icons for my apps, editing them and so on. Font Awesome is one of the smartest things you can use to make your applications pop.

Use Font Awesome icons in Desktop apps

First download Font Awesome and install it on your computer. At the time of writing I was using version 4.7.0, so I downloaded font-awesome-4.7.0.zip and installed the FontAwesome.otf file on my Windows 10 machine:



Font Awesome provides a cheatsheet that can be used to copy and paste the icons directly into your app, so you don't have to worry about memorising any particular code to make the icon appear:



Until now I have used the common approach of buying certain icons or drawing them myself using Photoshop (although this second option is quite time consuming and I only do it when I want to achieve the best results).


I'm sure you are all familiar with this approach: you add your icon in bmp format to an ImageList component, then link the ImageList to the button and select the icon index so it appears on your button, as displayed in the image above. The problem arises when you want different sizes of that button, because you then need matching icon sizes for each one, and so on.

So one of the things I tend to do now in my applications, and that makes them look a bit more consistent (in terms of user experience, as the user sees the same style of icon throughout the application), is to use Font Awesome:


The animation above shows how to replace one of your icons with a Font Awesome icon easily (a runtime sketch follows the steps below):

  • Locate the icon you want in the Font Awesome cheat-sheet.
  • Copy the icon image (not the Unicode code).
  • Locate the component you want to add the icon to.
  • Select Font Awesome font.
  • Paste the icon in the caption or text zone.
  • Adjust size to your needs.
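The same replacement can also be done at runtime. Here is a minimal VCL sketch, assuming a hypothetical button called btnSettings and the Font Awesome 4.7.0 font installed on the machine (#$F085 is the fa-cogs glyph from the cheatsheet):

procedure TForm1.FormCreate(Sender: TObject);
begin
  // The Font Awesome font must be installed for the glyph to render.
  btnSettings.Font.Name := 'FontAwesome';
  btnSettings.Font.Size := 14;
  // Glyph U+F085 (fa-cogs) as the button caption.
  btnSettings.Caption := #$F085;
end;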

In the images below you can compare the before/after and you'll see that the difference is noticeable:

Before (a mixture of icons in bmp format plus some png images made with Photoshop):


After (Font Awesome fonts replacing all the icons):

Notice that now I can even include icons where previously there was only text, so I can compose my headers in a nicer way and include a very descriptive icon.

You will need the Font Awesome font installed on your machine or on the target machine in order to take advantage of this functionality. I used the latest Delphi 10.2 Tokyo for this one, in case anyone was wondering.

The example above is for VCL only, but it should also work for FMX applications.

Example with FMX:
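A minimal FMX sketch along the same lines, assuming a hypothetical button called btnSave (#$F0C7 is the fa-save glyph); in FireMonkey the font lives under TextSettings and the Family styled setting has to be released first:

procedure TForm1.FormCreate(Sender: TObject);
begin
  // Stop the style from overriding the font family we are about to set.
  btnSave.StyledSettings := btnSave.StyledSettings - [TStyledSetting.Family];
  btnSave.TextSettings.Font.Family := 'FontAwesome';
  btnSave.TextSettings.Font.Size := 18;
  // Glyph U+F0C7 (fa-save) as the button text.
  btnSave.Text := #$F0C7;
end;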


Jordi
Embarcadero MVP

Thursday, 27 July 2017

JSON RTTI Mapper with Delphi

One of the patterns I have observed a lot during the time I have been playing with JSON streams is that I create my object based on the JSON stream and then set the properties of that particular object manually, so I can work with the object rather than with the JSON itself.

Let's look at the following example. Imagine that we have the following JSON stream:
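For illustration, the stream could look something like this (the values here are purely hypothetical):

[
  { "name": "John", "surname": "Smith", "age": 35, "address": "10 Main Street" },
  { "name": "Jane", "surname": "Doe", "age": 28, "address": "22 High Street" }
]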


As you can see, this JSON represents a list of employees, and each employee has the properties Name, Surname, Age and Address. So if we want to hold this in a TList<T>, we will have to create a TEmployee class with those properties and then manually assign each JSON field to each property, as in the example below:
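A minimal sketch of that manual approach, assuming System.JSON, a JSONText string holding the stream and a plain TEmployee class with Name, Surname, Age and Address properties:

procedure LoadEmployees(const JSONText: string; Employees: TList<TEmployee>);
var
  JSONArray: TJSONArray;
  Item: TJSONValue;
  Employee: TEmployee;
begin
  JSONArray := TJSONObject.ParseJSONValue(JSONText) as TJSONArray;
  try
    for Item in JSONArray do
    begin
      Employee := TEmployee.Create;
      // One manual assignment per JSON field.
      Employee.Name := Item.GetValue<string>('name');
      Employee.Surname := Item.GetValue<string>('surname');
      Employee.Age := Item.GetValue<Integer>('age');
      Employee.Address := Item.GetValue<string>('address');
      Employees.Add(Employee);
    end;
  finally
    JSONArray.Free;
  end;
end;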

The code is quite straightforward. We need to loop through the JSON array and populate the list.

If you look closely, you will see that we are basically mapping a field in the JSON object called "name" to an object property called "Name". To put it simply, this would literally be something like this:
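In essence, each line of that loop is just a one-to-one mapping like this (reusing the hypothetical names from the sketch above):

Employee.Name := Item.GetValue<string>('name');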

Any mapper out there does one simple job: mapping a field from one source to another.

So the question here is: how do we achieve this in a cleverer way? Easy, let's use RTTI to map those properties!

Using the methods TypInfo.GetPropList and TypInfo.SetStrProp you can easily explore the list of published properties of your class and set their values. To make use of the RTTI capabilities, you will have to move those properties to the published section of the class so they are visible through the RTTI.
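A minimal sketch of that idea, assuming the published properties are strings and the JSON field names are simply the lower-case property names:

uses
  System.SysUtils, System.TypInfo, System.JSON;

procedure MapJSONToObject(JSONObject: TJSONObject; Instance: TObject);
var
  PropList: PPropList;
  Count, I: Integer;
  Value: TJSONValue;
begin
  // Grab the published properties of the instance's class.
  Count := GetPropList(Instance.ClassInfo, PropList);
  try
    for I := 0 to Count - 1 do
    begin
      // Look for a JSON field matching the property name.
      Value := JSONObject.GetValue(LowerCase(string(PropList^[I]^.Name)));
      if Value <> nil then
        SetStrProp(Instance, PropList^[I], Value.Value); // string properties only in this sketch
    end;
  finally
    FreeMem(PropList);
  end;
end;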

Now you know how to use the RTTI to read the list of published properties and set them to a specific value. These examples have been coded with Delphi 10.2 Tokyo and you can find part of the mapper in one of the projects I'm currently working on: COCAnalytics.

There are many libraries out there that do amazing things with JSON so it's up to you to explore them. At least now you know how to map using the RTTI.

Happy coding!

Jordi
Delphi MVP.

Tuesday, 25 July 2017

Writing quality code with NDepend v2017

The new version of NDepend v2017 has totally blown my mind. I can't stop exploring the new and enhanced Dashboard with features like Technical debt estimation, Quality Gates, Rules and Issues. In this post I will try to summarise what's new with NDepend v2017 and how to use these new features to write quality code.

I'm sure you have experienced the feeling when you start typing code and, a few weeks into the project, you don't really know if what you have designed and coded is actually good or bad (by bad I mean that it's in some way rigid, fragile or non-reusable). I'm an advocate of Continuous Integration, so I have loads of metrics that help me identify broken windows or code smells in my code during the check-in stage. All these metrics cover areas such as Design, Globalisation, Interoperability, Mobility, Naming, Performance, Portability, Security, Usage and so on. But none of them gives a global rating that I could easily use to check whether my project is actually good or bad.

Enhanced Dashboard

This new version contains a new set of application metrics that really improve the overall quality of the product. I will start integrating this as part of my release procedure, as it gives a really good grasp of the status of the project from the coding side.

Here is a quick sneak peek of the new dashboard:


The aspects I'm most interested in, and that I will delve into in detail, are the following:

Technical Debt Estimation

This new feature is a MUST for me. Just analyse your project with NDepend v2017 and let it give you the percentage of technical debt according to the rules that you have configured in your project. After every analysis you can see the trend and act accordingly using this metric:

This section takes into account the settings I have configured for my project. In this case the debt has increased from 7.75% to 7.93% due to the increase in the number of issues in the solution. It also determines that the time needed to reach band "A" is 3 hours and 32 minutes. The total time needed to fix all the issues is the Debt (1 day and 1 hour).

To get values closer to reality, you have to configure your project to specify how long it will take you or any member of your team to fix an issue (most of the time I just specify half a day per issue as a rule). Here you can see the settings I have specified in my solutions as a rule of thumb, which you can consider for your own projects:


These settings use the following considerations:

  • Your team will mostly code 6 hours a day. The rest of the time is spent on meetings, emails, research, etc.
  • The estimated effort to fix one issue is 4 hours. That's the minimum I would give as an average: some issues are fixed in 5 minutes and others might take quite a bit of time. Don't forget that this time also includes filling in the ticket details in your Scrum environment, documentation, etc.
  • Then, depending on the severity of the issue, there is also a threshold specified, as you can see in the figure above.

Another aspect to consider for a proper estimation is code coverage. If you configure coverage correctly in your solution, NDepend can pick up that data and use it to produce a more comprehensive estimation.

To configure code coverage for NDepend you can follow my steps below:


Configuring JetBrains DotCover.

Once you've run your initial analysis, NDepend will also ask you to configure Code Coverage to get more information about your project and some additional metrics.

Go to NDepend project coverage settings under the Analysis tab and in there you'll have to select the XML file generated by DotCover.


If you run your tests with ReSharper you can select the coverage option, then in that menu go to the export button and select "Export to XML for NDepend". Leave this file in a known folder so you can automate this easily later on. The goal here is to configure everything manually first; afterwards you can do the work needed to trigger all of this from your build agent and get the report at the end of the run.

Choose the exported file and run your analysis again:


Now with all these details if you run NDepend you should get something like this:


Now you can see the proper debt figure and the coverage. This is a little project that I'm currently working on, and it works well to demonstrate what NDepend can do. If you don't know what one of the terms means, you can just click on it and you'll be taken to a panel with all the details about that specific metric and its description.


The following three additional panels help shape the technical debt information: Quality Gates, Rules and Issues. Below you'll find a quick introduction to each section and its relevance.


Quality Gates

Quality gates are based on Rules, Issues and Coverage. Basically, this section defines certain criteria that your project should meet in order to pass the quality check. For example: your project should reach a certain percentage of code coverage, it should not contain Blocker or Critical issues, etc.

Here are some of these gates for your reference:


Rules

Rules are defined as Project Rules and they check for violations in your code. They are similar to the rules defined by FXCop, and they provide real arguments as to why your code is breaking a rule or needs to be better. Once you've gone through several iterations of fixing these, your code will get cleaner and better (I promise you!). And most of all, you will understand the reason behind each rule!

Here are some of these rules:

If you think that one of these rules does not apply to your project, you can just uncheck it and the framework will take care of it so you don't have to worry about it anymore.


Issues

Issues are just a way of grouping the rule violations so you can determine which ones are the important ones to fix. You can violate a few rules, but the resulting issues are categorised from Blocker down to Low. So even though the project is violating 18 rules, one of them is just Low. This gives you an understanding of what's important to fix and what can wait.

Each issue also has a clear estimate of the time it could take to fix:




Conclusion

To conclude, writing quality code is one of my main concerns nowadays. It's really easy to write code, even code that works, but the difference between code that works and excellent code is quality, and NDepend has the solution for you.

I have been fiddling with tools like FXCop and NDepend for a while now, and I must say that NDepend is a must-have in my tool belt. It's really easy to use, and with just one click you get real arguments about the issues that need to be fixed in your solution and how long the team should take to fix them.

Tuesday, 23 May 2017

Pushing Messages from Server to Client Using SignalR2 and MVC5

One of the biggest disadvantages of web clients is that they are stateless. They don't know if anything happens on the server side unless they request the information again and again. In this article you will learn a very useful way of pushing updates from the server to the client using SignalR. The scenario behind this is very simple: I need a way to inform users that some maintenance will occur shortly on the site and that they need to close the browser to avoid any data loss.

The ASP.NET library SignalR provides a very simple and elegant way of doing this. This amazing library is designed to use the existing web transport layer (HTML5 WebSockets, with other technologies as fallbacks for old browsers) and it's capable of pushing data to a wide array of clients such as web pages, Windows apps, mobile apps, etc. It's extremely easy to use, it works in real time, and it allows developers to focus on the real problem while leaving the communication issues to SignalR.

Overview

The image above shows the idea behind the implementation of the SignalR ecosystem. We need to be able to push a notification from one client and have that message broadcast to every single client that's listening to the SignalR hub.

Install SignalR

In order to use this functionality, we first need to install the SignalR (v2.2.1) library via NuGet:
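From the Package Manager Console this should be something along the lines of:

Install-Package Microsoft.AspNet.SignalR -Version 2.2.1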

Create a folder called "Hub" in your main MVC solution and then add a new SignalR Hub class called NotificationHub.cs, as shown below:

This will create a class that inherits from the Hub base class.

Creating the Notification Hub

Copy the following template to generate the notification hub. This hub needs a broadcast method which accepts a message and a user, both as strings, and which every client will receive. It uses the Clients.All property to access all of the clients that are currently connected to the server (hub) and invokes a client-side callback function that we will define on the JavaScript side.
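A minimal sketch of the hub, assuming a method called BroadcastMessageToClients and a client-side callback called receiveNotification (both names are illustrative):

using Microsoft.AspNet.SignalR;

public class NotificationHub : Hub
{
    public void BroadcastMessageToClients(string message, string user)
    {
        // Invoke the JavaScript callback on every connected client.
        Clients.All.receiveNotification(message, user);
    }
}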

The next step is to create your Startup class, where you will be able to enable SignalR. Under the App_Start folder, create a new class called Startup.cs and add the following code:
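A minimal sketch of that class (the namespace is illustrative):

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(MyMvcApp.Startup))]

namespace MyMvcApp
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Maps all SignalR hubs to the default /signalr route.
            app.MapSignalR();
        }
    }
}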

This will allow you to map the available hubs using the OWIN startup. Now that the hub is ready, we need to focus on the client that will display the messages it receives.

Showing the notification on the client

Now that the server side code is done, we need to be able to display the notification received by it on the client side. To do this we just need to add the relevant script references and the following code to the _layout.cshtml page:

This page contains the jQuery and SignalR scripts, the generated SignalR hubs script, and the proxy hub object created with the "var notificationHub = $.connection.notificationHub;" line. Notice that notificationHub starts with a lower-case letter. This is actually very important, because if you don't write it in lower case the reference will not work!

The code works in the following way: when the client connects to the hub, the message "connected to the notification hub" should be visible in your browser console, and when a new message is received, the div #notification empties itself and is populated with the message received. This div sits on a separate page:
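A minimal sketch of that client-side wiring, reusing the illustrative receiveNotification callback from the hub sketch above:

$(function () {
    // Proxy to the hub; note the lower-case first letter.
    var notificationHub = $.connection.notificationHub;

    // Callback invoked by the hub's Clients.All call.
    notificationHub.client.receiveNotification = function (message, user) {
        $('#notification').empty().append(message);
    };

    $.connection.hub.start().done(function () {
        console.log('connected to the notification hub');
    });
});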

This is how the page looks without any notification:


Sending the notification to the client

Now the interesting part. To send the notification to the clients, we can either create a separate screen in our MVC application or create a small utility to send the message separately. In this case I will choose the latter, as it seems like the cleaner approach to me. So here is the code of my submit-message functionality (WinForms), which allows me to send push notifications to all the clients connected to the hub with just one click:
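A minimal sketch of that sender, using the Microsoft.AspNet.SignalR.Client NuGet package (the URL, control names and hub method name are illustrative):

using System;
using System.Windows.Forms;
using Microsoft.AspNet.SignalR.Client;

private async void btnSend_Click(object sender, EventArgs e)
{
    // Connect to the site hosting the hub and grab a proxy to it.
    var connection = new HubConnection("http://localhost:12345/");
    var hubProxy = connection.CreateHubProxy("NotificationHub");

    await connection.Start();
    // Calls the hub method, which in turn broadcasts to every connected client.
    await hubProxy.Invoke("BroadcastMessageToClients", txtMessage.Text, Environment.UserName);
    connection.Stop();
}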

Here is the simple screen:

Finally, if you want to see the system in action, see the animation below for reference:


With this approach you will be able to inform all your connected users easily, irrespective of the client technology or platform. The source code of the project is not yet available on my GitHub page, but I will make sure to publish it so you can test it locally and see it for yourselves.

Jordi.

Saturday, 20 May 2017

Inside refactoring with Delphi

In this article I will show you two common techniques that I use in my C# projects and that are just as relevant to any other programming language out there, in this case Delphi, as I'm sure many developers can relate to the same principles. The following code has been tested under Delphi 10.2 Tokyo.

The first technique is widely used in functional programming but applies to OOP as well; it's called Imperative Refactoring. The second technique helps reduce duplicated code and eliminates inconsistencies; it's called Inline Refactoring. See the examples below for guidance.

Imperative Refactoring

This technique is quite easy to understand and I'm sure you've applied this many times in your projects.

In this case, we have a method or function containing some code that we would like to reuse. The principle says that this code should be extracted into a separate function, with a call to that function added where the previous code was. This technique is very simple and very easy to embrace for code reusability.

Here you can see a typical example:

Before Refactoring:
As you can see, this is a very simple example where I request a web page and then do some parsing to get the list of URLs that are part of the HTML document. Let's see how to refactor it to make it more reusable.
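A minimal sketch of that "before" shape, assuming Indy's TIdHTTP and a naive href scan (the real parsing may well differ):

uses
  System.Classes, System.StrUtils, IdHTTP;

procedure TForm1.btnLoadClick(Sender: TObject);
var
  Http: TIdHTTP;
  Html: string;
  Urls: TStringList;
  P, E: Integer;
begin
  Http := TIdHTTP.Create(nil);
  Urls := TStringList.Create;
  try
    Html := Http.Get('http://example.com');
    // Parsing is tangled up with the download code.
    P := Pos('href="', Html);
    while P > 0 do
    begin
      E := PosEx('"', Html, P + 6);
      if E = 0 then
        Break;
      Urls.Add(Copy(Html, P + 6, E - P - 6));
      P := PosEx('href="', Html, E);
    end;
    Memo1.Lines.AddStrings(Urls);
  finally
    Urls.Free;
    Http.Free;
  end;
end;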

After Refactoring:
Notice that I've extracted the parsing functionality into a parseHTML function that takes the response, parses it and returns the list of URLs. Now I can reuse my parsing functionality for any other page where it is required. No-brainer here.
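A sketch of the refactored version, with the same naive parsing pulled out into ParseHTML:

function ParseHTML(const AHtml: string): TStringList;
var
  P, E: Integer;
begin
  // The caller owns (and must free) the returned list.
  Result := TStringList.Create;
  P := Pos('href="', AHtml);
  while P > 0 do
  begin
    E := PosEx('"', AHtml, P + 6);
    if E = 0 then
      Break;
    Result.Add(Copy(AHtml, P + 6, E - P - 6));
    P := PosEx('href="', AHtml, E);
  end;
end;

procedure TForm1.btnLoadClick(Sender: TObject);
var
  Http: TIdHTTP;
  Urls: TStringList;
begin
  Http := TIdHTTP.Create(nil);
  try
    Urls := ParseHTML(Http.Get('http://example.com'));
    try
      Memo1.Lines.AddStrings(Urls);
    finally
      Urls.Free;
    end;
  finally
    Http.Free;
  end;
end;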

Inline Refactoring

This one is a bit different, and it treats the outer code as the reusable part. Imagine that we would like to refactor the inline functionality: in this example I repeat the functionality to fetch an item from the internet quite a lot, but I would like to reuse it so that I can a) replace the HTTP component at any time without impacting the rest of the code, and b) replace the parsing part so it can return any kind of object:

The idea behind this refactoring is to make the external call reusable by using anonymous methods and generics.

Here is the after refactoring code:

After Refactoring:
As you can see, the idea is to use anonymous methods and generics heavily so that most of the functionality can be reused and the developer can separate the concerns of downloading the page and parsing it. It also allows you to swap the component for a different one; in this case I'm using Indy components to request the page, but you might prefer another component. With this approach everything is quite modular and it leaves room for testing. Notice that no functionality has changed here.
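A minimal sketch of that idea: a small generic helper that owns the download and receives the parsing as an anonymous method (TFunc comes from System.SysUtils, TIdHTTP from Indy; the helper name is illustrative):

type
  TPageFetcher = class
  public
    class function Fetch<T>(const AURL: string; AParser: TFunc<string, T>): T;
  end;

class function TPageFetcher.Fetch<T>(const AURL: string; AParser: TFunc<string, T>): T;
var
  Http: TIdHTTP;
  Response: string;
begin
  Http := TIdHTTP.Create(nil);
  try
    // The HTTP component can be swapped here without touching any caller.
    Response := Http.Get(AURL);
  finally
    Http.Free;
  end;
  // The parsing is injected by the caller and can return any kind of object.
  Result := AParser(Response);
end;

// Usage: reuse the download, inject the parsing.
Urls := TPageFetcher.Fetch<TStringList>('http://example.com',
  function(AHtml: string): TStringList
  begin
    Result := ParseHTML(AHtml);
  end);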

You can find the full source code of this example in my personal repository on Github.

Jordi.
Embarcadero MVP.