Wednesday 28 July 2021

CI using TFS

 https://www.dotnetcurry.com/visualstudio/1050/visual-studio-continuous-integration-deployment

 

 

One of the development practices that agile teams follow is Continuous Integration (CI). Every check-in a developer makes to source control needs to integrate cleanly with the earlier check-ins made by other developers on the same project. This integration check is done by running a build on the latest code, including the developer's latest check-in. Agile teams should also follow the principle of keeping the code in a deployable condition. This can be verified by deploying the application to a testing environment after the build and running tests on it, so that the integrity of the recent changes is confirmed. Those tests can be either manual or automated. If the application is deployed automatically after every build, the team has achieved Continuous Deployment. It can be extended further to deploy the application to the production environment after successful testing.


Microsoft Team Foundation Server 2013 provides features for implementing both Continuous Integration (CI) and Continuous Deployment (CD). For my examples I have used TFS 2013 with Visual Studio 2013 Update 2.

Let us first see the flow of activities that form these two concepts:

[Image: cicd-concepts]

TFS has supported CI right from its first release in 2005. It has a build management feature that includes build process management and automated triggering of builds. TFS supports a number of automated build triggers. A scheduled build is one of them, but more important for us is the Continuous Integration trigger, which starts a build whenever a developer checks in code that is within the scope of that build.

Let us see how it is configured. For this example, I have created a solution named SSGS EMS that contains many projects.

The entire solution is under source control in a team project named “SSGS EMS” on TFS 2013. I have created a build definition named SSGSEMS that covers the solution, and I have set its trigger to Continuous Integration so that a check-in to any of the projects under that solution will trigger the build.

[Image: build-def-creation-wizard]

I have created a test project to hold the unit tests, named ‘PasswordValidatorUnitTestProject’, which has a unit test for the code in the “SSGS Password Validator” project. I wanted that unit test to run as part of the build, after the projects compile successfully. Running tests from any test project in the solution covered by the build is actually a standard setting in Team Build. Just to make sure, I open the Process node of the build definition in the editor and scroll down to the Test node under it.

[Image: build-process-params]

I added a value only for the Test run name parameter: BVTRun, short for Build Verification Tests Run.
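The article does not show the test itself, so here is a minimal sketch of what such a build verification test might look like. PasswordValidator and IsValid are assumed names; the real API lives in the “SSGS Password Validator” project.

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace PasswordValidatorUnitTestProject
{
    [TestClass]
    public class PasswordValidatorTests
    {
        [TestMethod]
        public void IsValid_ReturnsFalse_ForShortPassword()
        {
            // PasswordValidator and IsValid are hypothetical names standing in
            // for the real types in the SSGS Password Validator project.
            var validator = new PasswordValidator();

            bool result = validator.IsValid("abc");

            Assert.IsFalse(result, "Passwords shorter than the minimum length must be rejected.");
        }
    }
}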

Now I am ready to execute the build. I edited a small code snippet in the “SSGS Password Validator” project and checked it in. The check-in triggered the build, which compiled successfully and ran the test to verify the build. This validated that the Continuous Integration part of TFS is working properly.

Microsoft recently included a deployment workflow in the services offered by Team Foundation Server. That feature is called Release Management. It offers the following functionality:

[Image: release-management-overview]

To implement Release Management and tie it to TFS 2013 Update 2, we have to install certain components:

[Image: release-management-components]

You may find details of the hardware and software prerequisites at http://msdn.microsoft.com/en-us/library/dn593703.aspx. I will now guide you through installing these components of Release Management and using them to configure releases.

Install and configure Release Management

Release Management components

1. Release Management Server 2013 – Stores all its data in a SQL Server database. Install it on any machine that is on the same network as TFS and the machines where you want to deploy the built application. It can be installed on the TFS machine itself.

2. Release Management Client 2013 – Used to configure the RM Server and to trigger releases manually when needed. Install it on your machine; it can also be installed on the TFS machine.

3. Microsoft Deployment Agent 2013 – Needs to be installed on every machine where you want to deploy components of the built application.

[Image: release-management]

While installing these components, I provided account credentials that have administrative privileges in the domain, so that deployment can run automatically on any machine in the domain. I suggest keeping the Web Service Port at 1000 if no other service is using it.

[Image: release-management-server-configuration]

In addition to these components, if you need to execute automated tests as part of your deployment process, you also need to install a Test Controller and configure Lab Management.

Once the installation is over, let us set up Release Management so that releases can be automated. The most important part of that setup is integrating it with TFS so that build and release can work hand in hand.

Initial Setup of Release Management

To configure the settings of Release Management, open the RM Client application and select Administration activity.

Let us first connect the RM Client to the RM Server. To do that, select Settings, enter the name of the RM Server in the Web Server Name text box, and click OK to save the setting.

[Image: rm-link-tfs]

Now we will connect RM to our TFS. Click the Manage TFS link, then the New button, and enter the URL of TFS including the collection name. Click OK to save the changes.

[Image: rm-tfs-connection]

The next step is to add the names of the supported deployment stages and technologies. Generally there are at least two stages – Testing and Production. Sometimes there are more, such as Performance Testing, UAT, etc. To specify the stages of our releases, click the Manage Pick Lists link, select Stages, and enter stages like Development, Testing and Production. Similarly, select Technologies and enter names like ASP.NET, .NET Desktop Applications, WCF Services, etc.

You can now add users who will have different roles in Release Management by clicking Manage Users. You may add groups to represent the various roles and add users to those roles. These users become useful when you want manual intervention in the deployment process for validation and authorization.

Each release follows a certain workflow, and the path of that workflow needs to be defined in advance. Let us now see how to define a release path.

 

Configure Release Management Paths

To start building RM paths, we first need to register the building blocks – the servers on which deployment will be done. Click the Configure Paths tab and then the Servers link. The RM Client automatically scans the network for machines that have Microsoft Deployment Agent 2013 installed. Since this is the first time we are on this tab, it shows them in the list of Unregistered Servers.

[Image: rm-add-servers]

You can now select the machines needed for deployment in the different stages and click the Register button to register those servers with the RM Server.

[Image: rm-servers-added]

Let us now create environments for the different stages. Each environment is a combination of the servers that run the application for that stage. Let us first create a Testing environment made up of two machines: an IIS server that will host our application when deployed, and a SQL Server machine for the database. Provide the name of the environment as Testing (although there is a stage with the same name, they are not automatically linked). Link the required machines from the list of servers – select each server and click the Link button.

[Image: rm-create-environment]

You may create another environment named Production and select other servers for that environment.

The next step is to configure the release paths that our releases will use. These paths bring together the entities we have created so far: stages, environments and, if necessary, users.

Click Release Paths under the Configure Paths activity and select the option to create a new release path. Give it a name, Normal Release, and a description if you want. Under the Stages tab, select the first stage in our deployment process, i.e. Testing. Under that stage, select the Testing environment that will be used to deploy the build for testing.

[Image: rm-create-release-path]

Set the other properties so that the Acceptance and Validation steps are ‘Automated’. Set the names of the users who act as Approver for the Acceptance step, as Validator, and as Approver for the Approval step. The approver is the person who decides when deployment to the next stage should take place; this is the step where the release management workflow depends on manual intervention.

Click the Add button under Stages and add a Production stage. Set its properties similar to the Testing stage.

[Image: rm-release-path]

Save and close the release path with a name; for example, I have named it Normal Release.

Create Release Template

Now we will tie all the ends together to create a release template that will be used to create multiple releases in the future. In each stage of each release, we want to deploy the components of our application, run some tests either automatically or manually, and finally approve the deployment to the next stage. Let us take an example where we design a release template with the following features:

1. It uses two stages – Testing and Production.

2. In each stage, it deploys a web application to a web server and a database to a database server.

3. It runs an automated test on the web application after deployment to the Testing environment. It may also allow testers to run some manual tests.

4. It requires approval from a test lead, once testing is successful, before the application is deployed to the Production environment.

Set properties of Release Template

The release template is the entity where we bring together the release path, the team project with the build that needs to be deployed, and the activities that need to be executed as part of deployment on each server. A release template can use the many basic activities that are built in, and can also use packaged tools to execute more complex actions.

First select the Configure App activity and then from Release Template tab, select New to create a new release template.

As a first step, give the template a name, optionally a description, and the release path that we created in the earlier exercise. You can now set the build to be used: first select the team project, then the build definition name from the list of build definitions that exist in that team project. You also have the option to trigger this release from the build itself. Do not select this option right now; we will come back to it later.

[Image: rm-release-template]

Configure Activities on Servers for Each Stage

Since we have selected the configured path, you will see that the stages from that path automatically appear as part of the template. Select the Testing stage if it is not selected by default. On the left side you will see a toolbox containing a Servers node. Right-click on it and add the necessary servers for this stage: select a database server and an IIS server.

Now we can design the deployment workflow for the Testing stage.

Configure IIS Server Deployment

As a first step, drag and drop the IIS server onto the designer surface of the workflow.

We need to perform three activities on this server: copy the build output to a folder on the IIS server, create an application pool for the deployed application, and then create an application whose physical location is the copied folder and which uses that application pool. We will do the first step using a tool meant for XCOPY. Right-click Components in the toolbox and click Add. From the drop-down selection box, select the XCOPY tool. In the properties of that tool, provide as the source the UNC path of the shared location where the build was dropped; in my case it was \\SSGS-TFS2013\BuildDrops\TicketM, the last part being the name of the selected build definition. Close the XCOPY tool, then drag it from Components onto the IIS server in the designer. For the Installation Path parameter, provide the folder where you want the built files to be placed, for example C:\MyApps\HR App.

Now expand the IIS node in the toolbox. From there, drag and drop the Create Application Pool activity below the XCOPY tool on the IIS server in the designer surface. For this activity, provide the name of the application pool as HR App Pool and set the rest of the properties as your application requires.

The next activity is Create Web Application, again from the IIS group. Drag and drop it below the Create Application Pool activity. Set its properties, like the physical folder and application pool, to the values created in the earlier steps.

[Image: rm-release-workflow]

Configure Database Deployment

Database deployment can be done with the help of a component that uses SQL Server Data Tools (SSDT). We need an SSDT project in the solution that we build; support for SSDT is built into Visual Studio 2013. We add a database project to the solution with the scripts for creating the database, stored procedures and other parts of the database. When compiled, the SSDT project produces a DACPAC – a database installation and upgrade package – which becomes part of the build drop.
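Release Management drives the deployment through its DACPAC component, but the same operation can be scripted with the DacFx API. Here is a minimal sketch, assuming the Microsoft.SqlServer.Dac assembly is referenced; the server, database and package path are placeholders.

using Microsoft.SqlServer.Dac;  // DacFx, installed with SSDT

class DacpacDeployer
{
    static void Main()
    {
        // Placeholder connection string and package path.
        var services = new DacServices(@"Data Source=SSGS-SQL;Integrated Security=True");

        using (DacPackage package = DacPackage.Load(@"C:\Drops\TicketM\SSGSEMS.Database.dacpac"))
        {
            // upgradeExisting: true updates the database in place if it already exists.
            services.Deploy(package, "SSGSEMS", upgradeExisting: true);
        }
    }
}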

Drag and drop the database server from the servers list in the toolbox onto the designer surface, positioning it below the IIS server used in the earlier steps. Now add a component for the DACPAC under the Components node. In that component, set the database server name, the database name and the name of the DACPAC package. Then drag and drop that component onto the database server in the designer.

[Image: rm-database-deployment]

This completes the deployment workflow design for the Testing stage, and we can now save the release template.

Starting a Release

Trigger release from RM Client

To start a release, select the Releases activity and click the ‘New’ button to create a new release. Provide a name for the release and select the release template that you created earlier. Set the target stage as Testing, since you have not yet configured the Production stage. You can start the release from here after saving, or you can click Save and Close and then select and start it from the list of releases, which includes the recently created one.

[Image: release-trigger]

Once you click the Start button, you will have to provide the path of the build, with the name of the latest build that is to be deployed. After providing that information, you can start the run of the new release.

[Image: release-status]

The RM Client shows the status of the various steps being run on the servers configured for that release. It performs the deployment on both servers and validates the deployment automatically.

Since we configured the Testing stage to require approval at the end, the release will reach the Approve Release activity and wait. After testing is over, the test lead who is configured to give approval can open the RM Client and click the Approve button to approve the release.

We have now configured Continuous Integration with the help of Team Build, and Continuous Deployment with the help of the Release Management feature of TFS 2013. Let us now integrate the two so that automated deployment starts as soon as the build is over. To do that, we need to configure the build with the activity that triggers the release.

Trigger the release from Build

To trigger the release from the build, we need to use a special build process template that contains the activities for triggering the release. That process template becomes available when you install the Release Management Client on the TFS build server (a build service restart is needed). It can also be downloaded from http://blogs.msdn.com/b/visualstudioalm/archive/2013/12/09/how-to-modify-the-build-process-template-to-use-the-option-trigger-release-from-build.aspx. After that, you will need to put it under the Build Templates folder of your source control and check it in.

Once the special build process template is in place, you can start the wizard for creating a new build definition. Let the trigger be manual, or change it to Continuous Integration if you want this to be part of CI. In the build defaults, provide a shared path like \\SSGS-TFS2013\BuildDrops. If you do not have such a share, create a folder and share it for everyone, and give write permission on that share to the user that is the identity of the build service.

Under the Process node, select the template ReleaseTfvcTemplate.12.xaml. It adds some parameters under the Release heading. By default, the Release Build parameter is set to False; change it to True. Set the configuration to Release, Any CPU, and the Release Target Stage to Testing.

[Image: release-from-build]

Under the Test section, provide the Test Run Name as BVTRun.

[Image: build-process-params]

Save the build definition.

Open the RM Client and create a new release template that is exactly the same as the one created earlier. In its properties, select the “Can Trigger a Release from a Build” checkbox. In each of the components used, under the Source tab, select the “Builds with application” radio button and enter the \ character in the Path to package textbox.

Now you can trigger the build and watch it succeed. If you go to the Release Management Client after the build completes, you will find that a new release has taken place.

We may also need to run some automated tests as part of the release. For that, we need to configure a release environment that runs automated tests on a lab environment. It is actually the combination of lab and release environments that lets us run automated tests as part of the release.

Automated testing as part of the release

Automated testing can be controlled from Lab Management only. For this, we need a test case with associated automation that runs in a Lab Management environment. The same machines that were used to create the Release Management environment can be used to create the required lab environment.

Create Lab Environment for Automated Testing

For this, we will create a Standard Environment containing the same machines that were included in the Testing environment. Name that environment RMTesting.

Before the environment is created, ensure that a Test Controller is installed, either on TFS or on another machine in the network, and is configured to connect to the same team project collection that we are using in our lab. While adding machines to the lab environment, if a machine does not have the Test Agent installed, it will be installed automatically.

Create and configure Test Case to Run Automatically

Open Visual Studio 2013 and then the Team Explorer – Source Control tab. Create a new unit test for the method to be tested by adding a new test project to the same solution. Check in both projects.

In the Testing Center of MTM 2013, add a test plan if one does not exist. Create a new test suite named BVTests under that test plan. Open the properties page of the selected test plan and assign the created environment to it.

Create a test case from the Organize activity – Test Case Manager. There is no need to add steps to the test case, as we will run it automatically. Click the Associate Automation tab, then the button to select the automated test to associate. From the list presented, select the unit test that you created in the earlier steps. Save the test case and add it to the test suite named BVTests.

Add Component to Release Management Plan to Run Automated Test

In the RM Client, open the release template that was used in the earlier exercise to trigger the release from the build. Add a new component and name it Automated Tester. Under the Deployment tab of that component, select the tool MTM Automated Tests Manager and leave the other parameters as they are. Save and close the component. Drag and drop it from the toolbox onto the designer; it should be on the IIS server, below all the deployment activities. Open the component and set the properties as shown below. The test run name is what you provided in the build definition.

[Image: Automated Tester component properties]

Now you can trigger the build again. After the build, the release is triggered. During the release, after the deployment is over, the automated test runs and the run results become available in MTM 2013.

Summary

Today, teams are becoming more and more agile, and it is necessary for them to have a demonstrable application at any time. To support that, the practices of Continuous Integration and Continuous Deployment have become very important. In this article, we have seen how Team Foundation Server 2013, with its Release Management component, helps us implement Continuous Integration and Continuous Deployment.

Thursday 6 February 2020

Explicit Interface Implementation

If a class implements two interfaces that contain a member with the same signature, then implementing that member on the class will cause both interfaces to use that member as their implementation. In the following example, all the calls to Paint invoke the same method.



using System;

class Test 
{
    static void Main()
    {
        SampleClass sc = new SampleClass();
        IControl ctrl = sc;
        ISurface srfc = sc;

        // The following lines all call the same method.
        sc.Paint();
        ctrl.Paint();
        srfc.Paint();
    }
}

interface IControl
{
    void Paint();
}
interface ISurface
{
    void Paint();
}
class SampleClass : IControl, ISurface
{
    // Both ISurface.Paint and IControl.Paint call this method. 
    public void Paint()
    {
        Console.WriteLine("Paint method in SampleClass");
    }
}

// Output:
// Paint method in SampleClass
// Paint method in SampleClass
// Paint method in SampleClass
If the two interface members do not perform the same function, however, this can lead to an incorrect implementation of one or both of the interfaces. It is possible to implement an interface member explicitly—creating a class member that is only called through the interface, and is specific to that interface. This is accomplished by naming the class member with the name of the interface and a period. For example:
C#
public class SampleClass : IControl, ISurface
{
    void IControl.Paint()
    {
        System.Console.WriteLine("IControl.Paint");
    }
    void ISurface.Paint()
    {
        System.Console.WriteLine("ISurface.Paint");
    }
}
The class member IControl.Paint is only available through the IControl interface, and ISurface.Paint is only available through ISurface. Both method implementations are separate, and neither is available directly on the class. For example:
C#
// Call the Paint methods from Main.

SampleClass obj = new SampleClass();
//obj.Paint();  // Compiler error.

IControl c = obj;
c.Paint();  // Calls IControl.Paint on SampleClass.

ISurface s = obj;
s.Paint(); // Calls ISurface.Paint on SampleClass.

// Output:
// IControl.Paint
// ISurface.Paint
Explicit implementation is also used to resolve cases where two interfaces each declare different members of the same name such as a property and a method:
C#
interface ILeft
{
    int P { get;}
}
interface IRight
{
    int P();
}
To implement both interfaces, a class has to use explicit implementation either for the property P, or the method P, or both, to avoid a compiler error. For example:
C#
class Middle : ILeft, IRight
{
    public int P() { return 0; }
    int ILeft.P { get { return 0; } }
}
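
To see which member gets picked, here is a short usage sketch: the method P is available directly on the class (it also serves as the IRight implementation), while the property P is reachable only through the ILeft interface.
C#
class Program
{
    static void Main()
    {
        Middle m = new Middle();

        int viaClass = m.P();            // the public method, which also implements IRight.P
        int viaLeft = ((ILeft)m).P;      // the explicit ILeft.P property
        int viaRight = ((IRight)m).P();  // resolves to the same public method

        System.Console.WriteLine(viaClass + " " + viaLeft + " " + viaRight); // 0 0 0
    }
}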

Tuesday 7 January 2020

Why do you use a Singleton class if a Static class serves the purpose?



What is the difference between Singleton and Static classes and when do you use each one in your program?


Singleton is a design pattern that ensures your application creates only one instance of a class at any time. It is efficient and elegant. A Singleton exposes a static property that you access to get the object reference, so you do not instantiate the object the way you do for a normal class.

Why and what is the use of Singleton pattern?

  • To preserve the global state of a type.
  • To share common data across the application.
  • To reduce the overhead of instantiating a heavy object again and again.
  • Suitable for facades and service proxies.
  • To cache objects in memory and reuse them throughout the application.

Example scenarios where Singleton can be used

  • Service proxies: In an application, invoking a service (an API) is an expensive operation, and creating the service client itself is a time-consuming process. Making the service proxy a Singleton reduces this overhead.
  • Facades: Like service proxies, database connection facades are another example where a Singleton can give better performance and synchronization.
  • Logs: I/O is a heavy operation; with a single Logger instance, the required information can be written to the same log file.
  • Data sharing: Configuration values and constant values can be kept in a Singleton to be read by other components of the application.
  • Caching: Fetching data is time consuming, whereas caching required data in application memory avoids database calls; a Singleton can handle the caching, with thread synchronization, more cleanly than a static type.

Implementation of Singleton pattern

At any point in time, the application should hold only one instance of the Singleton type. To achieve this, we mark the constructor of the type as private and expose a method or property to the outer world that returns the Singleton instance.
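
Here is a minimal sketch of a thread-safe Singleton in C#. AppConfig is a hypothetical example type; Lazy<T> (available from .NET 4.0) guarantees the instance is created only once, on first access.

using System;

public sealed class AppConfig
{
    // Created lazily and exactly once, even under concurrent access.
    private static readonly Lazy<AppConfig> instance =
        new Lazy<AppConfig>(() => new AppConfig());

    // Private constructor: outside code cannot call new AppConfig().
    private AppConfig()
    {
        ConnectionString = "Data Source=...";  // e.g. load settings here
    }

    // The only way to reach the single instance.
    public static AppConfig Instance
    {
        get { return instance.Value; }
    }

    public string ConnectionString { get; private set; }
}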




When shall we use a Singleton class, and when should we go for a static class?

Static classes are used when you want a single copy of data and behavior that is accessible globally throughout your application. A static class is initialized lazily, meaning at the last possible moment: when it is first referenced. There is a disadvantage to static classes: once a class is decorated with the static keyword, you can never change how it behaves through inheritance or interfaces.

Singleton Class instance can be passed as a parameter to another method whereas static class cannot

The Singleton class does not require you to use the static keyword everywhere. Static class objects cannot be passed as parameters to other methods, whereas instances of a Singleton can.

For example, we can give a normal class a method that takes a Singleton instance as a parameter, as in the sketch below. We cannot do this with static classes.
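
A sketch, reusing the hypothetical AppConfig singleton from the earlier example:

class ReportGenerator
{
    // A Singleton instance is an ordinary object reference, so it can be
    // passed as a parameter; a static class name cannot be passed this way.
    public void Generate(AppConfig config)
    {
        System.Console.WriteLine("Using " + config.ConnectionString);
    }
}

class Demo
{
    static void Main()
    {
        new ReportGenerator().Generate(AppConfig.Instance);
    }
}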

Singleton classes support interface inheritance, whereas a static class cannot implement an interface.

A static class contains only static members; you cannot inherit from it or create an object of it. A Singleton class can be inherited and can also have a base class.

 Singleton vs Static class
  1. Singleton is an object-creational design pattern, designed following certain guidelines to ensure that only one object of its class is ever returned, whereas static is a reserved keyword in C# used before a class or member to make it static.
  2. A Singleton class can contain static as well as non-static members, whereas a class marked static may contain only static members: static methods, static properties and static variables.
  3. A Singleton instance can be passed as a reference to other methods or objects as a parameter, but a static class cannot be passed as a reference.
  4. Singleton objects can be designed to support disposal, meaning they can be disposed of.
  5. Singleton objects live on the managed heap like any other object; static members live in type-level memory that lasts for the lifetime of the application domain (they are not stored on the stack, despite the common claim).
  6. Singleton objects can be cloned, although a careful Singleton implementation usually prevents this.
  7. Singleton promotes code reusability and code sharing, can implement interfaces, can inherit from other classes, and gives better control of object state, whereas a static class cannot inherit instance members.
  8. A Singleton class can be designed to use lazy or asynchronous initialization, whereas a static class is initialized by the CLR when it is first loaded or referenced.
  9. Static classes cannot contain instance constructors and cannot be instantiated, whereas a Singleton class has a private instance constructor.

 https://codeburst.io/singleton-design-pattern-implementation-in-c-62a8daf3d115

When to use Singleton

The Singleton design pattern is unique in the sense that there is no direct alternative to it. You can use it for things like thread pools, registry objects, SQL connection pools, objects holding user preferences, builder classes, statistics, logging frameworks such as log4net, etc.

There is one more reason to use the Singleton pattern: if your object is large or heavy and takes up a reasonable amount of memory, and you want to make sure it is not instantiated multiple times, a Singleton will prevent that from ever happening.
 

Wednesday 29 May 2019

How to increase the performance of an ASP.NET application

Here are a few tips to improve the performance of your ASP.Net application.

Viewstate

View state is the mechanism that preserves the state of a page's controls across postbacks. It travels with the page on every round trip to the server, so it adds to the page size. To the end user it looks like a free extra feature, but it hurts performance when the page has many controls, as on a user registration form. So, if it is not needed, it should be disabled.

Set EnableViewState = "false" where the requirements allow. It can be set at the control, page and config level.
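
The same setting can also be applied from code-behind; a small sketch, where the Registration page class is hypothetical:

using System;
using System.Web.UI;

public partial class Registration : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Disable view state for the whole page; individual controls
        // can likewise set Control.EnableViewState = false.
        this.EnableViewState = false;
    }
}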

Avoid Session and Application Variables

A Session is a storage mechanism that helps developers carry values across pages. Where it is stored depends on the session state mode chosen; by default it is stored InProc, inside the IIS worker process. When a session variable is used on a page accessed by many users, it occupies more memory and puts additional overhead on IIS, slowing performance.

It can be avoided in most scenarios. If you want to send information across pages, you can use a cross-page postback or a query string with encryption. If you want to store information within the page, caching the object is the best way.

Use Caching

ASP.Net has a very significant caching mechanism. It improves performance by avoiding repeated client/server processing. There are three types of caching in ASP.Net: page output caching, fragment caching and data caching.

If a full page contains only static content, it should use the output cache. The output cache stores the rendered content on the web server; when the page is requested, it is served immediately from that cache for a certain period of time. Similarly, fragment caching can be used to store part of a web page.
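
The third type, data caching, keeps objects in memory through the Cache API. A sketch, where GetProductsFromDb stands in for the real data access:

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    public static DataTable GetProducts()
    {
        Cache cache = HttpRuntime.Cache;
        DataTable products = cache["Products"] as DataTable;

        if (products == null)
        {
            products = GetProductsFromDb();  // hypothetical database query

            // Keep the data for 10 minutes, then refetch on demand.
            cache.Insert("Products", products, null,
                         DateTime.Now.AddMinutes(10),
                         Cache.NoSlidingExpiration);
        }
        return products;
    }

    private static DataTable GetProductsFromDb()
    {
        return new DataTable("Products");  // placeholder for a real query
    }
}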

Effectively use CSS and Script files

If you have large CSS files that are used across multiple pages of the site, then, based on the requirements, they can be split and stored under different names. This minimizes the loading time of the pages.

Images sizes

Overuse of images in a web site affects page performance, since images take time to load, especially on dial-up connections. Instead of background images, use CSS colors, or use light-weight images that are repeated across all pages.

CSS based layout

Controlling the entire page layout with CSS and div tags instead of a table layout increases page-loading performance dramatically. It helps enforce the same standard guidelines throughout the website and makes future changes easier. Nested table layouts take more time to render.

Avoid Round trips

We can avoid unnecessary database hits that reload unchanged content. Use the IsPostBack property so that data is loaded only on the initial request and not again on every postback.
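
A typical code-behind pattern; the ProductList page and BindGrid method are hypothetical names:

using System;
using System.Web.UI;

public partial class ProductList : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Hit the database only on the initial GET;
            // postbacks reuse the data the page already has.
            BindGrid();
        }
    }

    private void BindGrid()
    {
        // Placeholder for the actual query and data binding.
    }
}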

Validate using JavaScript

Validation can be done in the client browser instead of on the server side; JavaScript helps us do the validation at the client. This reduces the overhead on the server.

However, browser plug-ins can disable client-side code, so a sensitive application should still do server-side validation before processing the data.

Clear the Garbage Collection

Normally, .Net applications rely on garbage collection to clean unused resources from memory, but it takes time to clear unused objects.

There are many ways to clean up unused resources, though not all of them are recommended. Calling Dispose in a finally block (or using a using block) is a good way to clean up, and connections should always be closed. This frees the resources immediately and returns space in memory.

Avoid bulk data store on client side

Try to avoid keeping bulky data on the client side; it slows page loading. When we store data in hidden controls, it is stored on the client side and, even if encrypted, it can be tampered with by hackers.

Implement Dynamic Paging

When we load a large number of records into server data controls like GridView, DataList and ListView, the page takes time to load. So we can show only the current page of data through dynamic paging.

Use Stored Procedure

Try to use stored procedures; they increase the performance of web pages because they are stored as compiled objects in the database and use cached query execution plans. If you pass a raw query, the entire query text travels over the network; with a stored procedure, only a single call is passed to the backend.
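
A sketch of calling a stored procedure through ADO.NET; the procedure name usp_GetEmployees and the connection string are placeholders:

using System.Data;
using System.Data.SqlClient;

class EmployeeRepository
{
    public DataTable GetEmployees(string connectionString)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("usp_GetEmployees", connection))
        {
            // Only the procedure name crosses the network,
            // not the full query text.
            command.CommandType = CommandType.StoredProcedure;

            DataTable table = new DataTable();
            new SqlDataAdapter(command).Fill(table);  // opens/closes the connection itself
            return table;
        }
    }
}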

Use XML and XSLT

XML and XSLT can speed up page rendering. If the transformation is not overly complex, it can be implemented in XSLT.

Use Dataset

A DataSet is not lightweight compared with a DataReader, but it has the advantage of a disconnected architecture. A DataSet consumes a substantial amount of memory, and it can hold more than one table. If you want to perform many operations on the data while loading the page, it might be better to go with a DataSet: once the data is loaded into it, the DataSet can be used again later.

Use String Builder in place of String

When we build up strings on the server side, such as when formatting mail, we should use StringBuilder. If you use string concatenation, every append creates a new string in a new storage location, occupying more memory. The StringBuilder class appends into a single growing buffer, so it consumes far less memory than repeated String concatenation.
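
A sketch of the difference when building a mail body:

using System.Text;

class MailFormatter
{
    public string BuildBody(string[] lines)
    {
        // Each += on a string would allocate a brand-new string object;
        // StringBuilder appends into one growing buffer instead.
        StringBuilder body = new StringBuilder();

        foreach (string line in lines)
        {
            body.AppendLine(line);
        }
        return body.ToString();
    }
}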

Use Server.Transfer

If you want to transfer to another page on the same server, use the Server.Transfer method. It avoids a roundtrip between the browser and server, but it does not update the browser URL or history.

Use Threads

Threads are an important mechanism in programming for utilizing system resources effectively. When we want work done without blocking the current request, it can run on a background thread.

Consider this example: when Send is clicked, mail should go out to 500,000 (5 lakh) members, yet there is no need to wait for the whole process to complete. Call the mail-sending routine on a background thread and proceed with further processing, because sending the mail does not depend on any of the other processing.
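
A sketch using the thread pool; SendMailToAllMembers stands in for the real mail routine:

using System.Threading;

class MailController
{
    public void OnSendClicked()
    {
        // Queue the long-running mail job on a background thread
        // and return immediately; the page does not wait for it.
        ThreadPool.QueueUserWorkItem(_ => SendMailToAllMembers());
    }

    private void SendMailToAllMembers()
    {
        // Placeholder for the actual bulk-mail loop.
    }
}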

Tuesday 28 May 2019

SOLID Principles Of OOPS

Overview Of S.O.L.I.D

While designing a class, the principles of S.O.L.I.D are guidelines that can be applied to remove code smells.

S.O.L.I.D
  1. Single Responsibility Principle (SRP)
  2. Open/Close Principle (OCP)
  3. Liskov Substitution Principle (LSP)
  4. Interface Segregation Principle (ISP)
  5. Dependency Inversion Principle (DIP)


     Single Responsibility Principle (SRP)

     It states that every object should have a single responsibility, and that responsibility should be entirely encapsulated by the class; there should not be more than one reason to change the class. This means a class should be designed for one purpose only. If we have two reasons to change a class, we have to split its functionality into two classes. Each class then handles only one responsibility, and if we need a change in the future, we make it in the class that handles it. When we change a class that has many responsibilities, the change might affect the other functionality of that class.

    Example – Employee class: if this class is used for CRUD operations on Employee, it should not also contain a method to map Employee attributes to database columns. Otherwise, when a new column is added to the database, this class has to be modified, violating the Single Responsibility Principle. A sketch of the split follows.
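
    A sketch of that split, with hypothetical class names:

// Handles only CRUD operations for employees.
class EmployeeRepository
{
    public void Add(Employee employee) { /* insert into the database */ }
    public void Delete(int employeeId) { /* delete from the database */ }
}

// Handles only the mapping of Employee properties to database columns.
// A new column now changes this class alone, not the repository.
class EmployeeColumnMapper
{
    public string ColumnFor(string propertyName)
    {
        return propertyName == "FullName" ? "EMP_NAME" : propertyName;
    }
}

class Employee
{
    public int Id { get; set; }
    public string FullName { get; set; }
}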

    Open/Close Principle (OCP)

    It states that a class should be open for extension but closed for modification. Usually, many changes are involved when new functionality is added to an application. Changes to existing code should be minimized, since the existing code is assumed to be unit tested already, and changes to already-written code might affect existing functionality. This is especially valuable in a production environment where the source code has already been reviewed and tested; adding new functionality by modifying that code may cause problems in what already works. A sketch follows.
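
    A sketch with hypothetical types: new report formats are added as new classes, and the existing exporter is never modified.

using System;

// The abstraction the exporter is closed against.
interface IReportFormatter
{
    string Format(string content);
}

class PlainTextFormatter : IReportFormatter
{
    public string Format(string content) { return content; }
}

// Adding HTML output means adding a class, not editing ReportExporter.
class HtmlFormatter : IReportFormatter
{
    public string Format(string content)
    {
        return "<html><body>" + content + "</body></html>";
    }
}

class ReportExporter
{
    public void Export(string content, IReportFormatter formatter)
    {
        Console.WriteLine(formatter.Format(content));
    }
}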

    Liskov Substitution Principle (LSP)



    Basically, when we design a class hierarchy, we must make sure that new derived classes just extend the base classes without replacing their functionality. Otherwise, the new classes can produce undesired effects when they are used in existing program modules.

    LSP states that if a module is using a base class, then the reference to the base class can be replaced with a derived class without affecting the functionality of the program module.

    Example: A typical example that violates LSP is a Square class that derives from a Rectangle class, assuming getter and setter methods exist for both width and height. The Square class always assumes that the width is equal to the height. If a Square object is used in a context where a Rectangle is expected, unexpected behavior may occur, because the dimensions of a Square cannot (or rather should not) be modified independently. This problem cannot be easily fixed: if we modify the setter methods in the Square class so that they preserve the Square invariant (i.e. keep the dimensions equal), then these methods weaken (violate) the postconditions of the Rectangle setters, which state that dimensions can be modified independently. If Square and Rectangle had only getter methods (i.e. they were immutable objects), then no violation of LSP could occur.
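
    The violation as a code sketch:

using System;

class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    public int Area { get { return Width * Height; } }
}

// Square keeps its own invariant by coupling the setters,
// which silently breaks the contract callers expect from Rectangle.
class Square : Rectangle
{
    public override int Width
    {
        get { return base.Width; }
        set { base.Width = value; base.Height = value; }
    }
    public override int Height
    {
        get { return base.Height; }
        set { base.Width = value; base.Height = value; }
    }
}

class Client
{
    static void Main()
    {
        Rectangle r = new Square();
        r.Width = 2;
        r.Height = 5;               // also changes Width behind the caller's back
        Console.WriteLine(r.Area);  // prints 25, not the 10 a Rectangle caller expects
    }
}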

    Interface Segregation Principle (ISP)



    It states that we should avoid tying a client class to a big interface if only a subset of that interface is really needed. Many times you see an interface with lots of methods; this is a bad design choice, since a class implementing it probably needs only a few of them. A fat interface makes it harder to understand the purpose of a component, and it also increases coupling: components that make use of such a component are exposed to more of its capabilities than is appropriate.



    The Interface Segregation Principle (ISP) tackles this problem by breaking a component's interface into functionally separate sub-interfaces. Although a component may still end up with the same set of public members, those members are separated into interfaces such that a calling component can operate on the component by referring only to the interface that concerns it. A sketch follows.
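
    A sketch with hypothetical device interfaces: a plain printer implements only what it supports, while a multifunction device opts into both sub-interfaces.

// One fat device interface would force SimplePrinter to stub out Scan().
interface IPrinter
{
    void Print(string document);
}

interface IScanner
{
    void Scan(string document);
}

class SimplePrinter : IPrinter
{
    public void Print(string document) { /* send to the printer */ }
}

class MultiFunctionDevice : IPrinter, IScanner
{
    public void Print(string document) { /* ... */ }
    public void Scan(string document) { /* ... */ }
}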



    Dependency Inversion Principle (DIP)

    In an application, we have low-level classes that implement basic, primary operations, and high-level classes that encapsulate complex logic and rely on the low-level classes. A natural way to implement such structures is to write the low-level classes first and then write the complex high-level classes in terms of them. Since the high-level classes are defined in terms of the others, this seems the logical way to do it, but it is not a flexible design.

    High-level modules should not depend on low-level modules. Both should depend on abstractions.
    Abstractions should not depend on details. Details should depend on abstractions.
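
    A sketch with hypothetical types: the high-level notification logic depends on an abstraction, and the low-level email sender is just one implementation of it.

// The abstraction both levels depend on.
interface IMessageSender
{
    void Send(string to, string message);
}

// Low-level detail.
class EmailSender : IMessageSender
{
    public void Send(string to, string message) { /* SMTP call here */ }
}

// High-level module: knows nothing about SMTP, only the abstraction.
class NotificationService
{
    private readonly IMessageSender sender;

    public NotificationService(IMessageSender sender)
    {
        this.sender = sender;
    }

    public void NotifyOrderShipped(string customer)
    {
        sender.Send(customer, "Your order has shipped.");
    }
}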

     
     

Monday 25 June 2018

DevOps Solutions | Microsoft Azure

DevOps brings together people, processes and technology, automating software delivery to provide continuous value to your users. With Azure DevOps solutions, deliver software faster and more reliably—no matter how big your IT department or what tools you are using.

Azure DevOps covers:
  • Continuous integration (CI)
  • Continuous delivery (CD)
  • Continuous deployment with CI/CD

 Realizing Continuous Integration with Cruise Control.Net (CC.Net)

Cruise Control
Cruise Control is a free and open source build scheduler implemented using the .Net Framework. It is composed of the following two components:
  1. Build Loop: The build loop is designed to run as a background process that periodically checks the defined repository for changes in the codebase, builds the code and reports the status as the final output.
  2. Reporting: Cruise Control provides a reporting application to browse the results of the builds, along with a dashboard for a visual representation of their status.
As mentioned in my last article, Cruise Control works with many source control systems (some of the better known are TFS, SVN and VSS). The input for the build process is any parseable format, so it can be integrated with any build tool (MSBuild, NAnt, Maven and so on) that produces parseable output. It is also widely used because of its extensive documentation, and it provides a mailing list.
Process of Cruise Control
  1. A developer checks in code.
  2. Cruise Control polls the version control system (the repository) to see if there are any changes in the codebase.
  3. If there are changes, Cruise Control triggers the build using the defined build tool, captures the build data and produces the build status report.
CCTray
It is a standalone application that enables developers or any other team members to check the status of the builds on their local machines, provided those machines can access the CC.Net server.

High Level Architecture of CC.Net


Setup Cruise Control.Net

First of all download CC.Net EXE (CruiseControl.NET-1.8.3.0-Setup.exe) from http://sourceforge.net/projects/ccnet/

Run that EXE as Administrator. When installing, select all three components.





After the installation is complete, you can now access the dashboard by typing in the following URL:

http://localhost:8080/
At the physical installation path you will see that three folders were created, named:

  1. webdashboard
  2. Examples
  3. server
In the "server" folder there is a file named "ccnet.config". Open this file to configure automated builds. We need to add four blocks:
  1. Project Configuration Block: Information about the project that needs to be built, there can be multiple projects present in this config
  2. SourceControl block: From where the code needs to be checked out for the build
  3. Tasks block: Steps/process of the build
  4. Publishers block: Generate output and produce reports (if needed, dispatch emails also)
To add these, you can either refer to the links above or use the sample config I am providing below:
<cruisecontrol xmlns:cb="urn:ccnet.config.builder">
    <!-- This is your CruiseControl.NET Server Configuration file. Add your projects below! -->
    <project name="Sample Application">
        <webURL>http://localhost:8080/ccnet/server/local/project/SampleApplication/ViewLatestBuildReport.aspx</webURL>
        <!--set the sourcecontrol type to subversion and point to the subversion exe-->
        <sourcecontrol type="svn">
            <executable>C:\Program Files\TortoiseSVN\bin\svn.exe</executable>
            <workingDirectory>
        <PATHOFYOURAPPLICATION>\SampleApplication
      </workingDirectory>
            <trunkUrl>
        <REPOSITORYPATH/URL>/SampleApplication
      </trunkUrl>
            <autoGetSource>true</autoGetSource>
            <username>XXXX</username>
            <password>XXXX</password>         
    </sourcecontrol>
        <triggers>
            <intervalTrigger name="Cruise Control Continuous Integration" seconds="60" buildCondition="IfModificationExists" initialSeconds="30" />         
    </triggers>
        <tasks>
            <!-- Configure MSBuild to compile the updated files -->
            <msbuild>
                <executable>c:\windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe</executable>
                <workingDirectory>
          <PATHOFYOURAPPLICATION>\SampleApplication\
        </workingDirectory>
                <projectFile>SampleApplication.sln</projectFile>
                <buildArgs>/noconsolelogger /p:Configuration=Debug /nologo</buildArgs>
                <targets></targets>
                <timeout>60</timeout>
                <logger>C:\Program Files (x86)\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll</logger>             
      </msbuild>
            <exec>
                <executable>del.bat</executable>
                <buildArgs></buildArgs>
                <buildTimeoutSeconds>30</buildTimeoutSeconds>             
      </exec>       
            <exec>
                <!--Call mstest to run the tests contained in the TestProject -->
                <executable>C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe</executable>
                <baseDirectory>
          <PATHOFYOURAPPLICATION>\\SampleApplication\Web\bin\debug
        </baseDirectory>
                <!--testcontainer: points to the DLL that contains the tests -->
                <!--runconfig: points to solutions testrunconfig that is created by vs.net, list what test to run -->
                <!--resultsfile: normally the test run log is written to the uniquely named testresults directory  -->
                <!--                   this option causes a fixed name copy of the file to be written as well -->
                <buildArgs>                            
        </buildArgs>
                <buildTimeoutSeconds>60</buildTimeoutSeconds>             
      </exec>         
    </tasks>
        <!--Publishers will be done after the build has completed-->
        <publishers>
            <buildpublisher>
                <sourceDir>
          <PATHOFYOURAPPLICATION>\\SampleApplication\Web\
        </sourceDir>
                <publishDir>
          <PATHOFYOURAPPLICATION>\\Sample Application 9091
        </publishDir>
                <useLabelSubDirectory>false</useLabelSubDirectory>
                <alwaysPublish>false</alwaysPublish>
             
      </buildpublisher>
            <merge>
                <files>
                    <file action="Merge" deleteAfterMerge="true">msbuild-results.xml</file>
                    <file action="Merge" deleteAfterMerge="true">
            <PATHOFYOURAPPLICATION>\\SampleApplication\Web\bin\Debug\testResults.trx
          </file>
                 
        </files>
             
      </merge>
            <xmllogger/>
            <email from="XXXX@XXXX.com" mailhost="smtp.XXXX.com" mailport="25" useSSL="FALSE" mailhostUsername=" XXXX@XXXX.com" includeDetails="TRUE" mailhostPassword="XXXX" >
                <users>
                    <user name="Abhishek" group="buildmaster" address=" XXXX@XXXX.com" />
                    <user name="Devs" group="developers" address=" XXXX@XXXX.com" />                 
               </users>
                <groups>
                <group name="developers">
                <notifications>
                <notificationType>Failed</notificationType>                         
            </notifications>                     
          </group>
                    <group name="buildmaster">
                    <notifications>
                    <notificationType>Always</notificationType>                         
            </notifications>                     
          </group>                 
        </groups>             
      </email>         
    </publishers>
        <modificationDelaySeconds>0</modificationDelaySeconds>     
  </project>
</cruisecontrol>
 
Once we are done with the configuration, we need to start the CruiseControl.Net server. Open Services.msc, find CruiseControl.Net Server, and start the service.



As soon as that is done, your entire configuration is ready to produce automated builds. Access the URL http://localhost:8080/ViewFarmReport.aspx and you will see the project that you configured on the dashboard.