SOLID is a set of design principles that, if implemented properly, should improve our code significantly.

As with any other principle in life, every SOLID principle can be misused and overused to the point of being counterproductive. Instead of understandable, maintainable, and flexible code, we could end up with code that's in worse shape than it would be without SOLID.

SOLID is a mnemonic acronym and each of the letters in it stands for:

  • S – Single Responsibility Principle
  • O – Open/Closed Principle
  • L – Liskov Substitution Principle
  • I – Interface Segregation Principle
  • D – Dependency Inversion Principle

We hope these articles will help you discern when and how to implement these principles the right way.

  1. Single Responsibility Principle: a class should have one, and only one, reason to change, meaning that a class should have only one job.
  2. Open/Closed Principle: you should be able to extend a class's behavior without modifying it.
  3. Liskov Substitution Principle: if a module uses a base class, the reference to that base class can be replaced with a derived class without affecting the functionality of the module.
  4. Interface Segregation Principle: make fine-grained interfaces that are client specific.
  5. Dependency Inversion Principle: depend on abstractions, not on concrete implementations.

We are going to show you, through an example, how to create code that abides by the Single Responsibility Principle (SRP). We will start with code that isn't SRP compliant and then refactor it to be in accordance with SRP. To finish our example, we will add a bit of reusability to our code, because we don't want to repeat ourselves while coding.

We are going to start with a simple console application.

Imagine we have a task to create a WorkReport feature that, once created, can be saved to a file and perhaps uploaded to the cloud or used for some other purpose.

So we are going to start with a simple model class:

public class WorkReportEntry
{
    public string ProjectCode { get; set; }
    public string ProjectName { get; set; }
    public int SpentHours { get; set; }
}

The next step is creating a WorkReport class which will handle all the required features for our project:

public class WorkReport
{
    private readonly List<WorkReportEntry> _entries;

    public WorkReport()
    {
        _entries = new List<WorkReportEntry>();
    }

    public void AddEntry(WorkReportEntry entry) => _entries.Add(entry);
    public void RemoveEntryAt(int index) => _entries.RemoveAt(index);

    public override string ToString() =>
        string.Join(Environment.NewLine,
            _entries.Select(x => $"Code: {x.ProjectCode}, Name: {x.ProjectName}, Hours: {x.SpentHours}"));
}

In this class, we keep track of our work report entries by adding them to and removing them from a list. Furthermore, we override the ToString() method to adjust its output to our requirements.

Because we have our WorkReport class, it seems quite natural to add additional features to it, like saving to a file:

public class WorkReport
{
    private readonly List<WorkReportEntry> _entries;

    public WorkReport()
    {
        _entries = new List<WorkReportEntry>();
    }

    public void AddEntry(WorkReportEntry entry) => _entries.Add(entry);
    public void RemoveEntryAt(int index) => _entries.RemoveAt(index);

    public void SaveToFile(string directoryPath, string fileName)
    {
        if (!Directory.Exists(directoryPath))
        {
            Directory.CreateDirectory(directoryPath);
        }

        File.WriteAllText(Path.Combine(directoryPath, fileName), ToString());
    }

    public override string ToString() =>
        string.Join(Environment.NewLine,
            _entries.Select(x => $"Code: {x.ProjectCode}, Name: {x.ProjectName}, Hours: {x.SpentHours}"));
}

Problems With This Code

We could add even more features to this class, like Load or UploadToCloud methods, because they are all related to our WorkReport. But just because we can doesn't mean we should.

Right now, there is one issue with the WorkReport class.

It has more than one responsibility.

Its job is not only to keep track of our work report entries but also to save the entire work report to a file. This means we are violating the SRP, and our class has more than one reason to change in the future.

The first reason to change this class is if we want to modify the way we keep track of our entries. But if we want to save the file in a different way, that is an entirely new reason to change the class. And imagine what this class would look like if we added even more functionality to it. We would have many unrelated pieces of code in a single class.

So, in order to avoid that, let’s refactor the code.

Refactoring Towards SRP

The first thing we need to do is separate the part of our code that is unlike the others. In our case, that is obviously the SaveToFile method, so we are going to move it to another, more appropriate class:

public class FileSaver
{
    public void SaveToFile(string directoryPath, string fileName, WorkReport report)
    {
        if (!Directory.Exists(directoryPath))
        {
            Directory.CreateDirectory(directoryPath);
        }

        File.WriteAllText(Path.Combine(directoryPath, fileName), report.ToString());
    }
}

public class WorkReport
{
    private readonly List<WorkReportEntry> _entries;

    public WorkReport()
    {
        _entries = new List<WorkReportEntry>();
    }

    public void AddEntry(WorkReportEntry entry) => _entries.Add(entry);
    public void RemoveEntryAt(int index) => _entries.RemoveAt(index);

    public override string ToString() =>
        string.Join(Environment.NewLine,
            _entries.Select(x => $"Code: {x.ProjectCode}, Name: {x.ProjectName}, Hours: {x.SpentHours}"));
}

In this case, we have separated the responsibilities into two classes. The WorkReport class is now responsible for keeping track of work report entries, and the FileSaver class is responsible for saving a file.

Having done this, we have separated the concerns of each class, making them more readable and maintainable as well. As a result, if we want to change how we save a file, we have only one reason to do it and one place to do it: the FileSaver class.

We can check that everything is working as it is supposed to:

class Program {
     static void Main(string[] args) {
         var report = new WorkReport();
         report.AddEntry(new WorkReportEntry {
             ProjectCode = "123Ds",
             ProjectName = "Project1",
             SpentHours = 5
         });
         report.AddEntry(new WorkReportEntry {
             ProjectCode = "987Fc",
             ProjectName = "Project2",
             SpentHours = 3
         });
         Console.WriteLine(report.ToString());
         var saver = new FileSaver();
         saver.SaveToFile(@"Reports", "WorkReport.txt", report);
     }
 }

Making the Code Even Better

If we look at our SaveToFile method, we see that it does its job, which is saving a work report to a file. But can it do it even better? This method is tightly coupled with the WorkReport class. What if we want to create a Scheduler class that keeps track of its scheduled tasks? We would still like to save it to a file.

Well, in that case, we are going to make some changes to our code:

public interface IEntryManager<T>
{
    void AddEntry(T entry);
    void RemoveEntryAt(int index);
}

The only change to the WorkReport class is to implement this interface:

public class WorkReport : IEntryManager<WorkReportEntry>

Finally, we have to change the SaveToFile method signature:

public void SaveToFile<T>(string directoryPath, string fileName, IEntryManager<T> workReport)
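Put together, the refactored FileSaver might look like the sketch below (the StringEntryManager class is a hypothetical helper added here only to demonstrate the generic saver; the interface is repeated so the sketch is self-contained):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// The interface from above, repeated so this sketch compiles on its own.
public interface IEntryManager<T>
{
    void AddEntry(T entry);
    void RemoveEntryAt(int index);
}

// FileSaver no longer knows anything about WorkReport specifically;
// it accepts any entry manager and saves its string representation.
public class FileSaver
{
    public void SaveToFile<T>(string directoryPath, string fileName, IEntryManager<T> entryManager)
    {
        if (!Directory.Exists(directoryPath))
        {
            Directory.CreateDirectory(directoryPath);
        }

        File.WriteAllText(Path.Combine(directoryPath, fileName), entryManager.ToString());
    }
}

// A minimal, hypothetical IEntryManager<T> implementation used only to
// demonstrate that the saver works with any implementor of the interface.
public class StringEntryManager : IEntryManager<string>
{
    private readonly List<string> _items = new List<string>();

    public void AddEntry(string entry) => _items.Add(entry);
    public void RemoveEntryAt(int index) => _items.RemoveAt(index);

    public override string ToString() => string.Join(Environment.NewLine, _items);
}
```

Note that the saver relies on the entry manager's ToString() override; any class that implements IEntryManager<T> and overrides ToString() can now be saved the same way.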

After these modifications, we are going to have the same result. But now, if we get a task to implement a Scheduler, it is going to be quite simple:

public class ScheduleTask
{
    public int TaskId { get; set; }
    public string Content { get; set; }
    public DateTime ExecuteOn { get; set; }
}

public class Scheduler : IEntryManager<ScheduleTask>
{
    private readonly List<ScheduleTask> _scheduleTasks;

    public Scheduler()
    {
        _scheduleTasks = new List<ScheduleTask>();
    }

    public void AddEntry(ScheduleTask entry) => _scheduleTasks.Add(entry);
    public void RemoveEntryAt(int index) => _scheduleTasks.RemoveAt(index);

    public override string ToString() =>
        string.Join(Environment.NewLine,
            _scheduleTasks.Select(x => $"Task with id: {x.TaskId} with content: {x.Content} is going to be executed on: {x.ExecuteOn}"));
}
class Program {
     static void Main(string[] args) {
         var report = new WorkReport();
         report.AddEntry(new WorkReportEntry {
             ProjectCode = "123Ds",
             ProjectName = "Project1",
             SpentHours = 5
         });
         report.AddEntry(new WorkReportEntry {
             ProjectCode = "987Fc",
             ProjectName = "Project2",
             SpentHours = 3
         });
         var scheduler = new Scheduler();
         scheduler.AddEntry(new ScheduleTask {
             TaskId = 1,
             Content = "Do something now.",
             ExecuteOn = DateTime.Now.AddDays(5)
         });
         scheduler.AddEntry(new ScheduleTask {
             TaskId = 2,
             Content = "Don't forget to…",
             ExecuteOn = DateTime.Now.AddDays(2)
         });
         Console.WriteLine(report.ToString());
         Console.WriteLine(scheduler.ToString());
         var saver = new FileSaver();
         saver.SaveToFile(@"Reports", "WorkReport.txt", report);
         saver.SaveToFile(@"Schedulers", "Schedule.txt", scheduler);
     }
 }

After we execute this code, we will have both our files saved in their required locations.

We are going to leave it at that. Now every class we have is responsible for one thing and one thing only.

Open Closed Principle

The Open Closed Principle (OCP) is the SOLID principle which states that software entities (classes or methods) should be open for extension but closed for modification.

But what does this really mean?

Basically, we should strive to write code which doesn't require modification every time a customer changes their requirements. Providing a solution where we can extend the behavior of a class (to cover that additional customer request) without modifying the class should be our goal most of the time.

Let’s imagine that we have a task where we need to calculate the total cost of all the developer salaries in a single company. Of course, we are going to make this example simple and focus on the required topic.

To get started, we are going to create the model class first:

public class DeveloperReport
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Level { get; set; }
    public int WorkingHours { get; set; }
    public double HourlyRate { get; set; }
}

Once we’ve created our model, we can transition to the salary calculation feature:

public class SalaryCalculator
{
    private readonly IEnumerable<DeveloperReport> _developerReports;

    public SalaryCalculator(List<DeveloperReport> developerReports)
    {
        _developerReports = developerReports;
    }

    public double CalculateTotalSalaries()
    {
        double totalSalaries = 0D;
        foreach (var devReport in _developerReports)
        {
            totalSalaries += devReport.HourlyRate * devReport.WorkingHours;
        }

        return totalSalaries;
    }
}

Now, all we have to do is provide some data for this class and we are going to have our total costs calculated:

static void Main(string[] args) {
     var devReports = new List < DeveloperReport > {
         new DeveloperReport {
             Id = 1,
             Name = "Dev1",
             Level = "Senior developer",
             HourlyRate = 30.5,
             WorkingHours = 160
         },
         new DeveloperReport {
             Id = 2,
             Name = "Dev2",
             Level = "Junior developer",
             HourlyRate = 20,
             WorkingHours = 150
         },
         new DeveloperReport {
             Id = 3,
             Name = "Dev3",
             Level = "Senior developer",
             HourlyRate = 30.5,
             WorkingHours = 180
         }
     };
     var calculator = new SalaryCalculator(devReports);
     Console.WriteLine($"Sum of all the developer salaries is {calculator.CalculateTotalSalaries()} dollars");
 }
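As a quick sanity check, we can work out the expected total for these three reports by hand:

```csharp
using System;

// 30.5 * 160 + 20 * 150 + 30.5 * 180
// = 4880 + 3000 + 5490
// = 13370
double expectedTotal = 30.5 * 160 + 20 * 150 + 30.5 * 180;
Console.WriteLine(expectedTotal); // prints 13370
```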

This is working great, but now our boss comes to our office and says that we need a different calculation for the senior and junior developers. The senior developers should get a 20% bonus on their salary.

Of course, to satisfy this requirement, we are going to modify our CalculateTotalSalaries method like this:

public double CalculateTotalSalaries()
{
    double totalSalaries = 0D;
    foreach (var devReport in _developerReports)
    {
        if (devReport.Level == "Senior developer")
        {
            totalSalaries += devReport.HourlyRate * devReport.WorkingHours * 1.2;
        }
        else
        {
            totalSalaries += devReport.HourlyRate * devReport.WorkingHours;
        }
    }

    return totalSalaries;
}

Even though this solution is going to give us the correct result, it is not an optimal one.

Why is that?

Mainly, because we had to modify the behavior of an existing class that worked perfectly. Another thing is that if our boss comes again and asks us to modify the calculation for the junior devs as well, we would have to change our class again. This is totally against what OCP stands for.

It is obvious that we need to change something in our solution, so, let’s do it.

OCP Implemented

To create code that abides by the Open Closed Principle, we are going to create an abstract class first:

public abstract class BaseSalaryCalculator
{
    protected DeveloperReport DeveloperReport { get; private set; }

    public BaseSalaryCalculator(DeveloperReport developerReport)
    {
        DeveloperReport = developerReport;
    }

    public abstract double CalculateSalary();
}

Next, we are going to create two classes which inherit from the BaseSalaryCalculator class. Because our calculation obviously depends on the developer's level, we are going to create our new classes accordingly:

public class SeniorDevSalaryCalculator : BaseSalaryCalculator
{
    public SeniorDevSalaryCalculator(DeveloperReport report) : base(report) { }

    public override double CalculateSalary() => DeveloperReport.HourlyRate * DeveloperReport.WorkingHours * 1.2;
}

public class JuniorDevSalaryCalculator : BaseSalaryCalculator
{
    public JuniorDevSalaryCalculator(DeveloperReport developerReport) : base(developerReport) { }

    public override double CalculateSalary() => DeveloperReport.HourlyRate * DeveloperReport.WorkingHours;
}

Excellent. Now we can modify the SalaryCalculator class:

public class SalaryCalculator
{
    private readonly IEnumerable<BaseSalaryCalculator> _developerCalculation;

    public SalaryCalculator(IEnumerable<BaseSalaryCalculator> developerCalculation)
    {
        _developerCalculation = developerCalculation;
    }

    public double CalculateTotalSalaries()
    {
        double totalSalaries = 0D;
        foreach (var devCalc in _developerCalculation)
        {
            totalSalaries += devCalc.CalculateSalary();
        }

        return totalSalaries;
    }
}

This looks much better, because we won't have to change any of our current classes if our boss comes along with another request about intern payment calculation, or any other calculation.

All we have to do now is add another class with its own calculation logic. So basically, our SalaryCalculator class is now closed for modification and open for extension, which is exactly what OCP states.
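For instance, if that intern calculation request ever arrives, we could support it with a single new class. The sketch below assumes, purely for illustration, that interns are paid at half the hourly rate; the supporting classes from above are repeated so it compiles on its own:

```csharp
// DeveloperReport and BaseSalaryCalculator as defined earlier,
// repeated here so this sketch is self-contained.
public class DeveloperReport
{
    public string Name { get; set; }
    public int WorkingHours { get; set; }
    public double HourlyRate { get; set; }
}

public abstract class BaseSalaryCalculator
{
    protected DeveloperReport DeveloperReport { get; private set; }

    public BaseSalaryCalculator(DeveloperReport developerReport)
    {
        DeveloperReport = developerReport;
    }

    public abstract double CalculateSalary();
}

// Hypothetical extension: interns paid at half the hourly rate.
// The 0.5 factor is an assumption for illustration, not an actual requirement.
public class InternDevSalaryCalculator : BaseSalaryCalculator
{
    public InternDevSalaryCalculator(DeveloperReport developerReport) : base(developerReport) { }

    public override double CalculateSalary() =>
        DeveloperReport.HourlyRate * DeveloperReport.WorkingHours * 0.5;
}
```

Nothing in SalaryCalculator or the existing calculators has to change; we simply pass an InternDevSalaryCalculator instance along with the others.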

To finish this example, let’s modify the Program.cs class:

class Program {
     static void Main(string[] args) {
         var devCalculations = new List < BaseSalaryCalculator > {
             new SeniorDevSalaryCalculator(new DeveloperReport {
                 Id = 1,
                 Name = "Dev1",
                 Level = "Senior developer",
                 HourlyRate = 30.5,
                 WorkingHours = 160
             }),
             new JuniorDevSalaryCalculator(new DeveloperReport {
                 Id = 2,
                 Name = "Dev2",
                 Level = "Junior developer",
                 HourlyRate = 20,
                 WorkingHours = 150
             }),
             new SeniorDevSalaryCalculator(new DeveloperReport {
                 Id = 3,
                 Name = "Dev3",
                 Level = "Senior developer",
                 HourlyRate = 30.5,
                 WorkingHours = 180
             })
         };
         var calculator = new SalaryCalculator(devCalculations);
         Console.WriteLine($"Sum of all the developer salaries is {calculator.CalculateTotalSalaries()} dollars");
     }
 }
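With the bonus applied, the expected total changes accordingly; we can verify the arithmetic by hand:

```csharp
using System;

// Senior salaries get the 1.2 factor:
// 30.5 * 160 * 1.2 + 20 * 150 + 30.5 * 180 * 1.2
// = 5856 + 3000 + 6588
// = 15444
double expectedTotal = 30.5 * 160 * 1.2 + 20 * 150 + 30.5 * 180 * 1.2;
Console.WriteLine(expectedTotal);
```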

Awesome. We have refactored our code in accordance with the Open Closed Principle.

Liskov Substitution Principle

The Liskov Substitution Principle (LSP) states that child class objects should be able to replace parent class objects without compromising application integrity. What this essentially means is that we should put in the effort to create derived classes whose objects can replace objects of the base class without modifying its behavior. If we don’t, our application might end up broken.

Does this make sense to you? To make things clear, we are going to use a simple "Sum Calculator" example, which will help us understand how to implement the LSP better.

In this example, we are going to have an array of numbers and a base functionality to sum all the numbers from that array. But let’s say we need to sum just even or just odd numbers.

How would we implement that? Let’s see one way to do it:

public class SumCalculator
{
    protected readonly int[] _numbers;

    public SumCalculator(int[] numbers)
    {
        _numbers = numbers;
    }

    public int Calculate() => _numbers.Sum();
}

public class EvenNumbersSumCalculator : SumCalculator
{
    public EvenNumbersSumCalculator(int[] numbers) : base(numbers) { }

    public new int Calculate() => _numbers.Where(x => x % 2 == 0).Sum();
}

Now if we test this solution, whether we calculate the sum of all the numbers or the sum of just the even numbers, we are going to get the correct result:

class Program
{
    static void Main(string[] args)
    {
        var numbers = new int[] { 5, 7, 9, 8, 1, 6, 4 };

        SumCalculator sum = new SumCalculator(numbers);
        Console.WriteLine($"The sum of all the numbers: {sum.Calculate()}");
        Console.WriteLine();

        EvenNumbersSumCalculator evenSum = new EvenNumbersSumCalculator(numbers);
        Console.WriteLine($"The sum of all the even numbers: {evenSum.Calculate()}");
    }
}

As we can see, this is working just fine. But what is wrong with this solution then?

Why are we trying to fix it?

Well, as we all know, if a child class inherits from a parent class, then the child class is a parent class (the is-a relationship). With that in mind, we should be able to store a reference to an EvenNumbersSumCalculator in a SumCalculator variable and nothing should change. So, let’s check that out:

SumCalculator evenSum = new EvenNumbersSumCalculator(numbers);
Console.WriteLine($"The sum of all the even numbers: {evenSum.Calculate()}");

As we can see, we are not getting the expected result because our variable evenSum is of type SumCalculator, which is the base class. This means that the Calculate method from the SumCalculator will be executed. So, this is obviously not right, because our child class is not behaving as a substitute for the parent class.

Luckily, the solution is quite simple. All we have to do is make small modifications to both of our classes:

public class SumCalculator
{
    protected readonly int[] _numbers;

    public SumCalculator(int[] numbers)
    {
        _numbers = numbers;
    }

    public virtual int Calculate() => _numbers.Sum();
}

public class EvenNumbersSumCalculator : SumCalculator
{
    public EvenNumbersSumCalculator(int[] numbers) : base(numbers) { }

    public override int Calculate() => _numbers.Where(x => x % 2 == 0).Sum();
}

As a result, when we start our solution, everything works as expected and the sum of even numbers is 18 again.

So, let’s explain this behavior. If we have a child object reference stored in a parent object variable and we call the Calculate method, the compiler would normally bind the call to the Calculate method of the parent class. But because the Calculate method is now declared as virtual and is overridden in the child class, the method from the child class is used instead.
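The difference between hiding a method (the new modifier, as in our first version) and overriding it (virtual/override) can be sketched in a few lines; the class names here are made up for illustration:

```csharp
using System;

public class Base
{
    public virtual string Overridden() => "base";
    public string Hidden() => "base";
}

public class Derived : Base
{
    // override participates in dynamic dispatch: the runtime type decides
    public override string Overridden() => "derived";

    // new only hides the base method: the compile-time type decides
    public new string Hidden() => "derived";
}

public class DispatchDemo
{
    public static void Run()
    {
        Base b = new Derived();
        Console.WriteLine(b.Overridden()); // prints "derived"
        Console.WriteLine(b.Hidden());     // prints "base"
    }
}
```

This is exactly why our first EvenNumbersSumCalculator, which used method hiding, fell back to the base Calculate when accessed through a SumCalculator variable.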

Implementing the Liskov Substitution Principle

Still, the behavior of our derived class has changed and it can’t replace the base class. So we need to upgrade this solution by introducing the Calculator abstract class:

public abstract class Calculator
{
    protected readonly int[] _numbers;

    public Calculator(int[] numbers)
    {
        _numbers = numbers;
    }

    public abstract int Calculate();
}

Then we have to change our other classes:

public class SumCalculator : Calculator
{
    public SumCalculator(int[] numbers) : base(numbers) { }

    public override int Calculate() => _numbers.Sum();
}

public class EvenNumbersSumCalculator : Calculator
{
    public EvenNumbersSumCalculator(int[] numbers) : base(numbers) { }

    public override int Calculate() => _numbers.Where(x => x % 2 == 0).Sum();
}

Excellent. Now we can start using these classes:

class Program
{
    static void Main(string[] args)
    {
        var numbers = new int[] { 5, 7, 9, 8, 1, 6, 4 };

        Calculator sum = new SumCalculator(numbers);
        Console.WriteLine($"The sum of all the numbers: {sum.Calculate()}");
        Console.WriteLine();

        Calculator evenSum = new EvenNumbersSumCalculator(numbers);
        Console.WriteLine($"The sum of all the even numbers: {evenSum.Calculate()}");
    }
}

We again have the same result: 40 for all the numbers and 18 for the even numbers. But now we can see that we can store any subclass reference in a base class variable and the behavior won’t change, which is the goal of LSP.

Interface Segregation Principle

The Interface Segregation Principle (ISP) states that no client should be forced to depend on methods it does not use. That is the basic definition, which we can read in many different articles, but what does it really mean?

Let’s imagine that we are starting a new feature on our project. We start with some code, and from that code an interface emerges with the required declarations. Soon after, the customer decides they want another feature, similar to the previous one, and we decide to implement the same interface in another class. But now, as a consequence, we don’t need all the methods from that interface, just some of them. Of course, we still have to implement all of them, even though we shouldn’t have to, and that’s the problem, and where the ISP helps us a lot.

Basically, the ISP states that we should reduce our code to the smallest required implementation, creating interfaces with only the required declarations. As a result, an interface with a lot of different declarations should be split up into smaller interfaces.

Let’s see how this looks in an example.

There are vehicles that we can drive, and there are those we can fly. But there are also cars we can both drive and fly (yes, those are on sale). So, we want to create a code structure which supports all the actions for a single vehicle, and we are going to start with an interface:

public interface IVehicle
{
    void Drive();
    void Fly();
}

Now if we want to develop a behavior for a multifunctional car, this interface is going to be perfect for us:

public class MultiFunctionalCar : IVehicle
{
    public void Drive()
    {
        // actions to start driving a car
        Console.WriteLine("Drive a multifunctional car");
    }

    public void Fly()
    {
        // actions to start flying
        Console.WriteLine("Fly a multifunctional car");
    }
}

This is working great. Our interface covers all the required actions.

But now, we want to implement the Car class and the Airplane class as well:

public class Car : IVehicle
{
    public void Drive()
    {
        // actions to drive a car
        Console.WriteLine("Driving a car");
    }

    public void Fly()
    {
        throw new NotImplementedException();
    }
}

public class Airplane : IVehicle
{
    public void Drive()
    {
        throw new NotImplementedException();
    }

    public void Fly()
    {
        // actions to fly a plane
        Console.WriteLine("Flying a plane");
    }
}

Now we can see the problem with the IVehicle interface. Each class actually requires only one of its declarations; the other, unneeded method is implemented just to throw an exception. That is a bad idea, because our code should do something rather than just throw exceptions. Furthermore, we would have to put in additional effort to document our class so that users know why they shouldn’t use the unimplemented method. A really bad idea.

So, in order to fix this problem, we are going to refactor our code and write it in accordance with the ISP.

Implementing the ISP In the Current Solution

The first thing we are going to do is to divide our IVehicle interface:

public interface ICar
{
    void Drive();
}

public interface IAirplane
{
    void Fly();
}

As a result, our classes can implement only the methods they need:

public class Car : ICar
{
    public void Drive()
    {
        // actions to drive a car
        Console.WriteLine("Driving a car");
    }
}

public class Airplane : IAirplane
{
    public void Fly()
    {
        // actions to fly a plane
        Console.WriteLine("Flying a plane");
    }
}

public class MultiFunctionalCar : ICar, IAirplane
{
    public void Drive()
    {
        // actions to start driving a car
        Console.WriteLine("Drive a multifunctional car");
    }

    public void Fly()
    {
        // actions to start flying
        Console.WriteLine("Fly a multifunctional car");
    }
}

We can even use a higher-level interface in a situation where a single class implements more than one interface:

public interface IMultiFunctionalVehicle : ICar, IAirplane
{
}

Once we have our higher level interface, we can implement it in different ways. The first one is to implement the required methods:

public class MultiFunctionalCar : IMultiFunctionalVehicle
{
    public void Drive()
    {
        // actions to start driving a car
        Console.WriteLine("Drive a multifunctional car");
    }

    public void Fly()
    {
        // actions to start flying
        Console.WriteLine("Fly a multifunctional car");
    }
}

Or if we already have implemented the Car class and the Airplane class, we can use them inside our class by using the decorator pattern:

public class MultiFunctionalCar : IMultiFunctionalVehicle
{
    private readonly ICar _car;
    private readonly IAirplane _airplane;

    public MultiFunctionalCar(ICar car, IAirplane airplane)
    {
        _car = car;
        _airplane = airplane;
    }

    public void Drive()
    {
        _car.Drive();
    }

    public void Fly()
    {
        _airplane.Fly();
    }
}

We can see from the example above that a smaller interface is a lot easier to implement, because we don’t have to implement methods that our class doesn’t need.

Of course, due to the simplicity of our example, we could make a single interface with a single method inside it. But in real-world projects, we often end up with an interface with multiple methods, which is perfectly normal as long as those methods are highly related to each other. That way, we make sure our class needs all of these actions to complete its task.

Another benefit is that the Interface Segregation Principle increases the readability and maintainability of our code. We reduce our class implementations to only the required actions, without any additional or unnecessary code.

Dependency Inversion Principle

The basic idea behind the Dependency Inversion Principle is that we should create our higher-level modules, with their complex logic, in such a way that they are reusable and unaffected by changes in the lower-level modules of our application. To achieve this kind of behavior in our apps, we introduce abstraction, which decouples the higher-level modules from the lower-level ones.

Having this idea in mind, the Dependency Inversion Principle states that:

  • High-level modules should not depend on low-level modules, both should depend on abstractions.
  • Abstractions should not depend on details. Details should depend on abstractions.

We are going to make all of this easier to understand with an example and additional explanations.

The high-level modules describe those operations in our application that have a more abstract nature and contain more complex logic. These modules orchestrate the low-level modules in our application.

The low-level modules contain more specific individual components focusing on details and smaller parts of the application. These modules are used inside the high-level modules in our app.

What we need to understand when talking about DIP and these modules is that both the high-level and the low-level modules depend on abstractions. We can find different opinions about whether DIP inverts the dependency between high and low-level modules or not. Some hold the first opinion and others prefer the second, but the common ground is that DIP creates a decoupled structure between high and low-level modules by introducing abstraction between them.

Example Which Violates DIP

Let’s start by creating two enumerations and one model class:

public enum Gender
{
    Male,
    Female
}

public enum Position
{
    Administrator,
    Manager,
    Executive
}

public class Employee
{
    public string Name { get; set; }
    public Gender Gender { get; set; }
    public Position Position { get; set; }
}

To continue, we are going to create one low-level class which keeps (in a simplified way) track of our employees:

public class EmployeeManager
{
    private readonly List<Employee> _employees;

    public EmployeeManager()
    {
        _employees = new List<Employee>();
    }

    public void AddEmployee(Employee employee)
    {
        _employees.Add(employee);
    }
}

Furthermore, we are going to create a higher-level class to perform some kind of statistical analysis on our employees:

public class EmployeeStatistics
{
    private readonly EmployeeManager _empManager;

    public EmployeeStatistics(EmployeeManager empManager)
    {
        _empManager = empManager;
    }

    public int CountFemaleManagers()
    {
        // logic goes here
    }
}

With this kind of structure in our EmployeeManager class, we can’t make use of the _employees list in the EmployeeStatistics class, so the obvious solution is to expose that private list:

public class EmployeeManager
{
    private readonly List<Employee> _employees;

    public EmployeeManager()
    {
        _employees = new List<Employee>();
    }

    public void AddEmployee(Employee employee)
    {
        _employees.Add(employee);
    }

    public List<Employee> Employees => _employees;
}

Now, we can complete the CountFemaleManagers method logic:

public class EmployeeStatistics
{
    private readonly EmployeeManager _empManager;

    public EmployeeStatistics(EmployeeManager empManager)
    {
        _empManager = empManager;
    }

    public int CountFemaleManagers() =>
        _empManager.Employees.Count(emp => emp.Gender == Gender.Female && emp.Position == Position.Manager);
}

Even though this will work just fine, it is not what we consider good code, and it violates the DIP.

How is that? Well, first of all, our EmployeeStatistics class is strongly coupled to the EmployeeManager class: we can't pass anything other than an EmployeeManager object to the EmployeeStatistics constructor. The second problem is that we are using a public property of the low-level class inside the high-level class. As a consequence, our low-level class can't change the way it keeps track of employees. If we wanted it to use a dictionary instead of a list, we would have to change the EmployeeStatistics class as well. And that's something we want to avoid if possible.

What we want is to decouple our two classes so that both of them depend on an abstraction.

So, the first thing we need to do is to create the IEmployeeSearchable interface:

public interface IEmployeeSearchable
{
    IEnumerable<Employee> GetEmployeesByGenderAndPosition(Gender gender, Position position);
}

Then, let’s modify the EmployeeManager class:

public class EmployeeManager : IEmployeeSearchable
{
    private readonly List<Employee> _employees;

    public EmployeeManager()
    {
        _employees = new List<Employee>();
    }

    public void AddEmployee(Employee employee)
    {
        _employees.Add(employee);
    }

    public IEnumerable<Employee> GetEmployeesByGenderAndPosition(Gender gender, Position position) =>
        _employees.Where(emp => emp.Gender == gender && emp.Position == position);
}

Finally, we can modify the EmployeeStatistics class:

public class EmployeeStatistics
{
    private readonly IEmployeeSearchable _emp;

    public EmployeeStatistics(IEmployeeSearchable emp)
    {
        _emp = emp;
    }

    public int CountFemaleManagers() =>
        _emp.GetEmployeesByGenderAndPosition(Gender.Female, Position.Manager).Count();
}

This looks much better now and complies with the DIP. Our EmployeeStatistics class no longer depends on the lower-level class, and the EmployeeManager class is free to change the way it stores employees.
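To see the benefit, here is a sketch (the DictionaryEmployeeManager name is ours, just for illustration) of a different low-level class that stores employees in a dictionary instead of a list. Because EmployeeStatistics depends only on the IEmployeeSearchable abstraction, it works with this class without any modification:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The enums, model and interface from the article, repeated so the sketch compiles standalone.
public enum Gender { Male, Female }
public enum Position { Administrator, Manager, Executive }

public class Employee
{
    public string Name { get; set; }
    public Gender Gender { get; set; }
    public Position Position { get; set; }
}

public interface IEmployeeSearchable
{
    IEnumerable<Employee> GetEmployeesByGenderAndPosition(Gender gender, Position position);
}

// An alternative low-level class that keeps employees in a dictionary keyed by name.
// EmployeeStatistics doesn't care, because it only knows about IEmployeeSearchable.
public class DictionaryEmployeeManager : IEmployeeSearchable
{
    private readonly Dictionary<string, Employee> _employees = new Dictionary<string, Employee>();

    public void AddEmployee(Employee employee) => _employees[employee.Name] = employee;

    public IEnumerable<Employee> GetEmployeesByGenderAndPosition(Gender gender, Position position) =>
        _employees.Values.Where(emp => emp.Gender == gender && emp.Position == position);
}
```

Swapping this class in requires no change to the high-level statistics code, which is exactly the decoupling the DIP is after.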

Finally, we can check the result by modifying the Program class:

class Program
{
    static void Main(string[] args)
    {
        var empManager = new EmployeeManager();
        empManager.AddEmployee(new Employee
        {
            Name = "Leen",
            Gender = Gender.Female,
            Position = Position.Manager
        });
        empManager.AddEmployee(new Employee
        {
            Name = "Mike",
            Gender = Gender.Male,
            Position = Position.Administrator
        });

        var stats = new EmployeeStatistics(empManager);
        Console.WriteLine($"Number of female managers in our company is: {stats.CountFemaleManagers()}");
    }
}

The Dependency Inversion Principle is the last of the SOLID principles. It introduces an abstraction between the high-level and low-level components inside our project to remove the dependencies between them.

In this article, we are going to learn how to set up GraphQL in a .NET Core application. We are going to use different third-party libraries to make this integration easier and will explain in detail how to use the GraphQL elements (Type, Query, and Schema) to complete the integration of GraphQL into a .NET Core application.

About GraphQL and How it’s Different from REST

GraphQL is a query language. It executes queries by using type systems which we define for our data. GraphQL isn't tied to any specific language or database; quite the opposite, it adapts to our code and our data.

Let’s talk a bit about how GraphQL differs from REST:

  • GraphQL requires fewer roundtrips to and from the server to fetch all the required data for our view or template page. With REST, we have to visit several endpoints (api/subjects, api/professors, api/students, …) to get all the data we need for our page, but that's not the case with GraphQL. With GraphQL, we create only one query which calls several resolvers (functions) on the server side and returns all the data from different resources in a single request.
  • With REST, as our application grows, the number of endpoints grows as well, and that requires more and more time to maintain. But with GraphQL, we have only one endpoint, api/graphql, and that is all.
  • By using GraphQL, we never face the problem of getting too much or too little data in our response. That's because we define our queries with the exact fields we want in return. That way, we always get exactly what we have requested. So, if we send a query like this one:

query OwnersQuery {
  owners {
    name
    accounts {
      type
    }
  }
}

We are 100% sure that we will get this response back:

{
  "data": {
    "owners": [
      {
        "name": "John Doe",
        "accounts": [
          { "type": "Cash" },
          { "type": "Savings" }
        ]
      }
    ]
  }
}

With REST, this is not the case. Sometimes we get more than we need and sometimes less; it depends on how the actions on a certain endpoint are implemented.

These are the most important differences between REST and GraphQL. In the next article, we will create a project based on our microservices article.

Anonymous Classes

An anonymous class is a class that does not have a name. This sounds strange, but sometimes an anonymous class can be useful, especially when using query expressions.

Let’s see what we mean by that.

We can create an object of an anonymous class simply by using the new keyword in front of curly braces:

myAnonymousObj = new { Name = "John", Age = 32 };

This object contains two properties: Name and Age. The compiler implicitly assigns types to the properties based on the types of their values. So, basically, the Name property will be of the string type and the Age property of the int type.

But now we can ask: what type is myAnonymousObj? And the answer is that we don't know, which is the point of anonymous classes. In C# this is not a problem, because we can declare our object as an implicitly typed variable by using the var keyword:

var myAnonymousObj = new { Name = "John", Age = 32 };

The var keyword causes the compiler to create a variable of the same type as the expression that we use to initialize that object. So let’s see a couple of examples of well-known types:

var number = 15; // the number is of type int
var word = "example"; //the word is of type string
var money = 987.32; //the money is of type double

We can access the properties of our anonymous object the same way we did with regular objects:

Console.WriteLine($"The name of myAnonymousObj is {myAnonymousObj.Name}, the age is {myAnonymousObj.Age}");
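This is where anonymous classes pair naturally with query expressions: we can project just the properties we need without declaring a class for the result. A small sketch (the Person class and the sample data are made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string City { get; set; }
}

public static class AnonymousDemo
{
    // Builds an anonymous object and reads its properties back.
    public static string Describe(string name, int age)
    {
        var person = new { Name = name, Age = age };
        return $"{person.Name} ({person.Age})";
    }

    public static void Main()
    {
        var people = new List<Person>
        {
            new Person { Name = "John", Age = 32, City = "London" },
            new Person { Name = "Jane", Age = 28, City = "Paris" }
        };

        // A query expression that projects only Name and Age into an
        // anonymous type - the compiler generates the type for us.
        var namesAndAges = from p in people
                           select new { p.Name, p.Age };

        foreach (var item in namesAndAges)
        {
            Console.WriteLine($"{item.Name} is {item.Age} years old");
        }
    }
}
```

Because the anonymous type never leaves the method, we get a lightweight projection without cluttering the codebase with one-off classes.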

Nullable Types

The null value is useful for initializing reference types. So, it is logical that we can't assign the null value to a value type, because null is itself a reference.

That being said, we can see that the following statement will produce a compiler error:

int number = null;

However, C# provides a modifier that we can use to declare a value type as a nullable value type. We use the ? sign to indicate that a value type is nullable:

int? number = null;

We can still assign an integer value to our nullable value type:

int? number = null;
int another = 200; 
number = 345;
number = another;

This is all valid. But if we try to assign the value of our nullable variable to a variable of the plain int type, we are going to have a problem:

int? number = null;
int another = 200; 
another = number; //compiler error: an int can't hold a possible null

This makes sense if we consider that the variable number might contain null, but the variable another can't contain null at all.
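Going the other way, from int? to int, requires us to state explicitly what should happen when the value is null. A short sketch (the method names are ours, just for illustration):

```csharp
using System;

public static class NullableDemo
{
    // Fall back to a sentinel value when the nullable is null.
    public static int WithCoalescing(int? number) => number ?? -1;

    // GetValueOrDefault returns default(int), i.e. 0, when the nullable is null.
    public static int WithGetValueOrDefault(int? number) => number.GetValueOrDefault();

    public static void Main()
    {
        int? number = null;
        Console.WriteLine(WithCoalescing(number));        // -1
        Console.WriteLine(WithGetValueOrDefault(number)); // 0

        number = 345;
        int another = (int)number; // an explicit cast works once we know it's not null
        Console.WriteLine(another); // 345
    }
}
```

The explicit cast throws an InvalidOperationException at runtime if the nullable is actually null, which is why the null-coalescing operator is usually the safer choice.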

Properties of Nullable Types

The nullable types expose a few properties which can come in handy while working on our projects. The HasValue property indicates whether a nullable type contains a value or is null. The Value property enables us to retrieve the value of the nullable type if it is not null:

int? number = null;
number = 234; //comment this line to print out the result from the else block

if (number.HasValue)
{
    Console.WriteLine(number.Value);
}
else
{
    Console.WriteLine("number is null");
}


Data Types

Data types that represent whole numbers are expressed with a certain number of bits. For unsigned numbers, the range is from 0 to 2^N - 1, and for signed numbers the range is from -2^(N-1) to 2^(N-1) - 1. So if a data type has a size of 8 bits, like the sbyte data type, we can represent its range like this: from -2^7 to 2^7 - 1, that is, from -128 to 127.

The following table contains the different data types that represent whole numbers:

Type     Size      Range
sbyte    8 bits    -128 to 127
byte     8 bits    0 to 255
short    16 bits   -32,768 to 32,767
ushort   16 bits   0 to 65,535
int      32 bits   -2,147,483,648 to 2,147,483,647
uint     32 bits   0 to 4,294,967,295
long     64 bits   -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
ulong    64 bits   0 to 18,446,744,073,709,551,615

The letter u in front of a type name means that the type can't contain negative numbers; it is unsigned.

The types mentioned above are the whole number types. But in C#, we also have number types with a floating point.

We can present them in a table as well:

Type      Size       Approximate range                   Precision
float     32 bits    ±1.5 x 10^-45 to ±3.4 x 10^38       ~7 digits
double    64 bits    ±5.0 x 10^-324 to ±1.7 x 10^308     ~15-16 digits
decimal   128 bits   ±1.0 x 10^-28 to ±7.9 x 10^28       28-29 significant digits

In C#, we have two more basic data types:

Type   Size      Values
char   16 bits   a single Unicode character
bool   8 bits    true or false

To use the char type in our code, we must place the value inside single quotes: 'a' or 'A' or '3'…

One more type that is often introduced as a basic data type is the string type. But the string is not a value type; it is a reference type. To use a string in our code, we must place the value inside double quotes: "This is the string type" or "3452"…
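We can verify these sizes and ranges from code, because every numeric type exposes MinValue and MaxValue constants, and sizeof reports the size of the built-in types:

```csharp
using System;

public static class RangesDemo
{
    public static void Main()
    {
        // Each numeric type exposes its range as constants.
        Console.WriteLine($"sbyte: {sbyte.MinValue} to {sbyte.MaxValue}"); // -128 to 127
        Console.WriteLine($"byte:  {byte.MinValue} to {byte.MaxValue}");   // 0 to 255
        Console.WriteLine($"int:   {int.MinValue} to {int.MaxValue}");
        Console.WriteLine($"long:  {long.MinValue} to {long.MaxValue}");

        // sizeof works on built-in value types without unsafe code.
        Console.WriteLine($"char is {sizeof(char) * 8} bits");             // 16 bits
    }
}
```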

So, we know we have the value types and reference types, and it is time to talk more about them and variables as well.

Variables in C#

A variable is the name of a memory location in which the application stores values.

We should create our variable names following these examples:

  • studentName
  • subject
  • work_day …

The wrong examples would be:

  • student Name
  • work-day
  • 1place

We should mention that C# is a case-sensitive language, so studentName is not the same as StudentName.

The C# language has its own set of reserved words, so-called keywords. We can’t use them as a name for our variables.

In C#, variables are divided into two categories: value types and reference types. The difference is that value type variables store their values inside their own memory locations, while the memory location of a reference type variable contains only the address of the dynamic memory location where the value is stored.

We should declare our variables in the following way:

<data type> <variable name> ;  or <data type> <variable name>, <variable name> ... ;

So a few examples would be:

class Program
{
    static void Main(string[] args)
    {
        int age;
        double temperature, change;
        Student student;
    }
}

With the declaration alone, we can't assign a value to a value type variable. To do that, we need to use an expression in addition:

<data type> <variable name> = <expression> ;

Again, let’s look at this with an example:

class Program
{
    static void Main(string[] args)
    {
        int x = 5;
        int y = 145 + x;
        char p = 'p';
        p = 'A';
    }
}

To assign a value to a reference type variable, we need to use the new keyword in the expression part (string is an exception to this rule):

class Program
{
    static void Main(string[] args)
    {
        Student student = new Student("John", 25);
    }
}

We would like to mention that we don't recommend naming variables "x" or "y"… We have used those names just for the sake of simplicity. It is a better idea to give meaningful names to our variables.

Azure vs. AWS vs. Google at a Glance

Strengths

  • Azure: second largest provider; integration with Microsoft tools and software; broad feature set; hybrid cloud; support for open source.
  • AWS: dominant market position; extensive, mature offerings; support for large organizations; extensive training; global reach.
  • Google: designed for cloud-native businesses; commitment to open source and portability; deep discounts and flexible contracts; DevOps expertise.

Weaknesses

  • Azure: issues with documentation; incomplete management tooling.
  • AWS: difficult to use; cost management; overwhelming options.
  • Google: late entrant to IaaS market; fewer features and services; historically not as enterprise focused.

Compute Services

  • Azure: Virtual Machines; Virtual Machine Scale Sets; Azure Container Service (AKS); Container Instances; Batch; Service Fabric; Cloud Services.
  • AWS: EC2; Elastic Container Service; Elastic Container Service for Kubernetes; Elastic Container Registry; Lightsail; Batch; Elastic Beanstalk; Fargate; Auto Scaling; Elastic Load Balancing; VMware Cloud on AWS.
  • Google: Compute Engine; Kubernetes; Functions; Container Security; Graphics Processing Unit (GPU); App Engine; Knative.

Storage Services

  • Azure: Blob Storage; Queue Storage; File Storage; Disk Storage; Data Lake Store.
  • AWS: Simple Storage Service (S3); Elastic Block Storage (EBS); Elastic File System (EFS); Storage Gateway; Snowball; Snowball Edge; Snowmobile.
  • Google: Cloud Storage; Persistent Disk; Transfer Appliance; Transfer Service.

Database Services

  • Azure: SQL Database; Database for MySQL; Database for PostgreSQL; Data Warehouse; Server Stretch Database; Cosmos DB; Table Storage; Redis Cache; Data Factory.
  • AWS: Aurora; RDS; DynamoDB; ElastiCache; Redshift; Neptune; Database Migration Service.
  • Google: Cloud SQL; Cloud Bigtable; Cloud Spanner; Cloud Datastore.

Backup Services

  • Azure: Archive Storage; Backup; Site Recovery.
  • AWS: Glacier.
  • Google: none.

AI/Machine Learning

  • Azure: Machine Learning; Azure Bot Service; Cognitive Services.
  • AWS: SageMaker; Comprehend; Lex; Polly; Rekognition; Machine Learning; Translate; Transcribe; DeepLens; Deep Learning AMIs; Apache MXNet on AWS; TensorFlow on AWS.
  • Google: Cloud Machine Learning Engine; Dialogflow Enterprise Edition; Cloud Natural Language; Cloud Speech API; Cloud Translation API; Cloud Video Intelligence; Cloud Job Discovery (Private Beta).

IoT

  • Azure: IoT Hub; IoT Edge; Stream Analytics; Time Series Insights.
  • AWS: IoT Core; FreeRTOS; Greengrass; IoT 1-Click; IoT Analytics; IoT Button; IoT Device Defender; IoT Device Management.
  • Google: Cloud IoT Core (Beta).

Serverless

  • Azure: Functions.
  • AWS: Lambda; Serverless Application Repository.
  • Google: Cloud Functions (Beta).

Azure vs. AWS vs. Google: Compute

Azure Compute:

  • Virtual Machines: Microsoft’s primary compute service is known simply as Virtual Machines. It boasts support for Linux, Windows Server, SQL Server, Oracle, IBM, and SAP, as well as enhanced security, hybrid cloud capabilities and integrated support for Microsoft software. Like AWS, it has an extremely large catalog of available instances, including GPU and high-performance computing options, as well as instances optimized for artificial intelligence and machine learning. It also has a free tier with 750 hours per month of Windows or Linux B1S virtual machines for a year.
  • Additional Services: Azure’s version of Auto Scaling is known as Virtual Machine Scale Sets. And it has two container services: Azure Container Service is based on Kubernetes, and Container Services uses Docker Hub and Azure Container Registry for management. It has a Batch service, and Cloud Services for scalable Web applications is similar to AWS Elastic Beanstalk. It also has a unique offering called Service Fabric that is specifically designed for applications with microservices architecture.

AWS Compute:

  • Elastic Compute Cloud: Amazon’s flagship compute service is Elastic Compute Cloud, or EC2. Amazon describes EC2 as “a web service that provides secure, resizable compute capacity in the cloud.” EC2 offers a wide variety of options, including a huge assortment of instances, support for both Windows and Linux, bare metal instances, GPU instances, high-performance computing, auto scaling and more. AWS also offers a free tier for EC2 that includes 750 hours per month for up to twelve months.
  • Container services: Within the compute category, Amazon’s various container services are increasing in popularity, and it has options that support Docker, Kubernetes, and its own Fargate service that automates server and cluster management when using containers. It also offers a virtual private cloud option known as Lightsail, Batch for batch computing jobs, Elastic Beanstalk for running and scaling Web applications, as well as a few other services.

Google Compute:

  • Compute Engine: By comparison, Google’s catalog of compute services is somewhat shorter than its competitors’. Its primary service is called Compute Engine, which boasts both custom and predefined machine types, per-second billing, Linux and Windows support, automatic discounts and carbon-neutral infrastructure that uses half the energy of typical data centers. It offers a free tier that includes one f1-micro instance per month for up to 12 months.
  • Focus on Kubernetes: Google also offers a Kubernetes Engine for organizations interested in deploying containers. Like all of the leading cloud vendors, it’s set up to offer containers and microservices. And it’s worth noting that Google has been heavily involved in the Kubernetes project, giving it extra expertise in this area.

In Visual Studio, there are at least 3 different types of class library you can create:

  • Class Library (.NET Framework)
  • Class Library (.NET Standard)
  • Class Library (.NET Core)

  • Use a .NET Standard library when you want to increase the number of apps that will be compatible with your library, and you are okay with a decrease in the .NET API surface area your library can access.
  • Use a .NET Core library when you want to increase the .NET API surface area your library can access, and you are okay with allowing only .NET Core apps to be compatible with your library.

Difference:

Compatibility: Libraries that target .NET Standard will run on any .NET Standard compliant runtime, such as .NET Core, .NET Framework, Mono/Xamarin. On the other hand, libraries that target .NET Core can only run on the .NET Core runtime.

API Surface Area: .NET Standard libraries come with everything in NETStandard.Library whereas .NET Core libraries come with everything in Microsoft.NETCore.App. The latter includes approximately 20 additional libraries, some of which we can add manually to our .NET Standard library (such as System.Threading.Thread) and some of which are not compatible with the .NET Standard (such as Microsoft.NETCore.CoreCLR).
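In SDK-style projects, this choice comes down to the TargetFramework property in the .csproj file. A minimal sketch of the two options, as two separate project files (the version numbers are just examples):

```xml
<!-- A class library that any .NET Standard 2.0 compliant runtime can consume -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```

```xml
<!-- A class library usable only from .NET Core apps, with the larger API surface -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
</Project>
```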

To summarize, we can say that:

.NET Framework and .NET Core are two different implementations of the .NET runtime. Both Core and Framework (but especially Framework) have different profiles that include larger or smaller (or just plain different) selections of the many APIs and assemblies Microsoft has created for .NET, depending on where they are installed and in what profile. For example, Universal Windows apps have access to a somewhat different set of APIs than the "normal" Windows profile. Even on Windows, you might have the "Client" profile vs. the "Full" profile. Additionally, there are other implementations (like Mono) that have their own sets of libraries.

.NET Standard is a specification defining which sets of API libraries and assemblies must be available. An app written for .NET Standard 1.0 should be able to compile and run with any version of Framework, Core, Mono, etc., that advertises support for the .NET Standard 1.0 collection of libraries. The same is true for .NET Standard 1.1, 1.5, 1.6, 2.0, etc. As long as the runtime provides support for the version of Standard targeted by your program, your program should run there.

A project targeting a version of Standard will not be able to use features that are not included in that revision of the standard. This doesn't mean you can't take dependencies on other assemblies, or on APIs published by other vendors (i.e. items on NuGet). But it does mean that any dependency you take must also include support for your version of .NET Standard. .NET Standard is evolving quickly, but it's still new enough, and cares enough about some of the smaller runtime profiles, that this limitation can feel stifling. (Note a year and a half later: this is starting to change, and recent .NET Standard versions are much nicer and more full-featured.)

On the other hand, an app targeting Standard can be used in more deployment situations, since in theory it can run with Core, Framework, Mono, etc. For a class library project looking for wide distribution, that's an attractive promise. For a class library project used mainly for internal purposes, it may not be as much of a concern.

.NET Standard can also be useful in situations where the SysAdmin team wants to move from ASP.NET on Windows to ASP.NET Core on Linux for philosophical or cost reasons, but the development team wants to continue working against .NET Framework in Visual Studio on Windows.

This is how Microsoft explains it:

.NET Framework is the “full” or “traditional” flavor of .NET that’s distributed with Windows. Use this when you are building a desktop Windows or UWP app, or working with older ASP.NET 4.6+.

.NET Core is cross-platform .NET that runs on Windows, Mac, and Linux. Use this when you want to build console or web apps that can run on any platform, including inside Docker containers. This does not include UWP/desktop apps currently.

Xamarin is used for building mobile apps that can run on iOS, Android, or Windows Phone devices. It usually runs on top of Mono, a version of .NET that was built for cross-platform support before Microsoft decided to officially go cross-platform with .NET Core. Like Xamarin, the Unity platform also runs on top of Mono.

Let's compare Microsoft Azure Cosmos DB and Microsoft Azure SQL Database. In the list below, each row gives the values in the order Cosmos DB / SQL Database:

  • Description: globally distributed, horizontally scalable, multi-model database service / Database-as-a-Service offering with high compatibility with Microsoft SQL Server
  • Primary database model: document store, graph DBMS, key-value store, wide column store / relational DBMS
  • Secondary database models: - / document store, graph DBMS
  • Website: azure.microsoft.com/services/cosmos-db / azure.microsoft.com/en-us/services/sql-database
  • Technical documentation: docs.microsoft.com/en-us/azure/cosmos-db / docs.microsoft.com/en-us/azure/sql-database
  • Developer: Microsoft / Microsoft
  • Initial release: 2014 / 2010
  • Current release: - / V12
  • License: commercial / commercial
  • Cloud-based only: yes / yes
  • Server operating systems: hosted / hosted
  • Data scheme: schema-free / yes
  • Typing: yes / yes
  • XML support: - / yes
  • Secondary indexes: yes / yes
  • SQL: SQL-like query language / yes
  • APIs and other access methods: DocumentDB API, Graph API (Gremlin), MongoDB API, RESTful HTTP API, Table API / JDBC, ODBC, ADO.NET
  • Server-side scripts: JavaScript / Transact-SQL
  • Triggers: JavaScript / yes
  • Partitioning methods: sharding / -
  • Replication methods: yes / yes, with always 3 replicas available
  • MapReduce: with Hadoop integration / no
  • Consistency concepts: bounded staleness, consistent prefix, session consistency, eventual consistency, immediate consistency / immediate consistency
  • Foreign keys: no / yes
  • Transaction concepts: multi-item ACID transactions with snapshot isolation within a partition / ACID
  • Concurrency: yes / yes
  • Durability: yes / yes
  • User concepts: access rights can be defined down to the item level / fine-grained access rights according to the SQL standard

Cosmos DB is a globally distributed, multi-model database solution with high SLAs around distribution. It’s designed for your applications and supports document and graph databases. Azure SQL DB has the concept of consistent reads and the ability to store your data. But my goal here is to talk about their differences with global replication and global distribution of your data.

Cosmos DB

  • When it comes to distributing, with Cosmos DB you get a primary instance to write against and it gets distributed to all your read-only replicas that you choose around the world.
  • You can simply push a button, activate new scenarios and you can run manual failover transactions.
  • The big key with Cosmos is that it was built for global distribution. It was designed with the controls that allow it to be globally distributed, with SLAs associated with that global distribution.
  • Another key thing is that you get one URL and that URL knows where to go and does all the work.

Azure SQL Database

  • Makes it possible to globally distribute your Azure SQL Databases.
  • You can have a primary replica that resides in the US and add secondary read-only replicas in Europe and Asia, for instance. That way, reads happen closer to the people who are using your global applications.

A couple of things to be aware of: with Azure SQL DB, you can only have 4 read-only secondaries off an individual SQL DB. In contrast, Cosmos DB can replicate to any data center where Cosmos DB is available; you just go in and click a button. Also, in Cosmos you can do manual failover operations, or you can code them, so data can be written wherever in the world is closest to the active people using your application.

So, do you need the ability to distribute your data globally and wonder which database is best for this? Let's compare Azure SQL Database and Cosmos DB for global distribution in a bit more detail.


Manual failover is not something you would do with Azure SQL DB. All those writes must come to a primary replica and we’d have to feed out the replicas through read. The biggest pain point you may notice is managing the connectivity to your Azure SQL database in a globally replicated scenario.

There are some techniques, as well as tools within Azure to make it easier to use, such as Traffic Manager. You have the option to use an IP address in Traffic Manager and route things through there, but you must set all that up.

With Cosmos DB, that work is done for you because it’s designed from the ground up to be globally replicated. This does not mean you shouldn’t use active global replication with Azure SQL DB. You just need to understand the differences and use cases to make sure you use the database that best fits your needs to distribute your data globally.

Async & Await Simple Explanation

Simple Analogy

A person may wait for their morning train. This is all they are doing, as it is the primary task they are currently performing. (Synchronous programming, what you normally do!)

Another person may await their morning train while they drink their coffee. (Asynchronous programming)

What is asynchronous programming?

Asynchronous programming is where a programmer chooses to run some code on a separate thread from the main thread of execution and then notify the main thread on its completion.

What does the async keyword actually do?

Prefixing a method with the async keyword, like this:

async void DoSomething() { ... }

allows the programmer to use the await keyword when calling asynchronous tasks. That's all it does.

Why is this important?

In a lot of software systems, the main thread is reserved for operations relating specifically to the user interface. If I run a very complex recursive algorithm that takes 5 seconds to complete on my computer, but I run it on the main (UI) thread, then when the user tries to click on anything in my application, it will appear to be frozen, since my main thread has queued and is currently processing far too many operations. As a result, the main thread cannot process the mouse click and run the method for the button click.

When do you use Async and Await?

Ideally, use the asynchronous keywords when you are doing anything that doesn't involve the user interface.

So let's say you're writing a program that allows the user to sketch on their mobile phone, but every 5 seconds it checks the weather on the internet.

We should await the polling calls to the network every 5 seconds to get the weather, because the user of the application needs to keep interacting with the mobile touch screen to draw pretty pictures.

How do you use Async and Await

Following on from the example above, here is some pseudo code of how to write it:

//ASYNCHRONOUS
//this is called every 5 seconds
async void CheckWeather()
{
    var weather = await GetWeather();
    //do something with the weather now you have it
}

async Task<WeatherResult> GetWeather()
{
    var weatherJson = await CallToNetworkAddressToGetWeather();
    return DeserializeJson<WeatherResult>(weatherJson);
}

//SYNCHRONOUS
//This method is called whenever the screen is pressed
void ScreenPressed()
{
    DrawSketchOnScreen();
}

On a higher level:

1) The async keyword enables the await keyword, and that's all it does. The async keyword does not run the method on a separate thread. The beginning of an async method runs synchronously until it hits an await on a time-consuming task.

2) You can await a method that returns Task or Task<T>. You cannot await an async void method.

3) The moment the main thread encounters an await on a time-consuming task, or when the actual work is started, the main thread returns to the caller of the current method.

4) If the main thread sees an await on a task that is still executing, it doesn't wait for it and returns to the caller of the current method. In this way, the application remains responsive.

5) The awaited time-consuming task will now execute on a separate thread from the thread pool.

6) When the awaited task completes, all the code below the await will be executed by that separate thread.
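The points above can be sketched with Task.Delay standing in for the time-consuming work (the method names here are ours, not from the article):

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncDemo
{
    // Runs synchronously until the first await, then returns
    // a Task to the caller while the delay is in progress.
    public static async Task<int> GetNumberAsync()
    {
        Console.WriteLine("Before await");  // still on the calling thread
        await Task.Delay(100);              // stand-in for a slow operation
        Console.WriteLine("After await");   // runs when the delay completes
        return 42;
    }

    public static async Task Main()
    {
        Task<int> task = GetNumberAsync();  // starts the work
        Console.WriteLine("Caller keeps going while the task runs");
        int result = await task;            // now we actually wait for the value
        Console.WriteLine(result);          // 42
    }
}
```

Running this shows "Caller keeps going while the task runs" printed before "After await", which is exactly the control-flow inversion points 3 and 4 describe.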

For more detail, have a look at await (C# Reference) and, more specifically, at the example included there; it explains this situation a bit further.

Let's compare the most common options: Amazon DynamoDB, Azure Cosmos DB and MongoDB.

In the list below, each row gives the values in the order Amazon DynamoDB / Azure Cosmos DB / MongoDB:

  • Description: hosted, scalable database service by Amazon with the data stored in Amazon's cloud / globally distributed, horizontally scalable, multi-model database service / one of the most popular document stores
  • Primary database model: document store, key-value store / document store, graph DBMS, key-value store, wide column store / document store
  • Website: aws.amazon.com/dynamodb / azure.microsoft.com/services/cosmos-db / www.mongodb.com
  • Technical documentation: docs.aws.amazon.com/dynamodb / docs.microsoft.com/en-us/azure/cosmos-db / docs.mongodb.com/manual
  • Developer: Amazon / Microsoft / MongoDB, Inc
  • Initial release: 2012 / 2014 / 2009
  • License: commercial / commercial / open source
  • Cloud-based only: yes / yes / no
  • Server operating systems: hosted / hosted / Linux, OS X, Solaris, Windows
  • Data scheme: schema-free / schema-free / schema-free
  • Typing: yes / yes / yes
  • Secondary indexes: yes / yes / yes
  • SQL: no / SQL-like query language / read-only SQL queries via the MongoDB Connector for BI
  • APIs and other access methods: RESTful HTTP API / DocumentDB API, Graph API (Gremlin), MongoDB API, RESTful HTTP API, Table API / proprietary protocol using JSON
  • Server-side scripts: no / JavaScript / JavaScript
  • Triggers: yes / JavaScript / no
  • Partitioning methods: sharding / sharding / sharding
  • Replication methods: yes / yes / master-slave replication
  • MapReduce: no / with Hadoop integration / yes
  • Consistency concepts: eventual consistency, immediate consistency / bounded staleness, consistent prefix, session consistency, eventual consistency, immediate consistency / eventual consistency, immediate consistency
  • Foreign keys: no / no / no
  • Transaction concepts: ACID / multi-item ACID transactions with snapshot isolation within a partition / multi-document ACID transactions with snapshot isolation
  • Concurrency: yes / yes / yes
  • Durability: yes / yes / yes
  • In-memory capabilities: - / - / yes
  • User concepts: access rights for users and roles can be defined via AWS Identity and Access Management (IAM) / access rights can be defined down to the item level / access rights for users and roles

In the .NET Core series, we are going to go through a detailed example of how to use .NET Core, Angular and MySQL for Microservice web application development.

What are we going to do in this tutorial?

We are going to use MySQL as our database. First, we are going to install the MySQL server, and then use Entity Framework Core (EF Core) to automatically create all the tables (we are going to use a MySQL Docker image, but you can use a local MySQL installation if you prefer).
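As a sketch, a local MySQL container could be started like this. The container name, password and port mapping are placeholders, not values from the series; adjust them to your environment:

```shell
# Run a disposable MySQL 8 container for local development
# (the credentials below are placeholders - pick your own)
docker run --name mysql-dev \
  -e MYSQL_ROOT_PASSWORD=ChangeMe123 \
  -p 3306:3306 \
  -d mysql:8.0
```

The `-p 3306:3306` mapping exposes the default MySQL port so the Web API can connect with an ordinary connection string.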

Then, we are going to step into the world of .NET Core Web API development. It is going to be the server-side part of our application. As we progress through the .NET Core series, we are going to use the Repository pattern, SOLID principles, generics, LINQ, and Entity Framework Core, and create more projects and services to demonstrate some good practices. Overall, we will try to write the application as we would in a real-world environment. Furthermore, you will learn about .NET Core architecture and code organization, so you can make it more readable and maintainable.

There are three approaches to using Entity Framework: Database First, Code First and Model First. In this tutorial, we are going to use the Code First approach: create the models and their relations using classes, and then create the database from those classes. It enables us to work with Entity Framework in an object-oriented manner, without worrying about the database structure up front.
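With Code First, the schema flows from plain classes. Here is a minimal declaration-only sketch; the class and property names are illustrative (not the actual model from the series), and it assumes the Microsoft.EntityFrameworkCore package plus a MySQL provider are installed:

```csharp
using Microsoft.EntityFrameworkCore;

// Hypothetical model class - EF Core generates a matching table from it.
public class Owner
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The context maps classes to tables; running
// "dotnet ef migrations add Initial" and "dotnet ef database update"
// creates the schema from these definitions.
public class RepositoryContext : DbContext
{
    public RepositoryContext(DbContextOptions options) : base(options) { }

    public DbSet<Owner> Owners { get; set; }
}
```

The point of the approach is visible here: nothing in the code references tables or columns directly, only classes and properties.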

After we finish the .NET Core series, we are going to introduce two of the most popular client frameworks (Angular and React) to consume our Web API, and finally we are going to create a Xamarin app and deploy it to Android. This will result in a full-stack web application.

In the end, we are going to publish our app on Docker, and finish strong by completing the entire development cycle.

You can use Visual Studio or Visual Studio Code for this tutorial; our sample will be created using Visual Studio Code.