4

Benchmarking Performance, Multitasking, and Concurrency

This chapter is about allowing multiple actions to occur at the same time to improve performance, scalability, and user productivity for the applications that you build.

In this chapter, we will cover the following topics:

  • Understanding processes, threads, and tasks
  • Monitoring performance and resource usage
  • Running tasks asynchronously
  • Synchronizing access to shared resources
  • Understanding async and await

Understanding processes, threads, and tasks

A process, such as each of the console applications we have created, has resources like memory and threads allocated to it.

A thread executes your code, statement by statement. By default, each process only has one thread, and this can cause problems when we need to do more than one task at the same time. Threads are also responsible for keeping track of things like the currently authenticated user and any internationalization rules that should be followed for the current language and region.

Windows and most other modern operating systems use preemptive multitasking, which simulates the parallel execution of tasks. It divides the processor time among the threads, allocating a time slice to each thread one after another. The current thread is suspended when its time slice finishes. The processor then allows another thread to run for a time slice.

When Windows switches from one thread to another, it saves the context of the thread and reloads the previously saved context of the next thread in the thread queue. This takes both time and resources to complete.

As a developer, if you have a small number of complex pieces of work and you want complete control over them, then you could create and manage individual Thread instances. If you have one main thread and multiple small pieces of work that can be executed in the background, then you can use the ThreadPool class to add delegate instances that point to those pieces of work implemented as methods to a queue, and they will be automatically allocated to threads in the thread pool.
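
A minimal sketch (my own example, not one of this chapter's projects) of those two lower-level approaches might look like this:

// create and manage a dedicated thread yourself...
Thread worker = new(() => Console.WriteLine("Running on a dedicated thread."));
worker.Start();
worker.Join(); // block until it finishes

// ...or queue a small piece of work to the shared thread pool
ThreadPool.QueueUserWorkItem(_ =>
  Console.WriteLine("Running on a thread pool thread."));
Thread.Sleep(500); // crude pause so the background pool thread gets a chance to run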

In this chapter, we will use the Task type to manage threads at a higher abstraction level.

Threads may have to compete for and wait for access to shared resources, such as variables, files, and database objects. There are types for managing this that you will see in action later in this chapter.

Depending on the work, doubling the number of threads (workers) does not halve the number of seconds it takes to complete a task. In fact, it can increase the duration of the task, as shown in Figure 4.1:

Figure 4.1: A tweet about tasks in the real world

Good Practice: Never assume that more threads will improve performance! Run performance tests on a baseline code implementation without multiple threads, and then again on a code implementation with multiple threads. You should also perform performance tests in a staging environment that is as close as possible to the production environment.

Monitoring performance and resource usage

Before we can improve the performance of any code, we need to be able to monitor its speed and efficiency to record a baseline that we can then measure improvements against.

Evaluating the efficiency of types

What is the best type to use for a scenario? To answer this question, we need to carefully consider what we mean by “best,” weighing the following factors:

  • Functionality: This can be decided by checking whether the type provides the features you need.
  • Memory size: This can be decided by the number of bytes of memory the type takes up.
  • Performance: This can be decided by how fast the type is.
  • Future needs: This depends on the changes in requirements and maintainability.

There will be scenarios, such as when storing numbers, where multiple types have the same functionality, so we will need to consider memory and performance to make a choice.

If we need to store millions of numbers, then the best type to use would be the one that requires the fewest bytes of memory. But if we only need to store a few numbers, yet we need to perform lots of calculations on them, then the best type to use would be the one that runs fastest on a specific CPU.

The sizeof() operator returns the number of bytes that a single instance of a type uses in memory. When we are storing many values in more complex data structures, such as arrays and lists, then we need a better way of measuring memory usage.
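
For example, a quick check of single-instance sizes (a sketch of my own, not code from this chapter's projects) might look like this:

Console.WriteLine($"short: {sizeof(short)} bytes");     // 2
Console.WriteLine($"int: {sizeof(int)} bytes");         // 4
Console.WriteLine($"long: {sizeof(long)} bytes");       // 8
Console.WriteLine($"double: {sizeof(double)} bytes");   // 8
Console.WriteLine($"decimal: {sizeof(decimal)} bytes"); // 16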

You can read lots of advice online and in books, but the only way to know for sure what the best type would be for your code is to compare the types yourself.

In the next section, you will learn how to write code to monitor the actual memory requirements and performance when using different types.

Today, a short variable might be the best choice, but an int variable might be an even better one, even though it takes twice as much space in memory, because we might need to store a wider range of values in the future.

As listed above, there is an important metric that developers often forget: maintenance. This is a measure of how much effort another programmer would have to put in to understand and modify your code. If you make a nonobvious choice of type without explaining that choice with a helpful comment, then it might confuse the programmer who comes along later and needs to fix a bug or add a feature.

Monitoring performance and memory using diagnostics

The System.Diagnostics namespace has lots of useful types for monitoring your code. The first useful type that we will look at is the Stopwatch type:

  1. Use your preferred coding tool to create a class library project, as defined in the following list:
    • Project template: Class Library / classlib
    • Workspace/solution file and folder: Chapter04
    • Project file and folder: MonitoringLib
  2. Add a console app project, as defined in the following list:
    • Project template: Console App/console
    • Workspace/solution file and folder: Chapter04
    • Project file and folder: MonitoringApp
  3. Use your preferred coding tool to set which project is active:
    • If you are using Visual Studio 2022, set the startup project for the solution to the current selection.
    • If you are using Visual Studio Code, set MonitoringApp as the active OmniSharp project.
  4. In the MonitoringLib project, rename the Class1.cs file to Recorder.cs.
  5. In the MonitoringLib project, globally and statically import the System.Console class.
  6. In the MonitoringApp project, globally and statically import the System.Console class and add a project reference to the MonitoringLib class library, as shown in the following markup:
    <ItemGroup>
      <Using Include="System.Console" Static="true" />
    </ItemGroup>
    <ItemGroup> 
      <ProjectReference
        Include="..MonitoringLibMonitoringLib.csproj" />
    </ItemGroup>
    
  7. Build the MonitoringApp project.

Useful members of the Stopwatch and Process types

The Stopwatch type has some useful members, as described in the following list:

  • Restart method: This resets the elapsed time to zero and then starts the timer.
  • Stop method: This stops the timer.
  • Elapsed property: This is the elapsed time stored as a TimeSpan value (for example, hours:minutes:seconds).
  • ElapsedMilliseconds property: This is the elapsed time in milliseconds stored as an Int64 value.

The Process type has some useful members, as described in the following list:

  • VirtualMemorySize64: This displays the amount of virtual memory, in bytes, allocated for the process.
  • WorkingSet64: This displays the amount of physical memory, in bytes, allocated for the process.

Implementing a Recorder class

We will create a Recorder class that makes it easy to monitor time and memory resource usage. To implement our Recorder class, we will use the Stopwatch and Process classes:

  1. In Recorder.cs, change its contents to use a Stopwatch instance to record timings and the current Process instance to record memory usage, as shown in the following code:
    using System.Diagnostics; // Stopwatch
    using static System.Diagnostics.Process; // GetCurrentProcess()
    namespace Packt.Shared;
    public static class Recorder
    {
      private static Stopwatch timer = new();
      private static long bytesPhysicalBefore = 0;
      private static long bytesVirtualBefore = 0;
      public static void Start()
      {
        // force some garbage collections to release memory that is
        // no longer referenced but has not been released yet
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        // store the current physical and virtual memory use 
        bytesPhysicalBefore = GetCurrentProcess().WorkingSet64;
        bytesVirtualBefore = GetCurrentProcess().VirtualMemorySize64;
        timer.Restart();
      }
      public static void Stop()
      {
        timer.Stop();
        long bytesPhysicalAfter =
          GetCurrentProcess().WorkingSet64;
        long bytesVirtualAfter =
          GetCurrentProcess().VirtualMemorySize64;
        WriteLine("{0:N0} physical bytes used.",
          bytesPhysicalAfter - bytesPhysicalBefore);
        WriteLine("{0:N0} virtual bytes used.",
          bytesVirtualAfter - bytesVirtualBefore);
        WriteLine("{0} time span elapsed.", timer.Elapsed);
        WriteLine("{0:N0} total milliseconds elapsed.",
          timer.ElapsedMilliseconds);
      }
    }
    

    The Start method of the Recorder class uses the GC type (garbage collector) to ensure that any currently allocated but not referenced memory is collected before recording the amount of used memory. This is an advanced technique that you should almost never use in application code, because the GC understands memory usage better than a programmer would and should be trusted to make decisions about when to collect unused memory itself. Our need to take control in this scenario is exceptional.

  2. In Program.cs, delete the existing statements and then add statements to start and stop the Recorder while generating an array of 10,000 integers, as shown in the following code:
    using Packt.Shared; // Recorder
    WriteLine("Processing. Please wait...");
    Recorder.Start();
    // simulate a process that requires some memory resources...
    int[] largeArrayOfInts = Enumerable.Range(
      start: 1, count: 10_000).ToArray();
    // ...and takes some time to complete
    Thread.Sleep(new Random().Next(5, 10) * 1000);
    Recorder.Stop();
    
  3. Run the code and view the result, as shown in the following output:
    Processing. Please wait...
    827,392 physical bytes used.
    131,072 virtual bytes used.
    00:00:06.0123934 time span elapsed.
    6,012 total milliseconds elapsed.
    

Remember that the simulated work lasts a random duration of between 5 and 9 seconds (the upper bound passed to Random.Next is exclusive), so your results will vary even between multiple subsequent runs on the same machine. For example, when run on my Mac mini M1, less physical memory but more virtual memory was used, as shown in the following output:

Processing. Please wait...
294,912 physical bytes used.
10,485,760 virtual bytes used.
00:00:06.0074221 time span elapsed.
6,007 total milliseconds elapsed.

Measuring the efficiency of processing strings

Now that you’ve seen how the Stopwatch and Process types can be used to monitor your code, we will use them to evaluate the best way to process string variables:

  1. In the MonitoringApp project, add a new class file named Program.Helpers.cs.
  2. In Program.Helpers.cs, define a partial Program class with a method to output a section title in dark yellow color, as shown in the following code:
    partial class Program
    {
      static void SectionTitle(string title)
      {
        ConsoleColor previousColor = ForegroundColor;
        ForegroundColor = ConsoleColor.DarkYellow;
        WriteLine("*");
        WriteLine($"* {title}");
        WriteLine("*");
        ForegroundColor = previousColor;
      }
    }
    
  3. In Program.cs, comment out the previous statements by wrapping them in multi-line comment characters: /* */.
  4. Add statements to create an array of 50,000 int variables and then concatenate them with commas as separators using a string and StringBuilder class, as shown in the following code:
    int[] numbers = Enumerable.Range(
      start: 1, count: 50_000).ToArray();
    SectionTitle("Using StringBuilder");
    Recorder.Start();
    System.Text.StringBuilder builder = new();
    for (int i = 0; i < numbers.Length; i++)
    {
      builder.Append(numbers[i]);
      builder.Append(", ");
    }
    Recorder.Stop();
    WriteLine();
    SectionTitle("Using string with +");
    Recorder.Start();
    string s = string.Empty; // i.e. ""
    for (int i = 0; i < numbers.Length; i++)
    {
      s += numbers[i] + ", ";
    }
    Recorder.Stop();
    
  5. Run the code and view the result, as shown in the following output:
    *
    * Using StringBuilder
    *
    1,150,976 physical bytes used.
    0 virtual bytes used.
    00:00:00.0010796 time span elapsed.
    1 total milliseconds elapsed.
    *
    * Using string with +
    *
    11,849,728 physical bytes used.
    1,638,400 virtual bytes used.
    00:00:01.7754252 time span elapsed.
    1,775 total milliseconds elapsed. 
    

We can summarize the results as follows:

  • The StringBuilder class used about 1 MB of physical memory, zero virtual memory, and took about 1 millisecond.
  • The string class with the + operator used about 11 MB of physical memory, 1.5 MB of virtual memory, and took 1.7 seconds.

In this scenario, StringBuilder is more than 1,000 times faster and about 10 times more memory-efficient when concatenating text! This is because string values are immutable: they can be safely pooled for reuse, but they can never be changed, so every concatenation with + must allocate a brand-new string. StringBuilder, by contrast, appends characters to a single buffer in memory.

Good Practice: Avoid using the String.Concat method or the + operator inside loops. Use StringBuilder instead.

Now that you’ve learned how to measure the performance and resource efficiency of your code using types built into .NET, let’s learn about a NuGet package that provides more sophisticated performance measurements.

Monitoring performance and memory using Benchmark.NET

There is a popular benchmarking NuGet package for .NET that Microsoft uses in its blog posts about performance improvements, so it is good for .NET developers to know how it works and use it for their own performance testing. Let’s see how we could use it to compare performance between string concatenation and StringBuilder:

  1. Use your preferred code editor to add a new console app to the Chapter04 solution/workspace named Benchmarking.
    • In Visual Studio Code, select Benchmarking as the active OmniSharp project.
  2. In the Benchmarking project, add a package reference to Benchmark.NET, remembering that you can find out the latest version and use that instead of the version I used, as shown in the following markup:
    <ItemGroup>
      <PackageReference Include="BenchmarkDotNet" Version="0.13.1" />
    </ItemGroup>
    
  3. Build the project to restore packages.
  4. Add a new class file named StringBenchmarks.cs.
  5. In StringBenchmarks.cs, add statements to define a class with methods for each benchmark you want to run, in this case, two methods that both combine twenty numbers comma-separated using either string concatenation or StringBuilder, as shown in the following code:
    using BenchmarkDotNet.Attributes; // [Benchmark]
    public class StringBenchmarks
    {
      int[] numbers;
      public StringBenchmarks()
      {
        numbers = Enumerable.Range(
          start: 1, count: 20).ToArray();
      }
      [Benchmark(Baseline = true)]
      public string StringConcatenationTest()
      {
        string s = string.Empty; // e.g. ""
        for (int i = 0; i < numbers.Length; i++)
        {
          s += numbers[i] + ", ";
        }
        return s;
      }
      [Benchmark]
      public string StringBuilderTest()
      {
        System.Text.StringBuilder builder = new();
        for (int i = 0; i < numbers.Length; i++)
        {
          builder.Append(numbers[i]);
          builder.Append(", ");
        }
        return builder.ToString();
      }
    }
    
  6. In Program.cs, delete the existing statements and then import the namespace for running benchmarks and add a statement to run the benchmarks class, as shown in the following code:
    using BenchmarkDotNet.Running;
    BenchmarkRunner.Run<StringBenchmarks>();
    
  7. Use your preferred coding tool to run the console app with its release configuration:
    • In Visual Studio 2022, in the toolbar, set Solution Configurations to Release, and then navigate to Debug | Start Without Debugging.
    • In Visual Studio Code, in a terminal, use the dotnet run --configuration Release command.
  8. Note the results, including some artifacts like report files, and most importantly, a summary table that shows that string concatenation took a mean of 412.990 ns and StringBuilder took a mean of 275.082 ns, as shown in the following partial output:
    // ***** BenchmarkRunner: Finish  *****
    // * Export *
      BenchmarkDotNet.Artifacts\results\StringBenchmarks-report.csv
      BenchmarkDotNet.Artifacts\results\StringBenchmarks-report-github.md
      BenchmarkDotNet.Artifacts\results\StringBenchmarks-report.html
    // * Detailed results *
    StringBenchmarks.StringConcatenationTest: DefaultJob
    Runtime = .NET 7.0.0 (7.0.22.22904), X64 RyuJIT; GC = Concurrent Workstation
    Mean = 412.990 ns, StdErr = 2.353 ns (0.57%), N = 46, StdDev = 15.957 ns
    Min = 373.636 ns, Q1 = 413.341 ns, Median = 417.665 ns, Q3 = 420.775 ns, Max = 434.504 ns
    IQR = 7.433 ns, LowerFence = 402.191 ns, UpperFence = 431.925 ns
    ConfidenceInterval = [404.708 ns; 421.273 ns] (CI 99.9%), Margin = 8.282 ns (2.01% of Mean)
    Skewness = -1.51, Kurtosis = 4.09, MValue = 2
    -------------------- Histogram --------------------
    [370.520 ns ; 382.211 ns) | @@@@@@
    [382.211 ns ; 394.583 ns) | @
    [394.583 ns ; 411.300 ns) | @@
    [411.300 ns ; 422.990 ns) | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    [422.990 ns ; 436.095 ns) | @@@@@
    ---------------------------------------------------
    StringBenchmarks.StringBuilderTest: DefaultJob
    Runtime = .NET 7.0.0 (7.0.22.22904), X64 RyuJIT; GC = Concurrent Workstation
    Mean = 275.082 ns, StdErr = 0.558 ns (0.20%), N = 15, StdDev = 2.163 ns
    Min = 271.059 ns, Q1 = 274.495 ns, Median = 275.403 ns, Q3 = 276.553 ns, Max = 278.030 ns
    IQR = 2.058 ns, LowerFence = 271.409 ns, UpperFence = 279.639 ns
    ConfidenceInterval = [272.770 ns; 277.394 ns] (CI 99.9%), Margin = 2.312 ns (0.84% of Mean)
    Skewness = -0.69, Kurtosis = 2.2, MValue = 2
    -------------------- Histogram --------------------
    [269.908 ns ; 278.682 ns) | @@@@@@@@@@@@@@@
    ---------------------------------------------------
    // * Summary *
    BenchmarkDotNet=v0.13.1, OS=Windows 10.0.22000
    11th Gen Intel Core i7-1165G7 2.80GHz, 1 CPU, 8 logical and 4 physical cores
    .NET SDK=7.0.100
      [Host]     : .NET 7.0.0 (7.0.22.22904), X64 RyuJIT
      DefaultJob : .NET 7.0.0 (7.0.22.22904), X64 RyuJIT
    |                  Method |     Mean |   Error |   StdDev | Ratio | RatioSD |
    |------------------------ |---------:|--------:|---------:|------:|--------:|
    | StringConcatenationTest | 413.0 ns | 8.28 ns | 15.96 ns |  1.00 |    0.00 |
    |       StringBuilderTest | 275.1 ns | 2.31 ns |  2.16 ns |  0.69 |    0.04 |
    // * Hints *
    Outliers
      StringBenchmarks.StringConcatenationTest: Default -> 7 outliers were removed, 14 outliers were detected (376.78 ns..391.88 ns, 440.79 ns..506.41 ns)
      StringBenchmarks.StringBuilderTest: Default       -> 2 outliers were detected (274.68 ns, 274.69 ns)
    // * Legends *
      Mean    : Arithmetic mean of all measurements
      Error   : Half of 99.9% confidence interval
      StdDev  : Standard deviation of all measurements
      Ratio   : Mean of the ratio distribution ([Current]/[Baseline])
      RatioSD : Standard deviation of the ratio distribution ([Current]/[Baseline])
      1 ns    : 1 Nanosecond (0.000000001 sec)
    // ***** BenchmarkRunner: End *****
    // ** Remained 0 benchmark(s) to run **
    Run time: 00:01:13 (73.35 sec), executed benchmarks: 2
    Global total time: 00:01:29 (89.71 sec), executed benchmarks: 2
    // * Artifacts cleanup *
    

The Outliers section is especially interesting because it shows that not only is string concatenation slower than StringBuilder, but it is also more inconsistent in how long it takes. Your results will vary, of course. Note that there might not be Hints and Outliers sections if there are no outliers when you run your benchmarks!

You have now seen two ways to measure performance. Now let’s see how we can run tasks asynchronously to potentially improve performance.

Running tasks asynchronously

To understand how multiple tasks can be run simultaneously (at the same time), we will create a console app that needs to execute three methods.

There will be three methods that need to be executed: the first takes 3 seconds, the second takes 2 seconds, and the third takes 1 second. To simulate that work, we can use the Thread class to tell the current thread to go to sleep for a specified number of milliseconds.

Running multiple actions synchronously

Before we make the tasks run simultaneously, we will run them synchronously, that is, one after the other:

  1. Use your preferred code editor to add a new console app to the Chapter04 solution/workspace named WorkingWithTasks.
    • In Visual Studio Code, select WorkingWithTasks as the active OmniSharp project.
  2. In the WorkingWithTasks project, globally and statically import the System.Console class.
  3. In the WorkingWithTasks project, add a new class file named Program.Helpers.cs.
  4. In Program.Helpers.cs, define a partial Program class with methods to output a section title, a task title, and information about the current thread, each in different colors to make them easier to identify in output, as shown in the following code:
    partial class Program
    {
      static void SectionTitle(string title)
      {
        ConsoleColor previousColor = ForegroundColor;
        ForegroundColor = ConsoleColor.DarkYellow;
        WriteLine("*");
        WriteLine($"* {title}");
        WriteLine("*");
        ForegroundColor = previousColor;
      }
      static void TaskTitle(string title)
      {
        ConsoleColor previousColor = ForegroundColor;
        ForegroundColor = ConsoleColor.Green;
        WriteLine($"{title}");
        ForegroundColor = previousColor;
      }
      static void OutputThreadInfo()
      {
        Thread t = Thread.CurrentThread;
        ConsoleColor previousColor = ForegroundColor;
        ForegroundColor = ConsoleColor.DarkCyan;
        WriteLine(
          "Thread Id: {0}, Priority: {1}, Background: {2}, Name: {3}",
          t.ManagedThreadId, t.Priority, t.IsBackground, t.Name ?? "null");
        ForegroundColor = previousColor;
      }
    }
    
  5. In the WorkingWithTasks project, add a new class file named Program.Methods.cs.
  6. In Program.Methods.cs, add three methods that simulate work, as shown in the following code:
    partial class Program
    {
      static void MethodA()
      {
        TaskTitle("Starting Method A...");
        OutputThreadInfo();
        Thread.Sleep(3000); // simulate three seconds of work
        TaskTitle("Finished Method A.");
      }
      static void MethodB()
      {
        TaskTitle("Starting Method B...");
        OutputThreadInfo();
        Thread.Sleep(2000); // simulate two seconds of work
        TaskTitle("Finished Method B.");
      }
      static void MethodC()
      {
        TaskTitle("Starting Method C...");
        OutputThreadInfo();
        Thread.Sleep(1000); // simulate one second of work
        TaskTitle("Finished Method C.");
      }
    }
    
  7. In Program.cs, delete the existing statements and then add statements to call the helper method to output information about the thread, define and start a stopwatch, call the three simulated work methods, and then output the milliseconds elapsed, as shown in the following code:
    using System.Diagnostics; // Stopwatch
    OutputThreadInfo();
    Stopwatch timer = Stopwatch.StartNew();
    SectionTitle("Running methods synchronously on one thread."); 
    MethodA();
    MethodB();
    MethodC();
    WriteLine($"{timer.ElapsedMilliseconds:#,##0}ms elapsed.");
    
  8. Run the code, view the result, and note that when there is only one unnamed foreground thread doing the work, the total time required is just over 6 seconds, as shown in the following output:
    Thread Id: 1, Priority: Normal, Background: False, Name: null
    *
    * Running methods synchronously on one thread.
    *
    Starting Method A...
    Thread Id: 1, Priority: Normal, Background: False, Name: null
    Finished Method A.
    Starting Method B...
    Thread Id: 1, Priority: Normal, Background: False, Name: null
    Finished Method B.
    Starting Method C...
    Thread Id: 1, Priority: Normal, Background: False, Name: null
    Finished Method C.
    6,028ms elapsed.
    

Running multiple actions asynchronously using tasks

The Thread class has been available since the first version of .NET in 2002 and can be used to create new threads and manage them, but it can be tricky to work with directly.

.NET Framework 4.0 introduced the Task class in 2010, which represents an asynchronous operation. A task is a higher-level abstraction around the operating system thread that performs the operation, and the Task enables easier creation and management. Managing multiple threads wrapped in tasks will allow our code to execute at the same time, aka asynchronously.

Each Task has a Status property and a CreationOptions property. A Task has a ContinueWith method that can be customized with the TaskContinuationOptions enum, and it can be managed with the TaskFactory class.

Starting tasks

We will look at three ways to start the methods using Task instances. There are links in the GitHub repository to articles that discuss the pros and cons.

Each has a slightly different syntax, but they all define a Task and start it:

  1. In Program.cs, add statements to create and start three tasks, one for each method, as shown highlighted in the following code:
    SectionTitle("Running methods asynchronously on multiple threads."); 
    timer.Restart();
    Task taskA = new(MethodA);
    taskA.Start();
    Task taskB = Task.Factory.StartNew(MethodB); 
    Task taskC = Task.Run(MethodC);
    WriteLine($"{timer.ElapsedMilliseconds:#,##0}ms elapsed.");
    
  2. Run the code, view the result, and note that the elapsed milliseconds appear almost immediately. This is because each of the three methods is now being executed by three new background worker threads allocated from the thread pool, as shown in the following output:
    *
    * Running methods asynchronously on multiple threads.
    *
    Starting Method A...
    Thread Id: 4, Priority: Normal, Background: True, Name: .NET ThreadPool Worker
    Starting Method C...
    Thread Id: 7, Priority: Normal, Background: True, Name: .NET ThreadPool Worker
    Starting Method B...
    Thread Id: 6, Priority: Normal, Background: True, Name: .NET ThreadPool Worker
    6ms elapsed.
    

It is even likely that the console app will end before one or even all of the tasks have a chance to start and write to the console!

Waiting for tasks

Sometimes, you need to wait for a task to complete before continuing. To do this, you can use the Wait method on a Task instance, or the WaitAll or WaitAny static methods on an array of tasks, as described in the following list (a short sketch of WaitAny follows the list):

  • t.Wait(): This waits for the task instance named t to complete execution.
  • Task.WaitAny(Task[]): This waits for any of the tasks in the array to complete execution.
  • Task.WaitAll(Task[]): This waits for all the tasks in the array to complete execution.
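
WaitAll is used in the next section; for completeness, here is a minimal sketch of WaitAny (my own example, not the book's project code):

Task<int> fast = Task.Run(() => { Thread.Sleep(500); return 1; });
Task<int> slow = Task.Run(() => { Thread.Sleep(2_000); return 2; });
Task<int>[] both = { fast, slow };

int index = Task.WaitAny(both); // blocks until the first task completes
Console.WriteLine(
  $"Task at index {index} finished first with result {both[index].Result}.");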

Using wait methods with tasks

Let’s see how we can use these wait methods to fix the problem with our console app:

  1. In Program.cs, after creating the three tasks and before outputting the elapsed time, add statements to combine references to the three tasks into an array and pass them to the WaitAll method, as shown in the following code:
    Task[] tasks = { taskA, taskB, taskC };
    Task.WaitAll(tasks);
    
  2. Run the code and view the result, and note the original thread will pause on the call to WaitAll, waiting for all three tasks to finish before outputting the elapsed time, which is a little over 3 seconds, as shown in the following output:
    Starting Method A...
    Starting Method B...
    Thread Id: 4, Priority: Normal, Background: True, Name: .NET ThreadPool Worker
    Thread Id: 6, Priority: Normal, Background: True, Name: .NET ThreadPool Worker
    Starting Method C...
    Thread Id: 7, Priority: Normal, Background: True, Name: .NET ThreadPool Worker
    Finished Method C.
    Finished Method B.
    Finished Method A.
    3,013ms elapsed.
    

The three new threads execute their code simultaneously, and they can potentially start in any order. MethodC should finish first because it takes only 1 second, then MethodB, which takes 2 seconds, and finally MethodA, because it takes 3 seconds.

However, the number of CPU cores available has a big effect on the results. It is the operating system that allocates time slices to the threads, so you have no control over when the methods run.

Continuing with another task

If all three tasks can be performed at the same time, then waiting for all tasks to finish will be all we need to do. However, often a task is dependent on the output from another task. To handle this scenario, we need to define continuation tasks.

We will create some methods to simulate a call to a web service that returns a monetary amount, which then needs to be used to retrieve how many products cost more than that amount in a database. The result returned from the first method needs to be fed into the input of the second method.

This time, instead of waiting for fixed amounts of time, we will use the Random class to wait for a random interval between 2 and 4 seconds for each method call to simulate the work:

  1. In Program.Methods.cs, add two methods that simulate calling a web service and a database stored procedure, as shown in the following code:
    static decimal CallWebService()
    {
      TaskTitle("Starting call to web service...");
      OutputThreadInfo();
      Thread.Sleep((new Random()).Next(2000, 4000));
      TaskTitle("Finished call to web service.");
      return 89.99M;
    }
    static string CallStoredProcedure(decimal amount)
    {
      TaskTitle("Starting call to stored procedure...");
      OutputThreadInfo();
      Thread.Sleep((new Random()).Next(2000, 4000));
      TaskTitle("Finished call to stored procedure.");
      return $"12 products cost more than {amount:C}.";
    }
    
  2. In Program.cs, add statements to start a task to call the web service and then pass its return value to a task that starts the database stored procedure, as shown in the following code:
    SectionTitle("Passing the result of one task as an input into another."); 
    timer.Restart();
    Task<string> taskServiceThenSProc = Task.Factory
      .StartNew(CallWebService) // returns Task<decimal>
      .ContinueWith(previousTask => // returns Task<string>
        CallStoredProcedure(previousTask.Result));
    WriteLine($"Result: {taskServiceThenSProc.Result}");
    WriteLine($"{timer.ElapsedMilliseconds:#,##0}ms elapsed.");
    
  3. Run the code and view the result, as shown in the following output:
    Starting call to web service...
    Thread Id: 4, Priority: Normal, Background: True, Name: .NET ThreadPool Worker
    Finished call to web service.
    Starting call to stored procedure...
    Thread Id: 6, Priority: Normal, Background: True, Name: .NET ThreadPool Worker
    Finished call to stored procedure.
    Result: 12 products cost more than £89.99.
    5,463ms elapsed.
    

You might see two different threads running the web service and stored procedure calls as in the output above (for example, threads 4 and 6), or the same thread might be reused since it is no longer busy.

Nested and child tasks

As well as defining dependencies between tasks, you can define nested and child tasks. A nested task is a task that is created inside another task. A child task is a nested task that must finish before its parent task is allowed to finish.

Let’s explore how these types of tasks work:

  1. In Program.Methods.cs, add two methods, one of which starts a task to run the other, as shown in the following code:
    static void OuterMethod()
    {
      TaskTitle("Outer method starting...");
      Task innerTask = Task.Factory.StartNew(InnerMethod);
      TaskTitle("Outer method finished.");
    }
    static void InnerMethod()
    {
      TaskTitle("Inner method starting...");
      Thread.Sleep(2000);
      TaskTitle("Inner method finished.");
    }
    
  2. In Program.cs, add statements to start a task to run the outer method and wait for it to finish before stopping, as shown in the following code:
    SectionTitle("Nested and child tasks");
    Task outerTask = Task.Factory.StartNew(OuterMethod);
    outerTask.Wait();
    WriteLine("Console app is stopping.");
    
  3. Run the code and view the result, as shown in the following output:
    Outer method starting...
    Inner method starting...
    Outer method finished.
    Console app is stopping.
    

    Although we wait for the outer task to finish, its inner task does not have to finish as well. In fact, the outer task might finish, and the console app could end, before the inner task even starts!

  4. To link these nested tasks as parent and child, we must use a special option. Modify the existing code that defines the inner task to add a TaskCreationOptions value of AttachedToParent, as shown highlighted in the following code:
    Task innerTask = Task.Factory.StartNew(InnerMethod,
      TaskCreationOptions.AttachedToParent);
    
  5. Run the code, view the result, and note that the inner task must finish before the outer task can, as shown in the following output:
    Outer method starting...
    Inner method starting...
    Outer method finished.
    Inner method finished.
    Console app is stopping.
    

The OuterMethod can finish before the InnerMethod, as shown by its writing to the console, but its task must wait, as shown by the console not stopping until both the outer and inner tasks finish.

Wrapping tasks around other objects

Sometimes you might have a method that you want to be asynchronous, but the result to be returned is not itself a task. You can wrap the return value in a successfully completed task, return an exception, or indicate that the task was canceled by using one of the Task static methods, described in the following list:

  • FromResult<TResult>(TResult): Creates a Task<TResult> object whose Result property is the non-task result and whose Status property is RanToCompletion.
  • FromException<TResult>(Exception): Creates a Task<TResult> that's completed with a specified exception.
  • FromCanceled<TResult>(CancellationToken): Creates a Task<TResult> that's completed due to cancellation with a specified cancellation token.

These methods are useful when you need to:

  • Implement an interface that has asynchronous methods, but your implementation is synchronous. This is common for websites and services.
  • Mock asynchronous implementations during unit testing.

Imagine that you need to create a method to validate XML input and the method must conform to an interface that requires a Task<T> to be returned, as shown in the following code:

public interface IValidation
{
  Task<bool> IsValidXmlTagAsync(string input);
}

We could use these helpful FromX methods to return the results wrapped in a task, as shown in the following code:

using System.Text.RegularExpressions;
namespace Packt.Shared;
// note: a static class cannot implement an interface, so this extension
// method simply matches the shape of IValidation.IsValidXmlTagAsync
public static class StringExtensions
{
  public static Task<bool> IsValidXmlTagAsync(this string input)
  {
    if (input == null)
    {
      return Task.FromException<bool>(
        new ArgumentNullException($"Missing {nameof(input)} parameter"));
    }
    if (input.Length == 0)
    {
      return Task.FromException<bool>(
        new ArgumentException($"{nameof(input)} parameter is empty."));
    }
    return Task.FromResult(Regex.IsMatch(input,
      @"^<([a-z]+)([^<]+)*(?:>(.*)</\1>|\s+/>)$"));
  }
}

If the method you need to implement returns a Task (equivalent to void in a synchronous method) then you can return a predefined completed Task object, as shown in the following code:

public Task DeleteCustomerAsync()
{
  // ...
  return Task.CompletedTask;
}

Synchronizing access to shared resources

When you have multiple threads executing at the same time, there is a possibility that two or more of the threads may access the same variable or another resource at the same time, and as a result, may cause a problem. For this reason, you should carefully consider how to make your code thread-safe.

The simplest mechanism for implementing thread safety is to use an object variable as a flag or traffic light to indicate when a shared resource has an exclusive lock applied.

In William Golding’s Lord of the Flies, Piggy and Ralph spot a conch shell and use it to call a meeting. The boys impose a “rule of the conch” on themselves, deciding that no one can speak unless they’re holding the conch.

I like to name the object variable I use for implementing thread-safe code the “conch.” When a thread has the conch, no other thread should access the shared resource(s) represented by that conch. Note that I say should. Only code that respects the conch enables synchronized access. A conch is not a lock.

We will explore a couple of types that can be used to synchronize access to shared resources:

  • Monitor: A type that multiple threads within the same process can use to check whether they should access a shared resource.
  • Interlocked: A type for performing atomic operations on simple numeric types at the CPU level.

Accessing a resource from multiple threads

Let’s create a console app to explore sharing resources between multiple threads:

  1. Use your preferred code editor to add a new console app to the Chapter04 solution/workspace named SynchronizingResourceAccess.
    • In Visual Studio Code, select SynchronizingResourceAccess as the active OmniSharp project.
  2. Globally and statically import the System.Console class.
  3. Add a new class file named SharedObjects.cs.
  4. In SharedObjects.cs, define a static class with a field to store a message that is a shared resource, as shown in the following code:
    static class SharedObjects
    {
      public static string? Message; // a shared resource
    }
    
  5. Add a new class file named Program.Methods.cs.
  6. In Program.Methods.cs, define two methods that both loop five times, waiting for a random interval of up to two seconds and appending either A or B to the shared message resource, as shown in the following code:
    partial class Program
    {
      static void MethodA()
      {
        for (int i = 0; i < 5; i++)
        {
          Thread.Sleep(Random.Shared.Next(2000));
          SharedObjects.Message += "A";
          Write(".");
        }
      }
      static void MethodB()
      {
        for (int i = 0; i < 5; i++)
        {
          Thread.Sleep(Random.Shared.Next(2000));
          SharedObjects.Message += "B";
          Write(".");
        }
      }
    }
    
  7. In Program.cs, delete the existing statements. Add statements to import the namespace for diagnostic types like Stopwatch, and statements to execute both methods on separate threads using a pair of tasks, and wait for them to complete before outputting the elapsed milliseconds, as shown in the following code:
    using System.Diagnostics; // Stopwatch
    WriteLine("Please wait for the tasks to complete.");
    Stopwatch watch = Stopwatch.StartNew();
    Task a = Task.Factory.StartNew(MethodA);
    Task b = Task.Factory.StartNew(MethodB);
     
    Task.WaitAll(new Task[] { a, b });
    WriteLine();
    WriteLine($"Results: {SharedObjects.Message}.");
    WriteLine($"{watch.ElapsedMilliseconds:N0} elapsed milliseconds.");
    
  8. Run the code and view the result, as shown in the following output:
    Please wait for the tasks to complete.
    ..........
    Results: BABABAABBA.
    5,753 elapsed milliseconds.
    

This shows that both threads were modifying the message concurrently. In an actual application, this could be a problem. But we can prevent concurrent access by applying a mutually exclusive lock to a conch object, as well as adding code to the two methods to voluntarily check the conch before modifying the shared resource, which we will do in the following section.

Applying a mutually exclusive lock to a conch

Now, let’s use a conch to ensure that only one thread accesses the shared resource at a time:

  1. In SharedObjects.cs, declare and instantiate an object variable to act as a conch, as shown in the following code:
    public static object Conch = new();
    
  2. In Program.Methods.cs, in both MethodA and MethodB, add a lock statement for the conch around the for statements, as shown highlighted in the following code:
    lock (SharedObjects.Conch)
    {
      for (int i = 0; i < 5; i++)
      {
        Thread.Sleep(Random.Shared.Next(2000));
        SharedObjects.Message += "A";
        Write(".");
      }
    }
    

    Good Practice: Note that since checking the conch is voluntary, if you only use the lock statement in one of the two methods, the shared resource will continue to be accessed by both methods. Make sure that all methods that access a shared resource respect the conch.

  3. Run the code and view the result, as shown in the following output:
    Please wait for the tasks to complete.
    ..........
    Results: BBBBBAAAAA.
    10,345 elapsed milliseconds.
    

Although the time elapsed was longer, only one method at a time could access the shared resource. Either MethodA or MethodB can start first. Once a method has finished its work on the shared resource, then the conch gets released, and the other method has the chance to do its work.

Understanding the lock statement

You might wonder what the lock statement does when it “locks” an object variable (hint: it does not lock the object!), as shown in the following code:

lock (SharedObjects.Conch)
{
  // work with shared resource
}

The C# compiler changes the lock statement into a try-finally statement that uses the Monitor class to enter and exit the conch object (I like to think of it as taking and releasing the conch object), as shown in the following code:

try
{
  Monitor.Enter(SharedObjects.Conch);
  // work with shared resource
}
finally
{
  Monitor.Exit(SharedObjects.Conch);
}

When a thread calls Monitor.Enter on a reference type, it checks to see if some other thread has already taken the conch. If it has, the thread waits. If it has not, the thread takes the conch and gets on with its work on the shared resource. Once the thread has finished its work, it calls Monitor.Exit, releasing the conch. If another thread was waiting, it can now take the conch and do its work. This requires all threads to respect the conch by calling Monitor.Enter and Monitor.Exit appropriately.

Good Practice: You cannot use a value type (struct) as a conch. Monitor.Enter requires a reference type; a value type would be boxed into a new object every time it was passed, so threads would never be taking and releasing the same lock object.
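
A tiny sketch (my own example, not the book's) shows why:

int valueConch = 0;
// lock (valueConch) { } // compile error CS0185: the lock target must be a reference type

// even boxing the value manually would not help, because every boxing
// operation creates a brand-new object, so two threads would never be
// competing for the same lock object
object box1 = valueConch;
object box2 = valueConch;
Console.WriteLine(ReferenceEquals(box1, box2)); // False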

Avoiding deadlocks

Knowing how the lock statement is translated by the compiler to method calls on the Monitor class is also important because using the lock statement can cause a deadlock.

Deadlocks can occur when there are two or more shared resources (each with a conch to monitor which thread is currently doing work on each shared resource), and the following sequence of events happens:

  • Thread X “locks” conch A and starts working on shared resource A.
  • Thread Y “locks” conch B and starts working on shared resource B.
  • While still working on resource A, thread X needs to also work with resource B, and so it attempts to “lock” conch B but is blocked because thread Y already has conch B.
  • While still working on resource B, thread Y needs to also work with resource A, and so it attempts to “lock” conch A but is blocked because thread X already has conch A.
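
Sketched in code (my own example, not the book's), the deadly embrace looks like this; with this timing the two tasks will almost always deadlock, so the sketch waits with a timeout instead of forever:

object conchA = new();
object conchB = new();

Task x = Task.Run(() =>
{
  lock (conchA)
  {
    Thread.Sleep(100); // give task Y time to take conch B
    lock (conchB) { }  // blocked: task Y holds conch B
  }
});

Task y = Task.Run(() =>
{
  lock (conchB)
  {
    Thread.Sleep(100); // give task X time to take conch A
    lock (conchA) { }  // blocked: task X holds conch A
  }
});

bool finished = Task.WaitAll(new[] { x, y }, millisecondsTimeout: 2000);
Console.WriteLine(finished ? "Completed." : "Deadlocked!"); // Deadlocked!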

One way to prevent deadlocks is to specify a timeout when attempting to get a lock. To do this, you must manually use the Monitor class instead of using the lock statement:

  1. In Program.Methods.cs, modify your code to replace the lock statements with code that tries to enter the conch with a timeout, outputs an error if the timeout expires, and exits the monitor only if the conch was actually taken, as shown highlighted in the following code:
    bool lockTaken = false;
    try
    {
      lockTaken = Monitor.TryEnter(
        SharedObjects.Conch, TimeSpan.FromSeconds(15));
      if (lockTaken)
      {
        for (int i = 0; i < 5; i++)
        {
          Thread.Sleep(Random.Shared.Next(2000));
          SharedObjects.Message += "A";
          Write(".");
        }
      }
      else
      {
        WriteLine("Method A timed out when entering a monitor on conch.");
      }
    }
    finally
    {
      if (lockTaken)
      {
        // only exit the monitor if we successfully entered it; otherwise
        // Monitor.Exit would throw a SynchronizationLockException
        Monitor.Exit(SharedObjects.Conch);
      }
    }
    
  2. Run the code and view the result, which should return the same results as before (although either A or B could grab the conch first) but is better code because it will prevent potential deadlocks.

Good Practice: Only use the lock keyword if you can write your code such that it avoids potential deadlocks. If you cannot avoid potential deadlocks, then always use the Monitor.TryEnter method instead of lock, in combination with a try-finally statement, so that you can supply a timeout and one of the threads will back out of a deadlock if it occurs. You can read more about good threading practices at the following link: https://docs.microsoft.com/en-us/dotnet/standard/threading/managed-threading-best-practices.

Synchronizing events

.NET events are not thread-safe, so you should avoid using them in multithreaded scenarios.

After learning that .NET events are not thread-safe, some developers attempt to use exclusive locks when adding and removing event handlers or when raising an event, as shown in the following code:

// event delegate field
public event EventHandler? Shout;
// conch
private object eventConch = new();
// method
public void Poke()
{
  lock (eventConch) // bad idea
  {
    // if something is listening...
    if (Shout != null)
    {
      // ...then call the delegate to raise the event
      Shout(this, EventArgs.Empty);
    }
  }
}

Good Practice: Is it good or bad that some developers do this? It depends on complex factors, so I cannot give a value judgment. You can read more about events and thread safety at the following link: https://docs.microsoft.com/en-us/archive/blogs/cburrows/field-like-events-considered-harmful.

But it is complicated, as explained by Stephen Cleary in the following blog post: https://blog.stephencleary.com/2009/06/threadsafe-events.html.
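
For reference, a commonly recommended alternative (a sketch of my own, not code from the book) is to raise the event with a null-conditional invocation instead of taking a lock, rewriting the Poke method shown above like this:

public void Poke()
{
  // the null-conditional invocation reads the Shout field only once, so a
  // handler being removed on another thread between the check and the call
  // cannot cause a NullReferenceException (although a just-removed handler
  // might still be invoked one last time)
  Shout?.Invoke(this, EventArgs.Empty);
}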

Making CPU operations atomic

Atomic is from the Greek word atomos, which means undividable. It is important to understand which operations are atomic in multithreading because if they are not atomic, then they could be interrupted by another thread partway through their operation. Is the C# increment operator atomic, as shown in the following code?

int x = 3;
x++; // is this an atomic CPU operation?

It is not atomic! Incrementing an integer requires the following three CPU operations:

  1. Load a value from an instance variable into a register.
  2. Increment the value.
  3. Store the value in the instance variable.

A thread could be interrupted after executing the first two steps. A second thread could then execute all three steps. When the first thread resumes execution, it will overwrite the value in the variable, and the effect of the increment or decrement performed by the second thread will be lost!
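
A quick sketch (my own example, not the book's) shows the consequence: two tasks each increment a shared counter 100,000 times with the non-atomic ++ operator, and the final total is usually less than the expected 200,000:

int unsafeCounter = 0;
Task t1 = Task.Run(() => { for (int i = 0; i < 100_000; i++) unsafeCounter++; });
Task t2 = Task.Run(() => { for (int i = 0; i < 100_000; i++) unsafeCounter++; });
Task.WaitAll(t1, t2);
// typically prints a value lower than 200,000 because some increments are lost
Console.WriteLine($"Counter is {unsafeCounter:N0}, expected 200,000.");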

There is a type named Interlocked that can perform atomic actions like Add, Increment, Decrement, Exchange, CompareExchange, And, Or, and Read on the following integer types:

  • System.Int32 (int), System.UInt32 (uint)
  • System.Int64 (long), System.UInt64 (ulong)

Interlocked does not work on numeric types like byte, sbyte, short, ushort, and decimal.

Interlocked can perform atomic operations like Exchange and CompareExchange that swap values in memory on the following types:

  • System.Single (float), System.Double (double)
  • nint, nuint
  • T where T is a reference type, and System.Object (object)
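
As an aside, here is a minimal sketch (my own example, not the book's) of Exchange and CompareExchange used with a double:

double price = 10.0;

// atomically store a new value and get the previous value back
double old = Interlocked.Exchange(ref price, 12.5); // old is 10, price is 12.5

// atomically set price to 15.0 only if it currently equals 12.5; the return
// value is always whatever was stored before the call
double before = Interlocked.CompareExchange(ref price, 15.0, 12.5);
Console.WriteLine($"old={old}, before={before}, price={price}"); // 10, 12.5, 15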

Let’s see it in action:

  1. Declare another field in the SharedObjects class that will count how many operations have occurred, as shown in the following code:
    public static int Counter; // another shared resource
    
  2. In Program.Methods.cs, in both methods A and B, inside the for statement and after modifying the string value, add a statement to safely increment the counter, as shown in the following code:
    Interlocked.Increment(ref SharedObjects.Counter);
    
  3. In Program.cs, after outputting the elapsed time, write the current value of the counter to the console, as shown in the following code:
    WriteLine($"{SharedObjects.Counter} string modifications.");
    
  4. Run the code and view the result, as shown highlighted in the following output:
    Please wait for the tasks to complete.
    ..........
    Results: BBBBBAAAAA.
    13,531 elapsed milliseconds.
    10 string modifications.
    

Observant readers will realize that the existing conch object protects all shared resources accessed within a block of code locked by the conch, and therefore it is unnecessary to use Interlocked in this specific example. But if we had not already been protecting another shared resource like Message, then using Interlocked would be necessary.

Applying other types of synchronization

Monitor and Interlocked provide simple and effective mutually exclusive locking and atomic operations, but sometimes you need more advanced options to synchronize access to shared resources, as described in the following list (a short SemaphoreSlim sketch follows the list):

  • ReaderWriterLock, ReaderWriterLockSlim: These allow multiple threads to be in read mode, one thread to be in write mode with exclusive ownership of the write lock, and one thread that has read access to be in upgradeable read mode, from which the thread can upgrade to write mode without having to relinquish its read access to the resource.
  • Mutex: Like Monitor, this provides exclusive access to a shared resource, except that it is used for inter-process synchronization.
  • Semaphore, SemaphoreSlim: These limit the number of threads that can access a resource or pool of resources concurrently by defining slots. This is known as resource throttling rather than resource locking.
  • AutoResetEvent, ManualResetEvent: Event wait handles allow threads to synchronize activities by signaling each other and by waiting for each other's signals.
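
As promised above, here is a minimal sketch (my own example, not the book's) of resource throttling with SemaphoreSlim, where at most two of the five tasks do their "work" at the same time:

using SemaphoreSlim slots = new(2); // two slots available

Task[] workers = new Task[5];
for (int i = 0; i < workers.Length; i++)
{
  int id = i + 1; // capture a stable copy for the lambda
  workers[i] = Task.Run(async () =>
  {
    await slots.WaitAsync(); // wait for a free slot
    try
    {
      Console.WriteLine($"Task {id} entered at {DateTime.Now:HH:mm:ss}.");
      await Task.Delay(1000); // simulate one second of work
    }
    finally
    {
      slots.Release(); // free the slot for the next waiting task
    }
  });
}
Task.WaitAll(workers);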

Understanding async and await

C# 5 introduced two keywords, async and await, to simplify working with the Task type. They are especially useful for the following:

  • Implementing multitasking for a graphical user interface (GUI).
  • Improving the scalability of web applications and web services.

In Chapter 18, Building Mobile and Desktop Apps Using .NET MAUI, we will see how the async and await keywords can implement multitasking for a GUI.

But for now, let’s learn the theory of why these two C# keywords were introduced, and then later you will see them used in practice.

Improving responsiveness for console apps

One of the limitations with console apps is that you can only use the await keyword inside methods that are marked as async, but C# 7 and earlier do not allow the Main method to be marked as async! Luckily, a new feature introduced in C# 7.1 was support for async in Main:

  1. Use your preferred code editor to add a new console app to the Chapter04 solution/workspace named AsyncConsole.
    • In Visual Studio Code, select AsyncConsole as the active OmniSharp project.
  2. In Program.cs, delete the existing statements, statically import Console, and then add statements to create an HttpClient instance, make a request for Apple’s home page, and output how many bytes it has, as shown in the following code:
    using static System.Console;
    HttpClient client = new();
    HttpResponseMessage response =
      await client.GetAsync("http://www.apple.com/");
    WriteLine("Apple's home page has {0:N0} bytes.",
      response.Content.Headers.ContentLength);
    
  3. Build the project and note that it builds successfully. In .NET 5 and earlier, the project template created an explicit Program class with a non-async Main method, so you would have seen an error message, as shown in the following output:
    Program.cs(14,9): error CS4033: The 'await' operator can only be used within an async method. Consider marking this method with the 'async' modifier and changing its return type to 'Task'. [/Users/markjprice/apps-services-net7/Chapter04/AsyncConsole/AsyncConsole.csproj]
    
  4. You would have had to add the async keyword to the Main method and change its return type to Task. With .NET 6 and later, the console app project template uses the top-level program feature to automatically define the Program class with an asynchronous <Main>$ method for you.
  5. Run the code and view the result, which is likely to have a different number of bytes since Apple changes its home page frequently, as shown in the following output:
    Apple's home page has 40,252 bytes.
    

Working with async streams

With .NET Core 3.0, Microsoft introduced the asynchronous processing of streams.

You can complete a tutorial about async streams at the following link: https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/generate-consume-asynchronous-stream.

Before C# 8.0 and .NET Core 3.0, the await keyword only worked with tasks that return scalar values. Async stream support in .NET Standard 2.1 allows an async method to return one value after another asynchronously.

Let’s see a simulated example that returns three random integers as an async stream:

  1. Use your preferred code editor to add a new console app to the Chapter04 solution/workspace named AsyncEnumerable.
    • In Visual Studio Code, select AsyncEnumerable as the active OmniSharp project.
  2. Globally and statically import the System.Console class.
  3. In Program.cs, delete the existing statements and then at the bottom of Program.cs, create a method that uses the yield keyword to return a random sequence of three numbers asynchronously, as shown in the following code:
    async static IAsyncEnumerable<int> GetNumbersAsync()
    {
      Random r = Random.Shared;
      // simulate work
      await Task.Delay(r.Next(1500, 3000));
      yield return r.Next(0, 1001);
      await Task.Delay(r.Next(1500, 3000));
      yield return r.Next(0, 1001);
      await Task.Delay(r.Next(1500, 3000));
      yield return r.Next(0, 1001);
    }
    
  4. Above GetNumbersAsync, add statements to enumerate the sequence of numbers, as shown in the following code:
    await foreach (int number in GetNumbersAsync())
    {
      WriteLine($"Number: {number}");
    }
    
  5. Run the code and view the result, as shown in the following output:
    Number: 509
    Number: 813
    Number: 307
    

Improving responsiveness for GUI apps

So far in this book, we have only built console apps. Life for a programmer gets more complicated when building web applications, web services, and apps with GUIs such as Windows desktop and mobile apps.

One reason for this is that for a GUI app, there is a special thread: the user interface (UI) thread.

There are two rules for working in GUIs:

  • Do not perform long-running tasks on the UI thread.
  • Do not access UI elements on any thread except the UI thread.

To handle these rules, programmers used to have to write complex code to ensure that long-running tasks were executed by a non-UI thread, but once complete, the results of the task were safely passed to the UI thread to present to the user. It could quickly get messy!

Luckily, with C# 5 and later, you have the use of async and await. They allow you to continue to write your code as if it is synchronous, which keeps your code clean and easy to understand, but underneath, the C# compiler creates a complex state machine and keeps track of running threads. It’s kind of magical! The combination of these two keywords makes the asynchronous method run on a worker thread and, when complete, return the results on the UI thread.
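
Conceptually, the shape of the code is something like the following minimal sketch (my own example in a console context, not the WPF code we are about to write):

async Task LoadEmployeesAsync()
{
  // the long-running work runs on a worker thread, so the calling
  // (for example, UI) thread is not blocked while waiting
  string[] names = await Task.Run(() =>
  {
    Thread.Sleep(2000); // simulate a slow database or web service call
    return new[] { "Nancy", "Andrew", "Janet" }; // made-up sample results
  });

  // execution resumes here when the task completes; in a GUI app this
  // continuation is posted back to the UI thread, so it is safe to
  // update UI elements with the result
  Console.WriteLine($"Loaded {names.Length} employees.");
}

await LoadEmployeesAsync();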

Let’s see an example. We will build a Windows desktop app using WPF that gets employees from the Northwind database in a SQL Server database using low-level types like SqlConnection, SqlCommand, and SqlDataReader.

The Northwind database has a medium complexity and a decent number of sample records. You used it extensively in Chapter 2, Managing Relational Data Using SQL Server, where it was introduced and set up.

Warning! You will only be able to complete this task if you have Microsoft Windows and the Northwind database stored in Microsoft SQL Server. This is the only section in this book that is not cross-platform and modern (WPF is 17 years old!). You can use either Visual Studio 2022 or Visual Studio Code.

At this point, we are focusing on making a GUI app responsive. You will learn about XAML and building cross-platform GUI apps in Chapter 18, Building Mobile and Desktop Apps Using .NET MAUI. Since this book does not cover WPF elsewhere, I thought this task would be a good opportunity to at least see an example app built using WPF even if we do not look at it in detail.

Let’s go!

  1. If you are using Visual Studio 2022 for Windows, add a new WPF Application [C#] project named WpfResponsive to the Chapter04 solution. If you are using Visual Studio Code, use the following command: dotnet new wpf, and make this the active OmniSharp project.
  2. Add a package reference for Microsoft.Data.SqlClient to the project.
  3. In the project file, note the output type is a Windows EXE, the target framework is .NET 7 for Windows (it will not run on other platforms like macOS and Linux), and the project uses WPF, as shown in the following markup:
    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <OutputType>WinExe</OutputType>
        <TargetFramework>net7.0-windows</TargetFramework>
        <Nullable>enable</Nullable>
        <UseWPF>true</UseWPF>
      </PropertyGroup>
      <ItemGroup>
        <PackageReference Include="Microsoft.Data.SqlClient" Version="5.0.0" />
      </ItemGroup>
    </Project>
    
  4. Build the WpfResponsive project to restore packages.
  5. In MainWindow.xaml, in the <Grid> element, add elements to define two buttons, a text box, and a list box, laid out vertically in a stack panel, as shown in the following markup:
    <StackPanel>
      <Button Name="GetEmployeesSyncButton" 
              Click="GetEmployeesSyncButton_Click">
        Get Employees Synchronously</Button>
      <Button Name="GetEmployeesAsyncButton" 
              Click="GetEmployeesAsyncButton_Click">
        Get Employees Asynchronously</Button>
      <TextBox HorizontalAlignment="Stretch" Text="Type in here" />
      <ListBox Name="EmployeesListBox" Height="350" />
    </StackPanel>
    

    Visual Studio 2022 for Windows has good support for building WPF apps and will provide IntelliSense as you edit code and XAML markup. Visual Studio Code does not.

  6. In MainWindow.xaml.cs, import the System.Diagnostics and Microsoft.Data.SqlClient namespaces.
  7. In the MainWindow class, create two string constants for the database connection string and SQL statement, as shown in the following code:
    private const string connectionString = 
      "Data Source=.;" +
      "Initial Catalog=Northwind;" +
      "Integrated Security=true;" +
      "Encrypt=false;" +
      "MultipleActiveResultSets=true;";
    private const string sql =
      "WAITFOR DELAY '00:00:05';" +
      "SELECT EmployeeId, FirstName, LastName FROM Employees";
    
  8. Create event handlers for clicking on the two buttons. They must use the string constants to open a connection to the Northwind database and then populate the list box with the IDs and names of all employees, as shown in the following code:
    private void GetEmployeesSyncButton_Click(object sender, RoutedEventArgs e)
    {
      Stopwatch timer = Stopwatch.StartNew();
      using (SqlConnection connection = new(connectionString))
      {
        try
        {
          connection.Open();
          SqlCommand command = new(sql, connection);
          SqlDataReader reader = command.ExecuteReader();
          while (reader.Read())
          {
            string employee = string.Format("{0}: {1} {2}",
              reader.GetInt32(0), reader.GetString(1), reader.GetString(2));
            EmployeesListBox.Items.Add(employee);
          }
          reader.Close();
          connection.Close();
        }
        catch (Exception ex)
        {
          MessageBox.Show(ex.Message);
        }
      }
      EmployeesListBox.Items.Add($"Sync: {timer.ElapsedMilliseconds:N0}ms");
    }
    private async void GetEmployeesAsyncButton_Click(
      object sender, RoutedEventArgs e)
    {
      Stopwatch timer = Stopwatch.StartNew();
      using (SqlConnection connection = new(connectionString))
      {
        try
        {
          await connection.OpenAsync();
          SqlCommand command = new(sql, connection);
          SqlDataReader reader = await command.ExecuteReaderAsync();
          while (await reader.ReadAsync())
          {
            string employee = string.Format("{0}: {1} {2}",
              await reader.GetFieldValueAsync<int>(0), 
              await reader.GetFieldValueAsync<string>(1), 
              await reader.GetFieldValueAsync<string>(2));
            EmployeesListBox.Items.Add(employee);
          }
          await reader.CloseAsync();
          await connection.CloseAsync();
        }
        catch (Exception ex)
        {
          MessageBox.Show(ex.Message);
        }
      }
      EmployeesListBox.Items.Add($"Async: {timer.ElapsedMilliseconds:N0}ms");
    }
    

    Note the following:

    • Defining an async void method is generally bad practice because it is “fire and forget”. You will not be notified when it completes, any exception it throws cannot be caught by the caller, and there is no way to cancel it, because it does not return a Task or Task<T> that can be used to control it. A sketch of a safer pattern follows these notes.
    • The SQL statement uses the SQL Server command WAITFOR DELAY to simulate processing that takes five seconds. It then selects three columns from the Employees table.
    • The GetEmployeesSyncButton_Click event handler uses synchronous methods to open a connection and fetch the employee rows.
    • The GetEmployeesAsyncButton_Click event handler is marked as async and uses asynchronous methods with the await keyword to open a connection and fetch the employee rows.
    • Both event handlers use a stopwatch to record the number of milliseconds the operation takes and add it to the list box.
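One way to limit the downsides of async void is to keep the event handler as a thin wrapper around an async Task method that contains the real work, because a Task-returning method can be awaited, tested, and composed. The following is a minimal sketch with hypothetical names, not the code used in this project:

    // The real work returns a Task, so callers can await it, observe its
    // exceptions, and (if you add a parameter) pass a CancellationToken.
    private async Task LoadEmployeesAsync()
    {
      // ...the awaitable database work would go here...
      await Task.Delay(100); // placeholder for real asynchronous work
    }

    // The event handler must stay async void to match the delegate signature,
    // but it only awaits the Task-returning method and handles its exceptions.
    private async void GetEmployeesButton_Click(object sender, RoutedEventArgs e)
    {
      try
      {
        await LoadEmployeesAsync();
      }
      catch (Exception ex)
      {
        MessageBox.Show(ex.Message);
      }
    }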
  9. Start the WPF app without debugging.
  10. Click in the text box, enter some text, and note the GUI is responsive.
  11. Click the Get Employees Synchronously button.
  12. Try to click in the text box, and note the GUI is not responsive.
  13. Wait for at least five seconds until the list box is filled with employees.
  14. Click in the text box, enter some text, and note the GUI is responsive again.
  15. Click the Get Employees Asynchronously button.
  16. Click in the text box, enter some text, and note the GUI is still responsive while it performs the operation. Continue typing until the list box is filled with the employees, as shown in Figure 4.2:

Figure 4.2: Loading employees into a WPF app synchronously and asynchronously

  17. Note the difference in timings for the two operations. The UI is blocked when fetching data synchronously, while the UI remains responsive when fetching data asynchronously.
  18. Close the WPF app.

Improving scalability for web applications and web services

The async and await keywords can also be applied on the server side when building websites, applications, and services. From the client application’s point of view, nothing changes (the client might even notice a small increase in the time taken for a request to return), so from a single client’s point of view, using async and await to implement multitasking on the server side arguably makes their experience slightly worse!

On the server side, the thread that is processing a request is returned to the thread pool while it awaits a long-running I/O operation instead of sitting blocked, so it can handle other client requests in the meantime; when the I/O completes, the work resumes on another pool thread. This improves the overall scalability of a web application or web service because more clients can be served simultaneously by the same number of threads.
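For example, the following is a minimal ASP.NET Core sketch, not part of this chapter’s projects, of an asynchronous request handler in a project created from a web template; the endpoint path and downloaded URL are illustrative. While the awaited download is in flight, the thread that accepted the request is free to serve other clients:

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.MapGet("/home-page-size", async () =>
    {
      using HttpClient client = new();

      // The request thread is returned to the pool while this download runs.
      byte[] content = await client.GetByteArrayAsync("https://www.apple.com/");

      return Results.Ok($"Apple's home page has {content.Length:N0} bytes.");
    });

    app.Run();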

Common types that support multitasking

There are many common types that have asynchronous methods that you can await, as shown in the following list:

  • DbContext: AddAsync, AddRangeAsync, FindAsync, and SaveChangesAsync
  • DbSet<T>: AddAsync, AddRangeAsync, ForEachAsync, SumAsync, ToListAsync, ToDictionaryAsync, AverageAsync, and CountAsync
  • HttpClient: GetAsync, PostAsync, PutAsync, DeleteAsync, and SendAsync
  • StreamReader: ReadAsync, ReadLineAsync, and ReadToEndAsync
  • StreamWriter: WriteAsync, WriteLineAsync, and FlushAsync
Good Practice: Any time you see a method that ends in the suffix Async, check to see whether it returns Task or Task<T>. If it does return Task or Task<T>, then you could use it instead of the synchronous non-Async suffixed method. Remember to call it using await and decorate your method with async.
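For example, the following is a minimal sketch, assuming a local text file named numbers.txt exists in the current directory, that uses the awaitable StreamReader method from the list above instead of its blocking counterpart:

    using StreamReader reader = new("numbers.txt");

    // ReadToEndAsync returns Task<string>, so it can be awaited instead of
    // calling the blocking ReadToEnd method.
    string text = await reader.ReadToEndAsync();

    Console.WriteLine($"Read {text.Length:N0} characters.");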

Using await in catch blocks

When async and await were first introduced in C# 5, it was only possible to use the await keyword in a try block, but not in a catch block. In C# 6 and later, it is now possible to use await in both try and catch blocks.
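A minimal sketch of this, with an illustrative URL and log file name, is shown in the following code:

    HttpClient client = new();
    try
    {
      string page = await client.GetStringAsync("https://www.apple.com/");
      Console.WriteLine($"Downloaded {page.Length:N0} characters.");
    }
    catch (HttpRequestException ex)
    {
      // Before C# 6, this await would not have compiled inside a catch block.
      await File.WriteAllTextAsync("error.log", ex.Message);
    }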

Practicing and exploring

Test your knowledge and understanding by answering some questions, getting some hands-on practice, and exploring this chapter’s topics with deeper research.

Exercise 4.1 – Test your knowledge

Answer the following questions:

  1. What information can you find out about a process?
  2. How accurate is the Stopwatch class?
  3. By convention, what suffix should be applied to a method that returns Task or Task<T>?
  4. To use the await keyword inside a method, what keyword must be applied to the method declaration?
  5. How do you create a child task?
  6. Why should you avoid the lock keyword?
  7. When should you use the Interlocked class?
  8. When should you use the Mutex class instead of the Monitor class?
  9. What is the benefit of using async and await in a website or web service?
  10. Can you cancel a task? If so, how?

Exercise 4.2 – Explore topics

Use the links on the following GitHub page to learn more about the topics covered in this chapter:

https://github.com/markjprice/apps-services-net7/blob/main/book-links.md#chapter-4---improving-performance-and-scalability-using-multitasking

Exercise 4.3 – Read more about parallel programming

Packt has a book that goes deeper into the topics in this chapter, Parallel Programming and Concurrency with C# 10 and .NET 6: A modern approach to building faster, more responsive, and asynchronous .NET applications using C#, by Alvin Ashcraft.

https://www.packtpub.com/product/parallel-programming-and-concurrency-with-c-10-and-net-6/9781803243672

Summary

In this chapter, you learned:

  • How to define and start a task.
  • How to wait for one or more tasks to finish.
  • How to control task completion order.
  • How to synchronize access to shared resources.
  • The magic behind async and await.

In the next chapter, you will learn how to use some popular third-party libraries.
