Using dispatch queues in your application

A basic understanding of threads is enough to start using them in your applications. However, once you do, there's a good chance that they suddenly become confusing again. If this happens to you, don't worry; threading is not easy. Let's look at an example of threaded code:

var someBoolean = false

DispatchQueue(label: "MutateSomeBoolean").async {
  // perform some work here
  for _ in 0..<100 {
      continue
  }

  someBoolean = true
}

print(someBoolean)

The preceding snippet demonstrates how you could mutate a variable after performing a task that is too slow to execute on the main thread. In the preceding code, an instance of DispatchQueue is created, and it's given a label. This creates a new queue on which you can execute instructions. The queue represents the background thread from the visualization you looked at earlier.

Then, the async method on DispatchQueue is called with the closure that should be executed on the freshly created queue. The loop inside of this block is performed on the background thread. In the visualization from before, this would roughly compare to the fetch data and parse JSON instructions. Once the task is done, someBoolean is mutated.

The last line in the snippet prints the value of someBoolean. What do you think the value of someBoolean is at that point? If your answer is false, good job! If you thought true, you're not alone. A lot of people who start writing multithreaded, asynchronous code don't immediately grasp how it works.

The following image shows what the code does in the background. It should make it more obvious what happens, and why someBoolean is still false when it gets printed:

Because this code uses a background thread, the main thread can immediately move to the next instruction. This means that the for loop and the print run simultaneously. In other words, someBoolean is printed before it's mutated on the background thread. This is both the beauty and a caveat of using threads. When everything starts running simultaneously, it is hard to keep track of when something completes.

The preceding visualization also exposes a potential problem in the code. A variable is created on the main thread, captured by the closure on the background thread, and then mutated there. Doing this is not recommended; your code could suffer from unintended side effects, such as race conditions where both the main thread and the background thread mutate the same value. Worse, you could accidentally access a Core Data object on a different thread than the one it was created on. Core Data objects are not safe to use across threads, so you should always make sure you access and mutate a managed object only on the thread or queue it belongs to.
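To make the Core Data side of this more concrete, here is a minimal sketch that is not part of the preceding snippet; it assumes a managedObjectContext and a movie managed object already exist. Core Data's perform(_:) method schedules a closure on the queue the context is associated with, which is how you make sure a managed object is only touched on its own queue:

// Hypothetical fragment; `managedObjectContext` and `movie` are assumed to exist.
// perform(_:) runs the closure on the context's own queue, so the managed object
// is never read or mutated from the wrong thread.
managedObjectContext.perform {
  movie.popularity = 4.2
  try? managedObjectContext.save()
}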

So, how can you mutate someBoolean safely and print its value after mutating it? Well, you could use a callback closure to achieve this. The following code is a sample of what that would look like:

func executeSlowOperation(withCallback callback: @escaping (Bool) -> Void) {
  DispatchQueue(label: "MutateSomeBoolean").async {
    // perform some work here
    for _ in 0..<100 {
      continue
    }

    callback(true)
  }
}

executeSlowOperation { result in
  DispatchQueue.main.async {
    someBoolean = result
    print(someBoolean)
  }
}

In this snippet, the slow operation is wrapped in a function that takes a callback closure. Once the task is complete, the callback is executed and passed the resulting value from the background thread. Inside the callback, DispatchQueue.main.async is used to make sure the rest of the code runs on the main thread. If you don't do this, the callback itself would run on the background queue. It's important to keep this in mind whenever you call asynchronous code.

The callback-based approach is excellent if your callback should be executed when a single task is finished. However, there are scenarios where you want several tasks to complete before moving on to the next step. You have already used an approach for this in Chapter 11, Being Proactive with Background Fetch.

Let's review the heart of the background fetch logic that was used in that chapter:

func application(_ application: UIApplication, performFetchWithCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {

  let fetchRequest: NSFetchRequest<Movie> = Movie.fetchRequest()
  let managedObjectContext = persistentContainer.viewContext
  guard let allMovies = try? managedObjectContext.fetch(fetchRequest) else {
    completionHandler(.failed)
    return
  }

  let queue = DispatchQueue(label: "movieDBQueue")
  let group = DispatchGroup()
  let helper = MovieDBHelper()
  var dataChanged = false

  for movie in allMovies {
    queue.async(group: group) {
      group.enter()
      helper.fetchRating(forMovieId: movie.remoteId) { id, popularity in
        guard let popularity = popularity,
          popularity != movie.popularity else {
            group.leave()
            return
        }

        dataChanged = true

        managedObjectContext.persist {
          movie.popularity = popularity
          group.leave()
        }
      }
    }
  }

  group.notify(queue: DispatchQueue.main) {
    if dataChanged {
      completionHandler(.newData)
    } else {
      completionHandler(.noData)
    }
  }
}

When you first saw this code, you were probably able to follow along, but it's unlikely that you were fully aware of how complex this method is. Multiple dispatch queues are used in this snippet. To give you an idea, the code starts off on the main thread. Then, for each movie, a background queue is used to fetch its rating. Whenever a rating comes back, the managed object context's dispatch queue is used to update the corresponding movie. Think about all the switching between dispatch queues that is going on for a second. Quite complex, isn't it?

The background fetch method needs to call a completion handler when it is done fetching all the data. However, a lot of different queues are used, and it's kind of hard to tell when all the fetch operations have completed. This is where dispatch groups come in. A dispatch group can hold onto a set of tasks that are executed either serially or in parallel.

When you call enter() on a dispatch group, you are expected to call leave() on the group later. The enter call tells the group that there is unfinished work in it. When you call leave(), that task is marked as completed. Once all tasks are completed, the group executes a closure on a queue of your choosing. In the example, notify(queue:) is the method used to perform the completion handler on the main queue.

It's okay if this is a bit daunting or confusing right now. As mentioned before, asynchronous programming and threads are pretty complex topics, and dispatch groups are no different.

The most important takeaways regarding dispatch groups are that you call enter() on a group to submit an unfinished task, you call leave() to mark the task finished, and you use notify(queue:) to execute a closure on the queue passed to this method once all tasks are marked as completed.
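To see these three calls in isolation, here is a small, self-contained sketch that is not related to the movie example; it simply simulates two slow tasks and prints a message once both of them have called leave():

import Foundation

let group = DispatchGroup()

for taskNumber in 1...2 {
  group.enter() // tell the group there is unfinished work
  DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
    print("task \(taskNumber) finished")
    group.leave() // mark this task as completed
  }
}

group.notify(queue: DispatchQueue.main) {
  // runs only after every enter() has been balanced by a leave()
  print("all tasks are done")
}

Because notify(queue:) takes a queue, you decide where the final closure runs; passing DispatchQueue.main is why the completion handler in the movie example ends up running on the main queue.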

The approach you've seen so far makes direct use of closures to perform tasks. This causes your methods to become long and relatively complex, since everything is written inline with the rest of your code. You already saw how mixing code that runs on different threads can lead to confusion because it's not obvious which code belongs on which queue. Also, all of this inline code is not particularly reusable; you can't pick up a particular task and execute it on a different queue, for instance, because the code is tightly coupled to a specific dispatch queue.

You can use Operations to make tasks that are easy to reuse and decoupled from running on a specific queue.
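As a brief preview of what that might look like, the following sketch wraps the slow loop from earlier in an Operation subclass; the SlowOperation name is made up for illustration and is not part of the book's project:

import Foundation

// Hypothetical example: a reusable task, decoupled from any particular queue.
class SlowOperation: Operation {
  override func main() {
    // the work that previously lived in an inline closure goes here
    for _ in 0..<100 {
      continue
    }
  }
}

// The same operation can now be submitted to any OperationQueue you like.
let operationQueue = OperationQueue()
operationQueue.addOperation(SlowOperation())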
