Grand Central Dispatch

Now that we know closures, we know how to wrap up little blocks of code and pass them around as objects. Beyond AV Foundation, we’ll find that many iOS frameworks work with closures. One of the most popular idioms is to do some work that will take an unpredictable amount of time and then, when it’s done, call a closure we provide to finish up. This is the completion handler pattern, and it’s frequently used for things like network access: “Send a tweet, and when it goes through, call my closure that updates the UI and plays a sound.”
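As a quick sketch of the shape of this pattern, here’s a made-up sendTweet function (the name and its fake “network” work are ours, not a real API) that takes a completion handler and calls it when the work is done:

```swift
// A hypothetical completion-handler-style function: do the slow work,
// then call the closure the caller handed us.
func sendTweet(_ text: String, completion: @escaping (Bool) -> Void) {
    // Imagine network access happening here; for this sketch we
    // fake it synchronously and just "succeed" for non-empty text.
    let succeeded = !text.isEmpty
    completion(succeeded)
}

sendTweet("Hello, GCD!") { succeeded in
    print(succeeded ? "Tweet sent, update the UI" : "Tweet failed")
}
```

The caller doesn’t care *when* the work finishes; it just says what should happen afterward.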

This also raises an interesting question: just who’s executing our closures anyway?

Closures play an important role in solving one of the biggest problems on a platform like the iPhone and iPad. All currently available iOS devices have multiple CPU cores, meaning they can perform multiple tasks at the same time. However, we often think of our code as running in a single straight line of execution. If all our code runs in lock-step order, it seems it’ll have to be on one CPU core. So what’s the other one going to do?

In iOS (and macOS, for that matter), Apple’s solution is a system called Grand Central Dispatch. The idea of GCD is that there are multiple processing queues that take units of work to execute, in order. As a low-level system technology, iOS can examine how many cores the device has and how busy they are, and mete out the work evenly, so we don’t tax one core while leaving the other idling.

And the units of work that the queues manage…are closures!

GCD Queues

Consider the following figure:

images/closures/4-gcd-queues.png

This is a hypothetical arrangement of GCD queues. There don’t have to be exactly as many as there are CPU cores. In fact, an app often has twenty or more queues by default. In the diagram, each colored block is code to be executed, and they flow through the queues left-to-right. Some queues are specific to background tasks like network I/O or audio, while others are utility queues that take whatever work they’re given. So imagine the work our app is doing is represented by the blue blocks on queue 1. And then the blocks with other colors are tasks the system performs by itself: downloading mail, playing music, and what have you.

But let’s also say we have some extra work we want performed in parallel with our app’s main tasks. That’s the blue block on queue 4. We’ve actually already been exposed to this idea of choosing a queue to perform additional work for our app. When we called the AVPlayer’s addPeriodicTimeObserver to add the closure to update the time label, the second parameter was called queue. The documentation says:

A serial dispatch queue onto which block should be enqueued. Passing a concurrent queue is not supported and will result in undefined behavior.

If you pass NULL, the main queue (obtained using dispatch_get_main_queue) is used.

There’s a little jargon here to untangle. A serial queue always executes its closures in order, and only one can be running at a time. With a concurrent queue, it’s possible two or more of its closures could be running at once.
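We can see the serial guarantee in code. In this sketch (the reverse-DNS queue labels are placeholders we made up), closures put on a serial queue run strictly in the order they were enqueued:

```swift
import Foundation

// A serial queue runs one closure at a time, in the order received.
let serial = DispatchQueue(label: "com.example.serial")

// With the .concurrent attribute, two or more of a queue's closures
// may be running at once, so their relative order isn't guaranteed.
let concurrent = DispatchQueue(label: "com.example.concurrent",
                               attributes: .concurrent)

let group = DispatchGroup()
var order: [Int] = []
for i in 1...3 {
    // On the serial queue, these are guaranteed to run 1, 2, 3.
    serial.async(group: group) { order.append(i) }
}
group.wait()  // block until all three closures have finished
print(order)  // [1, 2, 3]
```

On the concurrent queue, the same loop could legally finish in any order, which is exactly why the AVPlayer documentation insists on a serial queue.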

Then there’s the main queue. This is super important, because the main queue is responsible for all UI in an iOS app. The main queue’s job is to look for user input (touches, rotations, or shakes) and update the user interface. It runs this loop over and over, many times a second, so it’s vitally important that we not do long-running, slow work on the main queue: the user would see it as slowdown or janky animation, as needed refreshes of the UI get delayed.

It’s possible to take this advice too far in the wrong direction. For instructional purposes, we’re going to do exactly that.

Bad Idea Department

images/aside-icons/warning.png

What we’re about to do is going to make our app buggy. We’ll fix it in the next section. But if you want to take a pass on coding these steps and just leave everything working as-is, that’s totally fine. We’ll revert these changes at the end of the chapter anyway.

Passing nil for the queue makes our timed observations occur on the main queue, the one that refreshes the UI. Let’s say, in our naive can-do spirit, that we want to run our closure on a different queue. We can totally do that; we just have to specify the queue to use, either by creating our own GCD queue or by asking iOS to give us a utility queue.

To get a utility queue, we call DispatchQueue.global and pass in a quality-of-service argument. This argument is of type DispatchQoS.QoSClass and ranges from high-priority values like userInteractive down to “we’ll get around to it” priorities like background.
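Both options from above look like this in practice (the reverse-DNS label is a placeholder of our own invention):

```swift
import Foundation

// Ask the system for one of its global queues, by quality of service:
let urgent = DispatchQueue.global(qos: .userInteractive)
let whenever = DispatchQueue.global(qos: .background)

// ...or create our own serial queue, identified by a label:
let downloadQueue = DispatchQueue(label: "com.example.downloads")
print(downloadQueue.label)  // com.example.downloads
```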

Let’s try it! We just change the value we pass to queue, but to help find it in the code, here’s the entire call that sets up the periodic observer on the player.

1: playerPeriodicObserver =
2:   player?.addPeriodicTimeObserver(forInterval: interval,
3:     queue: DispatchQueue.global(qos: .default))
4:     { [weak self] currentTime in
5:       self?.updateTimeLabel(currentTime)
6: }

The only difference is line 3, where we use DispatchQueue.global to get a default-quality queue.

Now, we optimistically believe this will make our app faster and better, because we’ve taken work off the main queue. Try running it and see what happens when you press the Play/Pause button.

Well, that didn’t go so well, did it?

Instead of seeing any visible improvement in our app, it is clearly worse. The time label goes several seconds without updating. Worse, we are now seeing errors in the console pane (C), and they’re pretty scary looking:

 2016-08-29 20:02:58.211 PragmaticPodcasts[13002:5709690] This application is
 modifying the autolayout engine from a background thread after the engine
 was accessed from the main thread. This can lead to engine corruption and
 weird crashes.

First, hats off to the Apple engineer who wrote this log statement. “Weird crashes” is exactly what we want to avoid in our app. (Weird Crashes would also be a great name for an all-developer rock band or karaoke posse, but that’s another story.)

But what have we done, and how do we fix it?

GCD and the Main Queue

When we said earlier that the main queue is responsible for handling user-input events and updating the user interface, we left out a key caveat: it is the only queue that should ever do that stuff.

The UI can be fast because its code can assume that it won’t be accessed by two closures running concurrently on different queues. We just broke that assumption.

So, the rule is clear: never touch the UI from a queue other than main. That means no setting text on labels, no setting titles on buttons, no changing background colors or font sizes or anything like that, unless you’re on the main queue.
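GCD even gives us a way to enforce this rule at runtime. In this sketch, updateTimeLabel is a hypothetical stand-in for any UI-touching method of ours; dispatchPrecondition is a real GCD function:

```swift
import Foundation

// A hypothetical UI-updating method, guarded so that it traps
// at runtime if it's ever called from a queue other than main.
func updateTimeLabel() {
    dispatchPrecondition(condition: .onQueue(.main))
    // ...now it's safe to set text on the label...
}
```

Sprinkling a precondition like this into UI code turns a “weird crash someday” into an immediate, obvious failure during development.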

And usually, it’s not a problem. When a button is tapped and calls our code, that’s on the main queue, because it’s the main queue that’s looking for button taps and other touch events in the first place.

But we also mentioned previously that there are some iOS APIs that do weird things with closures, like making a network call and running our closure when it’s done. Those APIs don’t use the main queue, because they don’t want to block the main queue on something that takes a long time like network access or file I/O.

In a case like that, there must be a way to safely update the UI, even if our code isn’t on the main queue. That’s the situation we’re in now, so let’s look at the solution.

Along with allowing us to create queues or look up system queues, GCD also lets us send work to a queue of our choosing. The DispatchQueue type has several related methods named async or sync (or variants on those terms) that take a closure with no parameters and no return value. That closure is then executed on the given queue. The difference is that sync-style calls put the work on the other queue and wait for it to finish, while async puts the work on the other queue and returns immediately, leaving the current queue free to continue.
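Here’s a small demonstration of the difference (the queue label is made up). After sync returns, the closure has definitely run; after async returns, it may not have yet, so we need some signal, here a semaphore, to know when it’s done:

```swift
import Foundation

let worker = DispatchQueue(label: "com.example.worker")

// sync: the caller waits, so the result is ready on the next line.
var syncResult = 0
worker.sync { syncResult = 42 }
print(syncResult)  // 42

// async: the call returns right away; we wait on a semaphore
// that the closure signals when it has actually finished.
var asyncResult = 0
let done = DispatchSemaphore(value: 0)
worker.async {
    asyncResult = 42
    done.signal()
}
done.wait()
print(asyncResult)  // 42
```

Note that calling sync from a serial queue onto itself would deadlock: the queue would be waiting on work it can never start.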

The following figure shows this arrangement, with a global queue using DispatchQueue.main.async() to put work onto the main queue.

images/closures/gcd-dispatch-async-to-main.png

The trick of moving work off of and back onto the main queue involves three steps:

  1. Our app’s code (“1”) is running on the main queue and creates a closure (“2”) for a background task, which it puts on a global queue. The main queue can then immediately continue to start processing user input again or performing other app code (the blocks marked “n”).

  2. The code on the global queue runs for however long it needs; this is why putting work on other queues is great for long-running and unpredictable things like downloading data from the network. When it’s done and needs to update the UI, it creates another closure (“3”) and sends it back to the main queue.

  3. The UI-updating closure reaches the front of the main queue and is executed, allowing it to update the UI.

So with that in mind, we’ll rewrite how we set up our periodic observer. In this version, our code to create the periodic observer is “1” in the diagram, the observer itself is “2,” and updating the time label is “3.”

1: playerPeriodicObserver =
2:   player?.addPeriodicTimeObserver(forInterval: interval,
3:     queue: DispatchQueue.global(qos: .default))
4:     { [weak self] currentTime in
5:       DispatchQueue.main.async {
6:         self?.updateTimeLabel(currentTime)
7:       }
8: }

The change here is lines 5-7, and it’s really simple. All we’ve done is take our one line of UI code (the call to self?.updateTimeLabel()) and wrap it in a brand-new closure that we then send to the main queue. We do that by identifying the queue we want (DispatchQueue.main) and then calling its async method.

Notice we’re again using the trailing-closure syntax; this could also be written as DispatchQueue.main.async(execute: { ... }), but that’s a lot uglier.

Anyway, the point is we have addressed our bug by doing right by the main queue; our modification to the timeLabel is now explicitly performed on the main queue. Run it again and everything works smoothly, with no pauses and no scary error messages.

Of course, this was a case of getting ourselves in trouble just to prove a point. The approach we used in the beginning of the chapter was the right one: send nil for the queue, and AVPlayer will do the work of running our closure on the main queue (or, in terms of the diagram, “3” is put directly on the main queue by the AVPlayer and there is no closure “2”). Feel free to undo the changes we made in this section to get back to that point. The downloadable sample code for all later chapters will do so as well.

Where this technique is truly useful is in dealing with long-running tasks like network access. It’s a very common pattern to download data on a background queue, let that take however long it needs (without blocking the main queue), and, only when it’s done, create another closure to update the UI and send it back to the main queue. That’s exactly what we’ll be doing in the next few chapters.
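The whole round trip can be sketched in a dozen lines. Everything here is a hypothetical stand-in for real network code (downloadEpisodeList and its hardcoded count are ours), but the queue choreography is the real pattern:

```swift
import Foundation

// Do slow work on a background queue, then bounce the result
// back to the main queue, where it's safe to update the UI.
func downloadEpisodeList(completion: @escaping (Int) -> Void) {
    DispatchQueue.global(qos: .utility).async {
        // Pretend this is a long network download...
        let episodeCount = 42
        // ...then hand the result back on the main queue,
        // where labels and table views can be safely updated.
        DispatchQueue.main.async {
            completion(episodeCount)
        }
    }
}
```

The caller just provides a completion handler and never has to think about which queue it started on.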
