Chapter 26

Priority Queues

The queue Abstract Data Type (ADT) that we have looked at previously can be thought of as modeling a service line. All the entities that are added in are considered equal, and they are taken off in the order they arrive. Not every system works in such an equitable way. In real life there are things like VIP lines. Some systems have a serious reason to treat entities differently. An example of such a system is an emergency room. If you show up to an ER with a paper cut, you can expect to wait for a very long time. This is because the order in which people are served in an ER is based much more on the seriousness of the person's condition than on the time of arrival. Situations where people set up appointments are another example. Arriving long before your appointment time generally does not lead to you being seen significantly earlier.

These types of systems call for a new ADT, the priority queue. A priority queue has the same methods as a normal queue, but the dequeue and peek methods behave differently: they return the highest priority object on the queue, and they consider length of time on the queue only to break ties between items of equal priority.

26.1 Two Approaches

We will consider two different approaches to writing a priority queue in this chapter, both of which use a linked list for storage. The difference between the two is whether extra work is done when new items are added to the queue or whether that work is put off until items are removed. We will discuss both and compare their efficiency, but only code the one that is better. Unlike the normal queue, it is not possible to get O(1) performance for all the methods in a priority queue. For the implementations we will talk about here, some methods will be O(n). We will discuss another implementation that can improve on that in chapter 32.

All implementations of the priority queue will extend the following trait.

package scalabook.adt

/**
 * This trait defines a mutable Priority Queue ADT.
 * @tparam A the type of data stored
 */
trait PriorityQueue[A] {
  /**
   * Add an item to the priority queue.
   * @param obj the item to add
   */
  def enqueue(obj:A)

  /**
   * Remove the item that has the highest priority or, in the case of a tie,
   * has been on the longest.
   * @return the item that was removed
   */
  def dequeue():A

  /**
   * Return the item that has the highest priority or, in the case of a tie,
   * has been on the longest, without removing it.
   * @return the highest priority item
   */
  def peek:A

  /**
   * Tells whether this priority queue is empty.
   * @return true if there are no items on the priority queue, otherwise false.
   */
  def isEmpty:Boolean
}

26.1.1 Searching by Priority

One way to create a priority queue using a linked list is to add elements at the tail just like a regular queue, but then alter the code in dequeue and peek so that they run through the elements to find the first one with the highest priority. In the case of dequeue, that one is then removed.

We can evaluate the quality of this approach by looking at the amount of work done by each of the methods. The enqueue and isEmpty methods will be O(1). However, both dequeue and peek have to search through all the elements to find the one with the highest priority, which requires O(n) time. Looking more closely than just the order, this search is particularly costly because there is no way to know early on which element we are looking for, so every element must be examined on each call.
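To make the comparison concrete, here is a minimal sketch of what this search-based implementation could look like. It is not one of the book's listings; it reuses the doubly-linked list with a sentinel that the sorted implementation below is built on, so only the searching logic is new.

// A sketch of the search-based approach (for illustration, not from the listings).
// enqueue appends at the tail in O(1); dequeue and peek do an O(n) search.
class SearchingListPriorityQueue[A : Manifest](comp: (A,A)=>Int) extends
    PriorityQueue[A] {
  private class Node(var data:A, var prev:Node, var next:Node)
  private val end = new Node(new Array[A](1)(0),null,null)
  end.next = end
  end.prev = end

  def enqueue(obj:A) {
    end.prev.next = new Node(obj,end.prev,end) // append at the tail
    end.prev = end.prev.next
  }

  private def findHighest:Node = {
    var best = end.next // start at the head, which holds the oldest element
    var rover = best.next
    while(rover != end) {
      if(comp(rover.data,best.data) > 0) best = rover // strict > keeps ties FIFO
      rover = rover.next
    }
    best
  }

  def peek:A = findHighest.data

  def dequeue():A = {
    val best = findHighest
    best.prev.next = best.next
    best.next.prev = best.prev
    best.data
  }

  def isEmpty:Boolean = end.next == end
}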

26.1.2 Sorted Linked List

A second approach is to do the additional work when elements are added, keeping the list in sorted order so that we always know where the highest priority item is and can remove it efficiently. Using this approach, the dequeue, peek, and isEmpty methods are all O(1). The enqueue method, however, must perform a search to find the correct insertion location, and is therefore O(n).

Both of these approaches contain at least one method which is O(n). As most applications will have roughly equal numbers of calls to enqueue and dequeue, and the number of calls will be at least O(n) as well, we can expect either of these implementations to have an overall performance of O(n²). This second approach is better for most applications for a few reasons. The first is that calls to peek are O(1). While there is a logical reason why calls to enqueue and dequeue should roughly balance, there could easily be more calls to peek than to either of the other two. In addition, placing items into a sorted list does not require walking the full list. This procedure is like what happens in the inner loop of an insertion sort. You only have to walk through the list until you find the right place to put the new element. For many applications, new elements will have a naturally lower priority than many previous elements. This means that the code will compare against fewer than half the elements, on average, for each enqueue, compared to comparisons of all elements for dequeue and peek in the other approach.

Here is a listing of a sample implementation of a priority queue using a sorted linked list.

package scalabook.adt

class SortedListPriorityQueue[A : Manifest](comp: (A,A)=>Int) extends
    PriorityQueue[A] {
  private class Node(var data:A, var prev:Node, var next:Node)
  private val end = new Node(new Array[A](1)(0),null,null)
  end.next = end
  end.prev = end

  def enqueue(obj:A) {
    // Walk backward from the tail until the sentinel or higher priority data is found.
    var rover = end.prev
    while(rover!=end && comp(obj,rover.data)>0) rover = rover.prev
    rover.next.prev = new Node(obj,rover,rover.next)
    rover.next = rover.next.prev
  }

  def dequeue():A = {
    // The highest priority item is at the head of the list, end.next.
    val ret = end.next.data
    end.next = end.next.next
    end.next.prev = end
    ret
  }

  def peek:A = end.next.data

  def isEmpty:Boolean = end.next==end

  // Used later in the chapter by the collision simulation to discard stale events.
  def removeMatches(f: A => Boolean) {
    var rover = end.next
    while(rover!=end) {
      if(f(rover.data)) {
        rover.prev.next = rover.next
        rover.next.prev = rover.prev
      }
      rover = rover.next
    }
  }
}

Note that a comparison function is passed in at creation so that any type can be used. This implementation uses a doubly-linked list with a sentinel to reduce special cases and keep the code compact. The highest priority items are kept at the head of the list, end.next, and this is where dequeue pulls from. The enqueue method runs from the tail of the list backwards until it gets to the sentinel or finds a node with higher priority data. Note that if each new item had a lower priority than the one before it, this version of enqueue would be O(1).

The test code for this class can look just like that for the regular queue with a few minor changes. In addition to changing the type, there is a comparison function required for the priority queue at creation. Here we can use the compareTo method that is part of Int and most other built-in types in the library that have a natural ordering. This makes higher values have a higher priority. If you were in a situation where lower values should have the higher priority then you could use b.compareTo(a) instead.

  var queue:PriorityQueue[Int] = null

  @Before def initQueue {
    queue = new SortedListPriorityQueue[Int]((a,b) => a.compareTo(b))
  }

  @Test def enqueue100Dequeue100 {
    val nums = Array.fill(100)(util.Random.nextInt)
    nums.foreach(queue.enqueue(_))
    nums.sorted.reverse.foreach(assertEquals(_,queue.dequeue))
  }

Using the priority queue also requires altering the code that checks the dequeue order. The last method shown here is one that uses a queue of random values. The checks can be done against a reversed, sorted version of the array because the comparison makes higher values have higher priority, so they will come off first.

26.1.3 Problems with Arrays

For both the regular stack and the regular queue, it was possible to write an implementation using an array instead of a linked list utilizing a similar amount of code and having a very similar performance. This is not the case for the priority queue using either of these approaches, because they involve either insertion or deletion from random locations in the array, which requires doing O(n) copies to move elements around.1

In one sense, the array-based implementations are still O(n) in the same methods. However, the type of operations they are O(n) in has changed. The linked list approaches are O(n) in comparisons and memory reads, but they are only O(1) in memory writes. Straightforward array-based implementations add O(n) write overhead on top of the comparisons and memory reads.

This overhead is completely unavoidable for the version that keeps the array sorted all the time, as it is not possible to maintain a sorted array without doing inserts that shift elements around. The search and remove approach can get around this if we relax the requirement that items with tied priority come off in the order of arrival. This is not as bad as it might sound, because many applications will have very few ties, if any. With that relaxation, when the highest priority item is found, it can be removed and the last item in the array can be swapped into its place instead of copying all the items down.
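As an illustration of that relaxation, here is a rough sketch of how an unsorted array-based version might be structured. The class and its details are hypothetical, not one of the book's listings.

// Sketch of an unsorted array-based priority queue (hypothetical, for illustration).
// dequeue swaps the last element into the hole instead of shifting everything down,
// which gives up FIFO ordering among ties but keeps the number of writes O(1).
class UnsortedArrayPriorityQueue[A : Manifest](comp: (A,A)=>Int)
    extends PriorityQueue[A] {
  private var data = new Array[A](10)
  private var count = 0

  private def indexOfHighest:Int = {
    var best = 0
    for(i <- 1 until count) if(comp(data(i),data(best)) > 0) best = i
    best
  }

  def enqueue(obj:A) {
    if(count >= data.length) { // grow the array when it fills
      val bigger = new Array[A](data.length*2)
      Array.copy(data,0,bigger,0,count)
      data = bigger
    }
    data(count) = obj
    count += 1
  }

  def dequeue():A = {
    val best = indexOfHighest
    val ret = data(best)
    count -= 1
    data(best) = data(count) // swap the last element into the hole
    ret
  }

  def peek:A = data(indexOfHighest)

  def isEmpty:Boolean = count == 0
}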

26.2 Project Integration: Discrete Event Simulation

One area that makes significant use of priority queues is the field of discrete event simulation. This is a style of simulation in which all changes to the system are modeled as events that happen at specific times and the simulation jumps instantly from one event to the next. The way this is normally done is to have a priority queue that events are placed onto. Calls to dequeue pull events off in order by the time they occur. Most events have the ability to schedule one or more other events. The simulation continues until either a certain amount of time has passed or the queue becomes empty.
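In code, that general pattern might be sketched like this. The Event type and runEvents method are hypothetical; the simulations later in this section flesh the pattern out with concrete event types.

// A sketch of a general discrete event loop (hypothetical Event type).
// Events come off the queue in time order, and handling one may schedule more.
trait Event {
  def time:Double
  def handle():Seq[Event] // returns any newly scheduled events
}

def runEvents(initial:Seq[Event], endTime:Double) {
  val pq = new SortedListPriorityQueue[Event]((a,b) => b.time.compareTo(a.time))
  initial.foreach(pq.enqueue(_))
  var done = false
  while(!done && !pq.isEmpty) {
    val e = pq.dequeue()
    if(e.time > endTime) done = true
    else e.handle().foreach(pq.enqueue(_))
  }
}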

A very standard example of a discrete event simulation is a line of people at a teller. The events in this system are people arriving and getting in line, and tellers finishing with one person and taking another. Simulations of this type allow businesses to determine how many people need to be working during different hours. A minor variation on these can be used to set schedules for traffic lights.

26.2.1 Cell Splitting

A very simple example of a discrete event simulation is a model of cells dividing in a dish. Part of the simplicity of this simulation is that there is only one type of event, a cell dividing. The dish starts off with a single cell. After a certain period of time, that cell divides into two cells. Each of those later divides into two more. This process continues until we stop the simulation. We ignore cell death to keep things simpler.

The delay between when a cell is first "born" and when it splits is not a fixed value. Instead, it has a certain randomness. It is possible to spend a whole semester covering random numbers, how to pull random numbers from various statistical distributions, and how to find appropriate distributions for different data sets. As these details do not concern us, we will say that each cell splits somewhere between a minimum and a maximum number of time units after the split that created it with an equal probability of any value in that range. This uses a uniform distribution, which is what we get from math.random.

The events, in this case, are simply splitting times and can be represented with nothing more than a Double. We begin by placing one value on the queue, which is the time the first cell splits. We also need a counter that keeps track of how many cells there are. It begins at one and is incremented by one for each event, as one cell splits into two. Basic code for this looks like the following.

  def runSimulation:Double = {
    val pq = new SortedListPriorityQueue[Double]((a,b) => b.compareTo(a))
    pq.enqueue(minSplitTime+math.random*(maxSplitTime-minSplitTime))
    var pop = 1
    while(pop<maximumPopulation-1) {
      val time = pq.dequeue()
      pop += 1
      pq.enqueue(time+minSplitTime+math.random*(maxSplitTime-minSplitTime))
      pq.enqueue(time+minSplitTime+math.random*(maxSplitTime-minSplitTime))
    }
    pq.dequeue()
  }

This code will run the simulation, but it does not really do anything useful for us beyond returning the amount of time it took to reach the stop population. We can produce something that is a bit more functional by putting this inside of a class that is Drawable and having it plot the population as a function of time. Code that does that is shown here.

package scalabook.drawing

import swing._
import event._
import java.awt.{Color}
import java.awt.geom._
import java.awt.image.BufferedImage
import collection.mutable
import scalabook.adt._

class DrawCellSim(p:DrawTransform,
    private var width:Int,
    private var height:Int,
    private var maximumPopulation:Int,
    private var minSplitTime:Double,
    private var maxSplitTime:Double,
    private var numSims:Int) extends DrawLeaf {
  val parent = p
  @transient private var propPanel:Component = null
  @transient private var img:BufferedImage = null

  private def runSimulation(splits:mutable.Buffer[Double]) {
    val pq = new SortedListPriorityQueue[Double]((a,b) => b.compareTo(a))
    pq.enqueue(minSplitTime+math.random*(maxSplitTime-minSplitTime))
    while(splits.length+1<maximumPopulation) {
      val time = pq.dequeue()
      splits += time
      pq.enqueue(time+minSplitTime+math.random*(maxSplitTime-minSplitTime))
      pq.enqueue(time+minSplitTime+math.random*(maxSplitTime-minSplitTime))
    }
  }


  
  private def redoImage {
    if(img==null || width!=img.getWidth || height!=img.getHeight) {
      img = new BufferedImage(width,height,BufferedImage.TYPE_INT_ARGB)
    }
    for(i <- 0 until width; j <- 0 until height) img.setRGB(i,j,0) // transparent
    val data = mutable.Buffer[Double]()
    var maxTime = -1.0
    val g = img.createGraphics()
    g.setPaint(Color.black)
    for(i <- 0 until numSims) {
      runSimulation(data)
      if(maxTime < 0) {
        maxTime = data.last*1.2
      }
      var (lastX,lastY) = (0.0,height-height/maximumPopulation)
      for(i <- data.indices) {
        val (x,y) = (data(i)*width/maxTime,height-(i+1)*height/maximumPopulation)
        g.draw(new Line2D.Double(lastX,lastY,x,y))
        lastX = x
        lastY = y
      }
      data.clear
    }
    drawing.refresh
  }


  
  def draw(g:Graphics2D) {
    if(img==null) redoImage
    g.drawImage(img,0,0,null)
  }


  
  def propertiesPanel() : Component = {
    if(propPanel == null) {
      import BorderPanel.Position._
      propPanel = new BorderPanel {
        layout += new BorderPanel {
          val properties = Seq[(String, () => String, (String) => Unit)](
            ("Width", () => width.toString, str => width = str.toInt),
            ("Height", () => height.toString, str => height = str.toInt),
            ("Max Pop.", () => maximumPopulation.toString, str =>
              maximumPopulation = str.toInt),
            ("Min Split Time", () => minSplitTime.toString, str =>
              minSplitTime = str.toDouble),
            ("Max Split Time", () => maxSplitTime.toString, str =>
              maxSplitTime = str.toDouble),
            ("Num Sims", () => numSims.toString, str => numSims = str.toInt)
          )
          layout += new GridPanel(6,1) {
            for((name,_,_) <- properties) {
              contents += new Label(name)
            }
          } -> West
          layout += new GridPanel(6,1) {
            for((_,get,set) <- properties) {
              contents += new TextField(get()) {
                listenTo(this)
                reactions += { case e:EditDone =>
                  try {
                    set(text)
                  } catch {
                    case ex:NumberFormatException =>
                  }
                }
              }
            }
          } -> Center
        } -> North
        layout += Button("Run Sims"){redoImage} -> South
      }
    }
    propPanel
  }


  
 override def toString() = "Cell Simulation"


  
 def toXML : xml.Node = {
 <drawCellSim width={width.toString} height={height.toString}
  maxPop={maximumPopulation.toString} minSplitTime={minSplitTime.toString}
  maxSplitTime={maxSplitTime.toString} numSims={numSims.toString}/>
 }
}


  
object DrawCellSim {
  def apply(p:DrawTransform,data:xml.Node) = {
    new DrawCellSim(p,(data \ "@width").text.toInt,
      (data \ "@height").text.toInt,
      (data \ "@maxPop").text.toInt,
      (data \ "@minSplitTime").text.toDouble,
      (data \ "@maxSplitTime").text.toDouble,
      (data \ "@numSims").text.toInt)
  }

  def apply(p:DrawTransform) = {
    new DrawCellSim(p,200,200,100,1.0,5.0,10)
  }
}

This lets the user set a number of different parameters for running the simulation and plot lines for many different simulations. The plots are scaled to fit the range in the specified size. No attempt is made to show scale values as that is a significantly more challenging problem. Figure 26.1 shows an example using this class in a drawing.

26.2.2 Collision Handling

Another example where discrete event handling makes sense is in realistic hard-sphere collision handling. The events in this system are times when balls run into one another or into other obstacles such as walls. They have to be handled in order because one collision can alter or completely prevent a later one. It is possible to handle events continuously for the whole simulation, though there are reasons to bring all the particles up to a particular time every so often. For example, if you are going to render the simulation or save off the configuration to disk, it helps if all the particles have a location for the same time instead of leaving each at the position of its last event.

Using events for collisions is more accurate and potentially significantly faster than letting particles move forward for a whole timestep, then checking for overlap. The latter approach requires small time steps and it potentially misses details. The obvious example of this is if two small, fast-moving particles should collide in the middle of a timestep. Unless the actual times of collisions are found, they can pass through one another and have no overlap at the end of the step. Similar types of errors can occur when one particle should collide with two or more other particles during a step.

In a purely event-driven simulation, the "timesteps" can also be handled as events. So when these events are reached, all the particle positions are updated and any special handling, such as rendering the particles or the application of non-collisional forces like gravity, can be performed. This type of handling can also be performed by breaking out of the event-handling loop occasionally. With some minor modifications, we can make it so that the DrawBouncingBalls class we wrote previously works in this way.

Figure 26.1: This screen shot shows the output of the cell simulation drawable. The properties that are used are visible on the left side of the window.

The first modification we have to make is not to the bouncing balls, but to our queue. Unlike the other examples that have been mentioned, this example requires that we are able to remove events from the queue without processing them. Each time we process an event that modifies the velocity of a particle, all future events involving that particle must be removed. The most general way for us to add that functionality is with a method like the following.

  def removeMatches(f: A => Boolean) {
    var rover = end.next
    while(rover!=end) {
      if(f(rover.data)) {
        rover.prev.next = rover.next
        rover.next.prev = rover.prev
      }
      rover = rover.next
    }
  }

This takes a predicate function and removes all the elements for which the function is true.
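As a quick illustration of how it is called, the following hypothetical snippet discards every value above a cutoff from an integer queue; the collision code below does the same thing with a pattern match on the event type.

// Hypothetical usage of removeMatches on a simple Int queue.
val pq = new SortedListPriorityQueue[Int]((a,b) => a.compareTo(b))
Seq(3, 42, 7, 99).foreach(pq.enqueue(_))
pq.removeMatches(_ > 10) // leaves only 7 and 3 on the queue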

With this added in, we can make modifications to DrawBouncingBalls itself. We will show the entire class here as more has been modified or added than was left alone.

package scalabook.drawing

import java.awt.{Graphics2D,Color}
import java.awt.geom._
import swing._
import event._
import javax.swing.Timer
import scalabook.adt._

class DrawBouncingBalls(p:DrawTransform,
    private var balls:Vector[DrawBouncingBalls.Ball]) extends DrawLeaf {
  val parent = p
  @transient private var propPanel:Component = null
  @transient private var lTimer:Timer = null
  @transient private var workCopy:Array[DrawBouncingBalls.Ball] = null
  @transient private var pq:SortedListPriorityQueue[CollEvent] = null

  private def timer = {
    if(lTimer==null) {
      if(workCopy==null) workCopy = balls.toArray
      if(pq==null) pq = new SortedListPriorityQueue[CollEvent]((a,b) =>
        b.time.compareTo(a.time))
      lTimer = new Timer(100,Swing.ActionListener(e => {
        for(i <- balls.indices) workCopy(i) = balls(i).copy(vy = balls(i).vy+0.01,
          time = 0.0)
        for(i <- balls.indices) {
          findEventsFor(i,i+1 until balls.length,0.0)
        }
        while(!pq.isEmpty) {
          val event = pq.dequeue()
          event.handle
        }
        balls = Vector(workCopy.map(_.advanceTo(1.0)):_*)
        drawing.refresh
      }))
    }
    lTimer
  }


  
  def draw(g:Graphics2D) {
    g.setPaint(Color.black)
    g.draw(new Rectangle2D.Double(0,0,100,100))
    g.setPaint(Color.green)
    for(DrawBouncingBalls.Ball(x,y,_,_,s,_) <- balls) {
      g.fill(new Ellipse2D.Double((x-s)*100,(y-s)*100,s*200,s*200))
    }
  }


  
  def propertiesPanel() : Component = {
    if(propPanel==null) {
      propPanel = new BorderPanel {
        layout += new GridPanel(1,1) {
          val button = new Button("Start")
          button.action = Action("Start"){
            if(button.text == "Start") {
              timer.start()
              button.text = "Stop"
            } else {
              timer.stop()
              button.text = "Start"
            }
          }
          contents += button
        } -> BorderPanel.Position.North
      }
    }
    propPanel
  }


  
 override def toString = "Bouncing Balls"


  
 def toXML : xml.Node =
 <drawBouncingBalls>
  {balls.map(_.toXML)}
 </drawBouncingBalls>


  
  private def collisionTime(b1:DrawBouncingBalls.Ball,
      b2:DrawBouncingBalls.Ball):Double = {
    val (sx1,sy1) = (b1.x-b1.vx*b1.time, b1.y-b1.vy*b1.time)
    val (sx2,sy2) = (b2.x-b2.vx*b2.time, b2.y-b2.vy*b2.time)
    val radSum = b1.size+b2.size
    val (dx,dy) = (sx1-sx2, sy1-sy2)
    val (dvx,dvy) = (b1.vx-b2.vx, b1.vy-b2.vy)
    val c = dx*dx+dy*dy-radSum*radSum
    val b = 2*(dx*dvx+dy*dvy)
    val a = dvx*dvx+dvy*dvy
    val root = b*b-4*a*c
    if(root<0) {
      -1.0
    } else {
      (-b-math.sqrt(root))/(2*a)
    }
  }


  
  private def findEventsFor(i:Int,against:Seq[Int],curTime:Double) {
    for(j <- against) {
      val t = collisionTime(workCopy(i),workCopy(j))
      if(t>=curTime && t<1.0) {
        pq.enqueue(new BallBallColl(t,i,j))
      }
    }
    for((tfunc,bfunc) <- DrawBouncingBalls.wallInfo) {
      val t = tfunc(workCopy(i))
      if(t>=curTime && t<1.0) {
        pq.enqueue(new BallWallColl(t,i,bfunc))
      }
    }
  }


  
 private trait CollEvent {
 def time:Double
 def handle:Unit
 }


  
  private class BallBallColl(val time:Double,val b1:Int,val b2:Int) extends
      CollEvent {
    def handle {
      val ball1 = workCopy(b1).advanceTo(time)
      val ball2 = workCopy(b2).advanceTo(time)
      val m1 = ball1.size*ball1.size*ball1.size
      val m2 = ball2.size*ball2.size*ball2.size
      val cmvx = (ball1.vx*m1+ball2.vx*m2)/(m1+m2)
      val cmvy = (ball1.vy*m1+ball2.vy*m2)/(m1+m2)
      val dx = ball1.x-ball2.x
      val dy = ball1.y-ball2.y
      val dist = math.sqrt(dx*dx+dy*dy)
      if(dist>1.01*(ball1.size+ball2.size)) {
        println("Warning: collision with big separation. "+b1+" "+b2+" "+dist)
      }
      if(dist<0.99*(ball1.size+ball2.size)) {
        println("Warning: collision with little separation. "+b1+" "+b2+" "+dist)
      }
      val vx1 = ball1.vx-cmvx
      val vy1 = ball1.vy-cmvy
      val nx = dx/dist
      val ny = dy/dist
      val mag = nx*vx1+ny*vy1
      workCopy(b1) = ball1.copy(vx = ball1.vx-1.9*mag*nx, vy = ball1.vy-1.9*mag*ny)
      workCopy(b2) = ball2.copy(vx = ball2.vx+1.9*mag*nx*m1/m2, vy =
        ball2.vy+1.9*mag*ny*m1/m2)
      pq.removeMatches(_ match {
        case bbc:BallBallColl => bbc.b1==b1 || bbc.b2==b1 || bbc.b1==b2 ||
          bbc.b2==b2
        case bwc:BallWallColl => bwc.b==b1 || bwc.b==b2
        case _ => false
      })
      val others = workCopy.indices.filter(b => b!=b1 && b!=b2)
      findEventsFor(b1,others,time)
      findEventsFor(b2,others,time)
    }
  }


  
  private class BallWallColl(val time:Double,val b:Int,val
      newDir:(Double,Double,Double,Double) => (Double,Double)) extends CollEvent {
    def handle {
      val ball = workCopy(b)
      val nx = ball.x+(time-ball.time)*ball.vx
      val ny = ball.y+(time-ball.time)*ball.vy
      val (nvx,nvy) = newDir(ball.x,ball.y,ball.vx,ball.vy)
      workCopy(b) = DrawBouncingBalls.Ball(nx,ny,nvx,nvy,ball.size,time)
      pq.removeMatches(_ match {
        case bbc:BallBallColl => bbc.b1==b || bbc.b2==b
        case bwc:BallWallColl => bwc.b==b
        case _ => false
      })
      findEventsFor(b,workCopy.indices.filter(_!=b),time)
    }
  }
}


  
object DrawBouncingBalls {
  def apply(p:DrawTransform,data:xml.Node) = {
    new DrawBouncingBalls(p,Vector((data \ "ball").map(bxml => {
      val x = (bxml \ "@x").text.toDouble
      val y = (bxml \ "@y").text.toDouble
      val vx = (bxml \ "@vx").text.toDouble
      val vy = (bxml \ "@vy").text.toDouble
      val size = (bxml \ "@size").text.toDouble
      Ball(x,y,vx,vy,size,0.0)
    }):_*))
  }


  
  def apply(p:DrawTransform,minSize:Double = 0.01,maxSize:Double = 0.05) = {
    new DrawBouncingBalls(p,Vector.fill(20) {
      val size = minSize+math.random*(maxSize-minSize)
      Ball(size+math.random*(1-2*size),size+math.random*(1-2*size),
        (math.random-0.5)*0.02,(math.random-0.5)*0.02,size,0.0)
    })
  }


  
  case class Ball(x:Double,y:Double,vx:Double,vy:Double,size:Double,time:Double) {
    def toXML = <ball x={x.toString} y={y.toString}
      vx={vx.toString} vy={vy.toString} size={size.toString}/>

    def advanceTo(t:Double) = {
      val dt = t-time
      copy(x = x+vx*dt, y = y+vy*dt, time = t)
    }
  }


  
 private val wallInfo=Seq[(Ball => Double, (Double,Double,Double,Double) =>
   (Double,Double))](
  (b => if(b.vx<0) b.time-(b.x-b.size)/b.vx else -1, (x,y,vx,vy) =>
   (vx.abs,vy)),
  (b => if(b.vx>0) b.time+(1-b.x-b.size)/b.vx else -1, (x,y,vx,vy) =>
   (-vx.abs,vy)),
  (b => if(b.vy<0) b.time-(b.y-b.size)/b.vy else -1, (x,y,vx,vy) =>
   (vx,vy.abs)),
  (b => if(b.vy>0) b.time+(1-b.y-b.size)/b.vy else -1, (x,y,vx,vy) =>
   (vx,-vy.abs)))
}

The most significant changes are reflected at the top with two new class-level variables and modifications to the code in the timer. The new class-level variables are an array to mirror the vector while the processing is happening and a priority queue that holds a type called CollEvent, which is a trait declared later in the code.

The timer code checks to see if either of those is null and creates them if so. This has to be done for the purposes of serialization. The main changes to the logic occur in the action for the timer. The workCopy variable is initialized to hold a copy of the balls with gravity applied. Then for each ball, all events that happen during the next time unit are found relative to all the balls after it in the array. Only checking against particles later in the list prevents us from finding each collision twice. Then, as long as the priority queue is not empty, we pull off the next event and handle it. Finally, the balls are copied over to a Vector that is used for drawing. The code for drawing, making the properties, and producing XML remains the same.

After the toXML method there are a number of new methods and classes that have been added to the main class. First is a method that takes two Ball objects and returns a collision time for the two. This method assumes the balls move on straight lines, so the distance between them squared is a quadratic in time. We simply have to solve for the point in time when they get to a distance equal to the sum of the radii. Each particle keeps track of the "time" it is at during a step so that has to be accounted for. If there is no collision, this method returns -1.0. Otherwise, it returns the lesser root as that is when the particles hit. The other root would be when they separate if they moved through one another. Here you can see the math for finding the coefficients in the quadratic equation.

\begin{align*}
  d^2(t) &= \bigl((x_1 + v_{x1}t) - (x_2 + v_{x2}t)\bigr)^2 + \bigl((y_1 + v_{y1}t) - (y_2 + v_{y2}t)\bigr)^2 \\
         &= (\Delta x + \Delta v_x t)^2 + (\Delta y + \Delta v_y t)^2 \\
         &= \Delta x^2 + \Delta y^2 + 2(\Delta x\,\Delta v_x + \Delta y\,\Delta v_y)\,t + (\Delta v_x^2 + \Delta v_y^2)\,t^2
\end{align*}
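Setting d²(t) equal to the square of the sum of the radii and moving everything to one side gives exactly the a, b, and c computed in collisionTime, and the lesser root of that quadratic is the moment of contact:

\begin{align*}
  a &= \Delta v_x^2 + \Delta v_y^2, \qquad
  b = 2(\Delta x\,\Delta v_x + \Delta y\,\Delta v_y), \qquad
  c = \Delta x^2 + \Delta y^2 - (r_1 + r_2)^2, \\
  t_{hit} &= \frac{-b - \sqrt{b^2 - 4ac}}{2a} \qquad \text{(no collision when } b^2 - 4ac < 0\text{).}
\end{align*}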

When particle motion is more complex than a straight line, more complex code has to be involved to find the collision time as normally there is not a closed-form solution like there is in this case.

After the method that finds the time of a collision is a method that finds all the events for a particular particle after the current time against some set of other particles and the walls. The way the code has been set up, the structure for handling the particles and the walls is very similar. We loop over the objects we want to check against, calculate an event time, and if that event time falls into the valid range for the current step, we add it to the queue. We will come back to how the wall code is written to work in this way.

This simulation has two types of collisions that are subtypes of the CollEvent trait. They are BallBallColl and BallWallColl. Each of these knows a time when the event happens and contains code for handling the event. The wall class is simpler and is probably worth looking at to see what is going on. First the position of the ball at the time of the event is found. Then we call a function that was passed in at the construction of the event which takes the position and velocity of the ball and returns a new velocity. That new velocity and position are used to build a new ball object which is stored in the array in place of the old one. Then future events involving that ball are removed and new events are found.

The code for ball-to-ball collisions does the same thing, but for two balls, and it involves more math for handling that type of collision. It is not essential that you understand this math unless you want to. It starts by calculating relative masses of the balls, then finding the center of mass velocity of the two balls.2 The code checks the distance against the sum of the radii and prints a warning if it is not close enough to the expected value. These checks can be very helpful for making sure that everything is working correctly, and while they are not technically needed in the final code, it can be helpful to keep them in to prevent problems with future modifications. The modification in velocity for each particle is calculated as a multiple of the projection of the velocity along the line separating the particles. In this code, the constant 1.9 is used so that there is some energy loss. Perfect collisions would use a value of 2.0. Unfortunately, the gravity integration for this code does not conserve energy, so it is helpful to have the collisions damp the system a bit.3
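In symbols, the velocity update performed by handle amounts to the following, where the masses are taken proportional to size cubed, \(\hat{n}\) is the unit vector from ball 2 to ball 1, and the factor 1.9 would be 2 for a perfectly elastic collision:

\begin{align*}
  m_i &= s_i^3, \qquad
  \vec{v}_{cm} = \frac{m_1\vec{v}_1 + m_2\vec{v}_2}{m_1 + m_2}, \qquad
  \hat{n} = \frac{\vec{r}_1 - \vec{r}_2}{\lVert\vec{r}_1 - \vec{r}_2\rVert}, \qquad
  p = \hat{n}\cdot(\vec{v}_1 - \vec{v}_{cm}), \\
  \vec{v}_1' &= \vec{v}_1 - 1.9\,p\,\hat{n}, \qquad
  \vec{v}_2' = \vec{v}_2 + 1.9\,\frac{m_1}{m_2}\,p\,\hat{n}.
\end{align*}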

While the math in this code might be a bit confusing to some, the most interesting piece of the code is probably the way in which collisions with the walls are handled. The event-finding code uses something called wallInfo, and functions from it are passed into the BallWallColl objects. The magic is contained in the declaration of wallInfo at the bottom of the code. This is a sequence of tuples where each element has two functions. These functions represent a wall. The first one tells when a given ball would collide with that wall and the second one gives the modified velocity of the ball after it collides with the wall.

The beauty of this approach is that it makes it fairly simple to add additional barriers. For example, the following tuple defines a barrier that runs diagonally across the middle of the cell that bounces particles toward the top-left corner.

  (b => if(b.vx>0 || b.vy>0) b.time+(1-b.x-b.y)/(b.vx+b.vy) else -1,
  	(x,y,vx,vy) => (-vy.abs,-vx.abs))

This is not technically a wall because it only considers the center of the particle, not the edges. It also allows particles to pass up through it, just not down. However, it demonstrates the flexibility of the approach. With a bit more math, it would be possible to throw in barriers like a round wall that is a cup at the bottom. As soon as the functions are added to the wallInfo sequence, it automatically works with the rest of the code.

If you enter the code that is given above and run the drawing program, then add a bouncing balls drawable, you will see 20 balls of random sizes added in, just like before. You can select that drawable from the tree, and click the button to start the timer going. If you watch for a while, you will see that the large balls bunch up near the bottom and the small ones occasionally get shot higher up into the box. This type of behavior is called equipartition of energy and is a standard feature of physical systems where bodies of different masses interact. The fact that we see it in this system is a good indication that our code is behaving properly.

26.3 End of Chapter Material

26.3.1 Summary of Concepts

  • A priority queue has the same methods as a queue, but the dequeue method functions differently.
    • Instead of always taking the object that has been present the longest, it takes the object with the highest priority.
    • Tied priority can go to the one that has been present longest.
    • We can create a reasonably efficient priority queue using a sorted linked list.
  • Discrete event simulations use priority queues.
    • Events are prioritized by the time they occur.
    • We looked at a cell splitting simulation and collision handling.
    • Collision handling required an extra method to remove certain events.

26.3.2 Exercises

  1. Code the search-based priority queue and do speed tests to compare it to the version that maintains a sorted linked list. For the speed tests, insert elements with random times and dequeue them. Vary the number of elements used across an exponential range to measure how that impacts performance.
  2. Code Array-based implementations of a priority queue using both a sorted Array and an unsorted Array that you search. Do speed tests to compare them to each other and to the linked list-based implementations. For the search-based method with an Array, try both shifting elements down and swapping the last element into the hole left by the one that is removed.
  3. Implement priority queues using ArrayBuffer and ListBuffer to see how efficient they are relative to our "custom" implementations.
  4. Implement some extra wall types for the collision simulation. Consider trying walls that are not lines or writing a method that creates wall segments.
  5. Look up some different distributions for random variables and try the cell splitting simulation using those.

26.3.3 Projects

  1. There are a few ways that you could use a priority queue in your MUD to good effect. All of them involve having a single priority queue that actions can be added to so that they will run at a later time. This will allow things like spells to have delayed effects or to wear off. It can also be used by computer controlled characters and to implement random spawning of characters. The advantage of a priority queue is that you do not have to have checks in every single character/room/item for whether something should happen. Instead, all those things could place events on the priority queue with a delay, and the main thread only pulls off things as needed.

    The idea here uses a sorted list or Array for the priority queue so that the peek operation is fast. The idea is that in most "ticks" there will not be any events on the priority queue to process. In that case, the code only does a peek, sees that the time of the first element is after the current time, then goes on to whatever else needs to be done.

  2. If you are working on the web spider project, you might have noticed that many websites are extremely large. Even if you limit your spider to a single domain, say only pages coming from domains ending with cnn.com, espn.go.com, or google.com, there can still be an extremely large number of pages. If you let the processing proceed in the order that links are found, you are likely to never get to a lot of the data that you are interested in because of resource limitations. One way to deal with this is to make the queue of pages to be read work as a priority queue.

    If you know the types of pages you are looking for and have an idea of what paths they are at on the site you are running through, you can give those pages a higher priority. That way you will visit those pages first. As long as the program keeps finding pages that have a structure you have coded to be interesting, it will work on those pages and not go to other pages that are less likely to have interesting information.

  3. If you are working on either a networked or non-networked graphical game, you can use a priority queue in the main loop of the game in much the same way as is described above for the MUD. It is possible that players will be able to do things that should not have effects for a while. It is also possible that you will have many units in the game that do not need to update every single tick. In either of these cases, you can make a priority queue of events/commands that are only executed when it comes time for them to happen.

  4. This chapter gave a reasonable introduction to the topic of discrete event simulation. Add some capabilities to your simulation workbench that support general discrete event simulations as well as collisional simulations. Make a few specific instances of discrete event simulations that users can create and run. Allow them to adjust certain significant parameters and track significant aspects of the status of the simulation.

  5. The math worksheet does not provide any obvious places to include a priority queue. Two possibilities would be to give the ability for the user to build simple discrete event simulations in a worksheet or to have elements of a worksheet prioritized for processing. The latter might be useful if you either want things that are currently on screen to be calculated before things off screen or if you decide that processing of graphical elements should be a lower priority than processing numeric elements.

    If neither of these appeals to you or fits into what you want to do with your project, you could do the first three exercises in this chapter instead.

  6. The Photoshop® project also does not automatically lend itself to including a priority queue. If you can think of an application, you could write one. Prioritizing draw elements could work, but you have to make sure you do not break the user's control over what covers what in a painter's algorithm.

    If this option does not fit into what you want to do with your project, you could do the first three exercises in this chapter instead.

1In chapter 32, we will see that an array is exactly what we want to use for a priority queue implementation that is more efficient than either of the options considered here.

2Collisions are much easier to calculate in the center of mass frame. In this moving frame of reference, it is as if each ball ran into a stationary wall.

3This code applies a full downward acceleration to each particle at the beginning of the step. If the particle reverses vertical direction part way through the step, the influence of gravity should have been reversed. That does not happen here so particles can gain energy with each bounce off the bottom.
