Since the producers (checkers) and the consumers (packers) don't work at the same time, we can easily fill up a queue with 15,000,000 bulbs via a trivial for loop (this part of the assembly line is not very interesting). This is shown in the following code snippet:
private static final Random rnd = new Random();
private static final int MAX_PROD_BULBS = 15_000_000;
private static final BlockingQueue<String> queue
  = new LinkedBlockingQueue<>();
...
private static void simulatingProducers() {

  logger.info("Simulating the job of the producers overnight ...");
  logger.info(() -> "The producers checked "
    + MAX_PROD_BULBS + " bulbs ...");

  for (int i = 0; i < MAX_PROD_BULBS; i++) {
    queue.offer("bulb-" + rnd.nextInt(1000));
  }
}
Further, let's create a default work-stealing thread pool:
private static ExecutorService consumerService
= Executors.newWorkStealingPool();
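According to its Javadoc, newWorkStealingPool() uses the number of available processors as its target parallelism level. In OpenJDK, the returned executor is actually a ForkJoinPool, so we can inspect that level directly; note that the cast below relies on this implementation detail, hence the instanceof guard (this is a verification sketch, not part of the assembly-line code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;

public class ParallelismCheck {

    public static void main(String[] args) {
        ExecutorService ws = Executors.newWorkStealingPool();

        // In OpenJDK, newWorkStealingPool() returns a ForkJoinPool
        // (an implementation detail, hence the instanceof guard)
        if (ws instanceof ForkJoinPool) {
            ForkJoinPool fjp = (ForkJoinPool) ws;
            System.out.println("Parallelism: " + fjp.getParallelism());
            System.out.println("Processors:  "
                + Runtime.getRuntime().availableProcessors());
        }

        ws.shutdown();
    }
}
```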
For comparison, we will also use the following thread pools:
- A cached thread pool:
private static ExecutorService consumerService
= Executors.newCachedThreadPool();
- A fixed thread pool whose number of threads equals the number of available processors (the same value that the default work-stealing thread pool uses as its parallelism level):
private static final Consumer consumer = new Consumer();
private static final int PROCESSORS
= Runtime.getRuntime().availableProcessors();
private static ExecutorService consumerService
= Executors.newFixedThreadPool(PROCESSORS);
And, let's start 15,000,000 small tasks (one per queued bulb):

int queueSize = queue.size();

for (int i = 0; i < queueSize; i++) {
  consumerService.execute(consumer);
}
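Since execute() returns immediately, the pool must be shut down and awaited before the elapsed time can be measured. A minimal sketch of that step, assuming a millisecond timer and a 5-minute timeout (both choices are illustrative, not taken from the original benchmark):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ShutdownSketch {

    public static void main(String[] args) throws InterruptedException {
        ExecutorService consumerService = Executors.newWorkStealingPool();
        AtomicInteger consumed = new AtomicInteger();

        long startTime = System.currentTimeMillis();

        // A small run for the sketch; the real workload is much larger
        for (int i = 0; i < 1_000; i++) {
            consumerService.execute(consumed::incrementAndGet);
        }

        // Reject new tasks; already submitted tasks keep running
        consumerService.shutdown();

        // Block until every task has finished (or the timeout expires)
        boolean done = consumerService.awaitTermination(5, TimeUnit.MINUTES);

        System.out.println("All tasks done: " + done + ", consumed: "
            + consumed.get() + " in "
            + (System.currentTimeMillis() - startTime) + " ms");
    }
}
```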
The Consumer wraps a simple queue.poll() operation; therefore, it should run pretty fast, as shown in the following snippet:
private static class Consumer implements Runnable {

  @Override
  public void run() {
    String bulb = queue.poll();

    if (bulb != null) {
      // nothing
    }
  }
}
The following graph represents the collected data for 10 runs:
Even if this is not a professional benchmark, we can see that the work-stealing thread pool obtained the best results, while the cached thread pool obtained the worst results.
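A plausible explanation for the cached pool's poor showing is that it spins up a new thread whenever no idle worker is available, so a burst of tasks can inflate the thread count well past the processor count. This behavior can be observed via getLargestPoolSize(); the sketch below assumes the OpenJDK detail that newCachedThreadPool() is backed by a ThreadPoolExecutor (hence the instanceof guard), and the burst size of 100 is arbitrary:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CachedPoolGrowth {

    public static void main(String[] args) throws InterruptedException {
        ExecutorService cached = Executors.newCachedThreadPool();

        // Submit a burst of briefly blocking tasks so no worker is idle
        for (int i = 0; i < 100; i++) {
            cached.execute(() -> {
                try {
                    Thread.sleep(50);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        cached.shutdown();
        cached.awaitTermination(1, TimeUnit.MINUTES);

        // In OpenJDK, newCachedThreadPool() is a ThreadPoolExecutor
        if (cached instanceof ThreadPoolExecutor) {
            System.out.println("Peak threads: "
                + ((ThreadPoolExecutor) cached).getLargestPoolSize());
        }
    }
}
```

On a typical machine, the peak thread count approaches the burst size, since tasks are submitted faster than they complete.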