Someone recently suggested a little experiment to me that I found interesting. They made the observation that the classic N-Queens problem offers many self-contained subroutines which could be used to test out web workers. So why not try them out?

Intrigued, I went ahead and did just that, and it made for a very interesting morning before work. Along the way I learned several things about JavaScript and the browser. It also stoked in me an already smoldering curiosity about concurrency in other languages.


It's always seemed curious to me that I hear such rare mention of web workers, as they seem to suggest such a range of practical use cases.

An oft-noted limitation of JavaScript is its single-threaded nature. That is, putting aside asynchronous network and server-side calls, the program runs in a straight path from the top of the file to the bottom. This means that each piece of our program must wait for everything that comes before it. Certainly many subsections must run in sequence, as they depend on some earlier result. However, in any program there are usually some number of processes which are wholly independent. These could run parallel to the rest of the program without harm and would not even require any message passing strategy. It's these parts of an application that would seem to be low hanging fruit for web workers.
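For instance, a queen-safety check in N-Queens depends only on its inputs and touches nothing else in the program, which makes it exactly this kind of low hanging fruit. Here's a hypothetical helper (not from the original solution) of the sort that could run in a worker without any coordination with the main thread:

```javascript
// queens[i] holds the column of the queen already placed on row i.
// Returns true if a new queen at (row, col) attacks none of them.
function isSafe(queens, row, col) {
  for (let r = 0; r < row; r++) {
    const c = queens[r];
    // Conflict if same column, or if the row and column distances
    // are equal (i.e. the two queens share a diagonal).
    if (c === col || Math.abs(c - col) === Math.abs(r - row)) {
      return false;
    }
  }
  return true;
}
```

A pure function like this can be evaluated anywhere, in any order, which is what makes it safe to ship off to a parallel thread.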

Web workers are much like threads in Java. A web worker opens a fairly expensive CPU thread in which JavaScript can execute, with limited scope, at the same time as the main body of your application. The adjective 'expensive' turns out to be really key here.


A couple of points before we begin.

  • Web workers are a browser technology, so our code must run in the browser to implement them. (One library brings them to Node.)
  • Worker files must be served from the same origin as the main thread. This means you can't load them from the local filesystem, so spin up a little Express server to play with them.
  • There are two types of workers: dedicated and shared. I will use dedicated workers here, as shared workers offered no advantages.
Basic Worker

Workers are created one at a time and usually have their code in their own file. Once they have been opened you can pass info into the worker and receive info via message passing. The distilled process for employing a web worker is as follows:

Instantiate a new worker, passing in its file path

const worker = new Worker('src/myWorker.js');

Pass any initiating data to the worker with postMessage

worker.postMessage({ someData, moreData });

The worker will receive this via its onmessage handler

onmessage = function(e) {
  // e.data contains the object passed in
  // do stuff with it
};

When the worker needs to pass data back to the application it can postMessage back.

postMessage({ message });

And the main thread will execute its awaiting onmessage handler.

worker.onmessage = function(e) {
  // do stuff with e.data
};

Finally, when the worker has completed its task, make sure to close it

worker.terminate();

I want to take a minute to note a detail here that is not well explained in the documentation. To receive a message from the worker, I set the handler on the worker object like an event listener (which is what it is), but I do not attach the onmessage handler within the worker to anything.

This has to do with the way the worker object is built. Any onmessage or onerror function within its file is placed on the worker object and acts upon it like an event listener would. So, no need to create an initiation function in the file; just write the code you want to have runnable in your thread.


A sub-worker, on the surface, is exactly like the basic worker illustrated above. The difference is that it is created by a worker and reports data back to the worker which spawned it. Now, I don't know about you, but I just love the thought of spawning a myriad of little tasks that run about simultaneously carrying out my instructions. A perfectly organized little task army. I was very excited to implement this.

There is one problem with this little dream. Chrome doesn't support them. Now, don't panic: there are at least a couple of polyfills out there. I grabbed this one and it got the job done.

To use the polyfill, place subworkers.js with your other workers and import it at the top of your main worker

importScripts('subworkers.js');

Now sub-workers can be spawned just like in the main thread

const subWorker = new Worker('mySubWorker.js');

Sending and receiving messages with sub-workers works just as it does between the primary worker and the main thread. However, if for some reason you find yourself needing to close a sub-worker from the primary worker, you can do so with terminate

subWorker.terminate();

And that's all it takes! Once you've got the polyfill in it really is trivial to raise up a whole group of workers to do your bidding. It is very important to note though that this should not be done ad infinitum. Remember these threads are expensive. Do a little reading first on how many might be safe to run on your machine.
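For N-Queens, a natural way to divide the work is one sub-worker per queen placement in the first row. This sketch (hypothetical, not the original solution) builds the starting boards; each one would then be handed to its own sub-worker with postMessage:

```javascript
// Build one partial board per column of row 0. Each entry fixes the
// first queen and leaves the rest of the search to a sub-worker.
function partitionByFirstRow(n) {
  const starts = [];
  for (let col = 0; col < n; col++) {
    // queens[i] holds the column of the queen on row i; only row 0 is fixed.
    starts.push([col]);
  }
  return starts;
}

// In the primary worker, each chunk would go out roughly like:
//   const sub = new Worker('mySubWorker.js');
//   sub.postMessage({ start: starts[i], n });
```

Eight sub-workers for an 8x8 board is fine; spawning one per branch of the whole search tree is exactly the ad-infinitum trap to avoid.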


While that covers the basics of implementing a single worker as well as sub-workers, there is a whole mess of things that their use required of me in solving N-Queens. What formed from this exercise was a very unambiguous picture of when and when not to use web workers. So, look for that post coming soon!

Until then, if you'd like to play with them in your own browser, use Node to put up a little static file server and get coding!