A new task is simply added to the tasks array. The current opcode is inserted as the first instruction in the new task.
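The following is a minimal sketch of that spawn step. The identifiers (task_t, vm_t, tasks, task_count, spawn_task) and the representation of opcodes as plain integers are assumptions for illustration, not the actual implementation's names.

\begin{verbatim}
#include <stdlib.h>

/* Hypothetical task and VM structures; names are illustrative only. */
typedef struct {
    int    *code;       /* instruction buffer for this task        */
    size_t  length;     /* number of opcodes currently in the task */
    size_t  capacity;   /* allocated size of the buffer            */
} task_t;

typedef struct {
    task_t *tasks;      /* dynamically grown tasks array           */
    size_t  task_count;
    size_t  task_capacity;
} vm_t;

/* Append a fresh task whose first instruction is the current opcode. */
static task_t *spawn_task(vm_t *vm, int current_opcode)
{
    if (vm->task_count == vm->task_capacity) {
        vm->task_capacity = vm->task_capacity ? vm->task_capacity * 2 : 4;
        vm->tasks = realloc(vm->tasks,
                            vm->task_capacity * sizeof *vm->tasks);
    }
    task_t *t = &vm->tasks[vm->task_count++];
    t->capacity = 16;
    t->length   = 0;
    t->code     = malloc(t->capacity * sizeof *t->code);
    t->code[t->length++] = current_opcode;   /* first instruction */
    return t;
}
\end{verbatim}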
Again there is some room for optimization, since it is not always beneficial to spawn new tasks. I believe it is a good thing to spawn as many tasks as we can initially, but once we have finished parallelizing a sequence we should examine the tasks and compare the estimated costs of executing them locally versus remotely. Tasks can very easily be contracted as we see fit, and a simple heuristic for this would be a clear improvement over the plain ``pervasive parallelization'' strategy. Again, the hard part was detecting the opportunities to spawn new tasks; merging some of them back together afterwards is comparatively easy.
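A hedged sketch of such a contraction pass is shown below, reusing the hypothetical vm_t and task_t from the previous sketch. The cost estimators and merge_into are assumed helpers, not part of the real code base, and for simplicity the sketch treats the preceding task as the parent and omits removing merged entries from the array.

\begin{verbatim}
/* Hypothetical helpers; their existence and signatures are assumptions. */
double estimate_local_cost(const task_t *t);   /* run inside the parent    */
double estimate_remote_cost(const task_t *t);  /* spawn + transfer + run   */
void   merge_into(task_t *parent, task_t *child);

/* After pervasive parallelization, fold a task back into its parent
 * whenever remote execution is estimated to cost at least as much as
 * executing it locally. Task 0 is treated as the root and never merged. */
static void contract_tasks(vm_t *vm)
{
    for (size_t i = vm->task_count; i-- > 1; ) {
        task_t *t = &vm->tasks[i];
        if (estimate_remote_cost(t) >= estimate_local_cost(t))
            merge_into(&vm->tasks[i - 1], t);  /* contract: fold back in */
    }
}
\end{verbatim}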