Being mathematically equivalent to a Turing machine may be important for a computer scientist, but it is not so important for an engineer concerned with reliability. Here's the reason I gave two decades ago, in:

    Aaron Sloman, 'Beyond Turing Equivalence',
    in Machines and Thought: The Legacy of Alan Turing (Vol. I),
    Eds. P. J. R. Millican and A. Clark, The Clarendon Press, Oxford,
    pp. 179--219, 1996.
    Originally presented at the Turing90 Colloquium, Sussex University, April 1990.
    http://www.cs.bham.ac.uk/research/projects/cogaff/96-99.html#1
Consider the control requirements for a collection of co-existing interacting sub-systems. It is sometimes possible to produce the required interactions on a single time-shared processor, by providing a collection of concurrent virtual machines, but virtual parallel processes on a single machine sometimes have slightly different causal powers from processes implemented on a collection of machines, even when they do compute the same input/output function. One obvious causal difference that is important from an engineering point of view, though not from a mathematical point of view, is robustness: a bug in the scheduler or memory management system, or even in the central processor, can make a single-processor system go irretrievably awry, whereas a multi-processor implementation could include compensatory mechanisms, for instance one processor detecting the error state of another and doing something to change it.

This distinction can also be relevant to the difference between a single process and several processes running time-shared on one computer. If two (or more) interacting processes are always guaranteed a fair share of the time by the scheduler, then a process that has a bug, or gets stuck in a dead-end search, can be redirected by another. After all, that's exactly the sort of thing that happens in an operating system. So sometimes the advantages of parallelism are to be found even in virtual machines.
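To make the monitoring arrangement concrete, here is a minimal, hypothetical sketch in Python (my illustration, not part of the original paper): a 'worker' thread gets stuck in a dead-end loop, and a 'monitor' thread, which the scheduler guarantees a share of the time, notices the lack of progress and redirects it. The thread names and the flag-based redirection mechanism are assumptions made for the example, standing in for whatever compensatory mechanism a real system would use.

    # Illustrative sketch only: two time-shared threads, one monitoring the other.
    import threading
    import time

    redirect = threading.Event()   # set by the monitor to redirect the worker
    progress = {"count": 0}        # crude indicator of whether the worker advances

    def worker():
        while not redirect.is_set():
            # Simulate a dead-end search: loop without increasing progress["count"].
            time.sleep(0.1)
        print("worker: redirected by monitor, abandoning dead-end search")

    def monitor():
        last = progress["count"]
        while not redirect.is_set():
            time.sleep(0.3)                  # the scheduler gives the monitor its turn
            if progress["count"] == last:    # no progress seen: assume the worker is stuck
                redirect.set()
                print("monitor: no progress detected, redirecting worker")
            last = progress["count"]

    w = threading.Thread(target=worker)
    m = threading.Thread(target=monitor)
    w.start(); m.start()
    w.join(); m.join()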
A less obvious point is that a single-processor system simulating N interacting processors would have to cycle through the changes in those processors in sequence. In doing so it would pass through fragile and meaningless intermediate states that don't occur on a true multi-engine machine, where all the processors change concurrently. During these intermediate states the machine with virtual parallelism may be incapable of responding coherently to certain inputs. The risks can be reduced if the inputs from the environment are handled by separate processors that buffer all incoming signals until the main processor is ready to handle them (as happens in time-shared computers), but then we are again dealing with a multi-processor system, even though some of the processors perform only lowly buffering functions.
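Again purely as an illustration (an assumed sketch, not a mechanism from the paper), the following Python fragment simulates N processors by updating them one at a time. During the update loop the combined state mixes updated and not-yet-updated processors, so signals from the environment are merely buffered, standing in for the separate input-handling processors mentioned above, and are only allowed to affect the system at a cycle boundary, when the state is coherent again.

    # Illustrative sketch: sequential simulation of N processors on one machine.
    from collections import deque

    N = 3
    state = [0] * N          # state of each simulated processor
    input_buffer = deque()   # plays the role of the separate input-handling processors

    def deliver_input(signal):
        # Signals may arrive at any moment; here they are only buffered.
        input_buffer.append(signal)

    def simulate_cycle():
        # Update each virtual processor in turn. During this loop the overall
        # state is a fragile mixture of updated and not-yet-updated processors.
        for i in range(N):
            state[i] += 1                    # stand-in for one processor's step
        # Only at the cycle boundary, when every processor has been updated,
        # are buffered inputs allowed to affect the system.
        while input_buffer:
            signal = input_buffer.popleft()
            state[signal % N] += 10          # arbitrary illustrative effect

    deliver_input(1)    # a signal arrives "while" a cycle is in progress
    simulate_cycle()
    print(state)        # e.g. [1, 11, 1]: the input was applied only at the boundary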
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html