I get ridiculed by young JavaScript and Python coders whenever I say that parallel processing is essential to the future of computing.
The seasoned among them point out to me that the idea of #supercomputers is almost as old as me, that their iPhone can run rings round a typical supercomputer I may have used in my grad school days, and that their Python programmes running on laptops can beat anything I may have written on a CRAY in Fortran or C. Those points seem valid, but they miss the mark.
Firstly, just outrunning a 30-year-old system is not a legitimate measure of current performance.
Secondly, if modern hardware has reached a level where a naïve implementation of an algorithm in a slow scripting language can beat a hand-tuned parallel programme running on an old supercomputer, then today's programmers have an ethical responsibility to optimise their implementations by exploiting the far greater hardware capabilities available to them.
Thirdly, if there is so much excess hardware capacity, the software should soak that up by striving for more accuracy, more precision, more features, whatever, but not by mining bitcoins.
Lastly, just about every consumer-grade machine today—server, desktop, laptop, tablet, phone, single-board computer—is a multicore, multiprocessor monster. Programmers should be exploiting those readily available parallel resources, now. The free, automatic speed-ups that sequential code once enjoyed from Moore's law and Dennard scaling are dead and gone, and fully automatic parallelisation of sequential code by compilers remains a distant dream.
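To make that last point concrete, here is a minimal sketch, in the critics' own Python, of how little code it takes to put every core of an ordinary laptop to work on an embarrassingly parallel job. The workload, the chunk sizes, and the function name are illustrative assumptions of mine, not a prescription; the point is only that the standard library already ships the machinery.

```python
# A minimal sketch (my illustration, not from the post): the standard
# multiprocessing module spreads an embarrassingly parallel, CPU-bound
# job across however many cores the machine has.
from multiprocessing import Pool
import os

def sum_of_squares(n):
    # A stand-in CPU-bound task; any independent chunk of work would do.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * (os.cpu_count() or 4)    # one chunk per core
    with Pool() as pool:                            # Pool() defaults to one worker per core
        results = pool.map(sum_of_squares, chunks)  # chunks run in parallel processes
    print(sum(results))
```

The same pattern carries over to concurrent.futures.ProcessPoolExecutor in Python, OpenMP in C and Fortran, or Rayon in Rust; the cores are sitting there either way.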
#Parallel #programming matters—especially today.