What Is Exascale Computing?

Exascale computing refers to machines capable of at least one exaFLOPS: 10^18, or one billion billion, floating-point calculations per second. The idea began to take serious shape around 2009, when the opportunities and challenges were weighed in countless seminars, webinars, and conferences, and a great deal of research is still going on. Manufacturers like Intel and NVIDIA are making steady progress toward more ambitious exascale products. An exascale machine is 1,000 times faster than a petascale machine. Countries are also racing to build ever more powerful commercial-grade machines; the most intense race is between America and China (no surprise there).
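
To make these scales concrete, here is a quick back-of-the-envelope sketch in Python. The constants are just the standard definitions of petascale and exascale, not measurements of any real machine.

```python
# Rough scale comparison between petascale and exascale machines.
# These are the standard definitions, not benchmarks of real systems.

PETAFLOPS = 10**15   # petascale: 10^15 floating-point operations per second
EXAFLOPS = 10**18    # exascale:  10^18 FLOPS, i.e. a billion billion per second

speedup = EXAFLOPS // PETAFLOPS
print(speedup)  # 1000: an exascale machine is 1000x a petascale one

# Time for a person doing one calculation per second to match what an
# exascale machine does in a single second:
seconds_per_year = 365 * 24 * 3600
years = EXAFLOPS / seconds_per_year
print(f"{years:.1e} years")  # on the order of tens of billions of years
```

That last figure, roughly 32 billion years, is longer than the age of the universe, which is one way to appreciate what 10^18 operations per second means.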


The opportunities of exascale computing are highly compelling. There are complex problems that remained intractable even at petascale but that, at least in theory, exascale computing can begin to address. It will not only chip away at old problems but also enable new kinds of services. It will push the frontiers of materials research, biotech, cancer research, renewable energy, reverse engineering of the human brain, deeper DNA analysis, the analysis and design of fission and fusion reactors, and many other fields. Because it advances analysis and prediction in these domains, it will have a profound impact on each country's industrial competitiveness.

It will also change the role of simulation and modeling in science and engineering. Modeling everything from molecules to the brain to the universe, with some useful accuracy, may become possible. It stands to transform aerospace, astrophysics, biological systems, medical systems, climate analysis, nuclear systems and engineering, materials science, security, energy systems, and more.


Exascale computing requires enormous power: a naive scale-up of current technology would put consumption in the hundreds of megawatts, approaching a gigawatt, and more efficient designs are being tested almost every month. Reducing the power consumption of such machines is one of the biggest challenges. The industry once claimed exascale machines would be available by 2018, but the timeline now appears pushed out, perhaps by more than five years.
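
The arithmetic behind the power worry is simple. A sketch, assuming the widely cited ~20 MW power target for an exascale system and a rough ~2 GFLOPS-per-watt figure for petascale-era efficiency (both numbers are illustrative, not specs of any particular machine):

```python
# Back-of-the-envelope energy efficiency required for exascale.
# 20 MW is a commonly cited power target; 2 GFLOPS/W is a rough
# petascale-era efficiency figure. Both are illustrative assumptions.

TARGET_FLOPS = 10**18      # one exaFLOPS
POWER_BUDGET_W = 20e6      # ~20 megawatts

flops_per_watt = TARGET_FLOPS / POWER_BUDGET_W
print(f"{flops_per_watt / 1e9:.0f} GFLOPS per watt needed")  # 50 GFLOPS/W

# For contrast: simply scaling up ~2 GFLOPS/W technology would need
# 10^18 / 2e9 = 500 MW, which is why early projections spoke of
# gigawatt-scale machines and why efficiency research matters so much.
naive_power_w = TARGET_FLOPS / 2e9
print(f"{naive_power_w / 1e6:.0f} MW with petascale-era efficiency")
```

The gap between 500 MW and 20 MW, a factor of 25 in energy efficiency, is the real engineering challenge hiding behind the headline FLOPS number.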

Programmability is another issue. Intel has been iterating toward an exascale-friendly architecture for years: its Larrabee GPU project, begun around 2006, didn't work out; the Knights Ferry manycore prototype that followed never became a product; and Knights Hill, a planned successor in the line, was cancelled as well. That leaves the Xeon Phi architecture, which is expected to carry the effort forward, but not for another four to five years.

Run-time errors are a huge concern (at least to me). The root of the issue is clock frequency: it has become very difficult to raise clock speeds, so performance must come instead from a sheer number of processing elements sharing the same clock rates. With so many components running in parallel, failures become far more frequent, so research is moving toward the identification and correction of errors rather than toward higher clock frequencies.
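
A small sketch shows why error rates explode with component count: if failures are independent, the system-level mean time between failures (MTBF) shrinks roughly in proportion to the number of components. The node MTBF and node counts below are made-up round numbers for illustration.

```python
# Why reliability becomes a first-order problem at exascale: with N
# independent, identical components, system MTBF is roughly
# component MTBF divided by N. All figures here are illustrative.

def system_mtbf_hours(component_mtbf_hours: float, n_components: int) -> float:
    """Approximate system MTBF assuming independent, identical components."""
    return component_mtbf_hours / n_components

node_mtbf = 5 * 365 * 24  # assume each node fails about once in 5 years

for nodes in (1_000, 100_000, 1_000_000):
    mtbf = system_mtbf_hours(node_mtbf, nodes)
    print(f"{nodes:>9} nodes -> system MTBF about {mtbf:.2f} hours")
```

With a million nodes the system-wide MTBF drops to a few minutes, which is why exascale software must detect, tolerate, and correct faults as a matter of routine rather than treat them as exceptions.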

It is a classic fact of concurrency that the more processors a chip has, the more concurrency it offers. Imagine, then, the degree of concurrency these systems will have, given their enormous processor counts. We will need new programming paradigms, mathematical constructs, and so on to control it, and new programming languages will have to be exascale-aware and multi-scale in nature.
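
One reason raw processor counts alone are not enough, and new programming models matter, is Amdahl's law: any serial fraction of a program caps its speedup no matter how many processing elements are available. A minimal sketch (the serial fractions are example values):

```python
# Amdahl's law: speedup on P processors for a program whose serial
# (non-parallelizable) fraction is s. The fractions below are examples
# chosen to show how brutally a tiny serial part limits exascale gains.

def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Maximum theoretical speedup under Amdahl's law."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for s in (0.01, 0.001, 0.0001):
    speedup = amdahl_speedup(s, 1_000_000)
    print(f"serial fraction {s}: at most {speedup:,.0f}x on 1M cores")
```

Even a 1% serial fraction limits a million-core machine to roughly a 100x speedup, which is exactly why exascale programming models must squeeze serial work and coordination overhead toward zero.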

Even though meeting these challenges will be costly, it is worth the effort.


Exascale computing is highly promising, and the industry is putting a lot of effort into achieving it. In pursuing this goal, the industry has made progress in many other sectors as well. The timeline for the availability of such systems is still not clearly in sight, but a single breakthrough in the next few years might land us in a much higher class of computing. It is encouraging that companies and governments (albeit each within its own ecosystem of companies) are also working together toward these goals.