Modular exponentiation lets us compute \(x^y \bmod N\) even when the numbers involved are extremely large. E.g. if x and y are 100 and 500 bits long respectively, then computing \(x^y \bmod N\) with ordinary multiplication and division is practically impossible — and N itself may be hundreds of bits long too. Such calculations are everywhere in cryptography, and modular exponentiation makes them feasible.

So in modular exponentiation, we ask a simple question:

Given three numbers x, y, and N, compute \(x^y \bmod N\).

Let’s assume x and y are each 30 bits long, i.e. as large as roughly \(2^{30}\). Then \(x^y\) would be on the order of \((2^{30})^{2^{30}}\) — a number with billions of digits. It is completely impractical to compute this number first and only then take the modulo N. The idea behind modular exponentiation is to perform all intermediate computations modulo N themselves, as given below:

\(x \bmod N \rightarrow x^2 \bmod N \rightarrow x^3 \bmod N \rightarrow \dots \rightarrow x^y \bmod N\)
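The chain above can be sketched directly in code. This is a minimal illustration (the function name is my own), reducing modulo N after every multiplication so each intermediate value stays below N:

```python
def mod_exp_naive(x, y, N):
    """Naive modular exponentiation, assuming y >= 1.

    Multiplies by x repeatedly, reducing mod N at every step so no
    intermediate value ever exceeds N. This takes y - 1 multiplications,
    so it is only practical when y itself is small.
    """
    result = x % N
    for _ in range(y - 1):
        result = (result * x) % N
    return result
```

Note that while each individual multiplication is now cheap, the *number* of multiplications is still proportional to the value of y.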

This way every intermediate result stays smaller than N, so no single multiplication takes too long. But there is an inherent problem with the above scheme when implemented on a computer: the number of steps. If the value of y is, say, 1000 bits long, we would need about \(2^{1000}\) multiplications to reach the answer — the cost is linear in the *value* of y, which is exponential in its bit length. We can do much better.

The idea to simplify it further is to do intermediate computations only for the individual ‘set’ bits in the binary representation of y. We obtain the successive powers by repeatedly squaring x at each step, e.g.

\(x \bmod N \rightarrow x^2 \bmod N \rightarrow x^4 \bmod N \rightarrow \dots \rightarrow x^{2^{\lfloor \log y \rfloor}} \bmod N\)
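An iterative sketch of this square-and-multiply idea (function name is my own, and it assumes N > 1): we scan the bits of y from least significant to most, squaring the running power of x at each step and folding it into the result only when the current bit is set.

```python
def mod_exp(x, y, N):
    """Square-and-multiply modular exponentiation, assuming N > 1.

    Scans the bits of y; 'base' walks through x, x^2, x^4, ... mod N,
    and only the set bits of y contribute a multiplication. Total work
    is O(log y) modular multiplications.
    """
    result = 1
    base = x % N
    while y > 0:
        if y & 1:                     # current bit of y is set
            result = (result * base) % N
        base = (base * base) % N      # next power: x^2, x^4, x^8, ...
        y >>= 1
    return result
```

For example, since \(13 = 8 + 4 + 1\), computing \(x^{13}\) multiplies together just the powers \(x\), \(x^4\), and \(x^8\).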

A recursive algorithm can be used to achieve this, whose complexity is \(O(\log y)\) modular multiplications, as given below:
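One possible recursive version (the function name is my own) halves the exponent at every call, using the identity \(x^y = (x^{\lfloor y/2 \rfloor})^2\) when y is even, with one extra factor of x when y is odd:

```python
def mod_exp_rec(x, y, N):
    """Recursive modular exponentiation in O(log y) multiplications.

    Relies on x^y = (x^(y//2))^2 for even y, times an extra x for odd y.
    """
    if y == 0:
        return 1 % N                  # 1 % N also covers the degenerate case N == 1
    half = mod_exp_rec(x, y // 2, N)  # x^(y//2) mod N
    half = (half * half) % N          # square it: x^(2*(y//2)) mod N
    if y % 2 == 1:
        half = (half * x) % N         # odd y needs one more factor of x
    return half
```

Since y is halved on every call, the recursion depth — and hence the number of multiplications — is proportional to the number of bits of y rather than its value.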

And hence this simple algorithm saves us a lot of computing time.