Computational Complexity Of Neural Networks

Why are neural networks so slow?

In order to motivate why we separate the training and inference phases of neural networks, it can be useful to analyse the computational complexity.

This essay assumes familiarity with analytical complexity analysis of algorithms, including big-O notation. If you need a recap, you should read the essay on computational complexity before continuing.

Forward propagation

Looking at the inference part of a feed-forward neural network, we have forward propagation.

Finding the asymptotic complexity of the forward propagation procedure can be done much like how we found the run-time complexity of matrix multiplication.

Before beginning, you should be familiar with the forward propagation procedure.

We assume the input-vector can be described as:

x \in \mathbb{R}^{n}

Where the first element is the bias unit: x_0 = 1.[1]

The input is treated in the same way as any other activation vector, and has the index x = a^{(0)}. The zeroth element, a_0^{(0)}, is as usual the bias unit with a value of 1.

For forward propagation, we can write:

\begin{aligned} z^{(k)} &= \theta^{(k)} a^{(k-1)} \\ z^{(k)} &\in \mathbb{R}^{m \times 1} \quad \text{where } \theta^{(k)} \in \mathbb{R}^{m \times n} \\ a^{(k)} &= g\big( z^{(k)} \big) \end{aligned}

Where g(x) is the activation function, which is evaluated elementwise. We therefore know that a^{(k)} has the same dimensions as z^{(k)}.

We see that for each layer, a matrix multiplication and an activation function are computed. We know from the previous essay that naive matrix multiplication has an asymptotic run-time of O(n^3), and since g(x) is an elementwise function, we know that it has a run-time of O(n).
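As a concrete illustration, a single layer of forward propagation can be sketched in a few lines of NumPy. The sigmoid used as g here is an assumption for the sake of the example; any elementwise activation works the same way:

```python
import numpy as np

def sigmoid(z):
    # Elementwise activation g(z); costs O(n) for a vector of n entries.
    return 1.0 / (1.0 + np.exp(-z))

def forward_layer(theta, a_prev):
    # The matrix product dominates: for theta with shape (m, n) and
    # a_prev with shape (n,), computing z costs O(m * n) multiplications.
    z = theta @ a_prev
    return sigmoid(z)

# A layer of 3 neurons fed a 4-dimensional activation vector,
# whose first entry is the bias unit.
theta = np.full((3, 4), 0.1)
a_prev = np.array([1.0, 0.5, -0.5, 2.0])
a = forward_layer(theta, a_prev)
```

Stacking calls to a function like this, one per layer, is all forward propagation does, which is why its cost is just the sum of the per-layer matrix products and activations.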

By analysing the dimensions of a feed-forward neural network, we find:

\begin{aligned} \theta^{(0)} &\in \mathbb{R}^{n^{(0)} \times 1} \\ \theta^{(1)} &\in \mathbb{R}^{n^{(1)} \times n^{(0)}} \\ \theta^{(2)} &\in \mathbb{R}^{n^{(2)} \times n^{(1)}} \end{aligned}

More generally, we can write:

\theta^{(k)}= \begin{cases} \mathbb{R}^{n^{(k)}\times 1} \quad &\text{if } k=0\\ \mathbb{R}^{n^{(k)}\times n^{(k-1)}} \quad &\text{if } k>0 \end{cases}

Where n^{(k)} is the number of neurons, including the bias unit, in layer k.

Recalling zad from the previous post, we see that:

\begin{aligned} a &= n^{(k)} \\ z &= n^{(k-1)} \\ d &= n^{(k-2)} \end{aligned}

From this we find that:

\begin{aligned} n_{mul} &= \sum_{k=2}^{n_{layers}} \big( n^{(k)}n^{(k-1)}n^{(k-2)} \big) + \big( n^{(1)}n^{(0)} \cdot 1 \big) \\ n_{g} &= \sum_{k=1}^{n_{layers}} n^{(k)} \end{aligned}

Where n_{mul} is the number of multiplications performed, and n_g is how many times we apply the activation function, g.

It should be noted, however, that we have rather naively assumed that a^{(k)} has the same dimensions as \theta^{(k)}, which is obviously not the case, as a^{(k)} \in \mathbb{R}^{n^{(k)}}. (Thanks Chuanting Zhang for suggesting making this assumption more clear.)

This gives us:

\begin{aligned} \text{time} &= n_{mul} + n_{g} \Leftrightarrow \\ \text{time} &= \sum_{k=2}^{n_{layers}} \big( n^{(k)}n^{(k-1)}n^{(k-2)} \big) + \big( n^{(1)}n^{(0)} \cdot 1 \big) + \sum_{k=1}^{n_{layers}} n^{(k)} \end{aligned}
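The counts above can be evaluated directly. This sketch (the function name is my own) computes n_mul and n_g for a list of layer widths n^{(0)}, …, n^{(L)}, under the same simplifying assumption noted above:

```python
def op_counts(widths):
    """Return (n_mul, n_g) for layer widths [n0, n1, ..., nL],
    using the simplifying assumption from the text that a^(k)
    inherits the dimensions of theta^(k)."""
    n_mul = widths[1] * widths[0] * 1          # the n^(1) * n^(0) * 1 term
    for k in range(2, len(widths)):
        n_mul += widths[k] * widths[k - 1] * widths[k - 2]
    n_g = sum(widths[1:])                      # one g() per neuron per layer
    return n_mul, n_g

n_mul, n_g = op_counts([4, 5, 3])  # a small network: 4 inputs, 5 hidden, 3 outputs
```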

When analysing matrix algorithms, it's common to assume that the matrices are square; that is, they have the same number of rows as columns. By doing this, we find that:

n_{mul} = n_{layers} \cdot n^{3}

If we once again assume that there is the same number of neurons in each layer, and that the number of layers equals the number of neurons in each layer, we find:

n_{mul} = O(n \cdot n^3) = O(n^4)

The same can be done for the activations:

n_g = n_{layers} \cdot n = O(n^2)

The total run-time therefore becomes:

\begin{aligned} &O(n^4+n^2) \Leftrightarrow \\ &O(n^4) \because \forall n \geq 1 : n^4+n^2 \leq 2n^4 \end{aligned}
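We can sanity-check this bound numerically: with n layers of n neurons, the total count n^4 + n^2 should grow by a factor approaching 2^4 = 16 whenever n doubles. A minimal check:

```python
def total_ops(n):
    # n layers of n neurons each: n^4 multiplications plus n^2 activations.
    return n**4 + n**2

# Doubling n should scale the cost by roughly 2^4 = 16 for large n.
ratio = total_ops(200) / total_ops(100)
```

The lower-order n^2 term keeps the ratio just below 16, which is exactly why it is absorbed into O(n^4).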

Backpropagation

We can find the run-time complexity of backpropagation in a similar manner.

Before beginning, you should be familiar with the backpropagation procedure.

We can safely ignore \text{time}_{\nabla_a}, as it will be on the order of 1:

timea=k\text{time}_{\nabla_a} = k

This gives us:

\text{time}_{\text{error}} = \begin{cases} \alpha \cdot \beta \ \big|\ \nabla_a^{(L)} \in \mathbb{R}^{\alpha \times \beta} \quad &\text{if } k=L\\ \big( \theta^{(k+1)^T} \delta^{(k+1)} \big) \odot g'(z^{(k)}) \quad &\text{otherwise} \end{cases}

If we assume that there are n neurons in each layer, we find that:

\begin{aligned} n^2 &= O\big( \alpha \cdot \beta \ \big|\ \nabla_a^{(L)} \in \mathbb{R}^{\alpha \times \beta} \big) \\ n^3 &= O\Big( \big( \theta^{(k+1)^T} \delta^{(k+1)} \big) \odot g'(z^{(k)}) \Big) \end{aligned}

The total run-time for the delta error then becomes:

O( \text{time}_{\text{error}} ) = n^2 + n^3 (L-1)

If we assume there are n layers, as we did during forward propagation, we find:

O( \text{time}_{\text{error}} ) = n^4

In order to find the gradients for all the weights between each pair of layers, we get:

O( \text{time}_{\text{weights}} ) = O( \text{time}_{\text{error}} ) + n^3 = n^4
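The per-update cost can be tallied the same way as before; this sketch (the function name is my own) evaluates the terms from the derivation for n neurons per layer and L layers:

```python
def backprop_ops(n, L):
    # Delta errors: n^2 for the output layer plus n^3 for each of the
    # remaining L - 1 layers.
    time_error = n**2 + n**3 * (L - 1)
    # Computing the weight gradients adds another n^3 term.
    return time_error + n**3

ops = backprop_ops(10, 10)
```

With n = L = 10 the n^3(L - 1) term already dominates, matching the n^4 bound.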

Plugging this into gradient descent we get:

\text{time}_{\text{gradient descent}} = n_{\text{gradient iterations}} \cdot \text{time}_{\text{weights}}

If we assume n gradient iterations:

O(\text{time}_{\text{gradient descent}}) = n \cdot n^4 = n^5

So by assuming that gradient descent runs for n iterations, and that there are n layers each with n neurons, as we did with forward propagation, we find the total run-time of backpropagation to be:

O(n^5)
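This n^5 growth can also be sanity-checked numerically: doubling n should multiply the total training cost by roughly 2^5 = 32. A quick check under the same assumptions:

```python
def training_ops(n):
    # One gradient iteration costs time_weights = n^2 + n^3(n - 1) + n^3
    # with L = n layers of n neurons; gradient descent repeats it n times.
    per_iteration = n**2 + n**3 * (n - 1) + n**3
    return n * per_iteration

# Doubling n should scale the total cost by roughly 2^5 = 32.
ratio = training_ops(200) / training_ops(100)
```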

Discussion

We see that the learning phase (backpropagation) is slower than the inference phase (forward propagation). This difference is made even more pronounced by the fact that gradient descent often has to be repeated many times.

In fact, gradient descent has a convergence rate of O\big( \frac{1}{\epsilon} \big) for a convex function, where \epsilon is the error of the final hypothesis.
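To make this tangible, the sketch below runs gradient descent on a simple convex function, f(x) = x^4 (an arbitrary choice for illustration; it is convex but not strongly convex), and counts how many iterations are needed to push the error below \epsilon. The count grows rapidly as \epsilon shrinks:

```python
def iterations_to_error(eps, lr=0.1):
    # Gradient descent on the convex function f(x) = x^4 (gradient 4x^3),
    # counting iterations until the error f(x) drops below eps.
    x, steps = 1.0, 0
    while x**4 >= eps:
        x -= lr * 4 * x**3
        steps += 1
    return steps

counts = [iterations_to_error(eps) for eps in (1e-1, 1e-2, 1e-3)]
```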

This results in a large constant factor that has real world consequences, but which big-O doesn't show.

One way of making algorithms run faster is parallel execution, for example by running the matrix operations on a GPU.

GPUs are specifically designed to run many matrix operations in parallel since 3D geometry and animation can be expressed as a series of linear transformations.

This is also why we usually train neural networks on GPUs.

Proper Learning

It's worth mentioning that in 1988, Pitt and Valiant formulated an argument that if RP \neq NP (which is currently not known), and if it's NP-hard to differentiate realizable hypotheses from unrealizable hypotheses, then a correct hypothesis h is NP-hard to find.[2]

This, however, doesn't concur with the complexity we found for backpropagation, which is only polynomial.

Whether a hypothesis has realizable examples is defined as:

\begin{aligned} & h_e \in h \ \text{with an error} < \frac{1}{n}\\ & \forall i \in \mathbb{N}, [1;n] \ |\ h_e(x_i) = y_i \implies \text{Realizable} \end{aligned}

Here we see that a hypothesis is realizable if an experimental hypothesis h_e with an error of less than \frac{1}{n} can correctly guess all the examples within that error margin.

Here, h_e would be a neural network.
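In code, the realizability condition is just a universal check over the labelled examples. This toy sketch (the names are illustrative, not from the papers) treats h_e as any callable hypothesis:

```python
def is_realizable(h_e, examples):
    # Realizable iff the experimental hypothesis h_e reproduces every
    # label: for all i, h_e(x_i) == y_i.
    return all(h_e(x) == y for x, y in examples)

# Toy stand-ins: XOR labels are realizable by h_xor but not by h_or.
xor_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
h_xor = lambda x: x[0] ^ x[1]
h_or = lambda x: x[0] | x[1]
```

The hard part, per Pitt and Valiant's argument, is not this check but finding an h_e that passes it.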

Shalev-Shwartz argued against this by classifying neural networks, specifically deep neural networks, as doing improper learning by letting h_e \notin h, and argued that h_e can still agree with h even though the search space is not realizable.[3]

The best run-time for neural networks is an area of active research.

Conclusion

We have derived the computational complexity of a feed-forward neural network, and seen why it's attractive to split the computation into a training phase and an inference phase, since backpropagation, O(n^5), is much slower than forward propagation, O(n^4).

We have considered the large constant factor of gradient descent required to reach an acceptable accuracy, which strengthens the argument.

Furthermore, we have discussed some theoretical aspects of learning representations of functions, including the role of neural networks, but we were unable to reach a definitive conclusion.


  1. When implementing neural networks, it's often the case that all the samples are collected into a matrix with the dimensions x \in \mathbb{R}^{\eta \times n}, where \eta is the total number of samples in the training set. This is done to save time on allocating memory, and is primarily a practical concern, which is why we won't consider it further in the theory. ↩︎

  2. Pitt & Valiant paper ↩︎

  3. Shalev-Shwartz's argument ↩︎
