How does a neural network simulate an arbitrary function?

Overview

The power of the neural network lies in its simulation ability: in theory, it can simulate arbitrary functions with arbitrarily small error.


In other words, we can use neural networks to construct arbitrary functions and thereby obtain arbitrary algorithms.


Here we use some visual examples to help you gain an intuitive understanding.

Simulation of unary functions

Straight line

This is the simplest case: we can simulate it using a single neuron with no activation function.

f(x) = wx + b

(parameters: w = 1, b = 0)

By adjusting the parameters w and b, any straight line can be simulated.
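
As a minimal sketch in Python with NumPy (the function and parameter names are ours, not from any particular library):

```python
import numpy as np

def linear_neuron(x, w=1.0, b=0.0):
    # A single neuron with no activation: f(x) = w*x + b.
    return w * x + b

x = np.linspace(-2, 2, 5)
print(linear_neuron(x))                # the line y = x
print(linear_neuron(x, w=2.0, b=1.0))  # a different line: y = 2x + 1
```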

Step function

We use a neuron with a sigmoid activation function, f(x) = \text{sigmoid}(wx + b), to simulate it.

(parameters: w = 30, b = 0)

As the parameter w increases, the neuron's output approaches the step function more and more closely.
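
A small sketch of this sharpening effect (the specific values of w tried here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-1.0, 1.0, 9)
for w in (1, 5, 30, 100):
    # As w grows, sigmoid(w*x) sharpens toward a 0/1 step at x = 0.
    print(f"w={w:>3}:", np.round(sigmoid(w * x), 3))
```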

Rectangular pulse function

We divide it into several steps:

  1. Use a single neuron to simulate the left half of the function.
f_1(x) = \text{sigmoid}(w_1 x + b_1)
(parameters: w_1 = 20, b_1 = 20)
  2. Use a single neuron to simulate the right half of the function (upside down).
f_2(x) = \text{sigmoid}(w_2 x + b_2)
(parameters: w_2 = 20, b_2 = -20)
  3. Use another neuron to combine the outputs of the first two steps.
f_3(x, y) = \text{sigmoid}(w_{31} x + w_{32} y + b_3)
(parameters: w_{31} = 10, w_{32} = -10, b_3 = -5)

The result is a good approximation of the target function.
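
Putting the three neurons together with the parameter values above gives a short runnable sketch (Python with NumPy; the function names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pulse(x):
    # Layer 1: two step-like neurons with the parameters above.
    f1 = sigmoid(20 * x + 20)   # rising edge near x = -1
    f2 = sigmoid(20 * x - 20)   # rising edge near x = +1, subtracted below
    # Layer 2: one neuron that is high only between the two edges.
    return sigmoid(10 * f1 - 10 * f2 - 5)

x = np.array([-3.0, -1.5, 0.0, 1.5, 3.0])
print(np.round(pulse(x), 3))  # ~0 outside [-1, 1], ~1 inside
```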

Other unary functions

Using rectangular pulse functions, we can easily approximate any other unary function by stacking many narrow pulses side by side, just like the principle of integration.

(parameter: n = 10)
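
Here is a sketch of the idea, assuming we approximate sin(x) on [0, 2π] with n = 10 pulses. For brevity, each pulse is simplified to the difference of two sigmoid steps (which behaves like the three-neuron construction above), and the steepness k = 50 is an illustrative choice:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pulse(x, left, right, k=50.0):
    # Difference of two sigmoid steps: ~1 on [left, right], ~0 elsewhere.
    return sigmoid(k * (x - left)) - sigmoid(k * (x - right))

def approximate(target, x, a, b, n=10):
    # Split [a, b] into n intervals; weight each pulse by the target's
    # value at the interval midpoint, then sum -- like a Riemann sum.
    edges = np.linspace(a, b, n + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    return sum(target(m) * pulse(x, l, r)
               for m, l, r in zip(mids, edges[:-1], edges[1:]))

x = np.linspace(0.0, 2 * np.pi, 50)
err = np.max(np.abs(approximate(np.sin, x, 0.0, 2 * np.pi) - np.sin(x)))
print(round(float(err), 3))  # a coarse fit; the error shrinks as n grows
```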

Experiment

Complete the Broken Line mission and observe the function corresponding to each neuron.


Simulation of binary functions

Plane

This is the simplest case: we can simulate it using a single neuron with no activation function.

f(x, y) = w_1 x + w_2 y + b

(parameters: w_1 = 0, w_2 = 1, b = 0)

By adjusting the parameters w_1, w_2, and b, any plane can be simulated.
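
The same idea in code, as a minimal sketch (plain Python; the names are ours):

```python
def plane_neuron(x, y, w1=0.0, w2=1.0, b=0.0):
    # A two-input neuron with no activation: f(x, y) = w1*x + w2*y + b.
    return w1 * x + w2 * y + b

# With the default parameters above, the surface is the plane z = y.
print(plane_neuron(3.0, 2.0))                  # -> 2.0
print(plane_neuron(3.0, 2.0, w1=1.0, b=-1.0))  # plane z = x + y - 1 -> 4.0
```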

Binary step function

We use a neuron with a sigmoid activation function to simulate it.

f(x, y) = \text{sigmoid}(w_1 x + w_2 y + b)

(parameters: w_1 = 0, w_2 = 30, b = 0)

Binary rectangular pulse function

Similar to the case of unary functions, we implement it step by step:

  1. Use a single neuron to simulate an edge of the function
f_1(x, y) = \text{sigmoid}(w_{11} x + w_{12} y + b_1)
(parameters: w_{11} = 0, w_{12} = 20, b_1 = 20)
  2. Then, combining two opposite edges as in the unary case, we can get the following function:
(parameters: w_i = 10, w_j = -10, b_i = -5)
  3. Finally, another neuron synthesizes these into the binary rectangular pulse:
(parameters: w_{51} = 10, w_{52} = -10, w_{53} = 10, w_{54} = -10, b_5 = -15)

The final neural network structure is shown in the figure below:
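
As a sketch of that structure (Python with NumPy): the final neuron uses the w_{5*} and b_5 values above, while the edge positions at ±1 and the parameters of the three remaining edge neurons are assumptions mirroring the edge given in step 1.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_pulse(x, y):
    # 2D rectangular pulse: ~1 inside the square [-1, 1] x [-1, 1], ~0 outside.
    # Layer 1: four edge neurons.
    f1 = sigmoid(20 * y + 20)   # edge at y = -1 (parameters from step 1)
    f2 = sigmoid(20 * y - 20)   # edge at y = +1 (assumed, mirroring f1)
    f3 = sigmoid(20 * x + 20)   # edge at x = -1 (assumed)
    f4 = sigmoid(20 * x - 20)   # edge at x = +1 (assumed)
    # Layer 2: one neuron combining the edges with the weights from step 3.
    return sigmoid(10 * f1 - 10 * f2 + 10 * f3 - 10 * f4 - 15)

for x, y in [(0, 0), (2, 0), (0, 2), (2, 2), (-2, -2)]:
    print((x, y), round(float(binary_pulse(x, y)), 3))
```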

Other binary functions

Using the binary rectangular pulse function, we can easily approximate any other binary function, just like the principle of integration.

Experiment

Complete the Circle mission and observe the function corresponding to each neuron.


Simulation of multivariate functions

The principle is the same; try to imagine it yourself! 😥

Question

We already have digital circuits and software algorithms, so why do we need neural networks?

Software programs built on digital circuits can also simulate arbitrary functions, so why invent artificial neural networks?