- What is a Neural Network?
- Dive into the neuron
- How does a neural network simulate an arbitrary function?
- Why do we need neural networks?

- How to construct a neural network
- Fully connected neural network
- Use a graphical tool to design a neural network
- The "activation function" of the output layer

- How to train a neural network
- Learning algorithm and principle
- Build and train neural networks from scratch
- Rewrite the code using PyTorch
- Use a graphical tool to train a neural network

- Some important problems of neural networks
- Network structure
- Overfitting
- Underfitting
- Overfitting vs underfitting
- Initialization
- Vanishing gradient and exploding gradient

- Convolutional Neural Network (CNN)
- 1D-convolution
- 1D-convolution experiments
- 1D-pooling
- 1D-CNN experiments
- 2D-CNN
- 2D-CNN experiments

- Recurrent Neural Network (RNN)
- Vanilla RNN
- Seq2seq, Autoencoder, Encoder-Decoder
- Advanced RNN
- RNN classification experiment

- Natural language processing
- Embedding: Convert symbols to values
- Text Classification 1
- Text Classification 2
- TextCNN
- Entity recognition
- Word segmentation, POS tagging and chunking
- Sequence tagging in action
- Bidirectional RNN
- BI-LSTM-CRF
- Attention

- Language Models
- n-gram Model: Unigram
- n-gram Model: Bigram
- n-gram Model: Trigram
- RNN Language Model
- Transformer Language Model

- Linear Algebra
- Vector
- Matrix
- Dive into matrix multiplication
- Tensor

Neural networks are powerful because of their ability to approximate functions: in theory, a neural network can approximate any function to arbitrarily small error.

In other words, we can use neural networks to construct arbitrary functions, and therefore arbitrary algorithms.

We use some visual examples here to help you gain some intuitive understanding.

This is the simplest case: we can simulate it with a single neuron and no activation function.

By adjusting the $w, b$ parameters, any straight line can be simulated.
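As a sketch, such a neuron is just the linear function $f(x) = wx + b$ (the function name and the example weights below are illustrative, not from the lesson):

```python
import numpy as np

# A single "neuron" with no activation function: it computes f(x) = w*x + b,
# i.e. a straight line with slope w and intercept b.
def linear_neuron(x, w, b):
    return w * x + b

# Adjusting w and b reproduces any line, e.g. y = 2x + 1.
xs = np.array([0.0, 1.0, 2.0])
ys = linear_neuron(xs, w=2.0, b=1.0)
```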

We use a neuron with a Sigmoid activation function to simulate it.

As the parameter $w$ increases, the neuron's output gradually approaches the target function.
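A minimal sketch of this effect, assuming the step is centered at $x = 0$ (the helper name and the sampled weights are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single sigmoid neuron f(x) = sigmoid(w * (x - c)).
# As w grows, the output approaches a step function that jumps at x = c.
def neuron(x, w, c=0.0):
    return sigmoid(w * (x - c))

# Evaluate just right of the step: larger w pushes the value toward 1.
x = 0.1
outputs = [neuron(x, w) for w in (1.0, 10.0, 100.0)]
```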

We divide it into several steps:

- Use a single neuron to simulate the left half of the function.

- Use a single neuron to simulate the right half of the function (upside down).

- Use another neuron to synthesize the outputs of the first two steps.

The result is a good approximation of the target function.
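The three steps can be sketched with hand-picked weights (the interval $[-1, 1]$ and the steepness value are assumptions for illustration, not learned parameters):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 1: a steep sigmoid rising near x = -1 (the left half).
# Step 2: a steep sigmoid falling near x = +1 (the right half, upside down).
# Step 3: an output neuron sums them (weights 1, 1, bias -1), yielding a
#         rectangular impulse that is ~1 on [-1, 1] and ~0 elsewhere.
def rectangular_impulse(x, steepness=50.0):
    left = sigmoid(steepness * (x + 1.0))    # ~0 -> 1 around x = -1
    right = sigmoid(-steepness * (x - 1.0))  # ~1 -> 0 around x = +1
    return left + right - 1.0

inside = rectangular_impulse(0.0)
outside = rectangular_impulse(3.0)
```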

Using the rectangular impulse function, we can easily approximate any other function, just like the principle of integration.
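Following the integration analogy, here is a sketch that approximates $\sin(x)$ on $[0, 2\pi]$ as a weighted sum of narrow rectangular impulses; the bump count, steepness, and target function are all illustrative choices:

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for very large |z|; the error is < e^-60.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

def bump(x, lo, hi, steepness=200.0):
    """A rectangular impulse that is ~1 on [lo, hi] and ~0 elsewhere."""
    return sigmoid(steepness * (x - lo)) + sigmoid(-steepness * (x - hi)) - 1.0

# Approximate f(x) = sin(x) as a sum of narrow impulses, each scaled by the
# target's value at the interval midpoint -- like a Riemann sum in integration.
def approximate_sin(x, n_bumps=100):
    edges = np.linspace(0.0, 2 * np.pi, n_bumps + 1)
    mids = (edges[:-1] + edges[1:]) / 2.0
    return sum(np.sin(m) * bump(x, lo, hi)
               for m, lo, hi in zip(mids, edges[:-1], edges[1:]))

err = abs(approximate_sin(1.0) - np.sin(1.0))
```

More bumps (and steeper sigmoids) shrink the error, mirroring finer partitions in integration.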

Complete the Broken Line mission and observe the function corresponding to each neuron.

This is the simplest case: we can simulate it with a single neuron and no activation function.

By adjusting the parameters of $w_1, w_2, b$, any plane can be simulated.

We use a neuron with a Sigmoid activation function to simulate it.

We use $f(x, y) = \text{sigmoid}(w_1x + w_2y + b)$. Similar to the case of unary functions, we implement it step by step:

- Use a single neuron to simulate an edge of the function

- Then we can get the following function:

- Finally, the following function can be synthesized:

The final neural network structure is shown in the figure below:
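A hand-weighted sketch of this construction, assuming the "tower" sits on the unit square (the steepness `s` and all weights are illustrative, not learned):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each hidden neuron computes sigmoid(w1*x + w2*y + b): a soft "edge"
# (a half-plane). Four edges bound the unit square; their sum is near 4
# inside the square and at most ~3 outside, so a steep output neuron
# turns the sum into a rectangular "tower".
def tower(x, y, s=50.0):
    edges = (sigmoid(s * (x - 0.0)) +   # right of x = 0
             sigmoid(-s * (x - 1.0)) +  # left of x = 1
             sigmoid(s * (y - 0.0)) +   # above y = 0
             sigmoid(-s * (y - 1.0)))   # below y = 1
    return sigmoid(s * (edges - 3.5))   # output: ~1 iff all four edges hold

inside = tower(0.5, 0.5)
outside = tower(2.0, 0.5)
```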

Using the binary rectangular impulse function, we can easily approximate any other binary function, just like the principle of integration.

Complete the Circle mission and observe the function corresponding to each neuron.

The principle is the same; try to imagine it yourself! 😥

Software programs built on digital circuits can also simulate arbitrary functions, so why invent artificial neural networks?