Artificial Neural Networks (ANNs) are computing systems inspired by biological neural networks. They are organized into three main types of layers:
- Input Layer
- Hidden Layer(s)
- Output Layer
The main types of ANN are distinguished by:
- Layer connectivity
- Information flow
- Learning strategy
1. Feed Forward Neural Network (FFNN)
- Definition: The simplest type of ANN, in which data flows in only one direction, from input to output (a minimal forward-pass sketch follows this list).
- Structure: May contain no hidden layer (single-layer) or one or more hidden layers (multi-layer).
- Characteristics:
- No loops or feedback.
- Fast and easy to implement.
- Limited for complex problems; a single-layer FFNN cannot solve non-linearly separable tasks such as XOR.
- Applications:
- Simple classification
- Image recognition (basic)
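
The following is a minimal sketch of a forward pass, assuming an illustrative network with 3 inputs, 4 hidden units, 2 outputs, and random example weights; it only demonstrates the one-way flow of data, not a trained model:

```python
# Minimal feedforward pass in NumPy: data moves input -> hidden -> output,
# with no loops or feedback connections.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

W1 = rng.normal(size=(3, 4))   # input-to-hidden weights (illustrative)
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 2))   # hidden-to-output weights (illustrative)
b2 = np.zeros(2)

x = np.array([0.5, -1.2, 3.0])   # a single example input vector

h = sigmoid(x @ W1 + b1)         # forward only: input -> hidden
y = sigmoid(h @ W2 + b2)         # forward only: hidden -> output
print(y)
```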

2. Fully Connected Neural Network (FCNN)
- Definition: In this ANN, every neuron in one layer is connected to every neuron in the next layer.
- Structure: Typically organized as a feedforward stack of one or more dense (fully connected) layers.
- Characteristics:
- Dense connectivity gives the network many trainable parameters, increasing its representational capacity (see the parameter-count sketch after this list).
- Requires more memory and computation.
- Applications:
- Pattern recognition
- General-purpose learning models
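
As a rough illustration of what full connectivity costs, the sketch below builds a single dense layer in which every input neuron connects to every output neuron; the `DenseLayer` class name and the 784/128 layer sizes are illustrative assumptions, not part of any particular library:

```python
# A fully connected (dense) layer: n_in * n_out weights, one per
# (input neuron, output neuron) pair, plus one bias per output neuron.
import numpy as np

class DenseLayer:
    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))  # all-to-all weights
        self.b = np.zeros(n_out)

    def forward(self, x):
        return x @ self.W + self.b

rng = np.random.default_rng(0)
layer = DenseLayer(n_in=784, n_out=128, rng=rng)
print("trainable parameters:", layer.W.size + layer.b.size)  # 784*128 + 128 = 100480

x = rng.normal(size=(1, 784))      # one flattened 28x28 input, for example
print(layer.forward(x).shape)      # (1, 128)
```

Widening either layer multiplies the weight count, which is why fully connected networks demand more memory and computation than sparsely connected architectures.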

3. Multi-Layer Perceptron (MLP)
- Definition: A type of ANN with multiple layers, including one or more hidden layers.
- Structure:
- Fully connected layers.
- Trained with backpropagation: errors are propagated backward through the layers to update the weights (see the sketch after this list).
- Characteristics:
- Suitable for solving non-linear and complex problems.
- Forms the basis of deep learning.
- Applications:
- Speech recognition
- Medical diagnosis
- Forecasting
- Deep learning tasks
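
The following is a minimal backpropagation sketch on the XOR problem in plain NumPy; the layer sizes, learning rate, epoch count, and random seed are illustrative assumptions rather than tuned values, so convergence may vary:

```python
# Tiny MLP (2 -> 4 -> 1) trained with backpropagation on XOR.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output
lr = 1.0

for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: propagate the output error back to the hidden layer
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated backward

    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # predictions should move toward the XOR targets 0, 1, 1, 0
```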

4. Feedback Neural Network (Recurrent Neural Network, RNN)
- Definition: A network where connections form cycles (i.e., feedback is present).
- Structure:
- Output from a layer can be sent back to the same layer or previous layers.
- Characteristics:
- Dynamic, time-dependent behavior.
- Suitable for sequence-based data.
- Maintains an internal state that serves as memory of previous inputs (see the sketch after this list).
- Applications:
- Time-series prediction
- Language modeling
- Sequence classification
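
The sketch below, assuming an Elman-style recurrent cell with illustrative dimensions and random weights, shows the feedback loop: the hidden state computed at one time step is fed back in at the next, which is what gives the network its memory:

```python
# A simple recurrent cell: the hidden state h is carried across time steps,
# forming the cycle (feedback connection) that defines this class of network.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 3, 5
W_x = rng.normal(scale=0.1, size=(n_in, n_hidden))      # input -> hidden
W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden (feedback)
b = np.zeros(n_hidden)

sequence = rng.normal(size=(7, n_in))   # an example sequence of 7 time steps
h = np.zeros(n_hidden)                  # initial hidden state ("memory")

for x_t in sequence:
    # the previous hidden state h is fed back in at every step
    h = np.tanh(x_t @ W_x + h @ W_h + b)

print(h)   # the final hidden state summarizes the whole sequence
```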
