This article was automatically translated from the original Turkish version.

DenseNet (Dense Convolutional Network) is a deep learning architecture introduced in 2017 by Gao Huang and colleagues. It maximizes information flow within the network by connecting each layer directly to all subsequent layers, rather than only to the next one. The DenseNet architecture offers significant advantages in training deep neural networks, particularly in parameter efficiency and gradient flow.
In the DenseNet architecture, each layer takes as input the feature maps of all preceding layers. Unlike in classical feed-forward networks, this approach reduces information loss and enables feature reuse.
In DenseNet, the input of each layer is defined as follows:

x_ℓ = H_ℓ([x_0, x_1, ..., x_{ℓ-1}])

where [x_0, x_1, ..., x_{ℓ-1}] denotes the concatenation of the feature maps produced by layers 0 through ℓ-1, and H_ℓ is a composite function, typically batch normalization followed by ReLU and convolution.
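This concatenation-based input rule can be sketched in plain NumPy. The sketch below assumes a growth rate k (each layer emits k new feature maps) and simplifies H_ℓ to a random linear map over channels; a real DenseNet layer uses batch normalization, ReLU, and convolution instead.

```python
import numpy as np

def dense_block(x0, num_layers=5, growth_rate=4, rng=np.random.default_rng(0)):
    """Sketch of dense connectivity: every layer sees all earlier feature maps.

    x0: input feature maps, shape (channels, height, width).
    H_l is simplified here to a linear map over channels producing
    `growth_rate` new maps (a real H_l is BN -> ReLU -> Conv).
    """
    features = [x0]
    for _ in range(num_layers):
        x = np.concatenate(features, axis=0)      # [x_0, x_1, ..., x_{l-1}]
        w = rng.standard_normal((growth_rate, x.shape[0]))
        new_maps = np.tensordot(w, x, axes=1)     # simplified H_l on the concat
        features.append(new_maps)                 # reused by every later layer
    return np.concatenate(features, axis=0)

out = dense_block(np.ones((8, 32, 32)))
print(out.shape)  # channels grow as 8 + 5*4 = 28 -> (28, 32, 32)
```

Note how the channel count grows linearly with depth: each layer contributes only k new maps, which is the source of DenseNet's parameter efficiency.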
The DenseNet architecture consists of repeated dense blocks and transition layers that connect them. Within a dense block the layers are densely interconnected, while the transition layers reduce channel dimensionality and spatial resolution to keep the model compact.
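A transition layer's two jobs, fewer channels and halved resolution, can be sketched as follows. This NumPy sketch models the compression factor theta by averaging channel pairs; a real transition layer uses a learned 1x1 convolution with theta * channels filters, followed by 2x2 average pooling.

```python
import numpy as np

def transition(x, theta=0.5):
    """Sketch of a DenseNet transition layer.

    x: feature maps of shape (channels, height, width).
    Channel reduction is modeled by averaging channel pairs (a real
    transition layer learns a 1x1 convolution instead), followed by
    2x2 average pooling that halves the spatial resolution.
    """
    c, h, w = x.shape
    c_out = int(theta * c)
    # channel reduction: average pairs of channels down to c_out maps
    reduced = x[: 2 * c_out].reshape(c_out, 2, h, w).mean(axis=1)
    # 2x2 average pooling: halve height and width
    pooled = reduced.reshape(c_out, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    return pooled

y = transition(np.ones((28, 32, 32)))
print(y.shape)  # (14, 16, 16)
```

Placing such a layer between dense blocks keeps the channel count from growing without bound as blocks are stacked.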

[Figure: a 5-layer dense block]
In the DenseNet architecture, each layer receives information from all preceding layers, which improves gradient propagation and feature utilization.
The key advantages of the DenseNet architecture are:

- Alleviation of the vanishing-gradient problem, since every layer has direct access to the gradients from the loss function;
- Strengthened feature propagation and feature reuse across layers;
- Parameter efficiency, as each layer adds only a small number of new feature maps (the growth rate) rather than relearning redundant representations.
DenseNet is widely used in applications such as image classification, object detection, and medical image analysis, and it has achieved strong results on large datasets such as ImageNet.
