Convolutional layer
In artificial neural networks, a convolutional layer is a type of network layer that applies a convolution operation to its input, passing the result to the next layer. Convolutional layers are the primary building blocks of convolutional neural networks (CNNs), a class of neural network most commonly applied to audio, images, video, and other data that exhibit translational symmetry.[1]
Basic concept
The convolution operation in a convolution layer involves sliding a small window (called a kernel or filter) across the input data and computing the dot product between the values in the kernel and the input at each position. This process creates a feature map that represents detected features in the input.[2]
2D Convolution
For a 2D input $x$ and a 2D kernel $w$, the 2D convolution operation can be expressed as:

$y(i, j) = \sum_{m=0}^{k_h - 1} \sum_{n=0}^{k_w - 1} x(i + m,\, j + n)\, w(m, n)$

where $k_h$ and $k_w$ are the height and width of the kernel, respectively. (As is conventional in the deep learning literature, this indexing is technically cross-correlation, but it is referred to as convolution.)
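The formula above can be sketched directly as a naive loop. This is a minimal illustration using NumPy (valid padding, stride 1), not an optimized implementation; the function name `conv2d` is chosen here for illustration.

```python
import numpy as np

def conv2d(x, w):
    """Naive 2D convolution (cross-correlation, as used in CNNs),
    with valid padding and stride 1."""
    kh, kw = w.shape
    # Output shrinks: each dimension loses (kernel size - 1) positions
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    y = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Dot product between the kernel and the input window at (i, j)
            y[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return y

x = np.arange(16, dtype=float).reshape(4, 4)
w = np.array([[1.0, 0.0],
              [0.0, -1.0]])  # responds to diagonal differences
print(conv2d(x, w).shape)  # (3, 3)
```

Note that a 4×4 input convolved with a 2×2 kernel yields a 3×3 feature map, matching the sums' index ranges.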
Kernels
Kernels, also known as filters, are small matrices of weights that are learned during the training process. Each kernel is responsible for detecting a specific feature in the input data. The size of the kernel is a hyperparameter that affects the network's behavior.
Stride
Stride determines how the kernel moves across the input data. A stride of 1 means the kernel shifts by one pixel at a time, while a larger stride (e.g., 2 or 3) results in less overlap between convolutions and produces smaller output feature maps.
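The effect of stride (and padding, discussed below) on the output's spatial size follows the standard formula ⌊(n + 2p − k) / s⌋ + 1. A small sketch, with an illustrative helper name:

```python
def conv_output_size(n, k, stride=1, padding=0):
    """Spatial output size of a convolution over an input of size n
    with kernel size k: floor((n + 2*padding - k) / stride) + 1."""
    return (n + 2 * padding - k) // stride + 1

# 32-pixel-wide input, 3x3 kernel:
print(conv_output_size(32, 3, stride=1))  # 30
print(conv_output_size(32, 3, stride=2))  # 15  (larger stride -> smaller map)
```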
Padding
Padding involves adding extra pixels around the edges of the input data. It serves two main purposes:
- Preserving spatial dimensions: Without padding, each convolution reduces the size of the feature map.
- Handling border pixels: Padding ensures that border pixels are given equal importance in the convolution process.
Common padding strategies include:
- No padding/valid padding: The kernel is applied only where it fits entirely within the input, so the output typically shrinks.
- Same padding: Any method that makes the output the same size as the input.
- Full padding: Any method that ensures each input entry is convolved over the same number of times.
Common padding algorithms include:
- Zero padding: Add zero entries to the borders of input.
- Mirror/reflect/symmetric padding: Reflect the input array on the border.
- Circular padding: Cycle the input array back to the opposite border, like a torus.
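The three padding algorithms above correspond directly to NumPy's `np.pad` modes (`constant`, `reflect`, and `wrap`); a quick sketch of how each fills the border:

```python
import numpy as np

x = np.arange(1, 10).reshape(3, 3)  # [[1,2,3],[4,5,6],[7,8,9]]

# Zero padding: borders filled with zeros
print(np.pad(x, 1, mode="constant")[0])  # [0 0 0 0 0]
# Mirror/reflect padding: input reflected across its border
print(np.pad(x, 1, mode="reflect")[0])   # [5 4 5 6 5]
# Circular padding: input wrapped around to the opposite border
print(np.pad(x, 1, mode="wrap")[0])      # [9 7 8 9 7]
```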
The exact arithmetic of output sizes under different kernel sizes, strides, and padding schemes is involved; see Dumoulin and Visin (2016)[3] for details.
Variants
Standard
The basic form of convolution as described above, where each kernel is applied to the entire input volume.
Depthwise separable
Depthwise separable convolution separates the standard convolution into two steps: depthwise convolution and pointwise convolution. It significantly reduces the number of parameters and computational cost.[4]
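The parameter savings can be seen by counting weights: a standard convolution needs C_in · C_out · k² weights, while the depthwise step needs C_in · k² and the pointwise step C_in · C_out. A back-of-the-envelope sketch (bias terms omitted; function names are illustrative):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise (one k x k filter per input channel)
    plus pointwise (1 x 1 convolution mixing channels)."""
    return c_in * k * k + c_in * c_out

# 128 input channels, 128 output channels, 3x3 kernel:
print(conv_params(128, 128, 3))       # 147456
print(separable_params(128, 128, 3))  # 17536, roughly an 8x reduction
```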
Dilated
Dilated convolution, or atrous convolution, introduces gaps between kernel elements, allowing the network to capture a larger receptive field without increasing the kernel size.[5]
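The enlarged receptive field follows from the gaps: a k-tap kernel with dilation d spans k + (k − 1)(d − 1) input positions while still using only k² weights in 2D. A one-function sketch (the helper name is illustrative):

```python
def effective_kernel_size(k, dilation):
    """Input span covered by a k-tap kernel whose elements are
    separated by gaps of (dilation - 1) positions."""
    return k + (k - 1) * (dilation - 1)

print(effective_kernel_size(3, 1))  # 3  (ordinary convolution)
print(effective_kernel_size(3, 2))  # 5
print(effective_kernel_size(3, 4))  # 9
```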
Transposed
Transposed convolution, also known as deconvolution or fractionally strided convolution, can be thought of as the gradient of a convolution with respect to its input. It is often used in encoder-decoder architectures for upsampling.
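Because it inverts the forward convolution's size arithmetic, a transposed convolution maps an input of size n to (n − 1)·s − 2p + k outputs. A minimal sketch of the size calculation (the function name is illustrative):

```python
def transposed_conv_output_size(n, k, stride=1, padding=0):
    """Spatial output size of a transposed convolution: the value m
    such that a forward convolution with the same k, stride, and
    padding maps m back to n."""
    return (n - 1) * stride - 2 * padding + k

# Upsampling a 4x4 feature map with a 2x2 kernel and stride 2:
print(transposed_conv_output_size(4, 2, stride=2))  # 8
```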
History
The concept of convolution in neural networks was inspired by the visual cortex in biological brains. Early work by Hubel and Wiesel in the 1960s on the cat's visual system laid the groundwork for artificial convolution networks.[6]
The first computer-based convolutional neural network, the Neocognitron, was developed by Kunihiko Fukushima in 1980.[7]
In 1998, Yann LeCun et al. introduced LeNet-5, an early influential CNN architecture for handwritten digit recognition, trained on the MNIST dataset.[8]
Olshausen and Field (1996)[9] found that simple cells in the mammalian primary visual cortex implement localized, oriented, bandpass receptive fields, which can be recreated by fitting sparse linear codes to natural scenes. Similar filters were later found to emerge in the lowest-level kernels of trained CNNs.
The field saw a resurgence in the 2010s with the development of deeper architectures and the availability of large datasets and powerful GPUs. AlexNet, developed by Alex Krizhevsky et al. in 2012, was a catalytic event in modern deep learning.[10]
References
- ^ Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). Deep Learning. Cambridge, MA: MIT Press. pp. 326–366. ISBN 978-0262035613.
- ^ Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "7.2. Convolutions for Images". Dive into deep learning. Cambridge New York Port Melbourne New Delhi Singapore: Cambridge University Press. ISBN 978-1-009-38943-3.
- ^ Dumoulin, Vincent; Visin, Francesco (2016). "A guide to convolution arithmetic for deep learning". arXiv preprint arXiv:1603.07285.
- ^ Chollet, François (2017). "Xception: Deep Learning with Depthwise Separable Convolutions". 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR): 1800–1807. doi:10.1109/CVPR.2017.195.
- ^ Yu, Fisher; Koltun, Vladlen (2016). "Multi-Scale Context Aggregation by Dilated Convolutions". ICLR 2016.
- ^ Hubel, D. H.; Wiesel, T. N. (1968). "Receptive fields and functional architecture of monkey striate cortex". The Journal of Physiology. 195 (1): 215–243. doi:10.1113/jphysiol.1968.sp008455. PMC 1557912. PMID 4966457.
- ^ Fukushima, Kunihiko (1980). "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position". Biological Cybernetics. 36 (4): 193–202. doi:10.1007/BF00344251. PMID 7370364.
- ^ LeCun, Yann; Bottou, Léon; Bengio, Yoshua; Haffner, Patrick (1998). "Gradient-based learning applied to document recognition". Proceedings of the IEEE. 86 (11): 2278–2324. doi:10.1109/5.726791.
- ^ Olshausen, Bruno A.; Field, David J. (1996). "Emergence of simple-cell receptive field properties by learning a sparse code for natural images". Nature. 381 (6583): 607–609. doi:10.1038/381607a0. ISSN 1476-4687.
- ^ Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (2012). "ImageNet Classification with Deep Convolutional Neural Networks". Advances in Neural Information Processing Systems. 25. Curran Associates, Inc.