Deep learning algorithms such as Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) have played an important role in solving problems in fields such as speech recognition and computer vision. Although their results are highly accurate, they are limited to Euclidean data. In Network Science, Biology, Physics, Computer Graphics, and Recommender Systems, however, one has to work with non-Euclidean data such as manifolds and graphs. Geometric Deep Learning applies deep learning techniques to manifold- or graph-structured data in order to handle this non-Euclidean setting.
Geometric deep learning builds competitive models on complex data such as graphs. Michael M. Bronstein and colleagues introduced the term in the paper entitled Geometric deep learning: going beyond Euclidean data. It can be applied in areas such as 3D object classification, graph analytics, and 3D object correspondence.
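To make the graph setting concrete, below is a minimal sketch of a single graph-convolution step in the style of Kipf and Welling's GCN propagation rule; the toy adjacency matrix, feature sizes, and random weights are illustrative assumptions, not taken from Bronstein et al.'s paper.

```python
import numpy as np

# One GCN-style propagation step on a toy 4-node ring graph (assumed example).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix of the graph
X = np.random.randn(4, 8)                   # node features: 4 nodes, 8 features each
W = np.random.randn(8, 16)                  # learnable weight matrix

A_hat = A + np.eye(4)                       # add self-loops so nodes keep their own features
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization

H = np.maximum(A_norm @ X @ W, 0)           # aggregate neighbors, transform, ReLU
print(H.shape)                              # (4, 16): new embedding per node
```

The key difference from a standard dense layer is the `A_norm @ X` term, which mixes each node's features with those of its neighbors, respecting the graph structure instead of assuming a Euclidean grid.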
Reinforcement Learning
Reinforcement learning has been employed to drive the search for better architectures. The major barrier for a NAS algorithm is navigating the search space efficiently enough to save valuable computational and memory resources. Models built with the sole objective of high validation accuracy often turn out to be unwieldy, requiring more parameters, more memory, and longer inference times.
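As a sketch of how a reward signal can steer the search, the snippet below implements a minimal REINFORCE-style controller over a tiny, assumed search space; `evaluate` is a hypothetical placeholder for actually training a candidate, and its reward penalizes parameter count to reflect the point above.

```python
import math
import random

SEARCH_SPACE = {"depth": [2, 4, 8], "width": [64, 128, 256]}
prefs = {k: [0.0] * len(v) for k, v in SEARCH_SPACE.items()}  # policy parameters

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def evaluate(arch):
    # Placeholder: a real NAS loop trains the candidate and returns validation
    # accuracy; the penalty term discourages oversized models.
    return random.random() - 1e-7 * arch["depth"] * arch["width"] ** 2

baseline, lr = 0.0, 0.1
for step in range(200):
    idx = {k: random.choices(range(len(v)), weights=softmax(prefs[k]))[0]
           for k, v in SEARCH_SPACE.items()}
    arch = {k: SEARCH_SPACE[k][i] for k, i in idx.items()}
    reward = evaluate(arch)
    baseline = 0.9 * baseline + 0.1 * reward            # moving-average baseline
    for k, i in idx.items():
        probs = softmax(prefs[k])
        for j in range(len(prefs[k])):                  # REINFORCE update
            grad = (1.0 if j == i else 0.0) - probs[j]
            prefs[k][j] += lr * (reward - baseline) * grad
```

Real controllers, such as Zoph and Le's RNN controller, are far more elaborate, but the loop above captures the core idea: sample an architecture, score it, and shift the policy toward higher-reward choices.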
Neuroevolution
Floreano et al. (2008) argue that gradient-based methods surpass evolutionary methods for optimizing neural network weights, and that evolutionary approaches should only be used to optimize the architecture itself. Apart from determining appropriate genetic parameters such as the mutation rate, death rate, and so on, there is also a need to evaluate how neural network topologies are represented in the genotypes used for digital evolution.
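The sketch below shows one way such a genotype-driven loop can look: genotypes encode layer widths, mutation perturbs genes at a fixed rate, and truncation selection plays the role of a death rate. The `fitness` function is a hypothetical placeholder; following Floreano et al., the weights of each decoded network would be trained inside it by gradient descent.

```python
import random

MUTATION_RATE = 0.3                 # assumed genetic parameter
WIDTHS = [16, 32, 64, 128]          # gene alphabet: allowed layer widths

def fitness(genotype):
    # Placeholder: decode the genotype into a network, train its weights with
    # a gradient-based optimizer, and return validation accuracy.
    return random.random()

def mutate(genotype):
    child = list(genotype)
    for i in range(len(child)):
        if random.random() < MUTATION_RATE:
            child[i] = random.choice(WIDTHS)   # point mutation on one gene
    return child

# Each genotype directly lists three hidden-layer widths -- a deliberately
# simple encoding; richer representations such as NEAT's graph genomes exist.
population = [[random.choice(WIDTHS) for _ in range(3)] for _ in range(10)]
for generation in range(20):
    survivors = sorted(population, key=fitness, reverse=True)[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]
```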
Creating a Search Strategy
The majority of the effort in neural architecture search has gone into determining which optimization methods work best and how they can be tweaked so that the search produces better results faster and with consistent stability. Several approaches have been tried, including Bayesian optimization, reinforcement learning, neuroevolution, network morphism, and game theory.
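As one concrete example, Bayesian optimization can be applied to an architecture search space with the third-party scikit-optimize library; in this hedged sketch the objective is a stand-in for training a candidate network, and the search space is an illustrative assumption.

```python
import random

from skopt import gp_minimize
from skopt.space import Categorical, Integer

space = [Integer(2, 8, name="depth"),
         Categorical([64, 128, 256], name="width")]

def objective(params):
    depth, width = params
    # Placeholder: substitute real training here. gp_minimize minimizes its
    # objective, so return the *negative* of the validation-accuracy proxy.
    return -(random.random() - 1e-7 * depth * width ** 2)

# A Gaussian-process surrogate models the objective and proposes the next
# candidate to evaluate, spending the evaluation budget more carefully than
# blind sampling would.
result = gp_minimize(objective, space, n_calls=20, random_state=0)
print(result.x, -result.fun)
```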
Artificial Neural Networks
An artificial neural network is designed as a feed-forward network: information passes from one layer to the next without looping back to earlier layers. It is intended to detect patterns in raw data and to improve with each new input. The architecture comprises three layers (input, hidden, and output), each of which applies weights as information passes through. Because they can learn non-linear functions, such networks are commonly referred to as universal function approximators. They are primarily used in predictive tasks such as business intelligence, text prediction, and spam email detection, and they come with trade-offs relative to other algorithms.
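A minimal PyTorch sketch of such a three-layer feed-forward network is below; the layer sizes and the two-class output (e.g. spam vs. not spam) are illustrative assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),   # input -> hidden: each connection carries a weight
    nn.ReLU(),           # the non-linearity behind universal function approximation
    nn.Linear(64, 2),    # hidden -> output, e.g. spam vs. not spam
)

x = torch.randn(8, 20)   # a batch of 8 raw feature vectors
logits = model(x)        # information flows strictly forward, layer by layer
print(logits.shape)      # torch.Size([8, 2])
```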
Convolutional Neural Networks
A CNN has three layer types that are widely used in computer vision applications: a convolutional layer, a pooling layer, and a fully-connected layer. CNNs anchor image recognition in computer vision, and the complexity of what the network recognizes increases with each layer. They process the input using a set of filters known as kernels: matrices that slide over the input data to extract features from images. The connections between neurons take the form of kernels in each layer, and these are learned as input images are processed. To process an image, for instance, early layers respond to simple cues such as colors and edges, intermediate layers to shapes, and the final layers to the overall image.
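The PyTorch sketch below wires up the three layer types named above: convolutional layers whose kernels slide over the image, pooling layers that shrink the spatial resolution, and a fully-connected classifier head. The 28x28 grayscale input and 10-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 16 learned kernels extract features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling halves spatial resolution
    nn.Conv2d(16, 32, kernel_size=3, padding=1), # deeper kernels see larger patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                   # fully-connected layer -> 10 classes
)

images = torch.randn(4, 1, 28, 28)               # a batch of 4 dummy images
print(model(images).shape)                       # torch.Size([4, 10])
```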
Recurrent Neural Networks
The RNN's two pillar applications are voice recognition and natural language processing. RNN algorithms help make voice search with Apple's Siri and translation with Google Translate possible. Unlike feed-forward networks, RNNs make use of memory: while traditional neural networks assume inputs and outputs are independent of one another, an RNN's output depends on the previous elements in the sequence. RNNs employ a slightly different training procedure, backpropagation through time (BPTT), which is tailored to the entire data sequence.
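The sketch below feeds a sequence through a vanilla RNN in PyTorch to show the memory mechanism: the hidden state carries information from earlier time steps, so each output depends on what came before. The input and hidden sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=32, batch_first=True)

sequence = torch.randn(4, 15, 10)   # batch of 4 sequences, 15 time steps each
outputs, h_n = rnn(sequence)        # outputs: hidden state at every step;
                                    # h_n: final hidden state (the "memory")
print(outputs.shape, h_n.shape)     # torch.Size([4, 15, 32]) torch.Size([1, 4, 32])
```

Training such a model unrolls the recurrence across all 15 steps and applies backpropagation through time, the BPTT variant mentioned above.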