Researchers at Google have open-sourced EvoLved Sign Momentum (Lion), an optimization algorithm for training neural networks that was discovered using an evolutionary automated machine learning technique, InfoQ reports.

Models trained with Lion can achieve better accuracy than models trained with other optimizers, while requiring fewer compute cycles to converge.

Google discovered the algorithm by running evolutionary search over a space of symbolic programs. Lion uses less memory and fewer instructions per step than Adam, since it tracks only a single momentum buffer. A key difference between Lion and most other optimizers is that it uses only the sign of the update direction – an interpolation of the gradient and the momentum – so it applies an update of the same magnitude to every weight.
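The update rule itself is compact. Below is a minimal NumPy sketch of a single Lion step, following the update rule described in the paper; the hyperparameter defaults shown are illustrative, not prescriptions:

```python
import numpy as np

def lion_update(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.01):
    """One Lion step (sketch). Hyperparameter defaults are illustrative.

    w : parameter array
    g : gradient of the loss with respect to w
    m : momentum (exponential moving average of past gradients)
    """
    # Direction: sign of an interpolation between gradient and momentum,
    # so every weight moves by the same magnitude (lr).
    direction = np.sign(beta1 * m + (1 - beta1) * g)
    # Apply the update together with decoupled weight decay.
    w = w - lr * (direction + wd * w)
    # Refresh the momentum with a second interpolation coefficient.
    m = beta2 * m + (1 - beta2) * g
    return w, m
```

Because the only state carried between steps is the momentum buffer, Lion stores roughly half the optimizer state of Adam, which tracks both first and second moments of the gradients.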

Deep learning models are trained with a gradient descent algorithm that iteratively updates the network’s weights to minimize the network’s loss function.
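Concretely, a bare-bones gradient descent loop looks like the sketch below; the toy quadratic loss, starting weights, and learning rate are invented purely for illustration:

```python
import numpy as np

w = np.array([5.0, -3.0])   # initial weights (illustrative)
lr = 0.1                    # learning rate (illustrative)

def grad(w):
    # Gradient of a toy loss L(w) = 0.5 * ||w||^2, minimized at w = 0.
    return w

for step in range(100):
    w = w - lr * grad(w)    # step against the gradient each iteration
```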

To develop Lion, the team defined the search space as a set of functions written in an imperative language similar to Python. The inputs to these functions are the model weights, the gradient, and the learning rate; the output is an update to the weights. The search proceeds by mutating a candidate function’s code – adding, deleting, or modifying operations. Candidate functions are evaluated by using them to train small proxy models: the accuracy of the trained models reflects the suitability of the candidate function as an update rule.
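The overall procedure resembles a regularized-evolution loop. The sketch below is a hypothetical illustration of such a loop, not the paper’s actual code; `mutate` (which adds, deletes, or modifies an operation in the program) and `proxy_accuracy` (which trains a small model with the candidate rule and returns its validation accuracy) are placeholder stand-ins:

```python
import copy
import random

def evolve(initial_program, mutate, proxy_accuracy,
           population_size=100, tournament_size=25, steps=10_000):
    # Population holds (program, fitness) pairs, oldest first.
    population = [(initial_program, proxy_accuracy(initial_program))]
    for _ in range(steps):
        # Tournament selection: the fittest of a random sample becomes the parent.
        sample = random.sample(population, min(tournament_size, len(population)))
        parent = max(sample, key=lambda pair: pair[1])[0]
        # Mutate the parent's code to produce a child candidate.
        child = mutate(copy.deepcopy(parent))
        population.append((child, proxy_accuracy(child)))
        # Regularized evolution: evict the oldest candidate, not the worst.
        if len(population) > population_size:
            population.pop(0)
    # Return the fittest update rule found.
    return max(population, key=lambda pair: pair[1])[0]
```

Evicting the oldest candidate rather than the worst keeps the population turning over, which helps this style of search avoid getting stuck on early local optima.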

Although Lion models outperform Adam models in many cases, there are tasks where the two perform similarly. This tends to happen when “the datasets are huge and of high quality”, for example in some vision and masked language modeling tasks. The authors also note that Lion likely performs no better than Adam when the training batch size is small.
