The job of a compiler is to translate programs written by humans into binary code executable by computer hardware. Compilers run on systems that are large, complex, heterogeneous, non-deterministic, and constantly changing. Optimising compilers is difficult because the space of possible optimisations is enormous, and designing heuristics by hand that account for all of these factors ultimately becomes intractable. As a result, many compiler optimisations are out of date or poorly tuned.

One of the key challenges is selecting the right code transformation for a given program, which requires an effective way to evaluate the quality of a candidate compilation option: for instance, predicting how a code transformation will affect the program's eventual performance.
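One way to make such an evaluation cheap is a predictive cost model that scores candidate transformations from static features instead of compiling and running each one. The sketch below is purely illustrative: the transformations, feature vectors, and weights are invented for this example, not taken from any real compiler.

```python
# Hypothetical sketch: ranking candidate transformations with a simple
# linear cost model. All features and weights here are made up.

CANDIDATES = {
    # transformation -> static feature vector:
    # (est. instruction-count change, est. cache-miss change, code-size change)
    "unroll-4":  (-0.30, -0.05, 0.40),
    "vectorise": (-0.50,  0.10, 0.10),
    "no-change": ( 0.00,  0.00, 0.00),
}

# Invented model weights: a lower score means a better predicted outcome.
WEIGHTS = (1.0, 2.0, 0.05)

def predicted_cost(features):
    """Score a candidate: weighted sum of its static features."""
    return sum(w * f for w, f in zip(WEIGHTS, features))

# Pick the transformation the model predicts will perform best.
best = min(CANDIDATES, key=lambda t: predicted_cost(CANDIDATES[t]))
print(best)  # → unroll-4
```

Real compilers would use far richer features and a learned model, but the principle is the same: the model stands in for an expensive compile-and-measure loop.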

A decade ago, machine learning was introduced to automate the search of the compiler optimisation space. This freed compiler writers from having to reason about the specifics of every architecture or program: learning algorithms could derive heuristics from the results of previous searches, allowing subsequent programs to be optimised in a single-shot manner. Machine learning-based compilation is now an established research area, and over the last decade this field has generated a large amount of academic interest.
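The "learn from previous searches, then predict in one shot" idea can be sketched very simply: store the best option found for each previously-searched program alongside its feature vector, then reuse the answer from the most similar known program. The program features, flags, and history below are invented for illustration; real systems use much richer features and models.

```python
# Hypothetical sketch: a nearest-neighbour heuristic learned from
# prior optimisation-space searches. All data here is made up.

def best_flag(features, history):
    """Single-shot prediction: reuse the best flag found for the most
    similar previously-searched program (1-nearest-neighbour)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda ex: dist(ex[0], features))[1]

# Results of previous searches: (program features, best flag found).
# Invented features: (loop count, avg. trip count, arithmetic intensity).
history = [
    ((2, 1000, 0.9), "-funroll-loops"),
    ((1,    4, 0.1), "-Os"),
    ((8,   64, 0.5), "-O3"),
]

new_program = (2, 800, 0.8)  # features of an unseen program
print(best_flag(new_program, history))  # → -funroll-loops
```

The payoff is that the expensive search is paid once per training program; a new program gets a prediction without any search at all.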

To understand the current state of ML and its implications for compilers, researchers from the University of Edinburgh and Facebook AI collaborated on a survey of the role of machine learning in compilers.

Current State Of ML For Compilers