“BERT takes months of computation and is very expensive – like a million dollars to create that model and repeat those processes,” says Bahrami. “So if everyone wanted to do the same thing, it would be very expensive – it doesn’t save energy, it’s not good for the world.”
Although the field shows great promise, researchers are still looking for ways to make autoML techniques more computationally efficient. For example, methods like neural architecture search build and test many different models to find the best fit, and the energy required to complete all of those iterations can be very significant.
AutoML techniques can also be applied to machine learning algorithms that do not involve neural networks, like generating random decision forests or support vector machines for data classification. Research in those areas is underway, with many coding libraries already available for those looking to incorporate autoML techniques into their projects.
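In miniature, that kind of autoML reduces to an automated search over a model's hyperparameters. The sketch below is purely illustrative (the toy dataset and threshold classifier are invented for this example, not drawn from any particular library), but it shows the shape of what libraries like auto-sklearn automate at scale: enumerate candidate configurations, score each one, keep the best.

```python
import itertools
import random

# Toy 1-D dataset: the label is 1 exactly when the feature exceeds 0.5.
random.seed(0)
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(200))]

def accuracy(threshold, invert, points):
    """Score a simple threshold classifier: predict 1 when x > threshold,
    optionally flipping the prediction."""
    correct = 0
    for x, y in points:
        pred = int(x > threshold)
        if invert:
            pred = 1 - pred
        correct += (pred == y)
    return correct / len(points)

# Automated model selection: try every hyperparameter combination, keep the best.
search_space = {
    "threshold": [0.1, 0.3, 0.5, 0.7, 0.9],
    "invert": [False, True],
}
best = max(
    itertools.product(search_space["threshold"], search_space["invert"]),
    key=lambda combo: accuracy(combo[0], combo[1], data),
)
print(best)  # → (0.5, False): the search recovers the rule that generated the labels
```

Real autoML libraries replace the exhaustive loop with smarter strategies (random search, Bayesian optimization) and the toy classifier with random forests, SVMs, and full preprocessing pipelines, but the search-and-score structure is the same.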
The next step is to use autoML to quantify uncertainty and address questions of reliability and fairness in algorithms, said Hutter, a conference organizer. In that vision, standards around reliability and fairness would be treated like any other machine learning constraint, such as accuracy, and autoML could capture and automatically correct biases found in those algorithms before they’re released.
The search continues
But for things like deep learning, autoML still has a long way to go. The data used to train deep learning models, such as images, documents, and recorded speech, is often dense and complex, and it takes tremendous computing power to process. The cost and time spent training these models can be prohibitive for anyone other than researchers working at private companies.
One of the competitions at the conference asked participants to develop energy-efficient alternative algorithms for neural architecture search. It is a significant challenge because this technique has notoriously high computing demands. It automatically cycles through a multitude of deep learning models to help researchers choose the right model for their application, but the process can take months and cost more than a million dollars.
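The cycle described above, sample a candidate architecture, train and evaluate it, keep the best, can be sketched in a few lines. Everything here is a hypothetical stand-in: the search space, and especially `train_and_score`, which in real neural architecture search is the months-long, expensive step of actually training each model.

```python
import random

# Hypothetical search space: how many layers, and how wide each one is.
LAYER_CHOICES = [1, 2, 3]
WIDTH_CHOICES = [16, 32, 64]

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    depth = rng.choice(LAYER_CHOICES)
    return [rng.choice(WIDTH_CHOICES) for _ in range(depth)]

def train_and_score(arch):
    """Stand-in for the expensive step: in real NAS this trains the
    candidate model and returns its validation accuracy. Here we fake
    a score that peaks at a total width of 96, purely for illustration."""
    return -abs(sum(arch) - 96)

def random_nas(trials=50, seed=0):
    """Naive random-search NAS: each trial is one full train/evaluate cycle."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        score = train_and_score(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

arch, score = random_nas()
```

The cost problem is visible in the structure: the loop body runs dozens or thousands of times, and each iteration of the real version is a full training run.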
The goal of these alternative algorithms, known as zero-cost neural architecture search proxies, is to make neural architecture search more accessible and eco-friendly by significantly reducing its computational needs. They run in seconds instead of months. These techniques are still in the early stages of development and are often unreliable, but machine learning researchers predict that they have the potential to make the model selection process much more efficient.
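The idea behind a zero-cost proxy can be shown with a deliberately simplistic example. The scoring rule below (just counting the weights a fully connected network would have) is a hypothetical stand-in, not a proxy from the literature; real proxies such as synflow or grad_norm inspect an untrained network's weights or gradients, but they share the same shape: rank candidates in seconds, with no training loop at all.

```python
def zero_cost_proxy(arch, input_dim=8):
    """Hypothetical training-free score for a candidate architecture:
    the number of weights a fully connected network with these hidden
    layer widths would contain. No data, no gradients, no training."""
    widths = [input_dim] + list(arch) + [1]  # input -> hidden layers -> output
    return sum(a * b for a, b in zip(widths, widths[1:]))

# Rank candidate architectures instantly instead of training each one.
candidates = [[16], [32, 32], [64, 64, 64]]
ranked = sorted(candidates, key=zero_cost_proxy, reverse=True)
print(ranked)  # → [[64, 64, 64], [32, 32], [16]]
```

The unreliability mentioned above lives in the scoring rule: a proxy is only useful to the extent that its cheap ranking agrees with the ranking true training would produce.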