Automated methods could make AI development easier – Reuters

“It takes months of computation and it’s very expensive, around a million dollars, to build this model and repeat those processes,” Bahrami says of BERT. “So if everyone wants to do the same thing, it’s expensive, it’s energy inefficient, and it’s not good for the world.”

Although the area is promising, researchers are still looking for ways to make autoML methods more computationally efficient. Techniques such as neural architecture search, for example, create and test many different models to find the best fit, and the energy required to run through all those iterations can be significant.
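
To make the cost concrete, the core of a naive architecture search is just a loop that fully trains and evaluates every candidate before keeping the best one. The sketch below illustrates that loop; the tiny search space over layer widths, the synthetic data, and the short training run are all assumptions for demonstration, not a real search setup.

```python
# Illustrative sketch of the expensive loop behind naive neural
# architecture search: each candidate is trained and evaluated in full.
import torch
import torch.nn as nn

def build(width: int) -> nn.Module:
    # Candidates here differ only in hidden-layer width; real search
    # spaces cover layer types, depths, connections, and more.
    return nn.Sequential(nn.Flatten(), nn.Linear(784, width),
                         nn.ReLU(), nn.Linear(width, 10))

def train_and_evaluate(model: nn.Module, x: torch.Tensor,
                       y: torch.Tensor, steps: int = 50) -> float:
    opt = torch.optim.Adam(model.parameters())
    for _ in range(steps):          # a real run would train far longer
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return (model(x).argmax(dim=1) == y).float().mean().item()

x = torch.randn(256, 1, 28, 28)     # synthetic stand-in for image data
y = torch.randint(0, 10, (256,))
# The costly part: one full train/evaluate cycle per candidate.
scores = {w: train_and_evaluate(build(w), x, y) for w in (32, 64, 128)}
best_width = max(scores, key=scores.get)
```

With thousands of candidates and realistic training budgets, this loop is what drives the months of computation described below.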

AutoML methods can also be applied to non-neural machine learning algorithms, such as building random forests or support vector machines for classifying data. Research in these areas is more mature, and many coding libraries are already available for people who want to use autoML techniques in their projects.
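
As one example, the open-source auto-sklearn library automates model and hyperparameter selection over scikit-learn estimators, including random forests and support vector machines. A minimal sketch, with an illustrative dataset and time budgets chosen purely for demonstration:

```python
# Minimal auto-sklearn example: the library searches over scikit-learn
# models (random forests, SVMs, and others) within a fixed time budget.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
import autosklearn.classification

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300,   # total search budget, in seconds
    per_run_time_limit=30,         # cap on each candidate model
)
automl.fit(X_train, y_train)
print(automl.score(X_test, y_test))  # accuracy of the best ensemble found
```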

The next step is to use autoML to quantify uncertainty and answer questions about the robustness and fairness of algorithms, says conference organizer Hutter. In this view, standards for robustness and fairness would become machine-learning constraints just like accuracy, and autoML could automatically catch and fix problems in these algorithms before they are released.

The search continues

But for something like deep learning, autoML still has a long way to go. The data used to train deep learning models, such as images, documents, and recorded speech, is typically dense and complex, and handling it requires massive amounts of computing power. The cost and time needed to train these models can be prohibitive for anyone but researchers at wealthy private companies.

In one of the conference’s competitions, participants were asked to develop energy-efficient alternatives to neural architecture search, a method notorious for its computational demands. Neural architecture search automatically iterates through countless deep learning models to help researchers pick the right one for their application, but the process can take months and cost more than a million dollars.

The goal of these alternative algorithms, called zero-cost proxies for neural architecture search, is to make the technique more accessible and environmentally friendly by drastically reducing its appetite for computation: a proxy delivers its verdict in seconds rather than months. These methods are still in their early stages of development and are often unreliable, but machine learning researchers predict they could make the model selection process much more efficient.
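
One simple family of zero-cost proxies scores an untrained network from a single minibatch, for example by the magnitude of its initial gradients. The sketch below illustrates that idea in general terms; it is not any specific published proxy, and the candidate models, data shapes, and scoring rule are all assumptions.

```python
# Illustrative zero-cost proxy: rank candidate architectures by the
# gradient norm from one forward/backward pass, with no training.
import torch
import torch.nn as nn

def grad_norm_score(model: nn.Module, x: torch.Tensor,
                    y: torch.Tensor) -> float:
    """Sum of per-parameter gradient L2 norms after a single batch."""
    model.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return sum(p.grad.norm().item()
               for p in model.parameters() if p.grad is not None)

candidates = [  # toy stand-ins for a real architecture search space
    nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10)),
    nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)),
]
x = torch.randn(32, 1, 28, 28)       # one random minibatch
y = torch.randint(0, 10, (32,))
scores = [grad_norm_score(m, x, y) for m in candidates]
best = candidates[max(range(len(candidates)), key=scores.__getitem__)]
```

Scoring each candidate takes a single pass over one batch, which is why such proxies finish in seconds where full search takes months, and also why their rankings can be unreliable.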
