Automated techniques could make it easier to develop AI


Machine learning researchers make many decisions when designing new models: how many layers to include in a neural network, what weight to give each node's inputs, and so on. Because of all these human choices, complex models end up being “designed by intuition” rather than systematically, says Frank Hutter, who heads the machine learning lab at the University of Freiburg in Germany.
A growing field called automated machine learning (autoML) aims to take the guesswork out of it. The idea is that algorithms take over the decisions researchers currently have to make when designing models. Ultimately, these techniques may make machine learning more accessible.
Nearly a decade after automated machine learning was introduced, researchers are still working to improve it. A new conference held last week in Baltimore, which the organizers described as the first international conference on the subject, showcased efforts to improve autoML's accuracy and performance.
Interest in autoML's potential to simplify machine learning has grown significantly. Companies such as Amazon and Google already offer low-code machine learning tools powered by autoML technology. As these techniques become more efficient, they could speed up research and make machine learning accessible to more people.
The idea is to allow users to pick a question they want to ask, point the autoML tool at that question, and get the results they're looking for.
That vision is “the holy grail of computer science,” says Lars Kotthoff, conference organizer and assistant professor of computer science at the University of Wyoming. “You specify a problem and the computer figures out how to solve it. That's all you do.”
But first, researchers need to figure out how to make these techniques more time and energy efficient.
What is autoML?
At first glance, the concept of autoML may seem unnecessary: machine learning, after all, already automates the process of extracting insights from data. But autoML algorithms operate at a higher level of abstraction than the underlying machine learning models, relying only on those models' results for guidance, which saves time and computational effort. Researchers can also apply autoML methods to pre-trained models to gain new insights without wasting computing power repeating existing research.
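To make that outer loop concrete, here is a minimal sketch written with the scikit-learn library; the library choice, the tiny search space, and the candidate budget are illustrative assumptions rather than any particular autoML system. The search treats each candidate model as a black box and uses only its validation score to decide which design choices to keep.

```python
# Minimal sketch of the autoML idea: an outer search loop that treats the
# underlying model as a black box and uses only its validation score as guidance.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Design decisions a researcher would otherwise make by hand:
search_space = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32), (128, 64)],  # how many layers and units
    "alpha": [1e-5, 1e-4, 1e-3],                                # regularization strength
    "learning_rate_init": [1e-3, 1e-2],                         # initial learning rate
}

best_score, best_config = -1.0, None
for _ in range(10):  # try ten random configurations
    config = {name: random.choice(options) for name, options in search_space.items()}
    model = MLPClassifier(max_iter=300, random_state=0, **config).fit(X_train, y_train)
    score = model.score(X_val, y_val)  # the only signal the outer loop ever sees
    if score > best_score:
        best_score, best_config = score, config

print(best_config, best_score)
```

Real autoML systems replace the random sampling here with far more sophisticated search strategies, but the division of labor is the same: the human defines the space of options, and the algorithm explores it.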
For example, research scientist Mehdi Bahrami and his coauthors at Fujitsu Research of America presented recent work on applying the BERT-Sort algorithm to different pre-trained models to adapt them for new purposes. BERT-Sort is an algorithm that can learn what is called “semantic order” when trained on data sets: given data on movie reviews, for example, it learns that “great” movies rank higher than “good” movies, which in turn rank higher than “bad” ones.
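As a rough illustration of what “semantic order” means in practice, the toy sketch below queries a pre-trained masked language model for how well different rating words fit a strongly positive sentence. This is not the BERT-Sort algorithm itself; it only assumes the Hugging Face transformers library and the public bert-base-uncased model to show the general idea of reusing an existing model rather than training a new one.

```python
# Toy illustration of inferring an ordering of rating words from a pre-trained
# masked language model. NOT the BERT-Sort algorithm; just a sketch of the idea.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Ask the pre-trained model how well each candidate word fits a positive context.
candidates = ["great", "good", "bad"]
results = fill_mask("The movie was fantastic, really [MASK].", targets=candidates)

# Higher scores in the positive context suggest a higher place in the ordering.
ranking = sorted(results, key=lambda r: r["score"], reverse=True)
print([r["token_str"] for r in ranking])
```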

“BERT takes months of computation and is very expensive—like, a million dollars to generate that model and repeat those processes,” Bahrami says. “So if everyone wants to do the same thing, then it's expensive—it's not energy efficient, not good for the world.”
Although the field shows promise, researchers are still searching for ways to make autoML techniques more computationally efficient. Methods such as neural architecture search, for example, currently build and test many different models to find the best fit, and the energy needed to complete all those iterations can be significant. AutoML techniques can also be applied to machine learning algorithms that don't involve neural networks, such as building random decision forests or support vector machines to classify data. Research in these areas is further along, and many coding libraries are already available for people who want to incorporate autoML techniques into their projects.
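For those non-neural models, even a general-purpose library such as scikit-learn already provides the basic building block: a search over candidate hyperparameters scored by cross-validation. The snippet below is a minimal example of that pattern for a support vector machine; dedicated autoML libraries go further by also choosing among model families and preprocessing steps.

```python
# Off-the-shelf hyperparameter search for a non-neural model (a support vector
# machine) using scikit-learn's GridSearchCV with 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {
    "C": [0.1, 1, 10],            # regularization strength
    "kernel": ["linear", "rbf"],  # kernel type
    "gamma": ["scale", "auto"],   # kernel coefficient
}

search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```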
The next step is to use autoML to quantify uncertainty and to address questions about the reliability and fairness of algorithms, said conference organizer Hutter. In that vision, reliability and fairness criteria would become machine learning constraints much like accuracy, and autoML could catch the biases in these algorithms and fix them automatically before the models are released. But for fields like deep learning, autoML still has a long way to go. The data used to train deep learning models, such as images, documents, and recorded audio, is typically dense and complex, and it takes enormous amounts of computing power to process. The cost and time required to train these models can be prohibitive for anyone but researchers at well-funded private companies.
One of the conference's competitions asked participants to develop energy-efficient alternatives to neural architecture search, a technique notorious for its computational demands. Neural architecture search automatically works through vast numbers of candidate deep learning models to help researchers pick the right one for their application, a process that can take months and cost upwards of $1 million.
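The sketch below, written with PyTorch purely for illustration, shows where that cost comes from: every candidate architecture the search samples has to be trained, at least partially, before it can be scored.

```python
# Toy neural architecture search: sample candidate network structures, train
# each one briefly, and keep the one with the best validation accuracy.
# Real NAS systems search far larger spaces with smarter strategies; this
# sketch only shows why every candidate costs training time.
import random
import torch
from torch import nn
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
X_tr = torch.tensor(X_tr, dtype=torch.float32)
X_va = torch.tensor(X_va, dtype=torch.float32)
y_tr = torch.tensor(y_tr, dtype=torch.long)
y_va = torch.tensor(y_va, dtype=torch.long)

def sample_architecture():
    depth = random.randint(1, 3)                                   # number of hidden layers
    widths = [random.choice([32, 64, 128]) for _ in range(depth)]  # units per layer
    layers, size = [], 64                                          # 64 input features in digits
    for w in widths:
        layers += [nn.Linear(size, w), nn.ReLU()]
        size = w
    layers.append(nn.Linear(size, 10))                             # 10 digit classes
    return nn.Sequential(*layers)

best_acc, best_model = 0.0, None
for _ in range(5):                                                 # evaluate five candidates
    model = sample_architecture()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(50):                                            # short training budget each
        optimizer.zero_grad()
        loss_fn(model(X_tr), y_tr).backward()
        optimizer.step()
    acc = (model(X_va).argmax(dim=1) == y_va).float().mean().item()
    if acc > best_acc:
        best_acc, best_model = acc, model

print(best_acc, best_model)
```

Scale the handful of candidates and the tiny training budget here up to modern models and datasets, and the costs described at the conference follow directly.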