
Researchers Develop New Tools to Train Robots

New machine learning algorithms have been developed to train robots.

The new algorithm, called "Deep TAMER," extends TAMER, or Training an Agent Manually via Evaluative Reinforcement. It uses deep learning - a class of machine learning algorithms loosely inspired by the brain - to give a robot the ability to learn to perform tasks, in a short amount of time with a human trainer, by viewing video streams. (source: Army Research Laboratory)

June 4, 2018 | Source: U.S. Army, arl.army.mil, 6 Feb 2018, Joyce M. Conant

Researchers at the U.S. Army Research Laboratory and The University of Texas at Austin have developed new techniques for robots or computer programs to learn how to perform tasks by interacting with a human instructor. The findings of the study were presented and published Monday at the Thirty-Second Conference on Artificial Intelligence, held by the Association for the Advancement of Artificial Intelligence (AAAI) in New Orleans, Louisiana.

The purpose of the AAAI conference is to promote research in artificial intelligence and scientific exchange among AI researchers, practitioners, scientists and engineers in affiliated disciplines.

"AAAI is one of the premiere conferences in artificial intelligence," said Dr. Garrett Warnell, ARL researcher. "We are excited to be able to present some of our recent work in this area and to solicit valuable feedback from the research community."

ARL and UT researchers considered a specific case where a human provides real-time feedback in the form of critique. Their new algorithm, called "Deep TAMER," extends an earlier algorithm known as TAMER, or Training an Agent Manually via Evaluative Reinforcement. It uses deep learning – a class of machine learning algorithms loosely inspired by the brain – to give a robot the ability to learn to perform tasks, in a short amount of time with a human trainer, by viewing video streams.

Researchers considered situations where a human teaches an agent how to behave by observing it and providing critique, i.e., "good job" or "bad job" – similar to the way a person might train a dog to do a trick. ARL and UT Austin jointly extended earlier work in this field to enable this type of training for robots or computer programs that currently see the world through images, which is an important first step in designing learning agents that can operate in the real world.
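The core idea behind this style of training can be illustrated with a minimal sketch. This is not the authors' Deep TAMER implementation (which learns from raw video with a deep neural network); it is a toy, tabular stand-in, with a simulated trainer and task invented for the example: the agent keeps an estimate H(state, action) of the human's feedback, acts greedily with respect to it, and nudges the estimate toward each "good job" (+1) or "bad job" (-1) critique it receives.

```python
import random

# Toy TAMER-style learner (illustrative sketch only, not the authors' code).
# The agent learns H(s, a), an estimate of the human trainer's critique,
# and selects actions that it predicts the trainer will approve of.

STATES = range(5)              # hypothetical toy task with 5 states
ACTIONS = ["left", "right"]

def simulated_trainer(state, action):
    """Stand-in for the human critic: +1 for "good job", -1 for "bad job".
    In this toy task, the trainer always wants the agent to move right."""
    return 1.0 if action == "right" else -1.0

def train_tamer(episodes=200, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    H = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # feedback estimates
    for _ in range(episodes):
        s = rng.choice(list(STATES))
        # Epsilon-greedy: mostly act greedily on the learned feedback model,
        # occasionally explore a random action.
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: H[(s, act)])
        # Incorporate the trainer's scalar critique with a small step size.
        h = simulated_trainer(s, a)
        H[(s, a)] += alpha * (h - H[(s, a)])
    return H

H = train_tamer()
policy = {s: max(ACTIONS, key=lambda a: H[(s, a)]) for s in STATES}
print(policy)  # after training, each state should map to "right"
```

The key difference from standard reinforcement learning is that the learning signal comes directly from the human's evaluations rather than from a reward built into the environment; Deep TAMER replaces the lookup table here with a deep network so the same idea works when states are images.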

"This research advance has been made possible in large part by our unique collaboration arrangement in which Dr. Warnell, an ARL employee, has been embedded in my lab at UT Austin," said Dr. Peter Stone. "This arrangement has allowed for a much more rapid and deeper exchange of ideas than is typically possible with remote research collaborations."
