An anonymous reader shares a report: AI models inevitably encounter ambiguous situations that instructions alone can’t resolve. That’s a problem for autonomous agents tasked with, say, navigating an apartment, because they risk getting stuck when faced with several possible paths. To address this, researchers at Amazon’s Alexa AI division developed a framework that gives agents the ability to ask for help in certain situations. Using what’s called a model-confusion-based method, the agents ask questions whenever their level of confusion, measured against a predefined confidence threshold, signals that they need guidance; the researchers claim this boosts the agents’ success rate by at least 15%.
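
The report does not spell out exactly how confusion is measured, but a confidence-threshold check of this kind might look like the following Python sketch. The threshold value, the action names, and the use of the gap between the top two action probabilities are all assumptions made for illustration, not details from the paper.

```python
import numpy as np

# Illustrative sketch: the agent asks for help when its policy output is
# "confused", i.e. the top two candidate actions are nearly tied.
# Threshold and action labels are assumed, not taken from the paper.

CONFIDENCE_THRESHOLD = 0.2  # assumed value, for illustration only

def should_ask_for_help(action_probs: np.ndarray,
                        threshold: float = CONFIDENCE_THRESHOLD) -> bool:
    """Return True when the gap between the two most likely actions
    falls below the confidence threshold."""
    top_two = np.sort(action_probs)[-2:]          # two largest probabilities
    return (top_two[1] - top_two[0]) < threshold  # small gap => confused

# Example: two doorways score almost equally, so the agent asks the user.
probs = np.array([0.46, 0.44, 0.07, 0.03])  # [left door, right door, forward, stop]
if should_ask_for_help(probs):
    print("I see two doors. Which one should I go through?")
```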

“Consider the situation in which you want a robot assistant to get your wallet on the bed … with two doors in the scene and an instruction that only tells it to walk through the doorway,” wrote the team in a preprint paper describing their work. “In this situation, it is clearly difficult for the robot to know exactly through which door to enter. If, however, the robot is able to discuss the situation with the user, the situational ambiguity can be resolved.” The team’s framework employs two agent models: Model Confusion, which mimics human user behavior under confusion, and Action Space Augmentation, a more sophisticated algorithm that automatically learns to ask only necessary questions at the right time during navigation. Human interaction data is used to fine-tune the second model further so that it becomes familiar with the environment.
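
In rough outline, augmenting the action space means that asking a question becomes just another action the navigation policy can choose. The sketch below is only a toy illustration of that idea; every name, probability, and user response in it is invented, and it stands in for the learned policy described in the paper.

```python
import random
from enum import Enum, auto

class NavAction(Enum):
    """Hypothetical navigation action space, extended with an explicit ASK action."""
    FORWARD = auto()
    TURN_LEFT = auto()
    TURN_RIGHT = auto()
    STOP = auto()
    ASK = auto()  # asking the user becomes an ordinary action the policy can select

def step(policy_probs: dict, get_user_hint) -> NavAction:
    """Sample one action from an (assumed) policy distribution; when the
    policy picks ASK, query the user so the hint can inform the next step."""
    actions = list(policy_probs.keys())
    weights = list(policy_probs.values())
    action = random.choices(actions, weights=weights, k=1)[0]
    if action is NavAction.ASK:
        hint = get_user_hint()  # e.g. "take the door on your left"
        print(f"Agent asked for directions and received: {hint!r}")
    return action

# Toy usage with a made-up policy distribution and a canned user response.
policy = {NavAction.FORWARD: 0.2, NavAction.TURN_LEFT: 0.1,
          NavAction.TURN_RIGHT: 0.1, NavAction.STOP: 0.1, NavAction.ASK: 0.5}
step(policy, lambda: "take the door on your left")
```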

Source: Slashdot