Engineers at Princeton University and Google have come up with a new way to teach robots to know when they don't know. The technique involves quantifying the fuzziness of human language and using that measurement to tell robots when to ask for further directions. Telling a robot to pick up a bowl from a table with only one bowl is fairly clear. But telling a robot to pick up a bowl when there are five bowls on the table generates a much higher degree of uncertainty, and triggers the robot to ask for clarification.

The paper, "Robots That Ask for Help: Uncertainty Alignment for Large Language Model Planners," was presented Nov. 8 at the Conference on Robot Learning. Because tasks are typically more complex than a simple "pick up a bowl" command, the engineers use large language models (LLMs), the technology behind tools such as ChatGPT, to gauge uncertainty in complex environments.

LLMs are bringing robots powerful capabilities to follow human language, but LLM outputs are still frequently unreliable, said Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton and the senior author of a study outlining the new method. "Blindly following plans generated by an LLM could cause robots to act in an unsafe or untrustworthy manner, and so we need our LLM-based robots to know when they don't know," said Majumdar.

The system also allows a robot's user to set a target degree of success, which is tied to a particular uncertainty threshold that will lead a robot to ask for help.
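The decision rule the article describes, acting only when the planner's confidence clears a threshold tied to a user-chosen target success rate, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the scores, and the direct use of the target rate as the threshold are all assumptions for illustration (the actual method calibrates the threshold more carefully).

```python
# Illustrative sketch only: ask for help when the LLM planner's best
# option is not confident enough. All names and scores are hypothetical.

def ask_for_help(option_scores, target_success=0.9):
    """Return the chosen action, or None to signal "ask the human".

    option_scores: dict mapping candidate actions (e.g. which bowl to
        pick up) to the LLM's normalized confidence in each.
    target_success: user-chosen target success rate; here it doubles as
        the confidence threshold (a simplification), so a higher target
        makes the robot ask for clarification more often.
    """
    best_action, best_score = max(option_scores.items(), key=lambda kv: kv[1])
    if best_score >= target_success:
        return best_action   # confident enough: act without asking
    return None              # ambiguous command: request clarification

# One bowl on the table: "pick up a bowl" is unambiguous, so the robot acts.
ask_for_help({"bowl": 0.97})
# Five bowls: confidence is split across candidates, so the robot asks.
ask_for_help({f"bowl_{i}": 0.2 for i in range(5)})
```

With one bowl the single option's score clears the threshold and the action is returned; with five bowls no option stands out, the best score falls below the threshold, and the function returns `None`, the cue to ask for help.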