Predictive Collaborative Robots via Deep Reinforcement Learning
In recent years, advances in collaborative robotics have enabled people and robots to work in a shared environment. However, the complexity of modeling human-robot interaction and the difficulty of automating many tasks have restricted the range of applications for collaborative robots. This research introduces a model-free reinforcement learning framework capable of learning to perform new tasks, as well as the human behaviors associated with those tasks, enabling a robotic system to work directly with people toward a shared objective. Using data captured by a camera mounted above the workspace, the framework acts as an adaptive control system that allows a collaborative robot to adjust to changes in its environment in real time. First, a classification neural network is trained to model the probability distribution of human behaviors associated with a specific task, using data collected while that task is performed. Then, a Deep Q Network is trained in simulation, converging to an optimal decision policy based on the rewards it receives for the outcomes of the actions it selects. In contrast to traditional approaches to programming robots, this system learns generalizable policies that allow it to adapt to dynamic environments, achieving high performance in scenarios it has never encountered before. The system was implemented on a collaborative assembly task, both in simulation and in physical space, in which the objective was to assemble a series of parts in a specific order in collaboration with a person. It achieved an average efficiency increase of 21.6% over the person working alone while maintaining a high standard of safety. This novel approach to human-robot interaction enables collaborative robots to become predictive rather than reactive, resulting in safer and more efficient collaboration.
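The core learning loop described above, an agent selecting actions and updating its value estimates from the rewards it receives, can be sketched as follows. This is only an illustration of the Q-learning update underlying a Deep Q Network: the toy state/action sizes, learning rate, and the use of a tabular Q-function in place of a neural network are all assumptions for brevity, not details from the thesis.

```python
import numpy as np

# Hypothetical toy setup: 4 assembly steps (states) x 3 robot actions.
# The thesis trains a Deep Q Network; a tabular Q-function stands in
# here purely to show the same Bellman update and epsilon-greedy policy.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1   # assumed hyperparameters

def select_action(state, epsilon=eps):
    """Epsilon-greedy: explore randomly, otherwise exploit current Q."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def q_update(s, a, r, s_next, done):
    """One Bellman backup: target = r + gamma * max_a' Q(s', a')."""
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

# One illustrative transition: the robot picks an action at step 0,
# the correct part is placed, and a reward of +1 is received.
q_update(s=0, a=select_action(0, epsilon=0.0), r=1.0, s_next=1, done=False)
```

Repeating this update over many simulated episodes is what drives the Q-values (and, in the full system, the network's weights) toward the optimal decision policy described above.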
scott barnes ss submission.pdf — 2018-08-27 — Open Access