In recent years robots have become more adaptive and aware of their surroundings, which enables their use in human-robot collaboration. By introducing robots into the same working cell as the human, the two can collaborate: the robot handles heavy lifting, repetitive, and high-accuracy tasks while the human focuses on tasks that require human flexibility. Collaborative robots already exist on the market, but these robots are mainly used to work in close proximity to humans rather than in true collaboration.
Usually a teaching pendant is used to program a robot by moving it with a joystick or buttons. Programming with a teaching pendant is quite slow and requires training, which means that few people can operate it. However, recent research shows that several applications use multi-modal communication systems to improve robot programming. This kind of programming will be necessary for industrial human-robot collaboration, since the human in a collaborative task may have to teach the robot how to execute its task.
This project aims to introduce a programming-by-guidance system into assembly manufacturing, where the human can assist the robot by teaching it how to execute its task. Three technologies will be combined: speech recognition, haptic control, and augmented reality. The hypothesis is that with these three technologies an effective and intuitive programming-by-guidance system can be used within the assembly manufacturing industry. This project has three main motivators: allowing workers with no robot programming expertise to teach the robot how to execute its task in an assembly manufacturing system; reducing the development time of the robot by introducing advanced programming-by-guidance technology; and showing that augmented reality can add additional information that is useful when programming the robot.
2016, 10 p.