Object detection

Peg-in-Hole task using sensor fusion of Force/Torque sensors and a 2D camera image

In the computer vision course, I was grateful for the chance to work with PhD students on fusing an RGB camera with Force/Torque sensors to perform a Peg-in-Hole task on a KUKA LBR iiwa 14 robot. I helped develop the object detection of the peg, first with classical CV techniques and then with a YOLOv5 deep neural network, which outperformed the classical methods at detecting the peg. We published a paper on this work at the NIR 2021 conference.
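As a rough illustration of the classical-CV side (not the actual project code), peg detection can be reduced to thresholding the image and taking the bounding box of the resulting blob; the function name, threshold value, and synthetic frame below are all hypothetical:

```python
import numpy as np

def detect_peg(image, threshold=128):
    """Toy classical-CV detector: threshold a grayscale frame and
    return the bounding box (x_min, y_min, x_max, y_max) of the
    bright region assumed to be the peg, or None if nothing passes."""
    mask = image > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Synthetic test frame: dark background with one bright "peg" patch
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:60, 30:50] = 255
print(detect_peg(frame))  # (30, 40, 49, 59)
```

A learned detector such as YOLOv5 replaces this hand-tuned pipeline with a network that predicts bounding boxes directly, which is why it copes better with lighting changes and clutter than a fixed threshold.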

Please watch the video for the full Peg-in-Hole demo :point_down: