
SCARA-Shape-Sorter

Shape sorter using SCARA robotic arm and Computer Vision

Demo Video: https://youtu.be/pxm8bvZ1m_0?si=mqisoMcZH2h79VTs


Computer Vision:

Object Detection:

The YOLOv8 object detection algorithm is used in this project. The YOLOv8s model is fine-tuned on a custom dataset; the custom-trained weight file can be found here. A total of 290 images were used for training. The dataset can be found here.
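A minimal detection sketch using the `ultralytics` package is shown below. The weight file name `best.pt` and the image path are placeholders for the repo's actual files; the bounding-box center is a reasonable stand-in for the grasp-point step described later.

```python
def bbox_center(xyxy):
    """Center pixel of an axis-aligned bounding box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = xyxy
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

if __name__ == "__main__":
    from ultralytics import YOLO  # pip install ultralytics

    model = YOLO("best.pt")        # hypothetical path to the fine-tuned YOLOv8s weights
    results = model("frame.jpg")   # hypothetical camera frame
    for box in results[0].boxes:
        label = results[0].names[int(box.cls)]
        print(label, bbox_center(box.xyxy[0].tolist()))
```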

Grasp Point and Inverse Projection:

The grasp point is calculated from the bounding box returned by the object detector. This 2D grasp point is passed to the inverse projection equation to compute the corresponding 3D point. The depth value needed to place the 3D point in the camera frame comes from the camera (an Intel RealSense D435i depth camera).
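Under the standard pinhole camera model, the inverse projection for a pixel with known depth looks like the sketch below; the intrinsic values (`fx`, `fy`, `cx`, `cy`) would come from the RealSense stream profile rather than being hard-coded.

```python
def deproject(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) with depth z (metres) -> 3D point
    (x, y, z) in the camera frame. fx, fy are the focal lengths in pixels and
    (cx, cy) is the principal point, all from the camera intrinsics."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)
```

With a RealSense camera, `pyrealsense2` also provides `rs2_deproject_pixel_to_point()` for this same computation using the device's calibrated intrinsics.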

Eye-to-Hand Calibration:

The calculated 3D grasp point is in the camera frame, so for the robot to reach it, the point must be expressed in the robot's base frame. This transformation is obtained by finding the transformation from base to camera through hand-eye calibration.

I used OpenCV's hand-eye calibration function cv2.calibrateHandEye() to obtain the transformation from base to camera (Tb_c).
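A sketch of how Tb_c might be assembled from `cv2.calibrateHandEye()` is below. It assumes the pose lists (robot poses, inverted to base→gripper for the eye-to-hand setup, and calibration-target poses seen by the camera) have already been collected; those lists and their collection procedure are not shown here.

```python
import numpy as np


def to_homogeneous(R, t):
    """Pack a 3x3 rotation matrix and a 3-vector translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t, dtype=float).ravel()
    return T


def calibrate_eye_to_hand(R_base2gripper, t_base2gripper, R_target2cam, t_target2cam):
    """Eye-to-hand calibration: feed base->gripper poses (robot poses inverted)
    and target->camera poses to OpenCV; returns the base->camera transform Tb_c
    as a 4x4 homogeneous matrix."""
    import cv2  # pip install opencv-python

    R, t = cv2.calibrateHandEye(R_base2gripper, t_base2gripper,
                                R_target2cam, t_target2cam)
    return to_homogeneous(R, t)
```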

When an object is detected, its grasp point in the camera frame is represented as Tc_o (the transformation from camera to object). Multiplying the base-to-camera transformation by the camera-to-object transformation gives Tb_o (the transformation from base to object).

$$Tb_o = Tb_c \cdot Tc_o$$

Tb_o is then sent to the robot's inverse kinematics (IK) solver to pick up the object.
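The chain above can be sketched in a few lines. Since the grasp point here is a position (no orientation from the detector), Tc_o reduces to a pure translation, and the position handed to IK is the translation column of Tb_o. The function name is illustrative, not from the repo.

```python
import numpy as np


def grasp_in_base(Tb_c, p_cam):
    """Map a 3D grasp point from the camera frame into the robot base frame.

    Tc_o is modelled as a pure translation to the grasp point, so
    Tb_o = Tb_c @ Tc_o, and the (x, y, z) sent to the IK solver is the
    translation column of Tb_o."""
    Tc_o = np.eye(4)
    Tc_o[:3, 3] = p_cam
    Tb_o = Tb_c @ Tc_o
    return Tb_o[:3, 3]
```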
