Pick and Place progress

Static Camera Mount with rectangular prism object

Paper w/ edge detection:

  • Used OpenCV contour/edge detection to find a rectangle in a live video feed (top-down view)
  • A static “landing pad” (a piece of paper) was used as the reference frame for the output position; the known dimensions of the paper provide the scale for pixel-to-centimeter conversion.
  • The robot target position was based on the origin of the paper relative to the robot origin
  • Object orientation (rotation about the z-axis) was found in the camera's image frame and multiplied by a rotation matrix when the angle was passed to the robot (see the sketch after this list)
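
Below is a minimal Python/OpenCV sketch of this pipeline. The paper width (PAPER_WIDTH_CM), the camera-to-robot rotation offset (CAM_TO_ROBOT_DEG), the contour-area threshold, and all function names are assumptions for illustration, not the project's actual code or calibration.

```python
# Illustrative sketch only: PAPER_WIDTH_CM, CAM_TO_ROBOT_DEG, and the area
# threshold are assumed values, not the project's calibrated settings.
import cv2
import numpy as np

PAPER_WIDTH_CM = 21.6      # assumed landing-pad width (US Letter short edge)
CAM_TO_ROBOT_DEG = 90.0    # assumed fixed rotation between camera and robot axes

def find_rectangles(gray):
    """Return 4-point contours (paper and object candidates), largest first."""
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rects = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) > 500:
            rects.append(approx)
    return sorted(rects, key=cv2.contourArea, reverse=True)

def object_pose(frame):
    """Object (x, y) in cm relative to the paper origin, plus grip angle in degrees."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rects = find_rectangles(gray)
    if len(rects) < 2:
        return None
    paper, obj = rects[0], rects[1]                 # largest rectangle = the paper

    # Pixel-to-centimeter scale from the known paper width.
    (px, py), (paper_w_px, _), _ = cv2.minAreaRect(paper)
    cm_per_px = PAPER_WIDTH_CM / paper_w_px

    # Object centroid and in-plane (z-axis) orientation in the image frame.
    (cx, cy), _, angle_deg = cv2.minAreaRect(obj)
    x_cm = (cx - px) * cm_per_px
    y_cm = (cy - py) * cm_per_px

    # Rotate image-frame coordinates and angle into the robot frame.
    theta = np.radians(CAM_TO_ROBOT_DEG)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    x_r, y_r = R @ np.array([x_cm, y_cm])
    return x_r, y_r, angle_deg + CAM_TO_ROBOT_DEG
```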

Limitations/Issues:

  • Accuracy decreases when the object is near the outer bounds of the paper/camera frame; the contour detection then sees the side of the object as the top
  • Overall precision of the output position was unreliable. The 6mm stroke of the PGN64 gripper requires a more precise object centroid position.
  • Object is limited to the landing pad
  • Landing pad dimensions have to be known
  • Gripper can only pick up at a specified angle

Relevant files:

Robot Camera Mount with rectangular prism object

Edge detection w/ no paper:

  • Used OpenCV contour/edge detection to find a rectangle in a single image (top-down view)
  • No landing pad, object can be anywhere in the view of the camera
  • Contour detection looks for the largest rectangle in view
  • Extracting object info from a single image is easier to work with than a continuous video feed (see the sketch after this list)
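
A minimal sketch of the single-image version, which simply keeps the largest rectangular contour in view. The file name and Canny thresholds below are placeholders, not the project's actual values.

```python
# Illustrative sketch: keep only the largest rectangular contour in one image.
import cv2

def largest_rectangle(image_path="snapshot.png"):
    img = cv2.imread(image_path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    best, best_area = None, 0.0
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        area = cv2.contourArea(approx)
        if len(approx) == 4 and area > best_area:
            best, best_area = approx, area
    return best   # 4-point contour of the largest rectangle, or None
```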

Limitations/Issues:

  • With no reference object for pixel-to-centimeter conversion, the camera is limited to a fixed, known height
  • Requires a solid-color background, otherwise surface features of the table are detected as contours
  • Successful and consistent contour detection depends on the lighting conditions of the environment

Relevant files:

Color vision:

  • Used OpenCV image-manipulation capabilities to filter color channels before running the contour detection filter
  • Filters out the background to make contour detection easier and more accurate
  • Helps filter out the shadows cast by the robot and by the object itself (see the sketch after this list)
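
A minimal sketch of the color-filtering step: threshold one color range in HSV to mask out the background and shadows before contour detection. The HSV bounds are placeholders for whatever object color is being picked, not the project's actual values.

```python
# Illustrative sketch: HSV color mask before contour detection.
import cv2
import numpy as np

def color_mask_contours(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = np.array([100, 80, 50])    # assumed lower HSV bound (blue-ish object)
    upper = np.array([130, 255, 255])  # assumed upper HSV bound
    mask = cv2.inRange(hsv, lower, upper)

    # Morphological opening removes small specks such as shadow edges.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```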