# Tic Tac Toe Project
## Phase 1: Proof of Concept
- Proof of concept: loop computer vision to scan the board and determine the next move, combined with robot motion to pick and place a game piece
- A tic-tac-toe board was printed on a piece of paper
- Blocks were used as game pieces for ease of picking
- Computer vision recognized the printed X's and O's rather than the blocks themselves
- AI to determine the next move is based on Minimax Algorithm code found on GitHub (insert link); a minimal sketch appears after this list
- The human player was always O and the computer was always X
- The human goes first, followed by the robot
- The tic-tac-toe board is in a static location and orientation for each play
- The pieces the robot picks up are in a static location
- The application is written specifically for the Motoman MH5L robot
- Static board required precise setup for both camera view and board location. Any discrepancy in either would result in inaccurate or invalid robot moves.
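For reference, here is a minimal minimax sketch. It is illustrative only and does not reproduce the GitHub code the project used; the board is assumed to be a list of nine cells, each "X" (robot), "O" (human), or " ".

```python
# Minimal minimax sketch (illustrative only, not the GitHub code the
# project used). X is the robot/maximizer, O is the human/minimizer.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player` on `board` (list of 9 cells)."""
    w = winner(board)
    if w == "X":
        return 1, None    # robot win
    if w == "O":
        return -1, None   # human win
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None    # draw
    scored = []
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        scored.append((score, m))
    # X maximizes the score, O minimizes it.
    return max(scored) if player == "X" else min(scored)

# Example: pick the robot's next move for the current board state.
# _, move = minimax(list(board_state), "X")
```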
Equipment & software:
- Python 2.7
- Ubuntu 18.04
- Motoman MH5L Robot
- Tic-tac-toe board printed on 8.5 x 11 in paper
Setup:
- Blocks (made from scrap wood) were used as game pieces for ease of picking
- A marked area for board and block placement
- Distance from robot center to the corner of the top-right blue tape: ~67 cm
- Distance between the block placement area (left blue tape) and the board placement area (right blue tape): ~30 cm

(Image: complete setup of the tic-tac-toe game)
## Phase 2: Contour Detection Without Fiducial Markers
- Exploration of the capabilities and limitations of using only contour detection to find board position/orientation, without fiducial markers
- 3D-printed tic-tac-toe board (white ABS)
- 3D-printed X and O pieces (white ABS)
- Computer vision only detects the player's O pieces; the X pieces sit in predetermined locations for the robot to pick
- AI to determine the next move is based on Minimax Algorithm code found on GitHub (insert link)
- Uses rostopics for camera image data (realsense-ros); see the subscriber sketch after this list
- The human player is O and the computer is X
- The full tic-tac-toe board has to be in view of the camera
- The camera is always in the same orientation when scanning the board; the camera rotation transform is hardcoded
- The pieces the robot picks up are in a static location
- Requires a custom MoveIt config for the camera transform
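A minimal sketch of pulling frames from the camera topic with cv_bridge. The topic name is the realsense-ros default, and the node name is hypothetical:

```python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_image(msg):
    # Convert the ROS Image message into an OpenCV BGR array that the
    # board-scanning code can use.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    # ... run board detection on `frame` here ...

rospy.init_node("ttt_board_scanner")  # hypothetical node name
rospy.Subscriber("/camera/color/image_raw", Image, on_image, queue_size=1)
rospy.spin()
```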
- The symmetry of the square board makes full 180-degree orientation detection very difficult. The team tried both PCA/eigenvectors and point-slope from the square's corners; neither method could differentiate rotations that are multiples of 45 degrees. For an accurate gameplay loop, the board must stay within +/- 45 degrees of rotation, perpendicular to the camera (a sketch of why PCA fails on the bare square appears below).
- Next step: use markers such as colored dots or ArUco markers to break the symmetry and allow OpenCV to detect the orientation of the tic-tac-toe board more accurately. See Phase 3 for more information.
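For context, here is a minimal sketch of the PCA approach (not the project's code). For a square, the contour's point distribution is symmetric under 90-degree rotations, so its covariance matrix is nearly isotropic and the "principal" axis PCA returns is essentially arbitrary:

```python
import cv2
import numpy as np

def board_angle_pca(mask):
    """Estimate board orientation from a binary mask (sketch only)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)
    mean = np.empty((0))
    mean, eigvecs = cv2.PCACompute(pts, mean)
    # For a square, the two eigenvalues are (nearly) equal, so this
    # direction is numerically arbitrary -- the ambiguity described above.
    return np.degrees(np.arctan2(eigvecs[0, 1], eigvecs[0, 0]))
```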
Equipment & software:
- Python 2.7
- Ubuntu 18.04
- Motoman MH5L Robot
- realsense-ros
TODO: insert images and/or video
(Image: 3D-printed tic-tac-toe board with pieces)
## Phase 3: Colored-Square Fiducial Markers
- Exploration of the capabilities and limitations of contour detection to find position/orientation, using colored squares to break the board's symmetry
- Use fiducial markers (three colored squares) to detect the orientation of the tic-tac-toe board
- 3D-printed tic-tac-toe board (white ABS)
- 3D-printed X and O pieces (white ABS)
- Computer vision only detects the player's O pieces; the X pieces sit in predetermined locations for the robot to pick
- AI to determine the next move is based on Minimax Algorithm code found on GitHub (insert link)
- Uses rostopics for camera image data (realsense-ros)
- First attempt: the colored dots were recognized using Dream3D, but Dream3D works with static images rather than frames from a video stream
- Color detection is used to find the red, green, and blue squares and obtain the board orientation (see the thresholding sketch after this list)
- Lesson learned: because Dream3D works with static images and the game board can move between player moves, a static image is not ideal here. The pipeline now grabs a frame from an image topic and performs object and orientation detection on it.
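A minimal sketch of the color-detection step. This is illustrative only: the HSV ranges are assumptions, not the project's tuned values, and red's hue wrap-around near 180 is ignored:

```python
import cv2
import numpy as np

# Assumed HSV ranges for the three marker colors.
HSV_RANGES = {
    "red":   ((0, 120, 70),   (10, 255, 255)),
    "green": ((40, 80, 70),   (80, 255, 255)),
    "blue":  ((100, 120, 70), (130, 255, 255)),
}

def marker_centroid(bgr, lo, hi):
    """Centroid of the pixels inside the given HSV range, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # color not present in this frame
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def board_angle(bgr):
    """Angle of the red-to-blue marker line, in degrees."""
    r = marker_centroid(bgr, *HSV_RANGES["red"])
    b = marker_centroid(bgr, *HSV_RANGES["blue"])
    if r is None or b is None:
        return None
    return np.degrees(np.arctan2(b[1] - r[1], b[0] - r[0]))
```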
Equipment & software:
- Python 2.7
- Ubuntu 18.04
- Motoman MH5L Robot
- realsense-ros
- OpenCV 4.2.0
(Images: top left, game board with colored squares as markers; top right, orientation detection using the colored squares as markers)
## Phase 4: Image-Kernel Color Detection
- Use an image kernel to detect the locations of the blue, green, and red squares in the image
- From those locations, obtain the board orientation
- Pass an RGB kernel over the image to create a heat map of where red, green, and blue appear
- Determine the pixel location of each marker from the heatmap
- Determine the orientation from how much the kernel matrix must be rotated to match the RGB values
Reason for the image kernel:
- We know that green, red, and blue are in the image, but we need to find where.
- The Phase 3 method finds the color in the image and then finds its location. Its limitation is that it detects colors purely through RGB values falling within a threshold, which leads to inaccurate detection, inconsistent readings, and fluctuating color detection.
- Phase 4 instead applies a kernel that reads the RGB values in the image, checks how closely they match red (255, 0, 0), green (0, 255, 0), or blue (0, 0, 255), and creates a heatmap of where the colors closest to true red, green, and blue appear. This is more accurate because the kernel does not rely on a threshold: the convolution outputs the spots that match the target RGB values most closely, so even if the lighting fluctuates, the method should still be robust enough to locate the colors. A minimal sketch follows.
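The sketch below illustrates the idea, not the project's code: score every pixel by its closeness to a target color, average the scores over a small window (the kernel pass), and take the peak of the resulting heatmap as the marker location. OpenCV images are BGR, so the targets are written in BGR order:

```python
import cv2
import numpy as np

def color_heatmap(bgr, target_bgr, ksize=15):
    """Heatmap of similarity to `target_bgr`; higher = closer match."""
    diff = bgr.astype(np.float32) - np.array(target_bgr, dtype=np.float32)
    score = -np.linalg.norm(diff, axis=2)
    # Averaging over a ksize x ksize window rewards solid patches of the
    # color and suppresses single-pixel noise.
    return cv2.boxFilter(score, -1, (ksize, ksize))

def marker_location(bgr, target_bgr):
    heat = color_heatmap(bgr, target_bgr)
    y, x = np.unravel_index(np.argmax(heat), heat.shape)
    return (x, y)  # pixel location of the strongest color match

# Example: locate the pure-red marker (BGR order).
# red_xy = marker_location(frame, (0, 0, 255))
```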
Equipment & software:
- Python 2.7
- Ubuntu 18.04
- Motoman MH5L Robot
- realsense-ros
- OpenCV 4.2.0
- The X pieces are at a constant distance from the robot pedestal
- The transform of the board is obtained using fiducial markers (red, green, and blue circles on the corners of the board)
- Robot motion needs to be constrained or smoothed out to avoid extraneous movement. Example: the robot should move linearly down to pick an X, linearly back up, over to the square where it wants to place the X, then linearly down to drop it (see the sketch after this list)
- A template matching algorithm could make the detection more robust to lighting, scale, and rotation changes
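A minimal sketch of constraining the pick motion to straight-line segments with MoveIt's Cartesian path planner (moveit_commander, Python 2.7 era API). The group name is assumed, and `pick_pose` is a geometry_msgs/Pose supplied by the caller:

```python
import sys
import copy
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("ttt_pick_sketch")  # hypothetical node name
group = moveit_commander.MoveGroupCommander("manipulator")  # assumed group name

def linear_pick(group, pick_pose, hover_height=0.10):
    """Hover above the piece, descend straight down, retract straight up."""
    hover = copy.deepcopy(pick_pose)
    hover.position.z += hover_height
    waypoints = [hover, pick_pose, copy.deepcopy(hover)]
    # eef_step = 1 cm resolution; jump_threshold = 0 disables the check.
    plan, fraction = group.compute_cartesian_path(waypoints, 0.01, 0.0)
    if fraction > 0.99:  # execute only if the whole path was planned
        group.execute(plan, wait=True)
    return fraction
```

Planning each segment as a Cartesian path keeps the end effector on straight lines between waypoints, which avoids the extraneous sweeping motions a free joint-space plan can produce.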