Using OpenCV for localization. #442
This looks promising now.
Is this running on the BBB?
Here's another strategy for finding corners on generic objects: http://stackoverflow.com/questions/7263621/how-to-find-corners-on-a-image-using-opencv
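As a rough sketch of the corner-detection idea from that Stack Overflow thread, OpenCV's Shi-Tomasi detector (`cv2.goodFeaturesToTrack`) finds strong corners in a grayscale image. The file name and the parameter values below are placeholders that would need tuning for our camera and lighting:

```python
import cv2

# Hypothetical input; in practice this would be a frame from the webcam.
img = cv2.imread("frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corner detection; maxCorners / qualityLevel / minDistance
# are untuned starting guesses.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)

# Draw the detected corners for inspection.
for x, y in corners.reshape(-1, 2):
    cv2.circle(img, (int(x), int(y)), 4, (0, 255, 0), -1)

cv2.imwrite("corners.png", img)
```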
Two cameras might help; here are instructions for setting up the second camera:
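If we do go the two-camera route, a minimal sketch for grabbing a frame from each webcam with OpenCV might look like this. The device indices 0 and 1 are assumptions; on the BeagleBone the cameras may enumerate differently (check /dev/video*):

```python
import cv2

# Device indices are assumptions; verify which /dev/video* each camera gets.
cam_left = cv2.VideoCapture(0)
cam_right = cv2.VideoCapture(1)

ok_l, frame_l = cam_left.read()
ok_r, frame_r = cam_right.read()

if ok_l and ok_r:
    cv2.imwrite("left.png", frame_l)
    cv2.imwrite("right.png", frame_r)

cam_left.release()
cam_right.release()
```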
After much struggling, it seems we won't be able to use the modified QRCodeStateEstimation library on the final robot: the ZBar library is incompatible with the BeagleBone, and we've also struggled to get the Boost library working.
Looking at the source code for that library, though, it's not that complicated, and it should be easy to recreate in Python.
All you really need is a set of known points and the ability to identify them in the images captured by the webcam.
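For reference, once we have the four QR-code corner pixels and the code's physical size, the pose recovery itself is essentially one `cv2.solvePnP` call. Everything concrete below (the 0.1 m code size, the camera matrix, the example pixel coordinates) is a placeholder we'd have to replace with our own measurements and calibration:

```python
import cv2
import numpy as np

# Physical corners of the QR code in its own frame (metres); a 0.1 m side
# length is an assumption -- use the real printed size.
side = 0.1
object_points = np.array([[0, 0, 0],
                          [side, 0, 0],
                          [side, side, 0],
                          [0, side, 0]], dtype=np.float64)

# Pixel coordinates of the same corners, in the same order, found in the
# webcam image (placeholder values).
image_points = np.array([[320, 240],
                         [400, 242],
                         [398, 320],
                         [318, 318]], dtype=np.float64)

# Camera intrinsics from calibration (placeholder focal length and centre).
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    # tvec gives the code's position in the camera frame; invert to get the
    # camera (robot) position relative to the code.
    R, _ = cv2.Rodrigues(rvec)
    camera_position = -R.T @ tvec
    print(camera_position.ravel())
```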
This is what made ZBar so convenient: when it finds a QR code, it also tells you where the vertices are, and that's the part I'm running into problems with now.
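Just to illustrate what ZBar was giving us for free (and what we now have to replace), one Python wrapper for it, pyzbar, reports the decoded payload and the four vertices of each detected code. This is for illustration only, since ZBar is exactly the dependency we can't build on the BeagleBone:

```python
import cv2
from pyzbar.pyzbar import decode  # ZBar wrapper -- the dependency we can't get on the BBB

img = cv2.imread("frame.png")  # placeholder file name
for result in decode(img):
    print(result.data)     # decoded payload
    print(result.polygon)  # four vertices of the QR code in pixel coordinates
```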
I was hoping to solve this problem using SURF points.
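A minimal sketch of the SURF idea, assuming an OpenCV build with the contrib `xfeatures2d` module (SURF is patented and not in the default build): detect SURF keypoints in a reference image of our printed QR code and in the webcam frame, then keep only matches that pass Lowe's ratio test. File names and the Hessian threshold are placeholders:

```python
import cv2

# Reference image of our printed QR code and a frame from the webcam
# (file names are placeholders).
ref = cv2.imread("qr_reference.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# SURF lives in the contrib xfeatures2d module; 400 is a typical starting
# Hessian threshold, not something we've tuned.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_ref, des_ref = surf.detectAndCompute(ref, None)
kp_frame, des_frame = surf.detectAndCompute(frame, None)

# Match descriptors and keep only matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_ref, des_frame, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(len(good), "candidate matches on the QR code")
```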
The change in how we're approaching this is explained in my senior design presentation:
Basically, even though I can now identify a bunch of known points on the QR code, there are still a few problems:
How do we isolate the points just on the QR code?
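One possible answer to that question (a sketch, not something we've validated): because the SURF matches above are against a reference image of the code itself, any point that matches is already on the code, and RANSAC can discard the stray mismatches by keeping only points consistent with a single homography. This continues from the SURF sketch (it reuses `ref`, `kp_ref`, `kp_frame`, and `good`, and assumes at least four good matches):

```python
import cv2
import numpy as np

# Matched point coordinates in the reference image and in the frame.
src_pts = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst_pts = np.float32([kp_frame[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC rejects matches that don't fit one homography; the survivors are
# the points actually on the QR code.
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
inliers = dst_pts[mask.ravel() == 1]

# Project the reference image's corners through H to recover the four
# QR-code corners in the frame -- the same vertices ZBar used to hand us.
h, w = ref.shape
ref_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
frame_corners = cv2.perspectiveTransform(ref_corners, H)
```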