Investigate available camera capabilities we can leverage #47
Strangely (or obviously, since we're talking about Apple), iPhone 5(s) cameras aren't mentioned.
Camera properties we can control directly from C++ (VideoCapture): https://github.com/opencv/opencv/blob/master/modules/videoio/include/opencv2/videoio.hpp#L486-L491
Okay, here is the list of all currently available frame metadata that iOS can give us.
That list means nothing to me. Can you summarize the significance of these metadata properties, @azasypkin?
Well, the summary is pretty short: there is nothing useful for us at this stage in the iOS frameworks, so we'll have to rely on something based on OpenCV. The AVFoundation framework can only give us additional metadata about faces or machine-readable codes it sees in the photo (location, bounding rect, roll/yaw angle, and the type for codes). We may need that later, but it's too early to think about it.
@sfoster @punamdahiya feel free to add ideas and capabilities you think would be useful to have.
I'll see if we can rely on something that tells us that we have an object in focus, not blurred, etc. Otherwise, maybe I'll check if we can use OpenCV for that (e.g. detect contours, extract keypoints, and see if we have enough of them).
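One cheap way to approximate the "is it in focus?" check mentioned above, short of full contour/keypoint extraction, is the variance-of-the-Laplacian sharpness measure: a blurred image has weak edge responses, so the variance of its Laplacian drops. Below is a dependency-free sketch operating on a grayscale image given as a list of rows; the threshold is a made-up placeholder that would need tuning on real captures:

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian of a grayscale image
    (list of rows of pixel values). Higher means sharper edges."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def looks_in_focus(img, threshold=100.0):
    # threshold is a hypothetical tuning knob, not a known-good value
    return laplacian_variance(img) > threshold

# A sharp vertical edge scores higher than a flat patch:
sharp = [[0] * 4 + [255] * 4 for _ in range(8)]
flat = [[128] * 8 for _ in range(8)]
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

In OpenCV proper this would just be `cv2.Laplacian(gray, cv2.CV_64F).var()`, so it's very little code either way; keypoint counting (e.g. ORB) could serve as a second signal if plain edge variance proves too noisy.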