First of all, your sharing and effort are greatly appreciated! I am not sure what is going wrong: the app I built from your shared "BoneDetecter" code, running on an iPhone 8 Plus, does not produce a video with the pose points and vectors drawn on it. The output video is simply the same video I captured or selected from the iPhone's album.
Pre-condition:
#1 iPhone 8 Plus, built with Xcode 10.
#2 Ran pod install and downloaded MobileOpenPose.mlmodel (a minimal sanity check for this step is sketched after this list).
#3 The app runs on the phone, and the UI lets me select or capture a video for processing.
#4 Processing takes noticeably longer than your "Stickman Animator" in an apples-to-apples comparison.
#5 The app says a video was generated, but when I check it, it is simply the same as the input.
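For reference, here is a minimal standalone snippet for step #2 that checks whether the bundled model loads and returns observations for a single frame. It is not taken from "BoneDetecter"; the MobileOpenPose class name is assumed to be the one Xcode generates from MobileOpenPose.mlmodel.

```swift
// Hypothetical sanity check, not BoneDetecter code.
// Assumes Xcode generated a `MobileOpenPose` class from MobileOpenPose.mlmodel.
import UIKit
import Vision
import CoreML

func sanityCheckPoseModel(on image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: MobileOpenPose().model) else {
        print("MobileOpenPose.mlmodel did not load – check it is added to the app target")
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, error in
        // If `results` is empty (or error is non-nil), the detection step is failing,
        // which would explain an output video identical to the input.
        print("observations:", request.results ?? [], "error:", String(describing: error))
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```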
Questions:
Q1: Is the shared source code very different from your "Stickman Animator" on the App Store?
Q2: What am I missing that causes the generated video to not contain the recognised vectors?
Many thanks again for sharing.
Pre-condition:
"Bone Detecter" is confirmed to work with Xcode 9.
Xcode 10 is still in beta, so supporting it is not a priority.
Questions:
Q1: Is the shared source code very different from your "Stickman Animator" on the App Store?
It is different from the "Stickman Animator" code.
"BoneDetecter" is a suggestion, a minimal implementation.
Q2: What am I missing that causes the generated video to not contain the recognised vectors?
In short, I do not know what the cause is.
These are the devices I have at hand:
iPhone X, iPhone 7, iPhone 6, iPad (5th generation), iPad Pro
If the problem is specific to the iPhone 8 Plus, it will be very difficult for me to solve.
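For what it is worth, overlays are usually burned into an exported file with AVVideoCompositionCoreAnimationTool. The sketch below is a generic example of that pattern, not the actual "BoneDetecter" implementation; skeletonLayer is a hypothetical CALayer assumed to already contain the drawn joints and bones. If the animation tool is never attached to the composition, AVAssetExportSession writes a file that looks identical to the input, which matches the symptom described above.

```swift
// Generic overlay-export sketch (assumption, not BoneDetecter's actual code).
import AVFoundation
import UIKit

func export(asset: AVAsset, skeletonLayer: CALayer, to outputURL: URL,
            completion: @escaping (Bool) -> Void) {
    guard let track = asset.tracks(withMediaType: .video).first else {
        completion(false); return
    }
    let size = track.naturalSize

    // Layer tree: the video is rendered into `videoLayer`,
    // with the skeleton overlay composited on top.
    let parentLayer = CALayer()
    let videoLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: size)
    videoLayer.frame = parentLayer.frame
    skeletonLayer.frame = parentLayer.frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(skeletonLayer)

    let composition = AVMutableVideoComposition(propertiesOf: asset)
    // Without this line the export is effectively just a copy of the input.
    composition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)

    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetHighestQuality) else {
        completion(false); return
    }
    session.videoComposition = composition
    session.outputURL = outputURL
    session.outputFileType = .mov
    session.exportAsynchronously {
        completion(session.status == .completed)
    }
}
```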