This repository has been archived by the owner on Jun 15, 2022. It is now read-only.
Thanks. I was able to install and run the pre-trained model.
Question (how-to):
Is there a way to remove the background (bg=black) and only show the keypoint detections (skeletons)? I know the original OpenPose had a flag that controlled this feature. I will take a look at the draw_person_pose function in pose_detector.py...
Update (1/13):
One solution was to modify the video-capture frame after the pose detector is called (full snippet: https://pastebin.com/CC2dV6MS):
person_pose_array, _ = pose_detector(img)
img = np.zeros(img.shape, dtype=np.uint8)  # kkhatak - added line: blank (black) frame so only the skeletons are drawn
res_img = cv2.addWeighted(img, 0.6, draw_person_pose(img, person_pose_array), 0.4, 0)
output_movie.write(np.uint8(res_img))  # kkhatak
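For reference, here is a minimal sketch of how those lines could sit inside the full video-capture loop. It assumes the PoseDetector / draw_person_pose interface from this repository's pose_detector.py and a standard OpenCV VideoCapture/VideoWriter setup; the model path, file names, and constructor arguments are illustrative, not exact.

import cv2
import numpy as np
from pose_detector import PoseDetector, draw_person_pose  # assumed repository imports

# hypothetical model path / device argument, shown for illustration only
pose_detector = PoseDetector("posenet", "models/coco_posenet.npz", device=-1)

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
output_movie = cv2.VideoWriter("skeleton_only.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

while cap.isOpened():
    ret, img = cap.read()
    if not ret:
        break
    # run pose estimation on the original frame
    person_pose_array, _ = pose_detector(img)
    # replace the frame with a black canvas so only the skeletons get drawn
    img = np.zeros(img.shape, dtype=np.uint8)
    # blend the black canvas with the drawn keypoints/limbs
    res_img = cv2.addWeighted(img, 0.6, draw_person_pose(img, person_pose_array), 0.4, 0)
    output_movie.write(np.uint8(res_img))

cap.release()
output_movie.release()

Note that blending against a black canvas dims the skeleton colors; writing draw_person_pose(img, person_pose_array) out directly (skipping cv2.addWeighted) would keep them at full brightness.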