Home
This index page is split into four sections:
- Creative concept
- Technical concept for development on macOS/Swift
- Technical concept for development on Windows/C++
- Technical concept for development on Web/JavaScript
Creative concept
The project was conceived by Christien Meindertsma for Dutch Design Week 2021, taking place in Eindhoven from 16th to 24th October 2021; this is the deadline for the project.
The idea is to create a real-time interactive installation that will allow one person at a time to learn about the atomic composition of the human body. The installation will have several different modes or acts:
- Body - the user sees themselves reflected on the screen (dimensions TBC); the video fades away and is replaced by a real-time particle simulation of all the different elements that make up the human body. As the user moves, the swarm tracks their body silhouette.
- Orbit - the body fades away, and the particles remain, swarming, attracted to a central point.
- Grid - the particles form into a square grid, allowing observers to easily see the different proportions of different elements.
- Stream - the body outline/shadow fades back in, but with a stream of particles passing from left to right - expressing the idea that we all come from stars and will return to them.
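The four acts above could be sequenced with a simple state machine. A minimal sketch in JavaScript - the act names and durations here are illustrative assumptions, not final design:

```javascript
// Minimal sketch of the four-act sequence (Body -> Orbit -> Grid -> Stream).
const ACTS = ["body", "orbit", "grid", "stream"];

class ActSequencer {
  constructor(durationsMs) {
    this.durations = durationsMs; // per-act duration in milliseconds
    this.index = 0;               // which act is currently active
    this.elapsed = 0;             // time spent in the current act
  }
  get current() {
    return ACTS[this.index];
  }
  // Advance the sequence by dt milliseconds, wrapping back to "body".
  update(dt) {
    this.elapsed += dt;
    if (this.elapsed >= this.durations[this.index]) {
      this.elapsed = 0;
      this.index = (this.index + 1) % ACTS.length;
    }
    return this.current;
  }
}
```

In practice each transition would also trigger the fades described above, but keeping the timing logic separate from the rendering makes it easy to test.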
The particles would have a Seurat style to them - i.e. an alpha'd edge that fades to transparency, NOT hard-edged, but perhaps labelled with their chemical symbol. The colour scheme is yet to be confirmed, but could be CPK colouring - the convention used for physical chemical models.
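If CPK colouring is adopted, the particle budget could be split across elements by their approximate mass fraction in the human body. A sketch - the fractions are rounded textbook figures, and the hex values follow the common CPK convention but are an assumption until the palette is confirmed:

```javascript
// Approximate elemental composition of the human body by mass, with
// conventional CPK colours (hex values assumed; final palette is TBC).
const ELEMENTS = [
  { symbol: "O",     fraction: 0.65,  cpk: "#ff0d0d" }, // red
  { symbol: "C",     fraction: 0.185, cpk: "#909090" }, // grey
  { symbol: "H",     fraction: 0.095, cpk: "#ffffff" }, // white
  { symbol: "N",     fraction: 0.032, cpk: "#3050f8" }, // blue
  { symbol: "Ca",    fraction: 0.015, cpk: "#3dff00" }, // green
  { symbol: "P",     fraction: 0.011, cpk: "#ff8000" }, // orange
  { symbol: "other", fraction: 0.012, cpk: "#cccccc" }, // trace elements
];

// Split a total particle budget across elements by mass fraction.
function particleCounts(total) {
  return ELEMENTS.map((e) => ({
    symbol: e.symbol,
    count: Math.round(total * e.fraction),
  }));
}
```

With a 1,000-particle budget this gives 650 oxygen particles, 185 carbon, and so on - the proportions observers would read off the Grid act.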
The software/hardware challenge can be broadly broken down into four areas:
- Video capture
- Pose detection
- Particle simulation
- Scene transition / interaction design for user experience
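Whatever the platform, the four areas compose into one per-frame pipeline. A sketch - `captureFrame`, `detectPose`, `stepParticles` and `renderScene` are hypothetical stand-ins for the platform-specific implementations discussed below:

```javascript
// One frame of the installation: capture -> detect -> simulate -> draw.
// The deps object lets each platform plug in its own implementations.
function tick(state, deps) {
  const frame = deps.captureFrame();           // video capture
  const pose = deps.detectPose(frame);         // pose detection
  deps.stepParticles(state.particles, pose);   // particle simulation
  deps.renderScene(state, frame, pose);        // scene transition / drawing
  return state;
}
```

Keeping this loop platform-agnostic means the interaction design can be prototyped once and ported between the three technical concepts below.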
Technical concept for development on macOS/Swift
+'s: high-performance native code. No custom hardware required. Apple hardware is easy to support; the M1 is low-power and high-performance. A useful example for the wider creative coding community.
-'s: smaller developer community, less familiar language, and particle simulation and body tracking yet to be proven on this platform.
Software:
- AVFoundation for connecting to webcam and getting live video as a series of images
- Vision for Pose Estimation: https://developer.apple.com/documentation/vision/detecting_human_body_poses_in_images
- SpriteKit for 2D particle drawing and Physics simulation
- Metal for particle simulation to reach 100K or 1000K particles in real time, either natively or using Satin
Hardware:
- M1 Mac Mini with 16 GB memory and 256 GB storage
- Logitech Brio Stream Webcam, Ultra HD 4K Streaming Edition
References:
- https://developer.apple.com/videos/play/wwdc2020/10653 - "Detect Body and Hand Pose with Vision", WWDC 2020 talk, with source code for Hand Tracking, but not Body Tracking: https://developer.apple.com/documentation/vision/detecting_hand_poses_with_vision
- https://developer.apple.com/videos/play/wwdc2021/10040 - "Detect people, faces, and poses using Vision", WWDC 2021 talk
- https://developer.apple.com/videos/play/wwdc2020/10099 - "Explore the Action & Vision app", iOS app, WWDC 2020 talk
- https://developer.apple.com/videos/play/wwdc2019/222 - "Understanding Images in Vision Framework", WWDC 2019 talk
- https://developer.apple.com/videos/play/wwdc2017/506 - "Vision Framework: Building on Core ML", WWDC 2017, intro to Vision
- https://developer.apple.com/machine-learning/models/ - see the PoseNet model here, with example macOS project linked as well as the following tutorial: https://developer.apple.com/documentation/coreml/detecting_human_body_poses_in_an_image which then links to the updated "pure" Swift Apple code, using the Apple Vision API: https://developer.apple.com/documentation/vision/detecting_human_body_poses_in_images
- Open Source Core ML models including PoseNet for pose detection and DeeplabV3 for image segmentation (human body foreground/background splitting).
- https://flexmonkey.blogspot.com/2015/08/ios-live-camera-controlled-particles-in.html - many particles reacting to the iPad camera
- Source for accessing the "Facetime" camera on macOS: https://github.com/fbukevin/AccessCamera
- Above source, but updated for Swift 5: https://github.com/yagobonardi/AccessCamera
- https://medium.com/@barbulescualex/making-a-custom-camera-in-ios-ea44e3087563 - tutorial on making a custom camera app on iOS, with source: https://github.com/barbulescualex/iOSCustomCamera, also link to tutorial on CIFilters for affecting the image using Metal: https://betterprogramming.pub/using-cifilters-metal-to-make-a-custom-camera-in-ios-c76134993316, with source: https://github.com/barbulescualex/iOSMetalCamera
- https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/avcam_building_a_camera_app - AVCam example iOS app from Apple
- https://izziswift.com/avfoundation-how-to-mirror-video-from-webcam-mac-os-x/ - flipping a webcam image so it feels "right" to visitors
- https://izziswift.com/avfoundation-capturing-video-with-custom-resolution/ - using AVFoundation to capture at custom resolutions
- https://www.appcoda.com/avfoundation-swift-guide/ - building a full screen camera app (iOS) using AVFoundation
- https://flexmonkey.blogspot.com/2014/07/using-swift-and-sprite-kit-for-physics.html - Swift and SpriteKit for physics
- https://github.com/FlexMonkey/ParticleLab - 4000K particles in Metal
- https://github.com/christopherkriens/boids - Boids in Swift
- https://github.com/jVirus/spritekit-water-node - SpriteKit water simulation
- https://github.com/jVirus/skcomponents-kit - rope simulation in SpriteKit
- https://github.com/jVirus/ios-spritekit-shader-sandbox - SpriteKit shader sandbox
- https://github.com/jeffreymorganio/spritekit-leaf-simulation - SpriteKit leaf simulation - what happens if we try this with 1K, 10K, 100K, 1000K leaves?
- https://pragprog.com/titles/tcswift/apple-game-frameworks-and-technologies/ - April 2021 textbook on "Apple Game Frameworks and Technologies - Build 2D Games with SpriteKit & Swift"
- https://github.com/backslash-f/pragprog-apple-game-frameworks - all the above examples ported from iOS to macOS
- https://github.com/topics/spritekit?l=swift&o=desc&s=updated - a way of researching specific Apple APIs on GitHub.
- https://github.com/matteocrippa/awesome-swift, https://github.com/serhii-londar/open-source-mac-os-apps and https://github.com/dkhamsing/open-source-ios-apps - curated lists of Swift libraries and open-source macOS/iOS apps
Technical concept for development on Windows/C++
+'s: high-performance native code. All constituent technical challenges solved (video input, 3D body tracking, large-scale particle simulation). Familiar platform for development.
-'s: custom body-tracking hardware required; a PC is required, making installation and remote support more challenging; availability of the Azure Kinect; challenges running UK versions of the Kinect V1 or V2 in the Netherlands.
Software:
- openFrameworks
- https://github.com/prisonerjohn/ofxAzureKinect for interfacing with MS Azure Kinect
- https://github.com/vanderlin/ofxBox2d for physics simulation
- https://github.com/CMU-Perceptual-Computing-Lab/openpose for person tracking with a generic webcam?
Hardware:
- Generic High Performance PC, spec TBC
- https://www.microsoft.com/en-gb/d/azure-kinect-dk/8pp5vxmd9nhq?activetab=pivot%3aoverviewtab Azure Kinect Developer Kit or Kinect V2 or Kinect V1, which JGL already has, or generic webcam:
- Logitech Brio Stream Webcam, Ultra HD 4K Streaming Edition
References:
- https://learnopencv.com/deep-learning-based-human-pose-estimation-using-opencv-cpp-python/ - pure openCV pose detection
- https://github.com/Geekrick88/ofxCaffe - can use the model above?
- https://github.com/Qengineering/TensorFlow_Lite_Pose_RPi_32-bits - 5fps on a Raspberry Pi! Also on the Jetson Nano: https://github.com/Qengineering/TensorFlow_Lite_Pose_Jetson-Nano
- https://github.com/fusefactory/ofxFastParticleSystem - particle addon
- https://github.com/armadillu/ofxSceneManager - fadeable scene transitions
- https://ofxaddons.com/categories - directory of openFrameworks addons
- https://github.com/zkmkarlsruhe/ofxTensorFlow2 - for using Tensorflow (Bodypix and Posenet?) on C++/oF
Technical concept for development on Web/JavaScript
+'s: web-based platform for easy sharing across multiple platforms, no custom hardware needed, prototype already constructed.
-'s: performance hit from running interpreted, non-native code.
This was the initial path for development, using the following platforms:
Software:
- p5.js as the overall integration library, with p5.SceneManager for scene management.
- ml5.js (therefore Tensorflow.js) for pose detection via the Posenet model and person/background segmentation via the BodyPix model.
- matter.js for particle physics simulation. https://github.com/liabru/matter-attractors for attractor physics. https://github.com/liabru/matter-wrap for wrapping the simulation around the display.
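For the Orbit act, matter-attractors applies a gravity-style pull between bodies. A minimal self-contained sketch of the same idea - a central point attracting each particle - with constants and field names that are assumptions for illustration:

```javascript
// Gravity-style attraction toward a central point, in the spirit of the
// matter-attractors plugin. G and the mass values are illustrative.
const G = 0.001;

// Returns the force to apply to `particle` so it is pulled toward `centre`.
function attractorForce(particle, centre) {
  const dx = centre.x - particle.x;
  const dy = centre.y - particle.y;
  const distSq = Math.max(dx * dx + dy * dy, 1); // avoid divide-by-zero
  const dist = Math.sqrt(distSq);
  const magnitude = (G * particle.mass * centre.mass) / distSq;
  return { x: (dx / dist) * magnitude, y: (dy / dist) * magnitude };
}
```

Applying this force each tick (while the particle keeps its tangential velocity) produces the swarming orbit described in the creative concept.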
Hardware:
- M1 Mac Mini with 16 GB memory and 256 GB storage
- Logitech Brio Stream Webcam, Ultra HD 4K Streaming Edition
The project could migrate to three.js if necessary for 100K or 1000K particle simulation.
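Before (or alongside) a three.js migration, the usual first step toward 100K+ particles in JavaScript is to drop per-particle objects in favour of a structure-of-arrays layout, which also maps directly onto GPU buffers later. A sketch under that assumption:

```javascript
// Structure-of-arrays particle store: one Float32Array per attribute avoids
// per-particle object overhead and GC pressure at 100K-1000K particle counts.
class ParticleStore {
  constructor(n) {
    this.n = n;
    this.x = new Float32Array(n);
    this.y = new Float32Array(n);
    this.vx = new Float32Array(n);
    this.vy = new Float32Array(n);
  }
  // Integrate one Euler step for all particles.
  step(dt) {
    for (let i = 0; i < this.n; i++) {
      this.x[i] += this.vx[i] * dt;
      this.y[i] += this.vy[i] * dt;
    }
  }
}
```

The same arrays can be handed to three.js as a `BufferGeometry` attribute if the migration goes ahead.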
References:
- https://blog.tensorflow.org/2018/05/real-time-human-pose-estimation-in.html
- https://www.theverge.com/2020/11/17/21572418/google-chrome-run-natively-on-apples-arm-macs-m1
- https://medium.com/analytics-vidhya/m1-mac-mini-scores-higher-than-my-nvidia-rtx-2080ti-in-tensorflow-speed-test-9f3db2b02d74
- https://caffeinedev.medium.com/how-to-install-tensorflow-on-m1-mac-8e9b91d93706
- https://towardsdatascience.com/installing-tensorflow-on-the-m1-mac-410bb36b776