Capture an audio-reactive application #125
I have a Three.js program that uses the Web Audio API. From previous issues, it seems there is currently no way to manipulate Web Audio as needed for capture. The meshes in my program "react" to the audio, so it is not possible to add the audio afterwards.

How was the Obsidian example in this project's README captured? The demo looks to be audio-reactive...
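(For context, a minimal sketch of the pattern described above, with all names illustrative rather than taken from the actual program: the render loop reads the live FFT from an AnalyserNode every frame, so the visuals are tied to real-time playback.)

```js
import * as THREE from 'three';

// Minimal audio-reactive setup (illustrative, not the issue author's code).
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 3;

const mesh = new THREE.Mesh(new THREE.IcosahedronGeometry(1, 3), new THREE.MeshNormalMaterial());
scene.add(mesh);

// Live Web Audio analysis: the analyser reflects whatever is playing right now.
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;
const audioEl = new Audio('track.mp3'); // hypothetical audio file
audioCtx.createMediaElementSource(audioEl).connect(analyser);
analyser.connect(audioCtx.destination);
audioEl.play();

const bins = new Uint8Array(analyser.frequencyBinCount);

function animate() {
  requestAnimationFrame(animate);
  analyser.getByteFrequencyData(bins); // FFT of the audio playing at this instant
  mesh.scale.setScalar(1 + bins[4] / 255); // a low-frequency bin drives the mesh
  renderer.render(scene, camera);
}
animate();
```

Because getByteFrequencyData reflects wall-clock playback, a capturer that renders frames slower than real time sees FFT data from the wrong moment, hence the need to either merge the audio afterwards or pre-generate the FFT.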
Comments

I usually just capture the video and merge it with the audio file afterwards. Obsidian is not audio-reactive; it's scripted.
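(The post-capture merge is typically one desktop ffmpeg invocation; file names here are placeholders. Stream-copying both tracks into a Matroska container avoids re-encoding, and -shortest trims the output to the shorter input:)

```sh
ffmpeg -i capture.webm -i track.mp3 -c copy -shortest merged.mkv
```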
Ah ok, thank you. Will try pre-generating the FFT data as suggested in this issue.
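(A sketch of that pre-generation approach, assuming a browser where OfflineAudioContext.suspend() is available; the file name, frame rate, and function name are illustrative. The audio is rendered offline, the context is suspended at each frame boundary, and the analyser bins are copied out, giving one FFT snapshot per video frame that a slower-than-real-time capture loop can index deterministically:)

```js
// Pre-compute one FFT snapshot per video frame so that a capture loop
// running slower than real time can still look up the right data.
async function pregenerateFFT(audioUrl, fps = 60) {
  const encoded = await (await fetch(audioUrl)).arrayBuffer();

  // Any BaseAudioContext can decode; a tiny offline context avoids
  // autoplay-policy warnings from a real AudioContext.
  const buffer = await new OfflineAudioContext(1, 1, 44100).decodeAudioData(encoded);

  const offline = new OfflineAudioContext(
    buffer.numberOfChannels, buffer.length, buffer.sampleRate);
  const source = offline.createBufferSource();
  source.buffer = buffer;
  const analyser = offline.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);
  analyser.connect(offline.destination);
  source.start(0);

  const frames = [new Uint8Array(analyser.frequencyBinCount)]; // frame 0: silence
  const frameCount = Math.floor(buffer.duration * fps);

  // All suspensions must be scheduled before rendering starts; at each
  // stop, copy the analyser's current bins and resume.
  for (let i = 1; i < frameCount; i++) {
    offline.suspend(i / fps).then(() => {
      const bins = new Uint8Array(analyser.frequencyBinCount);
      analyser.getByteFrequencyData(bins);
      frames.push(bins);
      offline.resume();
    });
  }

  await offline.startRendering();
  return frames; // frames[n] is roughly the FFT at the start of video frame n
}
```

In the capture loop, frame n then reads frames[n] instead of calling getByteFrequencyData, which makes the render fully deterministic.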
I'd like to be able to add audio to the video files rendered in the browser; non-real-time is fine for my use case. I can render the audio using an OfflineAudioContext, but so far I haven't managed to accomplish the "merge" step inside the browser. I see that some people have ported ffmpeg to the browser using WebAssembly, so I might try to get my head around that at some point. It seems a bit overkill for such a simple task, but I really want my application's users to be able to download one completed file. I did some experiments with the browser's MediaRecorder API, but my machine wasn't fast enough to transcode 50 fps in real time without audio glitches, even just loading a WebM and an MP3 from disk. Thanks for the cool library.
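(For the in-browser merge, a sketch using ffmpeg.wasm, the WebAssembly port mentioned above. The @ffmpeg/ffmpeg package and its createFFmpeg/fetchFile calls are the pre-0.12 interface, and the file names are placeholders. Since both streams are copied rather than transcoded, this sidesteps the real-time performance problem entirely:)

```js
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg'; // assumed: pre-0.12 API

// Merge a captured WebM video with an MP3 track entirely in the browser.
// videoBlob is the Blob produced by the capturer; audioUrl points at the MP3.
async function muxVideoAndAudio(videoBlob, audioUrl) {
  const ffmpeg = createFFmpeg({ log: true });
  await ffmpeg.load();

  // Write both inputs into ffmpeg's in-memory filesystem.
  ffmpeg.FS('writeFile', 'video.webm', await fetchFile(videoBlob));
  ffmpeg.FS('writeFile', 'audio.mp3', await fetchFile(audioUrl));

  // Stream-copy both tracks into a Matroska container: no transcoding,
  // so it is fast even on a slow machine. -shortest trims to the shorter input.
  await ffmpeg.run(
    '-i', 'video.webm',
    '-i', 'audio.mp3',
    '-c', 'copy',
    '-shortest',
    'output.mkv'
  );

  const data = ffmpeg.FS('readFile', 'output.mkv');
  return new Blob([data.buffer], { type: 'video/x-matroska' });
}
```

The returned Blob can be handed to the user as a single completed download. Note that newer releases of @ffmpeg/ffmpeg replace createFFmpeg with an FFmpeg class, so check the version you install.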