
MIDI message level interface design #45

Open · Yikai-Liao opened this issue Jun 13, 2024 · 3 comments
Labels: enhancement (New feature or request)

@Yikai-Liao (Owner) commented on Jun 13, 2024:
I'm preparing to add a MIDI message-level interface to symusic, but it's not yet clear to me how the interface should be designed.

So I thought I'd ask for your opinion here, @wrongbad.

Yikai-Liao added the enhancement label on Jun 13, 2024.
@wrongbad commented on Jun 13, 2024:

The first question is what level of detail you need. Probably all the channel messages are in. Do you think anyone would want sysex, real-time, or meta messages?

You probably want to offer a single array view of the event stream so users don't have to reconstruct timing. But then you need a way to present different message object types. In tensormidi I used a super-set struct with multi-purpose fields, but that might not match the style of this project.
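
For what it's worth, here is a minimal sketch of the super-set idea in Python/numpy, assuming a hypothetical fixed-width record with a status byte and two multi-purpose data fields. The field names and layout are illustrative only, not tensormidi's or symusic's actual format:

```python
import numpy as np

# Hypothetical super-set record: one fixed-width row per event, with
# multi-purpose data fields whose meaning depends on `type`.
# (Illustrative only -- not tensormidi's or symusic's actual layout.)
MESSAGE_DTYPE = np.dtype([
    ("time",    np.uint32),  # absolute tick
    ("type",    np.uint8),   # status: 0x80 note-off, 0x90 note-on, 0xB0 CC, ...
    ("channel", np.uint8),
    ("data1",   np.uint8),   # note number / controller number / program ...
    ("data2",   np.uint8),   # velocity / controller value / unused ...
])

events = np.zeros(2, dtype=MESSAGE_DTYPE)
events[0] = (0,   0x90, 0, 60, 100)  # note-on  C4, velocity 100
events[1] = (480, 0x80, 0, 60, 0)    # note-off C4
```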

Since you have a pointer-indirection array already, maybe you could have it hold multiple backend buffers for different message types, but present a single pointer / object sequence.
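
As a rough sketch of that idea, assuming each per-type buffer is kept sorted by time, a k-way merge can present the separate buffers as one time-ordered sequence. The message classes and buffer names here are hypothetical, not symusic's actual API:

```python
import heapq
from dataclasses import dataclass

# Hypothetical per-type message classes and buffers (illustrative only).
@dataclass
class NoteOn:
    time: int
    key: int
    velocity: int

@dataclass
class ControlChange:
    time: int
    number: int
    value: int

note_ons = [NoteOn(0, 60, 100), NoteOn(480, 64, 90)]
controls = [ControlChange(240, 64, 127)]

# Each buffer is already sorted by time, so an O(n) k-way merge presents
# the separate typed buffers as a single time-ordered message sequence.
stream = list(heapq.merge(note_ons, controls, key=lambda m: m.time))
for msg in stream:
    print(msg)
```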

Another option is using a single super-set struct in the C++ backend, but constructing different Python types, based on the message type, that view the underlying struct.
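
Building on the record sketch above, a rough sketch of this second option: one backing super-set row, with lightweight Python view classes dispatched on the status byte. The class and property names are hypothetical:

```python
# Hypothetical view classes over a single super-set record (e.g. one row of
# the MESSAGE_DTYPE array sketched above). Illustrative only.
class Message:
    def __init__(self, row):
        self._row = row  # one record from the super-set array

    @property
    def time(self):
        return int(self._row["time"])

class NoteOnView(Message):
    @property
    def key(self):
        return int(self._row["data1"])

    @property
    def velocity(self):
        return int(self._row["data2"])

class ControlChangeView(Message):
    @property
    def number(self):
        return int(self._row["data1"])

    @property
    def value(self):
        return int(self._row["data2"])

_VIEWS = {0x90: NoteOnView, 0xB0: ControlChangeView}

def view(row):
    # Dispatch on the status byte; fall back to the generic view.
    return _VIEWS.get(int(row["type"]), Message)(row)
```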

@Yikai-Liao (Owner, Author) commented:

After thinking it over, I feel like I still haven't figured out the usage scenario for adding a message-level API. For me, it's more of a debugging tool for analyzing whether any errors occur when saving MIDI. I think it's important to analyze the possible usage scenarios to figure out what pattern to design it around.

@wrongbad commented:
Oh I see. In my case, I'm trying to model note-on/note-off events causally, so I can condition the generation on a live stream (e.g. from a digital instrument).
