Support for control by expression pedals #9
It's not a bad question. I've had this thought as well. The main reason it's not implemented is that we haven't had the resources to implement it (yet). I work full-time in research and develop this software in my spare time.

JACK supports MIDI I/O as well, so we would likely make use of this, since we already use JACK for audio I/O. This would also allow you to route MIDI commands between JACK-aware applications.

The server does support an API for the UI (implemented in JavaScript) to communicate with it. All commands are sent as HTTP POST (or GET, but in practice we only use POST, though the server accepts both) requests via a TLS-secured connection. You can use "application/x-www-form-urlencoded" or "multipart/form-data" as the MIME type and then provide data with the respective encoding in the HTTP request body. You might be able to "hack together" a "MIDI to TLS bridge" this way, but it would be hard to get it to adjust to different configurations (numbers of channels, effects units within a chain and the parameters that they have, etc.). "Native" support of MIDI control messages by the DSP process would therefore be preferable.

Another reason to implement it directly in go-dsp-guitar is that inter-process communication is expensive: parameter adjustments can wait on certain data structures to become "unlocked" and are only applied "per block", so without "native" support, parameter updates would not be smooth.

One of the main issues with implementing MIDI support is that expression pedals (and other controllers like rotary knobs, etc.) can send lots of different MIDI messages, and to allow for flexibility across different vendors and devices, we'd have to support all of them. Also, some controllers use a single channel (providing 7 bits of resolution), while others "bond" two channels (providing 14 bits of resolution).
Supporting these is especially complicated, since it is not specified, for example, in which order data is sent over the two channels, or what it means when data is transmitted on only one of them and not the other. Do we have to wait for both to arrive before updating the value, or is, for example, an update of the least significant bits possible without retransmitting the most significant bits as well? What do we do after we have received the most significant bits? Should we wait for the least significant bits to arrive before considering the value "complete"? Can we be sure that they're actually going to arrive? (If it's a 7-bit device, they won't, but we can't really distinguish that, since even a 14-bit device is allowed to omit them.) There's quite some room for interpretation in the MIDI specification, and probably also devices out there which break with the specification. That makes this hard to implement. We'd also have to "expose" all of this variability to the user for configuration, since we don't know which devices they're going to attach or how those devices will behave. So yeah, I think it's a good idea, but it's really huge.

Also, in some places within the code, we assume that values are not changed regularly - for example, at least not during audio "periods" from JACK. If we want to make transitions smooth, we have to handle "continuous changes". (MIDI messages carry a time stamp, so we can reconstruct exactly when within a period a parameter was changed.) But for some parameters, this might be "too expensive" - for example, changing the levels in the "power amp" simulation causes the FIR filters to get "recompiled", which is an expensive operation - and it might be better to only update "per period". In some other places, we could do "smooth updates", also within a period, but we might lose some optimizations. These are tough decisions to make, and I currently wouldn't know what "the right way" is.
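To make the MSB/LSB ambiguity above concrete, here is a minimal Go sketch of one possible decoding policy for "bonded" controller pairs (CC 0-31 coarse, CC 32-63 fine): apply the coarse value immediately and treat a later fine value as a refinement. This is just one interpretation among the several the comment above describes, not something go-dsp-guitar implements.

```go
package main

// cc14Decoder combines coarse (CC 0-31) and fine (CC 32-63) control-change
// messages into a single 14-bit value. Policy choices here (apply the MSB
// immediately and reset the LSB, treat a later LSB as a refinement) are one
// possible reading of the MIDI spec; 7-bit-only devices simply never send
// the fine part and still work.
type cc14Decoder struct {
	msb uint16 // last coarse value (7 bits)
	lsb uint16 // last fine value (7 bits)
}

// Feed consumes one control-change message and returns the current
// 14-bit value (0..16383).
func (d *cc14Decoder) Feed(controller uint8, value uint8) uint16 {
	switch {
	case controller < 32:
		// Coarse part: by convention, a new MSB resets the fine part.
		d.msb = uint16(value) & 0x7f
		d.lsb = 0
	case controller < 64:
		// Fine part: refine the previously received coarse part.
		d.lsb = uint16(value) & 0x7f
	}
	return d.msb<<7 | d.lsb
}
```

A device that only ever sends the coarse controller degrades gracefully to 7-bit steps, which is exactly why a receiver cannot tell a 7-bit pedal from a 14-bit one that omits the fine part.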
Last but not least, we have no way to update the UI from the DSP process, and the MIDI commands would arrive directly at the DSP, not at the UI. The UI would therefore be unaware of MIDI changes, so we could not, for example, update a "knob position" in the UI when the expression pedal moves. We'd probably have to let the user assign parameters to MIDI channels and then show those parameters as "disabled" in the UI, to indicate that they are currently not under the UI's control (but are instead mapped to a MIDI device). So yes, it certainly should be possible, but implementing this in a general sense is a huge task.
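The "mapped parameters show as disabled" idea could be sketched with a small amount of bookkeeping. All names below are hypothetical; go-dsp-guitar has no such type today, and this only illustrates the ownership rule described above.

```go
package main

// midiBinding records that a DSP parameter is driven by a MIDI controller
// instead of the web UI. The field layout is made up for illustration.
type midiBinding struct {
	Channel    uint8 // MIDI channel (0-15)
	Controller uint8 // control-change number
}

// parameter is a hypothetical DSP parameter as the UI would see it.
type parameter struct {
	Name    string
	Value   float64
	Binding *midiBinding // non-nil while the parameter is MIDI-controlled
}

// UIEnabled reports whether the web UI may edit the parameter. The UI
// would render MIDI-bound parameters as disabled controls to signal that
// a MIDI device, not the UI, currently owns them.
func (p *parameter) UIEnabled() bool {
	return p.Binding == nil
}
```

The design choice here is that a parameter has exactly one owner at a time, which sidesteps the question of reconciling concurrent edits from the UI and a pedal.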
Ok, wow. Thanks for clueing me in on the route to a "MIDI to TLS bridge"... I understand the caveats. I'll continue to use this and may eventually test rapidly pinging the server via the API to observe how well or poorly some parameters can be updated. If there is any opportunity (even if it's limited) on that end, then I may take the experiment further. I know it must be the worst way to inject changes into the pipeline... but it's also trivial =) Thanks again!
I'm pretty new to all this, so sorry if this is a 'bad' question: Is there any chance that control by expression pedals might be made possible?
Best I can tell, the 'most proper' way to do this would be MIDI control messages.
There are numerous gaps...
Would love to know your thoughts,
Thanks for this very cool tool!