Web MIDI Synths #154

Closed
notator opened this issue Sep 25, 2015 · 1 comment

notator commented Sep 25, 2015

This is the continuation of my recent posts to #150 and #153.
I take Chris' answer to #153 as an invitation to continue here, not at public-audio-dev. :-)

First, a definition:
A Web MIDI Synth is a software device that implements the WebMIDIAPI MIDIOutput interface (WebMIDIAPI §10, with MIDIPort type="output"). Web MIDI Synths consist of JavaScript files that reside on websites, where they are accessible to host applications that send them MIDI messages.

Host applications can choose to send their messages either to a Web MIDI Synth or to a hardware output device provided by the browser's implementation of the WebMIDIAPI. The browser's implementation does not have to be invoked if the host only needs a local Web MIDI Synth. [1]
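
Since both kinds of output expose the same send() method, a host could treat the two cases uniformly. A minimal sketch, assuming a hypothetical local WebMIDISynth object (constructed as proposed below):

// Hypothetical: use a local software synth, or fall back to MIDI hardware.
var outputDevice;
if (useLocalSynth) {
    outputDevice = new WebMIDISynth(); // no call to requestMIDIAccess needed
} else {
    navigator.requestMIDIAccess().then(function(access) {
        outputDevice = access.outputs.values().next().value; // first hardware output
    });
}
// Either way, the host simply calls outputDevice.send(...).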

I think the Web MIDI API needs to be extended to include a standard defining the interface for (software) Web MIDI Synths. Such a standard would not affect current browser implementations in any way since browsers only implement the interfaces to hardware devices.
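
To make the idea concrete, the standardized surface might be no more than the members a host actually uses on a MIDIOutput. The following is only a hypothetical sketch, not spec text:

// A Web MIDI Synth would look like a MIDIOutput to the host's sending code.
var webMIDISynth = {
    name: "basic wavetable synth",    // analogous to MIDIPort.name
    send: function(data, timestamp) { // same signature as MIDIOutput.send()
        // interpret the MIDI bytes in 'data' and schedule Web Audio events
    }
};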

As the author of a Web MIDI Synth's host application, I want to be the one who decides which sounds are going to be used. I also know which MIDI messages I'm going to send (which patches, pitches, controllers, etc. I need). In other words, I'd like to be able to customize the synth for my particular case. It's a waste of space and time to load a synth that implements the whole MIDI spec if I'm not going to use it all. There's a fundamental difference here between hardware and software synthesizers.

There is a core of messages that all synths probably have to support (NoteOn, NoteOff, AllSoundOff, ...).
I think that highly optimised (low-latency) versions of these might quickly become available, and that they would not need to be re-programmed very often.
Wavetable synths also need to load samples of some sort, and they could be bundled conveniently in a SoundFont [2]. The code for loading SoundFonts would be needed in most situations, so it could be included in the basic WebMIDISynth. As the host, I know which patches and pitches I need, so I can supply a SoundFont that is no larger than necessary.
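
As a sketch of how small such a core could be (the status-byte values come from the MIDI spec; the handler names are hypothetical):

// Hypothetical core dispatcher: only the messages the host actually sends.
function send(data, timestamp) {
    var status = data[0] & 0xF0, channel = data[0] & 0x0F;
    switch (status) {
        case 0x90: noteOn(channel, data[1], data[2], timestamp); break; // NoteOn (velocity 0 is conventionally a NoteOff)
        case 0x80: noteOff(channel, data[1], timestamp); break;         // NoteOff
        case 0xB0: if (data[1] === 120) { allSoundOff(channel); } break; // CC 120 = AllSoundOff
    }
}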
Proposal: The core functions could be defined inside a WebMIDISynth object that would be loaded using:
<script src="WebMIDISynth.js" type="text/javascript"></script>
and constructed using:
var synth = new WebMIDISynth();
Synths that need SoundFonts would have a mutable 'soundFont' attribute that can be set by the host.
The JavaScript file would not be normative, of course. There could be various versions available on the web. But the synth's interface needs defining.
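
Loading a custom SoundFont might then be as simple as the following (the file name and the ArrayBuffer representation are assumptions, not part of the proposal):

// Hypothetical usage: the host supplies its own, minimal SoundFont.
var synth = new WebMIDISynth();
fetch("assets/pianoNotes_C4toC6.sf2")
    .then(function(response) { return response.arrayBuffer(); })
    .then(function(buffer) { synth.soundFont = buffer; });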

Customised synths don't have to support all the continuous controllers, and I'd like to be able to choose between different "modulation" modules. If I were a Web Audio programmer (which I'm not), I'd like to be allowed to program my own.
Proposal:
Continuous controller objects should be loaded/defined in separate files:
<script src="cc1.js" type="text/javascript"></script>
<script src="cc2.js" type="text/javascript"></script>
etc.
and there should be a function that connects these objects to controller indices:
loadContinuousController(0, cc1)
loadContinuousController(1, cc2)
etc.
(The first argument would be the index used by the host, in MIDI messages, to identify the controller.)
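
A controller module could then be a single object with an agreed method name. Purely as a hypothetical sketch (the 'voice' internals are assumptions):

// cc1.js -- hypothetical shape of a pluggable continuous controller module.
// The synth would call apply() whenever the host sends the matching CC index.
var cc1 = {
    name: "modulation (vibrato depth)",
    apply: function(voice, value) { // value is the MIDI data byte, 0..127
        voice.vibratoGain.gain.value = value / 127; // assumed voice structure
    }
};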

I think that libraries of all these modules could easily come into existence somewhere, but that I would only load the particular modules I need onto my website.

Summary: I'd currently like to load the basic synth constructor and continuous controller objects using:

<script src="WebMIDISynth.js" type="text/javascript"></script>
<script src="cc1.js" type="text/javascript"></script>
<script src="cc2.js" type="text/javascript"></script>

Then do

var synth = new WebMIDISynth(soundFontUrl);
synth.loadContinuousController(0, cc1);
synth.loadContinuousController(1, cc2);

and then call

synth.send(message, timestamp);
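
where message is an array of MIDI bytes, exactly as for MIDIOutput.send(). For example, a NoteOn for middle C:

// NoteOn, channel 0, key 60 (middle C), velocity 100, played immediately.
synth.send([0x90, 60, 100], performance.now());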

All the best,
James

[1] I'd be quite happy for Chrome to put its WebMIDIAPI implementation back behind a flag. That code only needs to be loaded if the user has MIDI hardware plugged in. It's extremely important, of course, that the code remains available: lots of applications need to be able to use hardware MIDI input and output devices. Switching on support for MIDI hardware in Chrome is something one could do when plugging in the hardware. The synth described above would not, however, need the flag to be enabled.

[2] SoundFont may not be the only format to consider; it's just one I happen to have used. This needs discussing. There are lots of (freeware) SoundFonts available (see, for example, the list at http://coolsoft.altervista.org/en/virtualmidisynth), and third-party software for editing them and creating new ones. (I've used Audacity and the Viena editor from http://www.synthfont.com/.)


cwilso commented Sep 25, 2015

We have a tracking item for this already - #124
