
Resolves gh-34 and gh-35. Fixes implementation of AudioContext Clock. #36

Merged (4 commits, Mar 7, 2023)

Conversation

colinbdclark (Collaborator)

This PR fixes the broken start() and stop() implementations in the AudioContext Clock and adds basic unit tests for it:

- Properly implements start() and stop() for the AudioContext Clock.
- Adds unit tests for the AudioContext Clock (which require the autoplay policy to be disabled in order to run successfully).
@duhrer commented Mar 1, 2023

I will leave feedback on the code changes here ASAP. Could you cut a dev release so I can also try this work out downstream?

package.json Outdated
@@ -33,7 +33,8 @@
 },
 "scripts": {
 "prepare": "npx grunt",
-"browser-test": "npx testem ci --file tests/testem.json",
+"browser-webaudio-test": "npx testem ci --file tests/testem-webaudio.json",
duhrer:
I totally understand why you did this, but a side effect is that you can no longer pass overrides to testem on the command line, as in npm test -- --launch Chrome, because those options are not passed along to the individual commands. This is relevant when problematic browsers (like Safari) need to be skipped.

colinbdclark (Collaborator Author):
Can you propose an alternative that maintains the ability to run the core tests across all supported browsers, and ensures we can target the Web Audio API-dependent tests to those browsers that support command line flags for disabling the autoplay policy?

colinbdclark (Collaborator Author):
Also, why would we skip Safari? It's a supported platform and the core tests do pass there.

duhrer:
I've commented elsewhere on the seeming brokenness of Safari in some environments.

I have been working on adding command-line (or environment-variable) overrides that would survive a rollup, as an improvement to fluid-testem. I was also thinking we could fan out the Chrome options to all variants using fluid-testem and Infusion's options merging. But I do think we should both keep looking (and asking) around in general.

One option I'd be totally happy with is just documenting how to launch the test fixtures individually with testem options, like a line or two in the testing section of the documentation.

{
"test_page": "tests/html/audiocontext-clock-tests.html",
"timeout": 300,
"skip": "PhantomJS,IE,Firefox,Headless Firefox,Safari,Brave,Headless Brave,Headless Chrome",
duhrer (Mar 1, 2023):

In my case this exhaustive list of "skips" was not quite enough, as I also have Opera and Safari installed (the former fails because of the "user intent" issue; the latter because it's, well, Safari).

In other work, I have been thinking a lot about switching to using launch instead, as skips kind of imply that you support every new browser testem adds support for, which is especially not the case here.

If that makes sense to you, I would suggest only launching "Headless Chrome" here. Headless runs do work, and take four seconds instead of over twenty.

colinbdclark (Collaborator Author):
Safari is included in this skip list. Are you saying Safari was still invoked on your machine?

Can you explain what launch does?

Testing only in headless browsers where they are available would be nice, and I'm not averse to it. The question that lingers for me, though, is whether we think they'll continue to be reliable for the requestAnimationFrame clock tests, which do in theory have some coupling to the display frame rate. The tests pass just fine in the headless browsers and arguably might be more resilient, but it's a question of trusting that we're testing in a sufficiently stable and realistic environment. I had filed #37 last night to add tests that are workable in a CI context, where I assume the headless browsers are the only option.

duhrer (Mar 1, 2023):
> Safari is included in this skip list. Are you saying Safari was still invoked on your machine?

You have two files, and you only skip Safari in the web audio one.

> Can you explain what launch does?

launch is the opposite of skip: it only runs the tests in the listed browsers, rather than running in all available browsers and skipping selected ones. I would argue for using it here specifically, as you only apply the autoplay fix for one Chrome variant. You could also set that flag for a bunch of variants and skip the ones that have no autoplay fix.

I can see the rationale for running "headed" for requestAnimationFrame; that's a really good point. You can run "headed" browsers using xvfb in CI; that's what fluid-testem does at the moment (here's the CI config).
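The launch-based approach duhrer describes could look roughly like this in the Web Audio Testem file (a sketch only; test_page and timeout are copied from the excerpt quoted above, and the final file may differ):

```json
{
    "test_page": "tests/html/audiocontext-clock-tests.html",
    "timeout": 300,
    "launch": "Headless Chrome"
}
```

With launch, Testem runs only the named browsers, so no skip list is needed for this suite.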

colinbdclark (Collaborator Author):
It took a bit of digging to even find out that Testem supports a launch configuration option, as you say. This seems much better than endless skip lists. Do we know if it is resilient to specifying a launcher that isn't installed on the platform? For example, if I include Safari in my launch list, will the run still succeed on Windows and just ignore the request for Safari?

duhrer:
It seems like you're on to something there: the run will indeed fail if the requested browser isn't installed. This suggests to me that launch is best for very specific circumstances like these, where you only have the required configuration options for a single browser.

colinbdclark (Collaborator Author):
Does this seem comprehensive of what we want for the core tests if we have to go with a skip declaration instead of using launch?

"skip": "Brave,Chrome Canary,Edge,Headless Brave,Headless Chrome,Headless Firefox,Headless Opera,IE,Opera"

duhrer:
Shouldn't Safari also be skipped for the web audio tests?

colinbdclark (Collaborator Author):
I'll push my latest changes shortly, but I'm using the skip statement above for the core tests, and launch with only Chrome for the Web Audio tests.

"skip": "PhantomJS,IE,Firefox,Headless Firefox,Safari,Brave,Headless Brave,Headless Chrome",
"browser_args": {
"Chrome": [
"--autoplay-policy=no-user-gesture-required"
duhrer:
If you decide to go headless by default, this should be "Headless Chrome".
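In Testem config terms, the point is that browser_args entries are keyed by the exact launcher name, so a headless run needs the flag under the headless key (a sketch, assuming the headless route is taken):

```json
"browser_args": {
    "Headless Chrome": [
        "--autoplay-policy=no-user-gesture-required"
    ]
}
```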

@@ -1,5 +1,5 @@
 {
 "test_page": "tests/all-tests.html",
 "timeout": 300,
-"skip": "PhantomJS,IE,Headless Chrome"
+"skip": "PhantomJS,IE"
duhrer:
I can see using skip here, as the tests do pass in a wider range of browsers. However, it does make the runs take a long time if you have lots of browsers installed (and, as noted, you can no longer pass a flag to npm test to override this). Launching only Headless Firefox and Headless Chrome gives a pretty good speed boost, but I understand if you'd rather run on everything by default.

If you decide to keep using skip, do please add Safari to the list.

colinbdclark (Collaborator Author):
I'm not clear why we'd skip Safari. I admittedly do try to stay a version or two behind the latest macOS (for obvious reasons, as a musician), but we do support Safari and the core tests do pass there (with the usual annoying manual intervention). Did they fail on your machine? If so, can you file an issue that includes the error you receive and the versions of macOS and Safari you're running?

duhrer:
The Safari tests seem more deeply broken: a blank window comes up, but no test output is displayed and the window never closes (I'm on Monterey with the latest non-preview version of Safari installed).

It's been this way for a while, at least for me, and it may be the same behaviour Justin reported here in Catalina. There's a workaround in the ticket.

@duhrer commented Mar 1, 2023

Just to confirm, the Safari behaviour I'm seeing matches what Justin reported, and if I open the file in the prompt that appears, the tests do run.

package.json Outdated
@@ -33,7 +33,8 @@
 },
 "scripts": {
 "prepare": "npx grunt",
-"browser-test": "npx testem ci --file tests/testem.json",
+"browser-webaudio-test": "npx testem ci --file tests/testem-webaudio.json",
+"browser-test": "npm run browser-webaudio-test && npx testem ci --file tests/testem.json",
duhrer:
There probably should be a non-rollup script for the core tests; with this rollup I can't pass command-line options to the core tests. Just updating npm test to roll up all three would be fine.
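One shape for what duhrer is asking for (a sketch; the node-test script name and its file path are assumptions, not taken from the repository) is to keep each suite as its own script and make the top-level test the only rollup:

```json
"scripts": {
    "node-test": "node tests/all-tests.js",
    "browser-test": "npx testem ci --file tests/testem.json",
    "browser-webaudio-test": "npx testem ci --file tests/testem-webaudio.json",
    "test": "npm run node-test && npm run browser-test && npm run browser-webaudio-test"
}
```

This keeps npm run browser-test -- --launch "Headless Chrome" workable, since options after -- are forwarded to the single underlying testem invocation.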

duhrer:
The latest commit does what I was hoping for: I can now cherry-pick which suite of tests to run and pass testem options. Thanks!

…de, core, and Web Audio tests.

Narrows the list of browsers we test with using Testem's launch option.
@duhrer commented Mar 2, 2023

I was just noticing that the "core" tests are still branded with "all"; it seems clearer to rename them to "core" or "non-webaudio", something like that.

@@ -90,30 +90,42 @@ var fluid = fluid || require("infusion"),
"onStart.startAudioContext": {
priority: "after:updateState",
funcName: "berg.clock.autoAudioContext.start",
args: ["{that}.context", "{that}.scriptNode"]
duhrer:
You might get a second opinion, but in recent experience I've found the startup lifecycle to be one of the key places where you want to be stricter about your references. It seems like you could find a way to preserve the previous specificity, maybe just adding {that}.tick to the arguments and using that in the start function.
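The more specific style being suggested could be sketched like this in the component's listener options (illustrative; it mirrors the argument style that appears elsewhere in this discussion):

```javascript
"onStart.startAudioContext": {
    priority: "after:updateState",
    funcName: "berg.clock.autoAudioContext.start",
    // Pass exactly the members start() needs, including the tick function,
    // rather than the whole component.
    args: ["{that}.scriptNode", "{that}.context", "{that}.tick"]
}
```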

scriptNode.onaudioprocess = undefined;
berg.clock.autoAudioContext.stop = function (that) {
try {
that.scriptNode.disconnect(that.context.destination);
duhrer:
This might also throw because scriptNode doesn't exist for whatever reason; it feels like the more specific notation is easy to preserve here.

colinbdclark (Collaborator Author):
If scriptNode were undefined or invalid, we'd want this code to throw an error, right? Because it would be an error in the implementation of the component.

If I've read the Web Audio API documentation correctly, the only time an InvalidAccessError would be thrown here is if we're trying to disconnect a node that hasn't been connected to the destination. So we catch that and move on; otherwise we rethrow the error. I don't love this implementation, but I think it supports the scenario you're thinking of. Should I adopt something better?
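The rethrow strategy described above can be sketched in isolation (the function name is hypothetical and the real component code receives its members via Infusion, but the error handling is the same idea):

```javascript
// Sketch: swallow only the InvalidAccessError that disconnect() throws when
// the script node was never connected (e.g. stop() called before start()),
// and rethrow anything else, since any other error signals a real bug.
function stopAudioContextClock(that) {
    try {
        that.scriptNode.disconnect(that.context.destination);
    } catch (e) {
        if (e.name !== "InvalidAccessError") {
            throw e;
        }
    }
    // Detach the processing callback regardless of connection state.
    that.scriptNode.onaudioprocess = undefined;
}
```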

duhrer:
In the chats you discussed converting to using a model variable and connecting/disconnecting based on that. A model based approach would avoid the onStart issues with functions that are only passed that.

If you still want to use the onStart event, though, the "precise args" style you used before is better.

colinbdclark (Collaborator Author):
I spent an hour trying a few different variations of this, and I just think they introduce too much complexity for the situation. As best as I can tell, there's no way to avoid using excludeSource: "init" in this situation (presumably that's what it's for), and the extra complexity of either a separate model component that mediates the connection or additional model listeners within this component seems hard to justify for what it does.

duhrer:
Spelling out the material the function needs fully in the listener args is what I'd still recommend. It's what you did before, and it's more robust.

However, given the time you've already put in, I'd be happy with just leaving a TODO on the "relaxed that-ist" listener definitions suggesting that we might need to improve the code later if there are timing issues on startup.

}
]
});

duhrer:
It seems like you should also have a "start after stop" test.

duhrer:
I wonder if a "stop after restart" test would be useful in ensuring that a "start after stop" does not leave anything unclean that prevents a further stop.

Not asking for you to loop further: the first "start after stop" test would make sure stop() leaves things in the right state, and the "stop after restart" test would make sure "start after stop" is done safely. That should hopefully cover the possible states that both start and stop can encounter, however many times you loop.
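The looping argument above can be sketched with a minimal stand-in for the clock (illustrative only; the real tests would drive the actual component, and these names are not the project's API):

```javascript
// Minimal stand-in for the script node connection lifecycle: start() refuses
// a double connect, while stop() tolerates being called in any state, as the
// real implementation does.
function makeFakeClock() {
    var clock = {
        connected: false,
        start: function () {
            if (clock.connected) { throw new Error("already connected"); }
            clock.connected = true;
        },
        stop: function () {
            clock.connected = false;
        }
    };
    return clock;
}

// Looping start/stop covers both "start after stop" and "stop after restart".
var clock = makeFakeClock();
for (var i = 0; i < 3; i++) {
    clock.start();
    clock.stop();
}
```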

// Only swallow the error if it was thrown because
// the script node wasn't connected,
// which can occur if stop() is called before start().
if (e.name !== "InvalidAccessError") {
duhrer:
This is the only branch that wasn't hit in the coverage report I created, but that's fine, as error cases are often harder to simulate.

@duhrer commented Mar 2, 2023

I find it useful to look at code coverage as part of a pull, and I also wanted to remind myself what was involved in converting a project, so I created a branch with basic code coverage. The coverage for the changes in this pull is fine; there's one uncovered error case for which you'd have to make some kind of mock. You can check out the branch and run npm test if you want to see the coverage for yourself.

@duhrer commented Mar 2, 2023

Otherwise I have looked through and added what comments I can for now.

@@ -90,13 +90,13 @@ var fluid = fluid || require("infusion"),
 "onStart.startAudioContext": {
 priority: "after:updateState",
 funcName: "berg.clock.autoAudioContext.start",
-args: ["{that}"]
+args: ["{that}.scriptNode", "{that}.context", "{that}.tick"]
duhrer:
This is all I hoped for and I am totally happy with it.

colinbdclark (Collaborator Author):
Cool, there was no debate from me about that; only the question of whether we wanted to work around the Web Audio API's lack of a way to determine whether a node has already been connected to a given destination. If you're good with this all as-is, I'll merge and cut a new release, and then whenever you get time you can see if it has any impact on Youme's demos?

duhrer:
I'm happy for this to be merged and to test/track any remaining work separately.

@colinbdclark colinbdclark merged commit 148f418 into lichen-community-systems:main Mar 7, 2023