
PR #36: Resolves gh-34 and gh-35. Fixes implementation of AudioContext Clock.

Merged (4 commits, Mar 7, 2023). Changes shown are from 2 commits.
package.json (2 additions, 1 deletion)

@@ -33,7 +33,8 @@
},
"scripts": {
"prepare": "npx grunt",
"browser-test": "npx testem ci --file tests/testem.json",
"browser-webaudio-test": "npx testem ci --file tests/testem-webaudio.json",
Reviewer:

I totally understand why you did this, but a side effect is that you can no longer pass overrides to testem as part of the command, as in npm test -- --launch Chrome, because those options are not passed along to the individual commands. This is relevant when problematic browsers (like Safari) need to be skipped.

Collaborator Author:

Can you propose an alternative that maintains the ability to run the core tests across all supported browsers, and ensures we can target the Web Audio API-dependent tests to those browsers that support command line flags for disabling the autoplay policy?

Collaborator Author:

Also, why would we skip Safari? It's a supported platform and the core tests do pass there.

Reviewer:

I've commented elsewhere on the seeming brokenness of Safari in some environments.

I have been working on adding command-line (or environment variable) overrides that would survive a rollup, as an improvement to fluid-testem. I was also thinking we could fan out Chrome options to all variants using fluid-testem and infusion merging of options. But I do think we should both keep looking (and asking) around in general.

One option I'd be totally happy with is just documenting how to launch the test fixtures individually with testem options, like a line or two in the testing section of the documentation.
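That documentation could be as short as a couple of invocation examples like the following (a sketch only; the --launch values are illustrative, and the file names are the ones used in this PR):

```shell
# Run the core suite on its own, overriding the configured browser list:
npx testem ci --file tests/testem.json --launch "Headless Firefox"

# Run the Web Audio suite on its own:
npx testem ci --file tests/testem-webaudio.json --launch "Headless Chrome"
```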

"browser-test": "npm run browser-webaudio-test && npx testem ci --file tests/testem.json",
Reviewer:

There should probably be a non-rollup command for the core tests; with this rollup I can't pass command-line options to them. Just updating npm test to roll up all three would be fine.

Reviewer:

The latest commit does what I was hoping, I can now cherry pick which suite of tests to run and pass testem options. Thanks!

"node-test": "node tests/node-all-tests.js",
"test": "npm run node-test && npm run browser-test"
}
src/js/audiocontext-clock.js (25 additions, 13 deletions)

@@ -81,7 +81,7 @@ var fluid = fluid || require("infusion"),
scriptNode: {
expander: {
funcName: "berg.clock.autoAudioContext.createScriptNode",
args: ["{that}.context", "{that}.options.blockSize", "{that}.tick"]
args: ["{that}.context", "{that}.options.blockSize"]
}
}
},
@@ -90,30 +90,42 @@
"onStart.startAudioContext": {
priority: "after:updateState",
funcName: "berg.clock.autoAudioContext.start",
args: ["{that}.context", "{that}.scriptNode"]
Reviewer:

You might get a second opinion, but in recent experience I've found the startup lifecycle to be one of the key places where you want to be stricter about your references. Seems like you could find a way to preserve the previous specificity, maybe just adding {that}.tick to the arguments and using that in the start function.

args: ["{that}"]
},

"onStop.stopAudioContext": {
priority: "after:updateState",
funcName: "berg.clock.autoAudioContext.stop",
args: ["{that}.context", "{that}.scriptNode"]
args: ["{that}"]
}
}
});

berg.clock.autoAudioContext.createScriptNode = function (context, blockSize, tick) {
var sp = context.createScriptProcessor(blockSize, 1, 1);
sp.onaudioprocess = tick;
return sp;
berg.clock.autoAudioContext.createScriptNode = function (context,
blockSize) {
var scriptNode = context.createScriptProcessor(blockSize, 1, 1);
return scriptNode;
};

berg.clock.autoAudioContext.start = function (context, scriptNode) {
scriptNode.connect(context.destination);
context.resume();
berg.clock.autoAudioContext.start = function (that) {
that.scriptNode.connect(that.context.destination);
that.scriptNode.onaudioprocess = that.tick;
that.context.resume();
};

berg.clock.autoAudioContext.stop = function (context, scriptNode) {
scriptNode.disconnect(context.destination);
scriptNode.onaudioprocess = undefined;
berg.clock.autoAudioContext.stop = function (that) {
try {
that.scriptNode.disconnect(that.context.destination);
Reviewer:

This might also throw because scriptNode doesn't exist for whatever reason, it feels like the more specific notation is easy to preserve here.

Collaborator Author:

If scriptNode were undefined or invalid, we'd want this code to throw an error, right? Because it would be an error in the implementation of the component.

If I've read the Web Audio API documentation correctly, the only time an InvalidAccessError would be thrown is if we're trying to disconnect a node that hasn't been connected to the destination. So we catch that and move on, otherwise we'll rethrow the error. I don't love this implementation, but I think it supports the scenario you're thinking of? Should I adopt something better?
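The catch-and-rethrow logic described here can be exercised in isolation. This is a hypothetical standalone helper, not part of this PR, and the stub nodes below stand in for real Web Audio objects:

```javascript
// Hypothetical helper illustrating the guard discussed above: swallow only
// the InvalidAccessError that disconnect() throws for a node that was never
// connected, and rethrow anything else so genuine bugs still surface.
function safeDisconnect(scriptNode, destination) {
    try {
        scriptNode.disconnect(destination);
    } catch (e) {
        if (e.name !== "InvalidAccessError") {
            throw e;
        }
    }
}

// Stub node simulating a disconnect() call on a never-connected node:
var neverConnected = {
    disconnect: function () {
        var e = new Error("the node is not connected");
        e.name = "InvalidAccessError";
        throw e;
    }
};

// Stub node simulating a genuine implementation error:
var broken = {
    disconnect: function () {
        var e = new Error("something else went wrong");
        e.name = "TypeError";
        throw e;
    }
};

safeDisconnect(neverConnected, {}); // swallowed: stop() before start() is a no-op
try {
    safeDisconnect(broken, {}); // rethrown: a real error still surfaces
} catch (e) {
    console.log("rethrown: " + e.name);
}
```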

Reviewer:

In the chats you discussed converting to using a model variable and connecting/disconnecting based on that. A model-based approach would avoid the onStart issues with functions that are only passed the whole "that".

If you still want to use the onStart event, though, the "precise args" style you used before is better.

Collaborator Author:

I spent an hour trying a few different variations of this, and I just think they introduce too much complexity for the situation. As best as I can tell, there's no way to avoid using excludeSource: "init" in this situation (presumably that's what it's for), and the extra complexity of either a separate model component that mediates the connection or additional model listeners within this component seems hard to justify for what it does.
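For context, the model-based alternative under discussion would look roughly like this. This is a hypothetical options fragment only; the model path "connected" and the function name updateConnection are invented for illustration:

```javascript
model: {
    connected: false
},

modelListeners: {
    connected: {
        // Presumably needed so the listener doesn't fire during component
        // initialization, before the AudioContext is available.
        excludeSource: "init",
        funcName: "berg.clock.autoAudioContext.updateConnection",
        args: ["{that}", "{change}.value"]
    }
}
```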

Reviewer:

Spelling out the material the function needs fully in the listener args is what I'd still recommend. It's what you did before, and it's more robust.

However, given the time you've already put in, I'd be happy with just leaving a TODO on the "relaxed that-ist" listener definitions suggesting that we might need to improve the code later if there are timing issues on startup.
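For reference, the "precise args" style being recommended is the one this diff removes, extended with {that}.tick as suggested earlier in this thread (a sketch, not a tested change):

```javascript
"onStart.startAudioContext": {
    priority: "after:updateState",
    funcName: "berg.clock.autoAudioContext.start",
    args: ["{that}.context", "{that}.scriptNode", "{that}.tick"]
}
```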

} catch (e) {
// Only swallow the error if it was thrown because
// the script node wasn't connected,
// which can occur if stop() is called before start().
if (e.name !== "InvalidAccessError") {
Reviewer:

This is the only branch that wasn't hit in the coverage report I created, but it's fine, as error tests are often harder to simulate.

throw e;
}
}

that.scriptNode.onaudioprocess = undefined;
that.context.suspend();
};
})();
tests/all-tests.html (1 deletion)

@@ -10,7 +10,6 @@
<script src="../node_modules/infusion/tests/lib/qunit/addons/composite/qunit-composite.js"></script>
<script src="/testem.js"></script>

<!-- find . -name "*-tests.html" | awk '{print "\""$1"\","}' -->
<script>
QUnit.testSuites("Bergson Tests", [
"html/offline-clock-tests.html",
tests/html/audiocontext-clock-tests.html (new file, 43 additions)

@@ -0,0 +1,43 @@
<!DOCTYPE html>
<html lang="en" dir="ltr" id="html">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>AudioContext Clock Tests</title>

<link rel="stylesheet" media="screen" href="../../node_modules/infusion/tests/lib/qunit/css/qunit.css" />

<script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/jquery.standalone.js"></script>
<script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/Fluid.js"></script>
<script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/FluidPromises.js"></script>
<script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/FluidDebugging.js"></script>
<script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/FluidIoC.js"></script>
<script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/DataBinding.js"></script>

<script type="text/javascript" src="../../node_modules/infusion/tests/lib/qunit/js/qunit.js"></script>
<script type="text/javascript" src="../../node_modules/infusion/tests/test-core/jqUnit/js/jqUnit.js"></script>

<script type="text/javascript" src="../../src/js/clock.js"></script>
<script type="text/javascript" src="../../src/js/audiocontext-clock.js"></script>

<script type="text/javascript" src="../js/utils/clock-tester.js"></script>
<script type="text/javascript" src="../js/utils/realtime-tester.js"></script>
<script type="text/javascript" src="../js/utils/clock-test-utilities.js"></script>
<script type="text/javascript" src="../js/utils/audiocontext-tester.js"></script>

<script type="text/javascript" src="../js/audiocontext-clock-tests.js"></script>
<script src="/testem.js"></script>
</head>

<body id="body">
<h1 id="qunit-header">AudioContext Clock Tests</h1>
<h2 id="qunit-banner"></h2>
<div id="qunit-testrunner-toolbar"></div>
<h2 id="qunit-userAgent"></h2>
<ol id="qunit-tests"></ol>

<!-- Test HTML -->
<div id="qunit-fixture" style="display: none;">

</div>
</body>
</html>
tests/js/audiocontext-clock-tests.js (new file, 84 additions)

@@ -0,0 +1,84 @@
/*
* Bergson AudioContext Clock Tests
* http://github.com/colinbdclark/bergson
*
* Copyright 2023, Colin Clark
* Dual licensed under the MIT and GPL Version 2 licenses.
*/
/*global require*/
var fluid = fluid || require("infusion"),
berg = fluid.registerNamespace("berg");

(function () {
"use strict";

var QUnit = fluid.registerNamespace("QUnit");

fluid.registerNamespace("berg.test.AudioContextClock");

QUnit.module("AudioContext Clock Tests");

QUnit.test("Instantiation", function () {
var clock = berg.clock.autoAudioContext();
QUnit.ok(clock, "Clock was successfully instantiated.");
});

QUnit.test("Start", function () {
var clock = berg.clock.autoAudioContext();

try {
clock.start();
QUnit.ok(true, "Clock successfully started.");
} catch (e) {
QUnit.ok(false, "Clock failed to start successfully", e);
}
});

QUnit.test("Stop before start", function () {
var clock = berg.clock.autoAudioContext();

try {
clock.stop();
QUnit.ok(true, "Calling stop() before starting has no effect.");
} catch (e) {
QUnit.ok(false, "Calling stop() before starting failed: " + e.message);
}
});

QUnit.test("Stop after start", function () {
var clock = berg.clock.autoAudioContext();

try {
clock.start();
clock.stop();
QUnit.ok(true, "Clock successfully stopped after starting.");
} catch (e) {
QUnit.ok(false, "Calling stop() after starting failed.", e);
}
});


fluid.defaults("berg.test.clock.autoAudioContextClockTestSuite", {
gradeNames: ["berg.test.clock.testSuite"],

tests: [
{
name: "Initial state, default options",
initOnly: true,
tester: {
type: "berg.test.clock.tester.audioContext"
}
},

{
name: "tick() time update",
tester: {
type: "berg.test.clock.tester.audioContext"
}
}
]
});

Reviewer:

It seems like you should also have a "start after stop" test.

Reviewer:

I wonder if a "stop after restart" test would be useful in ensuring that a "start after stop" does not leave anything unclean that prevents a further stop.

Not asking you to loop further: the first "start after stop" test would make sure "stop" leaves things in the right state, and the "stop after restart" test would make sure "start after stop" is done safely. That should hopefully cover the possible states that both start and stop should encounter, however many times you loop.

var testSuite = berg.test.clock.autoAudioContextClockTestSuite();
testSuite.run();
})();
tests/js/utils/audiocontext-tester.js (29 additions, 24 deletions)

@@ -29,19 +29,24 @@
});

berg.test.clock.testCase.audioContext.testInitial = function (clock, tester, maxJitter) {
QUnit.equal(clock.freq, tester.model.expectedFreq,
QUnit.equal(clock.freq, tester.options.expectedFreq,
"The clock should be initialized with a freq of " +
tester.model.expectedFreq + ".");
berg.test.assertTimeEqual(clock.time, tester.model.expectedTime, maxJitter,
tester.options.expectedFreq + ".");

berg.test.assertTimeEqual(clock.time,
tester.options.expectedInitialTime,
maxJitter,
"The clock should be initialized with the current time.");

QUnit.equal(clock.tickDuration, tester.model.expectedTickDuration,
QUnit.equal(clock.tickDuration,
tester.options.expectedTickDuration,
"The clock should have been initialized with a tick duration of " +
tester.model.expectedTickDuration + " seconds.");
tester.options.expectedTickDuration + " seconds.");
};

berg.test.clock.testCase.audioContext.testTick = function (clock, time, maxJitter, tester) {
var expectedTime = tester.model.expectedTime + tester.model.expectedTickDuration;
var expectedTime = tester.model.expectedTime +
tester.options.expectedTickDuration;

berg.test.assertTimeEqual(clock.time, expectedTime, maxJitter,
"The clock's time should reflect the current expected time.");
@@ -53,32 +58,32 @@

fluid.defaults("berg.test.clock.tester.audioContext", {
gradeNames: [
// TODO: The order of these two grades matters crucially. Why?
"berg.test.clock.tester.external",
"berg.test.clock.tester.realtime"
],

maxJitter: 0.05,
maxJitter: Number.EPSILON,

// TODO: These were moved into the model (instead of options)
// do to expansion issues. But all other testers expect to find
// these in the options. This should be normalized.
model: {
expectedTime: "{clock}.context.currentTime",
expectedFreq: {
expander: {
funcName: "berg.test.clock.tester.audioContext.calcFreq",
args: ["{clock}.context", "{clock}.options.blockSize"]
}
},
expectedTickDuration: {
expander: {
funcName: "berg.test.clock.tester.audioContext.calcTickDuration",
args: ["{clock}.context", "{clock}.options.blockSize"]
}
expectedInitialTime: "{clock}.context.currentTime",

expectedFreq: {
expander: {
funcName: "berg.test.clock.tester.audioContext.calcFreq",
args: ["{clock}.context", "{clock}.options.blockSize"]
}
},

expectedTickDuration: {
expander: {
funcName: "berg.test.clock.tester.audioContext.calcTickDuration",
args: ["{clock}.context", "{clock}.options.blockSize"]
}
},

model: {
expectedTime: 0
},

components: {
testCase: {
type: "berg.test.clock.testCase.audioContext"
tests/js/utils/clock-test-utilities.js (1 addition, 1 deletion)

@@ -33,7 +33,7 @@ var fluid = fluid || require("infusion"),
" Expected time: " + expected +
", actual time was: " + actual +
" Tolerance is " + tolerance +
"; difference was: " + diff + "ms.");
"; difference was: " + diff);
};

berg.test.clock.manualTicker = function (numTicks, clock) {
tests/testem-webaudio.json (new file, 10 additions)

@@ -0,0 +1,10 @@
{
"test_page": "tests/html/audiocontext-clock-tests.html",
"timeout": 300,
"skip": "PhantomJS,IE,Firefox,Headless Firefox,Safari,Brave,Headless Brave,Headless Chrome",
@duhrer (Mar 1, 2023):

In my case this exhaustive list of "skips" was not quite enough, as I also have Opera and Safari (the former fails because of the "user intent" issue, the latter because it's, well, Safari).

In other work, I have been thinking a lot about switching to using launch instead, as skips kind of imply that you support every new browser testem adds support for, which is especially not the case here.

If that makes sense to you, I would suggest only launching "Headless Chrome" here. Headless runs do work, and take four seconds instead of over twenty.

Collaborator Author:

Safari is included in this skip list. Are you saying Safari was still invoked on your machine?

Can you explain what launch does?

Testing only in headless browsers where they are available would be nice, and I'm not averse to it. The question that lingers for me, though, is whether we think they'll continue to be reliable for the requestAnimationFrame clock tests, which do in theory have some coupling to the display frame rate. The tests pass just fine in the headless browsers and arguably might be more resilient, but it's a question of trusting that we're testing in a sufficiently stable and realistic environment. I had filed #37 last night to add tests that are workable in a CI context, where I assume the headless browsers are the only option.

@duhrer (Mar 1, 2023):

Safari is included in this skip list. Are you saying Safari was still invoked on your machine?

You have two files; you only skip Safari in the web audio one.

Can you explain what launch does?

launch is the opposite of skip: it only runs the tests in the listed browsers, rather than running in all available browsers and skipping selected ones. I would argue for specifically using it here, as you only apply the autoplay fix for one Chrome variant. You could also set that flag for a bunch of variants and skip the ones that there's no autoplay fix for.

I can see the rationale for running "headed" for RAF, that's a really good point. You can run "headed" browsers using xvfb in CI, that's what fluid-testem does at the moment (here's the CI config).

Collaborator Author:

It took a bit of digging to even find out that Testem supports a launch configuration option, as you say. This seems much better than endless skip lists. Do we know if it is resilient to specifying a launcher that isn't installed on the platform? For example, if I include Safari in my launch list, will it still succeed on Windows and just ignore the request for Safari?

Reviewer:

It seems like you're on to something there: the run will indeed fail if the requested browser isn't installed. This suggests to me that launch is best for very specific circumstances like these, where you only have the required configuration options for a single browser.

Collaborator Author:
The reason will be displayed to describe this comment to others. Learn more.

Does this seem comprehensive of what we want for the core tests if we have to go with a skip declaration instead of using launch?

"skip": "Brave,Chrome Canary,Edge,Headless Brave,Headless Chrome,Headless Firefox,Headless Opera,IE,Opera"

Reviewer:

Shouldn't Safari also be skipped for the web audio tests?

Collaborator Author:

I'll push my latest changes shortly, but I'm using the skip statement above for the core tests, and launch with only Chrome for the Web Audio tests.
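A sketch of that arrangement for the Web Audio suite, folding in the "Headless Chrome" suggestion (untested; based on the file shown below):

```json
{
    "test_page": "tests/html/audiocontext-clock-tests.html",
    "timeout": 300,
    "launch": "Headless Chrome",
    "browser_args": {
        "Headless Chrome": [
            "--autoplay-policy=no-user-gesture-required"
        ]
    }
}
```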

"browser_args": {
"Chrome": [
"--autoplay-policy=no-user-gesture-required"
Reviewer:

If you decide to go headless by default, this should be "Headless Chrome".

]
}
}
tests/testem.json (1 addition, 1 deletion)

@@ -1,5 +1,5 @@
{
"test_page": "tests/all-tests.html",
"timeout": 300,
"skip": "PhantomJS,IE,Headless Chrome"
"skip": "PhantomJS,IE"
Reviewer:

I can see using skip here, as the tests do pass in a wider range of browsers. However, it does make the runs take a long time if you have lots of browsers (and, as noted, you can no longer pass a flag to npm test to override this). Launching only Headless Firefox and Headless Chrome gives a pretty good speed boost, but I understand if you'd rather run on everything by default.

If you decide to keep using skip, do please add Safari to the list.

Collaborator Author:

I'm not clear on why we'd skip Safari. I admittedly try to stay a version or two behind the latest macOS (for obvious reasons, as a musician), but we do support Safari, and the core tests do pass there (with the usual annoying manual intervention). Did they fail on your machine? If so, can you file an issue that includes the error you receive and the versions of macOS and Safari you're running?

Reviewer:

The Safari tests seem more deeply broken: a blank window comes up, but no test output is displayed and the window never closes (I'm on Monterey with the latest non-preview version of Safari installed).

It's been this way for a while at least for me, and it may be the same behaviour Justin reported here in Catalina. There's a workaround in the ticket.

}