Resolves gh-34 and gh-35. Fixes implementation of AudioContext Clock. #36
package.json:

@@ -33,7 +33,8 @@
 },
 "scripts": {
     "prepare": "npx grunt",
-    "browser-test": "npx testem ci --file tests/testem.json",
+    "browser-webaudio-test": "npx testem ci --file tests/testem-webaudio.json",
+    "browser-test": "npm run browser-webaudio-test && npx testem ci --file tests/testem.json",

Review comment: There should probably be a non-rollup script for the core tests; with this rollup I can't pass command-line options to the core tests. Just updating npm test to roll up all three would be fine.

Reply: The latest commit does what I was hoping: I can now cherry-pick which suite of tests to run and pass testem options. Thanks!
"node-test": "node tests/node-all-tests.js", | ||
"test": "npm run node-test && npm run browser-test" | ||
} | ||
|

src/js/audiocontext-clock.js:

@@ -81,7 +81,7 @@ var fluid = fluid || require("infusion"),
     scriptNode: {
         expander: {
             funcName: "berg.clock.autoAudioContext.createScriptNode",
-            args: ["{that}.context", "{that}.options.blockSize", "{that}.tick"]
+            args: ["{that}.context", "{that}.options.blockSize"]
         }
     }
 },

@@ -90,30 +90,42 @@
"onStart.startAudioContext": { | ||
priority: "after:updateState", | ||
funcName: "berg.clock.autoAudioContext.start", | ||
args: ["{that}.context", "{that}.scriptNode"] | ||
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. You might get a second opinion, but in recent experience I've found the startup lifecycle to be one of the key places where you want to be stricter about your references. Seems like you could find a way to preserve the previous specificity, maybe just adding |
||
args: ["{that}"] | ||
}, | ||
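
As a rough sketch of that stricter wiring (an interpretation of the reviewer's suggestion, not code from this pull request), the listener could keep naming each dependency explicitly, adding a tick reference alongside the pre-change args:

```js
// Sketch only: pass start() exactly what it needs instead of the whole
// component, preserving the specificity of the earlier references.
fluid.defaults("berg.clock.autoAudioContext", {
    // ...other options as in the diff above...
    listeners: {
        "onStart.startAudioContext": {
            priority: "after:updateState",
            funcName: "berg.clock.autoAudioContext.start",
            args: ["{that}.context", "{that}.scriptNode", "{that}.tick"]
        }
    }
});

// The free function then never needs to reach into the component:
berg.clock.autoAudioContext.start = function (context, scriptNode, tick) {
    scriptNode.connect(context.destination);
    scriptNode.onaudioprocess = tick;
    context.resume();
};
```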

     "onStop.stopAudioContext": {
         priority: "after:updateState",
         funcName: "berg.clock.autoAudioContext.stop",
-        args: ["{that}.context", "{that}.scriptNode"]
+        args: ["{that}"]
         }
     }
 });

-berg.clock.autoAudioContext.createScriptNode = function (context, blockSize, tick) {
-    var sp = context.createScriptProcessor(blockSize, 1, 1);
-    sp.onaudioprocess = tick;
-    return sp;
+berg.clock.autoAudioContext.createScriptNode = function (context,
+    blockSize) {
+    var scriptNode = context.createScriptProcessor(blockSize, 1, 1);
+    return scriptNode;
 };

-berg.clock.autoAudioContext.start = function (context, scriptNode) {
-    scriptNode.connect(context.destination);
-    context.resume();
+berg.clock.autoAudioContext.start = function (that) {
+    that.scriptNode.connect(that.context.destination);
+    that.scriptNode.onaudioprocess = that.tick;
+    that.context.resume();
 };

-berg.clock.autoAudioContext.stop = function (context, scriptNode) {
-    scriptNode.disconnect(context.destination);
-    scriptNode.onaudioprocess = undefined;
+berg.clock.autoAudioContext.stop = function (that) {
+    try {
+        that.scriptNode.disconnect(that.context.destination);
+    } catch (e) {
+        // Only swallow the error if it was thrown because
+        // the script node wasn't connected,
+        // which can occur if stop() is called before start().
+        if (e.name !== "InvalidAccessError") {
+            throw e;
+        }
+    }
+
+    that.scriptNode.onaudioprocess = undefined;
+    that.context.suspend();
 };
 })();

Review comment: This might also throw because…

Reply: If I've read the Web Audio API documentation correctly, the only time an InvalidAccessError would be thrown is if we're trying to disconnect a node that hasn't been connected to the destination. So we catch that and move on; otherwise we'll rethrow the error. I don't love this implementation, but I think it supports the scenario you're thinking of? Should I adopt something better?

Review comment: In the chats you discussed converting to using a model variable and connecting/disconnecting based on that. A model-based approach would avoid the onStart issues with functions that are only passed… If you still want to use the…

Reply: I spent an hour trying a few different variations of this, and I just think they introduce too much complexity for the situation. As best as I can tell, there's no way to avoid using…

Review comment: Spelling out the material the function needs fully in the listener args is what I'd still recommend. It's what you did before, and it's more robust. However, given the time you've already put in, I'd be happy with just leaving a TODO on the "relaxed that-ist" listener definitions suggesting that we might need to improve the code later if there are timing issues on startup.

Review comment: This is the only branch that wasn't hit in the coverage report I created, but it's fine, as error tests are often harder to simulate.
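
A minimal sketch of the model-variable alternative discussed above, using a hypothetical connected flag on the component (the flag name is an assumption, not part of this diff):

```js
// Sketch only: track connection state explicitly so that stop() can skip
// disconnect() rather than catching InvalidAccessError.
berg.clock.autoAudioContext.start = function (that) {
    if (!that.isConnected) {
        that.scriptNode.connect(that.context.destination);
        that.isConnected = true;
    }
    that.scriptNode.onaudioprocess = that.tick;
    that.context.resume();
};

berg.clock.autoAudioContext.stop = function (that) {
    if (that.isConnected) {
        that.scriptNode.disconnect(that.context.destination);
        that.isConnected = false;
    }
    that.scriptNode.onaudioprocess = undefined;
    that.context.suspend();
};
```

The reviewer's fuller suggestion was an Infusion model variable rather than a plain member, but the control flow would be the same: connect and disconnect based on recorded state, so that calling stop() before start() is a no-op by construction.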

tests/html/audiocontext-clock-tests.html:

@@ -0,0 +1,43 @@
<!DOCTYPE html>
<html lang="en" dir="ltr" id="html">
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>AudioContext Clock Tests</title>

    <link rel="stylesheet" media="screen" href="../../node_modules/infusion/tests/lib/qunit/css/qunit.css" />

    <script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/jquery.standalone.js"></script>
    <script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/Fluid.js"></script>
    <script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/FluidPromises.js"></script>
    <script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/FluidDebugging.js"></script>
    <script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/FluidIoC.js"></script>
    <script type="text/javascript" src="../../node_modules/infusion/src/framework/core/js/DataBinding.js"></script>

    <script type="text/javascript" src="../../node_modules/infusion/tests/lib/qunit/js/qunit.js"></script>
    <script type="text/javascript" src="../../node_modules/infusion/tests/test-core/jqUnit/js/jqUnit.js"></script>

    <script type="text/javascript" src="../../src/js/clock.js"></script>
    <script type="text/javascript" src="../../src/js/audiocontext-clock.js"></script>

    <script type="text/javascript" src="../js/utils/clock-tester.js"></script>
    <script type="text/javascript" src="../js/utils/realtime-tester.js"></script>
    <script type="text/javascript" src="../js/utils/clock-test-utilities.js"></script>
    <script type="text/javascript" src="../js/utils/audiocontext-tester.js"></script>

    <script type="text/javascript" src="../js/audiocontext-clock-tests.js"></script>
    <script src="/testem.js"></script>
</head>

<body id="body">
    <h1 id="qunit-header">AudioContext Clock Tests</h1>
    <h2 id="qunit-banner"></h2>
    <div id="qunit-testrunner-toolbar"></div>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>

    <!-- Test HTML -->
    <div id="qunit-fixture" style="display: none;">
    </div>
</body>
</html>

tests/js/audiocontext-clock-tests.js:

@@ -0,0 +1,84 @@
/*
 * Bergson AudioContext Clock Tests
 * http://github.com/colinbdclark/bergson
 *
 * Copyright 2023, Colin Clark
 * Dual licensed under the MIT and GPL Version 2 licenses.
 */
/*global require*/
var fluid = fluid || require("infusion"),
    berg = fluid.registerNamespace("berg");

(function () {
    "use strict";

    var QUnit = fluid.registerNamespace("QUnit");

    fluid.registerNamespace("berg.test.AudioContextClock");

    QUnit.module("AudioContext Clock Tests");

    QUnit.test("Instantiation", function () {
        var clock = berg.clock.autoAudioContext();
        QUnit.ok(clock, "Clock was successfully instantiated.");
    });

    QUnit.test("Start", function () {
        var clock = berg.clock.autoAudioContext();

        try {
            clock.start();
            QUnit.ok(true, "Clock successfully started.");
        } catch (e) {
            QUnit.ok(false, "Clock failed to start successfully", e);
        }
    });

    QUnit.test("Stop before start", function () {
        var clock = berg.clock.autoAudioContext();

        try {
            clock.stop();
            QUnit.ok(true, "Calling stop() before starting has no effect.");
        } catch (e) {
            QUnit.ok(false, "Calling stop() before starting failed: " + e.message);
        }
    });

    QUnit.test("Stop after start", function () {
        var clock = berg.clock.autoAudioContext();

        try {
            clock.start();
            clock.stop();
            QUnit.ok(true, "Clock successfully stopped after starting.");
        } catch (e) {
            QUnit.ok(false, "Calling stop() after starting failed.", e);
        }
    });
    fluid.defaults("berg.test.clock.autoAudioContextClockTestSuite", {
        gradeNames: ["berg.test.clock.testSuite"],

        tests: [
            {
                name: "Initial state, default options",
                initOnly: true,
                tester: {
                    type: "berg.test.clock.tester.audioContext"
                }
            },
            {
                name: "tick() time update",
                tester: {
                    type: "berg.test.clock.tester.audioContext"
                }
            }
        ]
    });

Review comment: It seems like you should also have a "start after stop" test.

Reply: I wonder if a "stop after restart" test would be useful in ensuring that a "start after stop" does not leave anything unclean that prevents a further stop. Not asking for you to loop further: the first "start after stop" test would make sure stop() leaves things in the right state, and the "stop after restart" test would make sure "start after stop" is done safely. That should hopefully cover the possible states that both start and stop should encounter, however many times you loop.
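
Following the pattern of the existing cases, the two suggested tests might look something like this (a sketch, not yet part of this diff):

```js
QUnit.test("Start after stop", function () {
    var clock = berg.clock.autoAudioContext();

    try {
        clock.start();
        clock.stop();
        clock.start();
        QUnit.ok(true, "Clock successfully restarted after being stopped.");
    } catch (e) {
        QUnit.ok(false, "Restarting the clock after stopping failed.", e);
    }
});

QUnit.test("Stop after restart", function () {
    var clock = berg.clock.autoAudioContext();

    try {
        clock.start();
        clock.stop();
        clock.start();
        clock.stop();
        QUnit.ok(true, "Clock successfully stopped after being restarted.");
    } catch (e) {
        QUnit.ok(false, "Stopping the clock after a restart failed.", e);
    }
});
```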

    var testSuite = berg.test.clock.autoAudioContextClockTestSuite();
    testSuite.run();
})();

tests/testem-webaudio.json:

@@ -0,0 +1,10 @@
{
    "test_page": "tests/html/audiocontext-clock-tests.html",
    "timeout": 300,
    "skip": "PhantomJS,IE,Firefox,Headless Firefox,Safari,Brave,Headless Brave,Headless Chrome",

Review comment: In my case this exhaustive list of "skips" was not quite enough, as I also have Opera and Safari (the former fails because of the "user intent" issue, the latter because it's, well, Safari). In other work, I have been thinking a lot about switching to using… If that makes sense to you, I would suggest only launching "Headless Chrome" here. Headless runs do work, and take four seconds instead of over twenty.

Reply: Safari is included in this skip list. Are you saying Safari was still invoked on your machine? Can you explain what…

Testing only in headless browsers where they are available would be nice, and I'm not averse to it. The question that lingers for me, though, is whether we think they'll continue to be reliable for the requestAnimationFrame clock tests, which do in theory have some coupling to the display frame rate. The tests pass just fine in the headless browsers, and arguably might be more resilient there, but it's a question of trusting that we're testing in a sufficiently stable and realistic environment. I had filed #37 last night to add tests that are workable in a CI context, where I assume the headless browsers are the only option.

Review comment: You have two files; you only skip Safari in the web audio one. I can see the rationale for running "headed" for rAF, that's a really good point. You can run "headed" browsers using xvfb in CI; that's what fluid-testem does at the moment (here's the CI config).

Reply: It took a bit of digging to even find out that Testem supports a launch configuration option, as you say. This seems much better than endless skip lists. Do we know if it is resilient to specifying a launcher that isn't installed on the platform? Like, just as an example, if I include…

Reply: It seems like you're on to something there; the run will indeed fail if the requested browser isn't installed. This suggests to me that launch is best for very specific circumstances like these, where you only have the required configuration options for a single browser.

Reply: Does this seem comprehensive of what we want for the core tests if we have to go with a…

Review comment: Shouldn't…

Reply: I'll push my latest changes shortly, but I'm using the…

    "browser_args": {
        "Chrome": [
            "--autoplay-policy=no-user-gesture-required"
        ]
    }
}

Review comment: If you decide to go headless by default, this should be "Headless Chrome".
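
For concreteness, the launch-based alternative discussed in the thread above might look something like this (a sketch assuming Testem's launch_in_ci configuration option, not what this pull request currently ships):

```json
{
    "test_page": "tests/html/audiocontext-clock-tests.html",
    "timeout": 300,
    "launch_in_ci": ["Headless Chrome"],
    "browser_args": {
        "Headless Chrome": [
            "--autoplay-policy=no-user-gesture-required"
        ]
    }
}
```

A single allow-list entry replaces the eight-browser skip list, and, per the comment above, the autoplay flag moves to the launcher that is actually used.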

tests/testem.json:

@@ -1,5 +1,5 @@
 {
     "test_page": "tests/all-tests.html",
     "timeout": 300,
-    "skip": "PhantomJS,IE,Headless Chrome"
+    "skip": "PhantomJS,IE"
 }

Review comment: I can see using… If you decide to keep using…

Reply: I'm not clear on why we'd skip Safari. I admittedly do try to stay a version or two behind the latest versions of macOS (for obvious reasons, as a musician), but we do support Safari and the core tests do pass there (with the usual annoying manual intervention). Did they fail on your machine? If so, can you file an issue that includes the error you receive and the versions of macOS and Safari you're running?

Reply: The Safari tests seem more deeply broken: a blank window comes up, but no test output is displayed and the window never closes (I'm on Monterey with the latest non-preview version of Safari installed). It's been this way for a while, at least for me, and it may be the same behaviour Justin reported here in Catalina. There's a workaround in the ticket.

Review comment: I totally understand why you did this, but a side effect is that you can no longer pass overrides to testem as part of the command, as in npm test -- --launch Chrome, since those options are not passed along to the individual commands. This is relevant when problematic browsers (like Safari) need to be skipped.

Reply: Can you propose an alternative that maintains the ability to run the core tests across all supported browsers, and ensures we can target the Web Audio API-dependent tests to those browsers that support command-line flags for disabling the autoplay policy?

Reply: Also, why would we skip Safari? It's a supported platform and the core tests do pass there.

Review comment: I've commented elsewhere on the seeming brokenness of Safari in some environments. I have been working on adding command-line (or environment variable) overrides that would survive a rollup, as an improvement to fluid-testem. I was also thinking we could fan out Chrome options to all variants using fluid-testem and Infusion merging of options. But I do think we should both keep looking (and asking) around in general. One option I'd be totally happy with is just documenting how to launch the test fixtures individually with testem options, like a line or two in the testing section of the documentation.