Use algorithmic style more (#144)
SHA: c0694cb
Reason: push, by padenot

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
padenot and github-actions[bot] committed Feb 28, 2025
1 parent 62df0cc commit 8db450e
Showing 1 changed file with 24 additions and 13 deletions.
37 changes: 24 additions & 13 deletions index.html
@@ -5,10 +5,10 @@
<title>Web Speech API</title>
<meta content="CG-DRAFT" name="w3c-status">
<link href="https://www.w3.org/StyleSheets/TR/2021/cg-draft" rel="stylesheet">
<meta content="Bikeshed version 47d87adb7, updated Tue Feb 18 17:18:43 2025 -0800" name="generator">
<meta content="Bikeshed version 60c422380, updated Thu Feb 20 19:11:22 2025 -0800" name="generator">
<link href="https://webaudio.github.io/web-speech-api/" rel="canonical">
<link href="https://www.w3.org/2008/site/images/favicon.ico" rel="icon">
<meta content="6356249e3d9389d18ac9a140b29f2828e44f761b" name="revision">
<meta content="c0694cbc3c657ba4ecc15a153afdbdaf33bd23aa" name="revision">
<meta content="dark light" name="color-scheme">
<link href="https://www.w3.org/StyleSheets/TR/2021/dark.css" media="(prefers-color-scheme: dark)" rel="stylesheet" type="text/css">
<style>/* Boilerplate: style-autolinks */
@@ -696,7 +696,7 @@
<div class="head">
<p data-fill-with="logo"><a class="logo" href="https://www.w3.org/"> <img alt="W3C" height="48" src="https://www.w3.org/StyleSheets/TR/2021/logos/W3C" width="72"> </a> </p>
<h1 class="p-name no-ref" id="title">Web Speech API</h1>
<p id="w3c-state"><a href="https://www.w3.org/standards/types/#CG-DRAFT">Draft Community Group Report</a>, <time class="dt-updated" datetime="2025-02-20">20 February 2025</time></p>
<p id="w3c-state"><a href="https://www.w3.org/standards/types/#CG-DRAFT">Draft Community Group Report</a>, <time class="dt-updated" datetime="2025-02-28">28 February 2025</time></p>
<details open>
<summary>More details about this document</summary>
<div data-fill-with="spec-metadata">
@@ -843,7 +843,7 @@ <h2 class="heading settled" data-level="3" id="security"><span class="secno">3.
User consent can include, for example:
<ul>
<li>User click on a visible speech input element which has an obvious graphical representation showing that it will start speech input.
<li>Accepting a permission prompt shown as the result of a call to <a class="idl-code" data-link-type="method" href="#dom-speechrecognition-start" id="ref-for-dom-speechrecognition-start">start()</a>.
<li>Accepting a permission prompt shown as the result of a call to <code class="idl"><a data-link-type="idl" href="#dom-speechrecognition-start" id="ref-for-dom-speechrecognition-start">start()</a></code>.
<li>Consent previously granted to always allow speech input for this web page.
</ul>
<li>
@@ -1002,19 +1002,23 @@ <h4 class="heading settled" data-level="4.1.2" id="speechreco-methods"><span cla
<dl>
<dt><dfn class="dfn-paneled idl-code" data-dfn-for="SpeechRecognition" data-dfn-type="method" data-export id="dom-speechrecognition-start"><code>start()</code></dfn> method
<dd>
Start the speech recognition process, directly from a microphone on the device.
When invoked, run the following steps:
<ol>
<li data-md>
<p>Let <var>requestMicrophonePermission</var> to <code>true</code>.</p>
<p>Let <var>requestMicrophonePermission</var> be a boolean variable set to <code>true</code>.</p>
<li data-md>
<p>Run the <a data-link-type="dfn" href="#start-session-algorithm" id="ref-for-start-session-algorithm">start session algorithm</a> with <var>requestMicrophonePermission</var>.</p>
</ol>
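A minimal usage sketch of the argument-less start() method, assuming a browser that exposes the SpeechRecognition constructor (some engines still prefix it as webkitSpeechRecognition); the "speak-btn" element id is purely illustrative:

    const SpeechRecognitionCtor = window.SpeechRecognition || window.webkitSpeechRecognition;
    const recognition = new SpeechRecognitionCtor();
    recognition.lang = "en-US";

    // Calling start() from a click handler ties the microphone permission
    // prompt to an explicit user gesture, in line with the user consent notes above.
    document.querySelector("#speak-btn").addEventListener("click", () => {
      recognition.onstart = () => console.log("listening");
      recognition.onresult = (event) => {
        console.log("heard:", event.results[0][0].transcript);
      };
      recognition.start(); // runs the start session algorithm with requestMicrophonePermission set to true
    });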
<dt><dfn class="dfn-paneled idl-code" data-dfn-for="SpeechRecognition" data-dfn-type="method" data-export data-lt="start(audioTrack)" id="dom-speechrecognition-start-audiotrack"><code>start(<code class="idl"><a data-link-type="idl" href="https://w3c.github.io/mediacapture-main/#dom-mediastreamtrack" id="ref-for-dom-mediastreamtrack①">MediaStreamTrack</a></code> audioTrack)</code></dfn> method
<dd>
Start the speech recognition process, using a <code class="idl"><a data-link-type="idl" href="https://w3c.github.io/mediacapture-main/#dom-mediastreamtrack" id="ref-for-dom-mediastreamtrack②">MediaStreamTrack</a></code>. When invoked, run the following steps:
<ol>
<li data-md>
<p>Let <var>audioTrack</var> be the first argument.</p>
<li data-md>
<p>If <var>audioTrack</var>’s <code class="idl"><a data-link-type="idl" href="https://w3c.github.io/mediacapture-main/#dom-mediastreamtrack-kind" id="ref-for-dom-mediastreamtrack-kind">kind</a></code> attribute is NOT <code>"audio"</code>, throw an <code class="idl"><a data-link-type="idl" href="https://webidl.spec.whatwg.org/#invalidstateerror" id="ref-for-invalidstateerror">InvalidStateError</a></code> and abort these steps.</p>
<p>If <var>audioTrack</var>’s <code class="idl"><a data-link-type="idl" href="https://w3c.github.io/mediacapture-main/#dom-mediastreamtrack-kind" id="ref-for-dom-mediastreamtrack-kind">kind</a></code> attribute is NOT <code>"audio"</code>,
throw an <code class="idl"><a data-link-type="idl" href="https://webidl.spec.whatwg.org/#invalidstateerror" id="ref-for-invalidstateerror">InvalidStateError</a></code> and abort these steps.</p>
<li data-md>
<p>If <var>audioTrack</var>’s <code class="idl"><a data-link-type="idl" href="https://w3c.github.io/mediacapture-main/#dom-mediastreamtrack-readystate" id="ref-for-dom-mediastreamtrack-readystate">readyState</a></code> attribute is NOT <code>"live"</code>, throw an <code class="idl"><a data-link-type="idl" href="https://webidl.spec.whatwg.org/#invalidstateerror" id="ref-for-invalidstateerror①">InvalidStateError</a></code> and abort these steps.</p>
<li data-md>
@@ -1039,20 +1043,22 @@ <h4 class="heading settled" data-level="4.1.2" id="speechreco-methods"><span cla
<dt><dfn class="dfn-paneled idl-code" data-dfn-for="SpeechRecognition" data-dfn-type="method" data-export data-lt="installOnDevice(lang)" id="dom-speechrecognition-installondevice"><code>installOnDevice(<code class="idl"><a data-link-type="idl" href="https://webidl.spec.whatwg.org/#idl-DOMString" id="ref-for-idl-DOMString①⓪">DOMString</a></code> lang)</code></dfn> method
<dd>The installOnDevice method returns a Promise that resolves to a boolean indicating whether the installation of on-device speech recognition for a given BCP 47 language tag was initiated successfully. <a data-link-type="biblio" href="#biblio-bcp47" title="Tags for Identifying Languages">[BCP47]</a>
</dl>
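The MediaStreamTrack overload and installOnDevice() above are still draft additions, so the sketch below is hedged: it assumes an engine that implements both, reuses the recognition instance from the previous snippet, and runs inside an async function; the "en-US" tag is only an illustration.

    async function recognizeFromTrack(recognition) {
      // installOnDevice() resolves to a boolean indicating whether installation
      // of on-device recognition for the given BCP 47 language tag was initiated.
      const initiated = await recognition.installOnDevice("en-US");
      console.log("on-device install initiated:", initiated);

      // Feed recognition from an explicit audio track instead of the default microphone.
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const track = stream.getAudioTracks()[0];

      // Per the steps above, a track whose kind is not "audio" or whose
      // readyState is not "live" makes start(audioTrack) throw an InvalidStateError.
      if (track.kind === "audio" && track.readyState === "live") {
        recognition.start(track);
      }
    }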
<p>When the <dfn class="dfn-paneled" data-dfn-type="dfn" data-noexport id="start-session-algorithm">start session algorithm</dfn> with <var>requestMicrophonePermission</var> is invoked, the user agent MUST run the following steps: </p>
<p>When the <dfn class="dfn-paneled" data-dfn-type="dfn" data-noexport id="start-session-algorithm">start session algorithm</dfn> with <var>requestMicrophonePermission</var> is invoked, the user agent MUST run the
following steps:</p>
<ol>
<li data-md>
<p>If the <a data-link-type="dfn" href="https://html.spec.whatwg.org/multipage/webappapis.html#current-settings-object" id="ref-for-current-settings-object">current settings object</a>’s <a data-link-type="dfn" href="https://html.spec.whatwg.org/multipage/webappapis.html#concept-relevant-global" id="ref-for-concept-relevant-global">relevant global object</a>’s <a data-link-type="dfn" href="https://html.spec.whatwg.org/multipage/nav-history-apis.html#concept-document-window" id="ref-for-concept-document-window">associated Document</a> is NOT <a data-link-type="dfn" href="https://html.spec.whatwg.org/multipage/document-sequences.html#fully-active" id="ref-for-fully-active">fully active</a>, throw an <code class="idl"><a data-link-type="idl" href="https://webidl.spec.whatwg.org/#invalidstateerror" id="ref-for-invalidstateerror②">InvalidStateError</a></code> and abort these steps.</p>
<li data-md>
<p>If <code class="idl"><a data-link-type="idl" href="#dom-speechrecognition-started-slot" id="ref-for-dom-speechrecognition-started-slot">[[started]]</a></code> is <code>true</code> and no <a class="idl-code" data-link-type="event" href="#eventdef-speechrecognition-error" id="ref-for-eventdef-speechrecognition-error②">error</a> or <a class="idl-code" data-link-type="event" href="#eventdef-speechrecognition-end" id="ref-for-eventdef-speechrecognition-end③">end</a> event has fired, throw an <code class="idl"><a data-link-type="idl" href="https://webidl.spec.whatwg.org/#invalidstateerror" id="ref-for-invalidstateerror③">InvalidStateError</a></code> and abort these steps.</p>
<p>If <code class="idl"><a data-link-type="idl" href="#dom-speechrecognition-started-slot" id="ref-for-dom-speechrecognition-started-slot">[[started]]</a></code> is <code>true</code> and no <a class="idl-code" data-link-type="event" href="#eventdef-speechrecognition-error" id="ref-for-eventdef-speechrecognition-error②">error</a> or <a class="idl-code" data-link-type="event" href="#eventdef-speechrecognition-end" id="ref-for-eventdef-speechrecognition-end③">end</a> event
has fired, throw an <code class="idl"><a data-link-type="idl" href="https://webidl.spec.whatwg.org/#invalidstateerror" id="ref-for-invalidstateerror③">InvalidStateError</a></code> and abort these steps.</p>
<li data-md>
<p>Set <code class="idl"><a data-link-type="idl" href="#dom-speechrecognition-started-slot" id="ref-for-dom-speechrecognition-started-slot①">[[started]]</a></code> to <code>true</code>.</p>
<li data-md>
<p>If <var>requestMicrophonePermission</var> is <code>true</code> and <a data-link-type="dfn" href="https://w3c.github.io/permissions/#dfn-request-permission-to-use" id="ref-for-dfn-request-permission-to-use">request permission to use</a> "<code>microphone</code>" is <a data-link-type="dfn" href="https://w3c.github.io/permissions/#dfn-denied" id="ref-for-dfn-denied">"denied"</a>, abort these steps.</p>
<p>If <var>requestMicrophonePermission</var> is <code>true</code> and <a data-link-type="dfn" href="https://w3c.github.io/permissions/#dfn-request-permission-to-use" id="ref-for-dfn-request-permission-to-use">request permission to use</a> "<code>microphone</code>" is <a data-link-type="dfn" href="https://w3c.github.io/permissions/#dfn-denied" id="ref-for-dfn-denied">"denied"</a>, abort
these steps.</p>
<li data-md>
<p>Once the system is successfully listening to the recognition, <a data-link-type="dfn" href="https://dom.spec.whatwg.org/#concept-event-fire" id="ref-for-concept-event-fire">fire an event</a> named <a class="idl-code" data-link-type="event" href="#eventdef-speechrecognition-start" id="ref-for-eventdef-speechrecognition-start">start</a> at <a data-link-type="dfn" href="https://webidl.spec.whatwg.org/#this" id="ref-for-this">this</a>.</p>
<p>Once the system is successfully listening to the recognition, queue a task to <a data-link-type="dfn" href="https://dom.spec.whatwg.org/#concept-event-fire" id="ref-for-concept-event-fire">fire an event</a> named <a class="idl-code" data-link-type="event" href="#eventdef-speechrecognition-start" id="ref-for-eventdef-speechrecognition-start">start</a> at <a data-link-type="dfn" href="https://webidl.spec.whatwg.org/#this" id="ref-for-this">this</a>.</p>
</ol>
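One observable consequence of the [[started]] check in the start session algorithm is that a second start() call, made before an error or end event has fired, throws synchronously; a small sketch, reusing the recognition instance from the earlier snippets:

    recognition.start();
    try {
      recognition.start(); // [[started]] is still true and no end/error event has fired yet
    } catch (e) {
      console.assert(e.name === "InvalidStateError");
    }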
<p></p>
<h4 class="heading settled" data-level="4.1.3" id="speechreco-events"><span class="secno">4.1.3. </span><span class="content">SpeechRecognition Events</span><a class="self-link" href="#speechreco-events"></a></h4>
<p>The DOM Level 2 Event Model is used for speech recognition events.
The methods in the EventTarget interface should be used for registering event listeners.
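Because the DOM event model applies, listeners can also be registered through EventTarget.addEventListener() rather than on* handler attributes; a brief illustrative sketch, again reusing the recognition instance from above:

    recognition.addEventListener("result", (event) => {
      const latest = event.results[event.results.length - 1];
      console.log(latest[0].transcript);
    });
    recognition.addEventListener("end", () => console.log("session ended"));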
@@ -2386,7 +2392,7 @@ <h2 class="no-num no-ref heading settled" id="issues-index"><span class="content
"889e932f": {"dfnID":"889e932f","dfnText":"Exposed","external":true,"refSections":[{"refs":[{"id":"ref-for-Exposed"},{"id":"ref-for-Exposed\u2460"},{"id":"ref-for-Exposed\u2461"},{"id":"ref-for-Exposed\u2462"},{"id":"ref-for-Exposed\u2463"},{"id":"ref-for-Exposed\u2464"}],"title":"4.1. The SpeechRecognition Interface"},{"refs":[{"id":"ref-for-Exposed\u2465"},{"id":"ref-for-Exposed\u2466"},{"id":"ref-for-Exposed\u2467"},{"id":"ref-for-Exposed\u2468"},{"id":"ref-for-Exposed\u2460\u24ea"}],"title":"4.2. The SpeechSynthesis Interface"}],"url":"https://webidl.spec.whatwg.org/#Exposed"},
"9cce47fd": {"dfnID":"9cce47fd","dfnText":"sequence","external":true,"refSections":[{"refs":[{"id":"ref-for-idl-sequence"}],"title":"4.2. The SpeechSynthesis Interface"}],"url":"https://webidl.spec.whatwg.org/#idl-sequence"},
"a5c91173": {"dfnID":"a5c91173","dfnText":"SameObject","external":true,"refSections":[{"refs":[{"id":"ref-for-SameObject"}],"title":"4.2. The SpeechSynthesis Interface"}],"url":"https://webidl.spec.whatwg.org/#SameObject"},
"bc1e4fa1": {"dfnID":"bc1e4fa1","dfnText":"MediaStreamTrack","external":true,"refSections":[{"refs":[{"id":"ref-for-dom-mediastreamtrack"}],"title":"4.1. The SpeechRecognition Interface"},{"refs":[{"id":"ref-for-dom-mediastreamtrack\u2460"}],"title":"4.1.2. SpeechRecognition Methods"}],"url":"https://w3c.github.io/mediacapture-main/#dom-mediastreamtrack"},
"bc1e4fa1": {"dfnID":"bc1e4fa1","dfnText":"MediaStreamTrack","external":true,"refSections":[{"refs":[{"id":"ref-for-dom-mediastreamtrack"}],"title":"4.1. The SpeechRecognition Interface"},{"refs":[{"id":"ref-for-dom-mediastreamtrack\u2460"},{"id":"ref-for-dom-mediastreamtrack\u2461"}],"title":"4.1.2. SpeechRecognition Methods"}],"url":"https://w3c.github.io/mediacapture-main/#dom-mediastreamtrack"},
"bcc085e3": {"dfnID":"bcc085e3","dfnText":"request permission to use","external":true,"refSections":[{"refs":[{"id":"ref-for-dfn-request-permission-to-use"}],"title":"4.1.2. SpeechRecognition Methods"}],"url":"https://w3c.github.io/permissions/#dfn-request-permission-to-use"},
"bdbd19d1": {"dfnID":"bdbd19d1","dfnText":"Promise","external":true,"refSections":[{"refs":[{"id":"ref-for-idl-promise"},{"id":"ref-for-idl-promise\u2460"}],"title":"4.1. The SpeechRecognition Interface"}],"url":"https://webidl.spec.whatwg.org/#idl-promise"},
"d36324c1": {"dfnID":"d36324c1","dfnText":"readyState","external":true,"refSections":[{"refs":[{"id":"ref-for-dom-mediastreamtrack-readystate"}],"title":"4.1.2. SpeechRecognition Methods"}],"url":"https://w3c.github.io/mediacapture-main/#dom-mediastreamtrack-readystate"},
@@ -3149,7 +3155,12 @@ <h2 class="no-num no-ref heading settled" id="issues-index"><span class="content
function showRefHint(link) {
if(link.classList.contains("dfn-link")) return;
const url = link.getAttribute("href");
const ref = refsData[url];
const refHintKey = link.getAttribute("data-refhint-key");
let key = url;
if(refHintKey) {
key = refHintKey + "_" + url;
}
const ref = refsData[key];
if(!ref) return;

hideAllRefHints(); // Only display one at this time.
