diff --git a/.nojekyll b/.nojekyll
new file mode 100644
index 00000000..e69de29b
diff --git a/404.html b/404.html
new file mode 100644
index 00000000..bc5c320e
--- /dev/null
+++ b/404.html
@@ -0,0 +1,276 @@
404

Page not found
diff --git a/about/index.html b/about/index.html
new file mode 100644
index 00000000..0a9e17eb
--- /dev/null
+++ b/about/index.html
@@ -0,0 +1,304 @@
About OpenVoiceOS

Introducing OpenVoiceOS - The Free and Open-Source Personal Assistant and Smart Speaker.

OpenVoiceOS is a new player in the smart speaker market, offering a powerful and flexible alternative to proprietary solutions like Amazon Echo and Google Home.

With OpenVoiceOS, you have complete control over your personal data and the ability to customize and extend the functionality of your smart speaker.

Built on open-source software, OpenVoiceOS is designed to provide users with a seamless and intuitive voice interface for controlling their smart home devices, playing music, setting reminders, and much more.

The platform leverages cutting-edge technology, including machine learning and natural language processing, to deliver a highly responsive and accurate experience.

In addition to its voice capabilities, OpenVoiceOS features a touch-screen GUI built with Qt 5 and the KF5 framework.

The GUI provides an intuitive, user-friendly interface that allows you to access the full range of OpenVoiceOS features and functionality.

Whether you prefer voice commands or a more traditional touch interface, OpenVoiceOS has you covered.

One of the key advantages of OpenVoiceOS is its open-source nature, which means that anyone with the technical skills can contribute to the platform and help shape its future.

Whether you're a software developer, a data scientist, someone with a passion for technology, or just a casual user who would like to experience what OVOS has to offer, you can get involved and help build the next generation of personal assistants and smart speakers.

With OpenVoiceOS, you have the option to run the platform fully offline, giving you complete control over your data and ensuring that your information is never shared with third parties. This makes OpenVoiceOS the perfect choice for anyone who values privacy and security.

So if you're looking for a personal assistant and smart speaker that gives you the freedom and control you deserve, be sure to check out OpenVoiceOS today!

Disclaimer: This post was written in collaboration with ChatGPT
diff --git a/architecture/index.html b/architecture/index.html
new file mode 100644
index 00000000..7e1cb81e
--- /dev/null
+++ b/architecture/index.html
@@ -0,0 +1,420 @@
OpenVoiceOS Architecture

This section can be a bit technical, but is included for reference. It is not necessary to read this section for day-to-day usage of OVOS.

OVOS is a collection of modular services that work together to provide a seamless, private, open-source voice assistant.

The suggested way to start OVOS is with systemd service files. Most of the images run these services as a normal user instead of system-wide. If you get an error when using the user services, try running the same command as a system service instead. A short sketch of day-to-day service management follows the list below.

NOTE The ovos.service is just a wrapper used to control the other OVOS services. It is used here as an example showing --user vs. system usage.

- user service: systemctl --user status ovos.service
- system service: systemctl status ovos.service
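For day-to-day management the usual systemctl verbs apply. A minimal sketch, assuming your image ships user-level units (unit names can differ between images):

systemctl --user enable --now ovos.service   # start now and on every login
systemctl --user restart ovos.service        # restart the whole stack
journalctl --user -u ovos.service -f         # follow the service logs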
ovos-core

ovos-core

This service provides the main instance for OVOS and handles all of the skill loading and intent processing.

All user queries are handled by the skills service. You can think of it as OVOS's brain.

Typical systemd commands:

systemctl --user status ovos-skills

systemctl --user restart ovos-skills

Technical Docs on Skills
Messagebus

ovos-messagebus

C++ version

NOTE This is an alpha version and mostly a proof of concept. It has been known to crash often.

ovos-bus-service

You can think of the bus service as OVOS's nervous system.

The ovos-bus is considered an internal and private websocket; external clients should not connect directly to it. Please do not expose the messagebus to the outside world!
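Skills and services talk to each other over this bus. As a minimal sketch of what that traffic looks like, using the ovos-bus-client Python package from the device itself (never over an exposed port):

from ovos_bus_client import MessageBusClient, Message

client = MessageBusClient()   # defaults to the local bus, ws://127.0.0.1:8181/core
client.run_in_thread()        # connect in a background thread
# ask OVOS to speak; the audio service reacts to the "speak" message
client.emit(Message("speak", {"utterance": "hello from the bus"}))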
Technical docs for messagebus

Typical systemd command:

systemctl --user start ovos-messagebus
Listener

ovos-dinkum-listener

The listener service is used to detect your voice. It controls the WakeWord, STT (Speech To Text), and VAD (Voice Activity Detection) plugins. You can modify microphone settings and enable additional features, such as wake word or utterance recording and uploading, under the listener section of your mycroft.conf file.

The ovos-dinkum-listener is the new OVOS listener; it replaced the original ovos-listener and has many more options. The older listeners still work, but are not recommended. A sketch of the listener section is shown below.
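As a sketch, the listener block of ~/.config/mycroft/mycroft.conf might look like this; the two keys shown are the recording/upload toggles mentioned above (leave them false unless you want recordings kept):

{
  "listener": {
    "record_wake_words": false,
    "save_utterances": false
  }
}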
Technical Listener Docs

Typical systemd command:

systemctl --user start ovos-dinkum-listener
STT Plugins

This is where speech is transcribed into text and forwarded to the skills service.

Two STT plugins may be loaded at once. If the primary plugin fails, the second will be used.

Having a lower-accuracy offline model as a fallback accounts for internet outages and ensures your device never becomes fully unusable. A configuration sketch follows below.

Several different STT (Speech To Text) plugins are available for use. OVOS provides a number of public services using the ovos-stt-plugin-server plugin, which are hosted by trusted OVOS members (Members hosting services). No additional configuration is required.
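A minimal sketch of such a primary-plus-fallback setup in mycroft.conf; the fallback plugin name here is an assumption, so substitute whichever offline STT plugin you have installed:

{
  "stt": {
    "module": "ovos-stt-plugin-server",
    "fallback_module": "ovos-stt-plugin-vosk"
  }
}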
OVOS Supported STT Plugins

Changing STT Plugin
Hotwords

OVOS uses "Hotwords" to trigger any number of actions. You can load any number of hotwords in parallel and trigger different actions when they are detected. Each hotword can do one or more of the following (see the sketch after this list):

- trigger listening, also called a Wake word
- play a sound
- emit a bus event
- take ovos-core out of sleep mode, also called a wakeup_word or standup_word
- take ovos-core out of recording mode, also called a stop_word
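As a sketch, a hotword entry in mycroft.conf pairs a detection plugin with one or more of those actions. The plugin name and sound path below are illustrative, and most plugins take additional plugin-specific keys (such as a model path):

{
  "hotwords": {
    "hey_mycroft": {
      "module": "ovos-ww-plugin-precise-lite",
      "listen": true,
      "sound": "snd/start_listening.wav"
    }
  }
}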
Setting and adding Hotwords
WakeWord Plugins

A Wake word is what OVOS uses to activate the device. By default, "Hey Mycroft" is used. Like other things in the OVOS ecosystem, this is configurable, as sketched below.
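A minimal sketch of switching to a different wake word; "hey_jarvis" is a hypothetical name and must match an entry under "hotwords" like the one shown above:

{
  "listener": {
    "wake_word": "hey_jarvis"
  }
}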
Wake word plugins

Changing the Wake word
VAD Plugins

VAD plugins detect when you are actually speaking to the device and when you stop talking.

Most of the time, this will not need to be changed. If your microphone has trouble hearing you, or the device does not stop listening when you are done talking, you might change this plugin and see if it helps; a sketch of the relevant setting follows.
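Swapping the VAD plugin is, as a sketch, a single module key under the listener section (the plugin name here is one example; pick one from the supported list below):

{
  "listener": {
    "VAD": {
      "module": "ovos-vad-plugin-silero"
    }
  }
}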
Supported VAD Plugins

Changing VAD Plugin
Audio

ovos-audio

The audio service handles the output of all audio. It is how you hear the voice responses, music, or any other sound from your OVOS device.

Configuring Audio
TTS Plugins

TTS (Text To Speech) is the verbal response from OVOS. There are several plugins available that support different engines, and multiple languages and voices are available to use.

OVOS provides a set of public TTS servers hosted by trusted OVOS members (Members hosting services). These use the ovos-tts-plugin-server plugin, and no additional configuration is needed.

Supported TTS Plugins

Changing TTS Plugin
PHAL

ovos-PHAL

PHAL stands for Plugin-based Hardware Abstraction Layer. It allows different hardware devices to make use of the OVOS software stack, and it completely replaces the hardcoded "enclosure" concept from mycroft-core.

Any number of plugins providing functionality can be loaded and validated at runtime. Plugins can be system integrations that handle things like reboot and shutdown, or hardware drivers such as the mycroft mark 1 plugin.

Supported PHAL Plugins

PHAL Plugins

Admin PHAL

Similar to regular PHAL, but used when sudo or a privileged user is needed. Be extremely careful when adding Admin PHAL plugins: they give OVOS administrative (root) privileges on your operating system.

Admin PHAL
GUI

OVOS uses the standard mycroft-gui framework; you can find the official documentation here.

The GUI service provides a websocket for GUI clients to connect to, and it is responsible for implementing the GUI protocol under ovos-core.

You can find in-depth documentation here.
Other OVOS services

OVOS provides a number of helper scripts that let the user control the device from the command line.

- ovos-say-to provides a way to communicate an intent to OVOS: ovos-say-to "what time is it"
- ovos-listen opens the microphone for listening, just as if you had said the WakeWord, and expects a verbal command. Continue by speaking to your device: "what time is it"
- ovos-speak runs the provided text through the TTS (Text To Speech) engine and speaks it: ovos-speak "hello world" will output "hello world" in the configured TTS voice
- ovos-config is a command line interface that allows you to view and set configuration values.
diff --git a/config/index.html b/config/index.html
new file mode 100644
index 00000000..d291f378
--- /dev/null
+++ b/config/index.html
@@ -0,0 +1,322 @@
OVOS Configuration

When you first start OVOS, there should not be any configuration needed to have a working device.

NOTE To continue with the examples, you will need access to a shell on your device. This can be achieved with SSH. Connect to your device with the command ssh ovos@<device_ip_address> and enter the password ovos.

This password is EXTREMELY insecure; it should be changed, or use SSH keys for logging in.

Securing SSH

This section will explain how the configuration works and how to make basic configuration changes.

The rest of this section will assume you have shell access to your device.

How it works

OVOS will load configuration files from several locations and combine them into a single configuration that is used throughout the software. The file that is loaded last is the one the user should use to modify any configuration values; it is usually located at ~/.config/mycroft/mycroft.conf. The locations, in load order, are:
- {ovos-config-path}/mycroft.conf
  - usually <python_install_path>/site-packages/ovos_config/mycroft.conf
  - this is the default configuration file distributed with ovos-core
- os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/mycroft/mycroft.conf
  - this is the default configuration file used by images, and may change specific values to correspond with how the image works
- os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH/mycroft/web_cache.json
- ~/.mycroft/mycroft.conf (deprecated)
- XDG_CONFIG_DIRS + /mycroft/mycroft.conf
  - /etc/xdg/mycroft/mycroft.conf
- XDG_CONFIG_HOME (default ~/.config) + /mycroft/mycroft.conf
  - this is the file that you should use to modify the configuration
When the configuration loader starts, it looks in these locations in this order and loads ALL of them. Keys that exist in multiple configuration files are overridden by the last file to contain the value. This process means only a minimal amount of configuration is written for a specific device and user, without modifying the default distribution files.
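This merged ("joined") configuration is what the ovos-config Python package hands to every OVOS service. A minimal sketch of reading it from Python on the device:

from ovos_config import Configuration

config = Configuration()                     # dict-like view of the joined configuration
print(config.get("tts", {}).get("module"))   # e.g. ovos-tts-plugin-server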
Advanced Configuration Docs

Included Tools

OVOS provides a command line tool, ovos-config, for viewing and changing configuration values.

Values can also be set manually in config files instead of using the CLI tool.

These methods will be used later in the How To section of these Docs.
diff --git a/config_manual_config/index.html b/config_manual_config/index.html
new file mode 100644
index 00000000..6cab3595
--- /dev/null
+++ b/config_manual_config/index.html
@@ -0,0 +1,313 @@
Configuration - Manually Change Files

User configuration should be set in the XDG_CONFIG_HOME file, usually located at ~/.config/mycroft/mycroft.conf. This file may or may not exist by default. If it does NOT exist, create it:
mkdir -p ~/.config/mycroft

touch ~/.config/mycroft/mycroft.conf

Now you can edit that file. To continue with the previous example, we will change the host of the TTS server, adding the value manually to the user's mycroft.conf file.

Open the file for editing. It is not uncommon for this file to exist but be empty.

nano ~/.config/mycroft/mycroft.conf

Enter the following into the file. NOTE this file must be valid JSON or YAML format; OVOS knows how to read both.
{
  "tts": {
    "module": "ovos-tts-plugin-server",
    "ovos-tts-plugin-server": {
      "host": "https://pipertts.ziggyai.online"
    }
  }
}
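Since OVOS reads YAML as well, the same settings could equally be written as the sketch below; pick one format per file:

tts:
  module: ovos-tts-plugin-server
  ovos-tts-plugin-server:
    host: https://pipertts.ziggyai.online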
You can check the formatting of your file with the jq command:

cat ~/.config/mycroft/mycroft.conf | jq

If your distribution does not include jq, it can be installed with the command sudo apt install jq or the equivalent for your distro.

If there are no errors, jq will output the complete file. On error, it will report the line where the error is. You can also use an online JSON checker if you want.

online json checker
online yaml checker
diff --git a/config_ovos_config/index.html b/config_ovos_config/index.html
new file mode 100644
index 00000000..0bdcdb8e
--- /dev/null
+++ b/config_ovos_config/index.html
@@ -0,0 +1,345 @@
Configuration - ovos-config

OVOS provides a small command line tool, ovos-config, for viewing and setting configuration values in the OVOS ecosystem.

NOTE The CLI of this script is new and may contain some bugs. Please report issues on the ovos-config GitHub page.

Viewing Configuration Settings

ovos-config --help will show a list of commands to use with this tool.

ovos-config show will display a table representing all of the current configuration values.

To get the values of a specific section:

ovos-config show --section tts will show just the "tts" section of the configuration:
ovos-config show --section tts
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Configuration keys (Configuration: Joined, Section: tts) ┃ Value                  ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
│ pulse_duck                                               │ False                  │
│ module                                                   │ ovos-tts-plugin-server │
│ fallback_module                                          │ ovos-tts-plugin-mimic  │
├──────────────────────────────────────────────────────────┼────────────────────────┤
│ ovos-tts-plugin-server                                   │                        │
│     host                                                 │                        │
└──────────────────────────────────────────────────────────┴────────────────────────┘
Changing Configuration Values

We will continue with the example above, TTS.

Change the host of the TTS server:

ovos-config set -k tts will show a table of values that can be edited:
ovos-config set -k tts
┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
┃ # ┃ Path                            ┃ Value ┃
┡━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
│ 0 │ tts/pulse_duck                  │ False │
│ 1 │ tts/ovos-tts-plugin-server/host │       │
└───┴─────────────────────────────────┴───────┘
Which value should be changed? (2='Exit') [0/1/2]:
Enter 1 to change the value of tts/ovos-tts-plugin-server/host.

Please enter the value to be stored (type: str) :

Enter the value for the TTS server that you want OVOS to use:

https://pipertts.ziggyai.online

Use ovos-config show --section tts to check your results:
ovos-config show --section tts
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Configuration keys (Configuration: Joined, Section: tts) ┃ Value                           ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ pulse_duck                                               │ False                           │
│ module                                                   │ ovos-tts-plugin-server          │
│ fallback_module                                          │ ovos-tts-plugin-mimic           │
├──────────────────────────────────────────────────────────┼─────────────────────────────────┤
│ ovos-tts-plugin-server                                   │                                 │
│     host                                                 │ https://pipertts.ziggyai.online │
└──────────────────────────────────────────────────────────┴─────────────────────────────────┘
This can be done for any of the values in the configuration stack.
efore,.fa-thermometer-full:before,.fa-thermometer:before{content:""}.fa-thermometer-3:before,.fa-thermometer-three-quarters:before{content:""}.fa-thermometer-2:before,.fa-thermometer-half:before{content:""}.fa-thermometer-1:before,.fa-thermometer-quarter:before{content:""}.fa-thermometer-0:before,.fa-thermometer-empty:before{content:""}.fa-shower:before{content:""}.fa-bath:before,.fa-bathtub:before,.fa-s15:before{content:""}.fa-podcast:before{content:""}.fa-window-maximize:before{content:""}.fa-window-minimize:before{content:""}.fa-window-restore:before{content:""}.fa-times-rectangle:before,.fa-window-close:before{content:""}.fa-times-rectangle-o:before,.fa-window-close-o:before{content:""}.fa-bandcamp:before{content:""}.fa-grav:before{content:""}.fa-etsy:before{content:""}.fa-imdb:before{content:""}.fa-ravelry:before{content:""}.fa-eercast:before{content:""}.fa-microchip:before{content:""}.fa-snowflake-o:before{content:""}.fa-superpowers:before{content:""}.fa-wpexplorer:before{content:""}.fa-meetup:before{content:""}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-dropdown .caret,.wy-inline-validate.wy-inline-validate-danger .wy-input-context,.wy-inline-validate.wy-inline-validate-info .wy-input-context,.wy-inline-validate.wy-inline-validate-success .wy-input-context,.wy-inline-validate.wy-inline-validate-warning .wy-input-context,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{font-family:inherit}.fa:before,.icon:before,.rst-content .admonition-title:before,.rst-content .code-block-caption .headerlink:before,.rst-content .eqno .headerlink:before,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before{font-family:FontAwesome;display:inline-block;font-style:normal;font-weight:400;line-height:1;text-decoration:inherit}.rst-content .code-block-caption a .headerlink,.rst-content .eqno a .headerlink,.rst-content a 
.admonition-title,.rst-content code.download a span:first-child,.rst-content dl dt a .headerlink,.rst-content h1 a .headerlink,.rst-content h2 a .headerlink,.rst-content h3 a .headerlink,.rst-content h4 a .headerlink,.rst-content h5 a .headerlink,.rst-content h6 a .headerlink,.rst-content p.caption a .headerlink,.rst-content p a .headerlink,.rst-content table>caption a .headerlink,.rst-content tt.download a span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li a button.toctree-expand,a .fa,a .icon,a .rst-content .admonition-title,a .rst-content .code-block-caption .headerlink,a .rst-content .eqno .headerlink,a .rst-content code.download span:first-child,a .rst-content dl dt .headerlink,a .rst-content h1 .headerlink,a .rst-content h2 .headerlink,a .rst-content h3 .headerlink,a .rst-content h4 .headerlink,a .rst-content h5 .headerlink,a .rst-content h6 .headerlink,a .rst-content p.caption .headerlink,a .rst-content p .headerlink,a .rst-content table>caption .headerlink,a .rst-content tt.download span:first-child,a .wy-menu-vertical li button.toctree-expand{display:inline-block;text-decoration:inherit}.btn .fa,.btn .icon,.btn .rst-content .admonition-title,.btn .rst-content .code-block-caption .headerlink,.btn .rst-content .eqno .headerlink,.btn .rst-content code.download span:first-child,.btn .rst-content dl dt .headerlink,.btn .rst-content h1 .headerlink,.btn .rst-content h2 .headerlink,.btn .rst-content h3 .headerlink,.btn .rst-content h4 .headerlink,.btn .rst-content h5 .headerlink,.btn .rst-content h6 .headerlink,.btn .rst-content p .headerlink,.btn .rst-content table>caption .headerlink,.btn .rst-content tt.download span:first-child,.btn .wy-menu-vertical li.current>a button.toctree-expand,.btn .wy-menu-vertical li.on a button.toctree-expand,.btn .wy-menu-vertical li button.toctree-expand,.nav .fa,.nav .icon,.nav .rst-content .admonition-title,.nav .rst-content .code-block-caption .headerlink,.nav .rst-content .eqno .headerlink,.nav .rst-content code.download span:first-child,.nav .rst-content dl dt .headerlink,.nav .rst-content h1 .headerlink,.nav .rst-content h2 .headerlink,.nav .rst-content h3 .headerlink,.nav .rst-content h4 .headerlink,.nav .rst-content h5 .headerlink,.nav .rst-content h6 .headerlink,.nav .rst-content p .headerlink,.nav .rst-content table>caption .headerlink,.nav .rst-content tt.download span:first-child,.nav .wy-menu-vertical li.current>a button.toctree-expand,.nav .wy-menu-vertical li.on a button.toctree-expand,.nav .wy-menu-vertical li button.toctree-expand,.rst-content .btn .admonition-title,.rst-content .code-block-caption .btn .headerlink,.rst-content .code-block-caption .nav .headerlink,.rst-content .eqno .btn .headerlink,.rst-content .eqno .nav .headerlink,.rst-content .nav .admonition-title,.rst-content code.download .btn span:first-child,.rst-content code.download .nav span:first-child,.rst-content dl dt .btn .headerlink,.rst-content dl dt .nav .headerlink,.rst-content h1 .btn .headerlink,.rst-content h1 .nav .headerlink,.rst-content h2 .btn .headerlink,.rst-content h2 .nav .headerlink,.rst-content h3 .btn .headerlink,.rst-content h3 .nav .headerlink,.rst-content h4 .btn .headerlink,.rst-content h4 .nav .headerlink,.rst-content h5 .btn .headerlink,.rst-content h5 .nav .headerlink,.rst-content h6 .btn .headerlink,.rst-content h6 .nav .headerlink,.rst-content p .btn .headerlink,.rst-content p .nav .headerlink,.rst-content table>caption .btn .headerlink,.rst-content 
table>caption .nav .headerlink,.rst-content tt.download .btn span:first-child,.rst-content tt.download .nav span:first-child,.wy-menu-vertical li .btn button.toctree-expand,.wy-menu-vertical li.current>a .btn button.toctree-expand,.wy-menu-vertical li.current>a .nav button.toctree-expand,.wy-menu-vertical li .nav button.toctree-expand,.wy-menu-vertical li.on a .btn button.toctree-expand,.wy-menu-vertical li.on a .nav button.toctree-expand{display:inline}.btn .fa-large.icon,.btn .fa.fa-large,.btn .rst-content .code-block-caption .fa-large.headerlink,.btn .rst-content .eqno .fa-large.headerlink,.btn .rst-content .fa-large.admonition-title,.btn .rst-content code.download span.fa-large:first-child,.btn .rst-content dl dt .fa-large.headerlink,.btn .rst-content h1 .fa-large.headerlink,.btn .rst-content h2 .fa-large.headerlink,.btn .rst-content h3 .fa-large.headerlink,.btn .rst-content h4 .fa-large.headerlink,.btn .rst-content h5 .fa-large.headerlink,.btn .rst-content h6 .fa-large.headerlink,.btn .rst-content p .fa-large.headerlink,.btn .rst-content table>caption .fa-large.headerlink,.btn .rst-content tt.download span.fa-large:first-child,.btn .wy-menu-vertical li button.fa-large.toctree-expand,.nav .fa-large.icon,.nav .fa.fa-large,.nav .rst-content .code-block-caption .fa-large.headerlink,.nav .rst-content .eqno .fa-large.headerlink,.nav .rst-content .fa-large.admonition-title,.nav .rst-content code.download span.fa-large:first-child,.nav .rst-content dl dt .fa-large.headerlink,.nav .rst-content h1 .fa-large.headerlink,.nav .rst-content h2 .fa-large.headerlink,.nav .rst-content h3 .fa-large.headerlink,.nav .rst-content h4 .fa-large.headerlink,.nav .rst-content h5 .fa-large.headerlink,.nav .rst-content h6 .fa-large.headerlink,.nav .rst-content p .fa-large.headerlink,.nav .rst-content table>caption .fa-large.headerlink,.nav .rst-content tt.download span.fa-large:first-child,.nav .wy-menu-vertical li button.fa-large.toctree-expand,.rst-content .btn .fa-large.admonition-title,.rst-content .code-block-caption .btn .fa-large.headerlink,.rst-content .code-block-caption .nav .fa-large.headerlink,.rst-content .eqno .btn .fa-large.headerlink,.rst-content .eqno .nav .fa-large.headerlink,.rst-content .nav .fa-large.admonition-title,.rst-content code.download .btn span.fa-large:first-child,.rst-content code.download .nav span.fa-large:first-child,.rst-content dl dt .btn .fa-large.headerlink,.rst-content dl dt .nav .fa-large.headerlink,.rst-content h1 .btn .fa-large.headerlink,.rst-content h1 .nav .fa-large.headerlink,.rst-content h2 .btn .fa-large.headerlink,.rst-content h2 .nav .fa-large.headerlink,.rst-content h3 .btn .fa-large.headerlink,.rst-content h3 .nav .fa-large.headerlink,.rst-content h4 .btn .fa-large.headerlink,.rst-content h4 .nav .fa-large.headerlink,.rst-content h5 .btn .fa-large.headerlink,.rst-content h5 .nav .fa-large.headerlink,.rst-content h6 .btn .fa-large.headerlink,.rst-content h6 .nav .fa-large.headerlink,.rst-content p .btn .fa-large.headerlink,.rst-content p .nav .fa-large.headerlink,.rst-content table>caption .btn .fa-large.headerlink,.rst-content table>caption .nav .fa-large.headerlink,.rst-content tt.download .btn span.fa-large:first-child,.rst-content tt.download .nav span.fa-large:first-child,.wy-menu-vertical li .btn button.fa-large.toctree-expand,.wy-menu-vertical li .nav button.fa-large.toctree-expand{line-height:.9em}.btn .fa-spin.icon,.btn .fa.fa-spin,.btn .rst-content .code-block-caption .fa-spin.headerlink,.btn .rst-content .eqno .fa-spin.headerlink,.btn .rst-content 
.fa-spin.admonition-title,.btn .rst-content code.download span.fa-spin:first-child,.btn .rst-content dl dt .fa-spin.headerlink,.btn .rst-content h1 .fa-spin.headerlink,.btn .rst-content h2 .fa-spin.headerlink,.btn .rst-content h3 .fa-spin.headerlink,.btn .rst-content h4 .fa-spin.headerlink,.btn .rst-content h5 .fa-spin.headerlink,.btn .rst-content h6 .fa-spin.headerlink,.btn .rst-content p .fa-spin.headerlink,.btn .rst-content table>caption .fa-spin.headerlink,.btn .rst-content tt.download span.fa-spin:first-child,.btn .wy-menu-vertical li button.fa-spin.toctree-expand,.nav .fa-spin.icon,.nav .fa.fa-spin,.nav .rst-content .code-block-caption .fa-spin.headerlink,.nav .rst-content .eqno .fa-spin.headerlink,.nav .rst-content .fa-spin.admonition-title,.nav .rst-content code.download span.fa-spin:first-child,.nav .rst-content dl dt .fa-spin.headerlink,.nav .rst-content h1 .fa-spin.headerlink,.nav .rst-content h2 .fa-spin.headerlink,.nav .rst-content h3 .fa-spin.headerlink,.nav .rst-content h4 .fa-spin.headerlink,.nav .rst-content h5 .fa-spin.headerlink,.nav .rst-content h6 .fa-spin.headerlink,.nav .rst-content p .fa-spin.headerlink,.nav .rst-content table>caption .fa-spin.headerlink,.nav .rst-content tt.download span.fa-spin:first-child,.nav .wy-menu-vertical li button.fa-spin.toctree-expand,.rst-content .btn .fa-spin.admonition-title,.rst-content .code-block-caption .btn .fa-spin.headerlink,.rst-content .code-block-caption .nav .fa-spin.headerlink,.rst-content .eqno .btn .fa-spin.headerlink,.rst-content .eqno .nav .fa-spin.headerlink,.rst-content .nav .fa-spin.admonition-title,.rst-content code.download .btn span.fa-spin:first-child,.rst-content code.download .nav span.fa-spin:first-child,.rst-content dl dt .btn .fa-spin.headerlink,.rst-content dl dt .nav .fa-spin.headerlink,.rst-content h1 .btn .fa-spin.headerlink,.rst-content h1 .nav .fa-spin.headerlink,.rst-content h2 .btn .fa-spin.headerlink,.rst-content h2 .nav .fa-spin.headerlink,.rst-content h3 .btn .fa-spin.headerlink,.rst-content h3 .nav .fa-spin.headerlink,.rst-content h4 .btn .fa-spin.headerlink,.rst-content h4 .nav .fa-spin.headerlink,.rst-content h5 .btn .fa-spin.headerlink,.rst-content h5 .nav .fa-spin.headerlink,.rst-content h6 .btn .fa-spin.headerlink,.rst-content h6 .nav .fa-spin.headerlink,.rst-content p .btn .fa-spin.headerlink,.rst-content p .nav .fa-spin.headerlink,.rst-content table>caption .btn .fa-spin.headerlink,.rst-content table>caption .nav .fa-spin.headerlink,.rst-content tt.download .btn span.fa-spin:first-child,.rst-content tt.download .nav span.fa-spin:first-child,.wy-menu-vertical li .btn button.fa-spin.toctree-expand,.wy-menu-vertical li .nav button.fa-spin.toctree-expand{display:inline-block}.btn.fa:before,.btn.icon:before,.rst-content .btn.admonition-title:before,.rst-content .code-block-caption .btn.headerlink:before,.rst-content .eqno .btn.headerlink:before,.rst-content code.download span.btn:first-child:before,.rst-content dl dt .btn.headerlink:before,.rst-content h1 .btn.headerlink:before,.rst-content h2 .btn.headerlink:before,.rst-content h3 .btn.headerlink:before,.rst-content h4 .btn.headerlink:before,.rst-content h5 .btn.headerlink:before,.rst-content h6 .btn.headerlink:before,.rst-content p .btn.headerlink:before,.rst-content table>caption .btn.headerlink:before,.rst-content tt.download span.btn:first-child:before,.wy-menu-vertical li button.btn.toctree-expand:before{opacity:.5;-webkit-transition:opacity .05s ease-in;-moz-transition:opacity .05s ease-in;transition:opacity .05s 
ease-in}.btn.fa:hover:before,.btn.icon:hover:before,.rst-content .btn.admonition-title:hover:before,.rst-content .code-block-caption .btn.headerlink:hover:before,.rst-content .eqno .btn.headerlink:hover:before,.rst-content code.download span.btn:first-child:hover:before,.rst-content dl dt .btn.headerlink:hover:before,.rst-content h1 .btn.headerlink:hover:before,.rst-content h2 .btn.headerlink:hover:before,.rst-content h3 .btn.headerlink:hover:before,.rst-content h4 .btn.headerlink:hover:before,.rst-content h5 .btn.headerlink:hover:before,.rst-content h6 .btn.headerlink:hover:before,.rst-content p .btn.headerlink:hover:before,.rst-content table>caption .btn.headerlink:hover:before,.rst-content tt.download span.btn:first-child:hover:before,.wy-menu-vertical li button.btn.toctree-expand:hover:before{opacity:1}.btn-mini .fa:before,.btn-mini .icon:before,.btn-mini .rst-content .admonition-title:before,.btn-mini .rst-content .code-block-caption .headerlink:before,.btn-mini .rst-content .eqno .headerlink:before,.btn-mini .rst-content code.download span:first-child:before,.btn-mini .rst-content dl dt .headerlink:before,.btn-mini .rst-content h1 .headerlink:before,.btn-mini .rst-content h2 .headerlink:before,.btn-mini .rst-content h3 .headerlink:before,.btn-mini .rst-content h4 .headerlink:before,.btn-mini .rst-content h5 .headerlink:before,.btn-mini .rst-content h6 .headerlink:before,.btn-mini .rst-content p .headerlink:before,.btn-mini .rst-content table>caption .headerlink:before,.btn-mini .rst-content tt.download span:first-child:before,.btn-mini .wy-menu-vertical li button.toctree-expand:before,.rst-content .btn-mini .admonition-title:before,.rst-content .code-block-caption .btn-mini .headerlink:before,.rst-content .eqno .btn-mini .headerlink:before,.rst-content code.download .btn-mini span:first-child:before,.rst-content dl dt .btn-mini .headerlink:before,.rst-content h1 .btn-mini .headerlink:before,.rst-content h2 .btn-mini .headerlink:before,.rst-content h3 .btn-mini .headerlink:before,.rst-content h4 .btn-mini .headerlink:before,.rst-content h5 .btn-mini .headerlink:before,.rst-content h6 .btn-mini .headerlink:before,.rst-content p .btn-mini .headerlink:before,.rst-content table>caption .btn-mini .headerlink:before,.rst-content tt.download .btn-mini span:first-child:before,.wy-menu-vertical li .btn-mini button.toctree-expand:before{font-size:14px;vertical-align:-15%}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.wy-alert{padding:12px;line-height:24px;margin-bottom:24px;background:#e7f2fa}.rst-content .admonition-title,.wy-alert-title{font-weight:700;display:block;color:#fff;background:#6ab0de;padding:6px 12px;margin:-12px -12px 12px}.rst-content .danger,.rst-content .error,.rst-content .wy-alert-danger.admonition,.rst-content .wy-alert-danger.admonition-todo,.rst-content .wy-alert-danger.attention,.rst-content .wy-alert-danger.caution,.rst-content .wy-alert-danger.hint,.rst-content .wy-alert-danger.important,.rst-content .wy-alert-danger.note,.rst-content .wy-alert-danger.seealso,.rst-content .wy-alert-danger.tip,.rst-content .wy-alert-danger.warning,.wy-alert.wy-alert-danger{background:#fdf3f2}.rst-content .danger .admonition-title,.rst-content .danger .wy-alert-title,.rst-content .error .admonition-title,.rst-content .error .wy-alert-title,.rst-content 
.wy-alert-danger.admonition-todo .admonition-title,.rst-content .wy-alert-danger.admonition-todo .wy-alert-title,.rst-content .wy-alert-danger.admonition .admonition-title,.rst-content .wy-alert-danger.admonition .wy-alert-title,.rst-content .wy-alert-danger.attention .admonition-title,.rst-content .wy-alert-danger.attention .wy-alert-title,.rst-content .wy-alert-danger.caution .admonition-title,.rst-content .wy-alert-danger.caution .wy-alert-title,.rst-content .wy-alert-danger.hint .admonition-title,.rst-content .wy-alert-danger.hint .wy-alert-title,.rst-content .wy-alert-danger.important .admonition-title,.rst-content .wy-alert-danger.important .wy-alert-title,.rst-content .wy-alert-danger.note .admonition-title,.rst-content .wy-alert-danger.note .wy-alert-title,.rst-content .wy-alert-danger.seealso .admonition-title,.rst-content .wy-alert-danger.seealso .wy-alert-title,.rst-content .wy-alert-danger.tip .admonition-title,.rst-content .wy-alert-danger.tip .wy-alert-title,.rst-content .wy-alert-danger.warning .admonition-title,.rst-content .wy-alert-danger.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-danger .admonition-title,.wy-alert.wy-alert-danger .rst-content .admonition-title,.wy-alert.wy-alert-danger .wy-alert-title{background:#f29f97}.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .warning,.rst-content .wy-alert-warning.admonition,.rst-content .wy-alert-warning.danger,.rst-content .wy-alert-warning.error,.rst-content .wy-alert-warning.hint,.rst-content .wy-alert-warning.important,.rst-content .wy-alert-warning.note,.rst-content .wy-alert-warning.seealso,.rst-content .wy-alert-warning.tip,.wy-alert.wy-alert-warning{background:#ffedcc}.rst-content .admonition-todo .admonition-title,.rst-content .admonition-todo .wy-alert-title,.rst-content .attention .admonition-title,.rst-content .attention .wy-alert-title,.rst-content .caution .admonition-title,.rst-content .caution .wy-alert-title,.rst-content .warning .admonition-title,.rst-content .warning .wy-alert-title,.rst-content .wy-alert-warning.admonition .admonition-title,.rst-content .wy-alert-warning.admonition .wy-alert-title,.rst-content .wy-alert-warning.danger .admonition-title,.rst-content .wy-alert-warning.danger .wy-alert-title,.rst-content .wy-alert-warning.error .admonition-title,.rst-content .wy-alert-warning.error .wy-alert-title,.rst-content .wy-alert-warning.hint .admonition-title,.rst-content .wy-alert-warning.hint .wy-alert-title,.rst-content .wy-alert-warning.important .admonition-title,.rst-content .wy-alert-warning.important .wy-alert-title,.rst-content .wy-alert-warning.note .admonition-title,.rst-content .wy-alert-warning.note .wy-alert-title,.rst-content .wy-alert-warning.seealso .admonition-title,.rst-content .wy-alert-warning.seealso .wy-alert-title,.rst-content .wy-alert-warning.tip .admonition-title,.rst-content .wy-alert-warning.tip .wy-alert-title,.rst-content .wy-alert.wy-alert-warning .admonition-title,.wy-alert.wy-alert-warning .rst-content .admonition-title,.wy-alert.wy-alert-warning .wy-alert-title{background:#f0b37e}.rst-content .note,.rst-content .seealso,.rst-content .wy-alert-info.admonition,.rst-content .wy-alert-info.admonition-todo,.rst-content .wy-alert-info.attention,.rst-content .wy-alert-info.caution,.rst-content .wy-alert-info.danger,.rst-content .wy-alert-info.error,.rst-content .wy-alert-info.hint,.rst-content .wy-alert-info.important,.rst-content .wy-alert-info.tip,.rst-content 
.wy-alert-info.warning,.wy-alert.wy-alert-info{background:#e7f2fa}.rst-content .note .admonition-title,.rst-content .note .wy-alert-title,.rst-content .seealso .admonition-title,.rst-content .seealso .wy-alert-title,.rst-content .wy-alert-info.admonition-todo .admonition-title,.rst-content .wy-alert-info.admonition-todo .wy-alert-title,.rst-content .wy-alert-info.admonition .admonition-title,.rst-content .wy-alert-info.admonition .wy-alert-title,.rst-content .wy-alert-info.attention .admonition-title,.rst-content .wy-alert-info.attention .wy-alert-title,.rst-content .wy-alert-info.caution .admonition-title,.rst-content .wy-alert-info.caution .wy-alert-title,.rst-content .wy-alert-info.danger .admonition-title,.rst-content .wy-alert-info.danger .wy-alert-title,.rst-content .wy-alert-info.error .admonition-title,.rst-content .wy-alert-info.error .wy-alert-title,.rst-content .wy-alert-info.hint .admonition-title,.rst-content .wy-alert-info.hint .wy-alert-title,.rst-content .wy-alert-info.important .admonition-title,.rst-content .wy-alert-info.important .wy-alert-title,.rst-content .wy-alert-info.tip .admonition-title,.rst-content .wy-alert-info.tip .wy-alert-title,.rst-content .wy-alert-info.warning .admonition-title,.rst-content .wy-alert-info.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-info .admonition-title,.wy-alert.wy-alert-info .rst-content .admonition-title,.wy-alert.wy-alert-info .wy-alert-title{background:#6ab0de}.rst-content .hint,.rst-content .important,.rst-content .tip,.rst-content .wy-alert-success.admonition,.rst-content .wy-alert-success.admonition-todo,.rst-content .wy-alert-success.attention,.rst-content .wy-alert-success.caution,.rst-content .wy-alert-success.danger,.rst-content .wy-alert-success.error,.rst-content .wy-alert-success.note,.rst-content .wy-alert-success.seealso,.rst-content .wy-alert-success.warning,.wy-alert.wy-alert-success{background:#dbfaf4}.rst-content .hint .admonition-title,.rst-content .hint .wy-alert-title,.rst-content .important .admonition-title,.rst-content .important .wy-alert-title,.rst-content .tip .admonition-title,.rst-content .tip .wy-alert-title,.rst-content .wy-alert-success.admonition-todo .admonition-title,.rst-content .wy-alert-success.admonition-todo .wy-alert-title,.rst-content .wy-alert-success.admonition .admonition-title,.rst-content .wy-alert-success.admonition .wy-alert-title,.rst-content .wy-alert-success.attention .admonition-title,.rst-content .wy-alert-success.attention .wy-alert-title,.rst-content .wy-alert-success.caution .admonition-title,.rst-content .wy-alert-success.caution .wy-alert-title,.rst-content .wy-alert-success.danger .admonition-title,.rst-content .wy-alert-success.danger .wy-alert-title,.rst-content .wy-alert-success.error .admonition-title,.rst-content .wy-alert-success.error .wy-alert-title,.rst-content .wy-alert-success.note .admonition-title,.rst-content .wy-alert-success.note .wy-alert-title,.rst-content .wy-alert-success.seealso .admonition-title,.rst-content .wy-alert-success.seealso .wy-alert-title,.rst-content .wy-alert-success.warning .admonition-title,.rst-content .wy-alert-success.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-success .admonition-title,.wy-alert.wy-alert-success .rst-content .admonition-title,.wy-alert.wy-alert-success .wy-alert-title{background:#1abc9c}.rst-content .wy-alert-neutral.admonition,.rst-content .wy-alert-neutral.admonition-todo,.rst-content .wy-alert-neutral.attention,.rst-content .wy-alert-neutral.caution,.rst-content 
.wy-alert-neutral.danger,.rst-content .wy-alert-neutral.error,.rst-content .wy-alert-neutral.hint,.rst-content .wy-alert-neutral.important,.rst-content .wy-alert-neutral.note,.rst-content .wy-alert-neutral.seealso,.rst-content .wy-alert-neutral.tip,.rst-content .wy-alert-neutral.warning,.wy-alert.wy-alert-neutral{background:#f3f6f6}.rst-content .wy-alert-neutral.admonition-todo .admonition-title,.rst-content .wy-alert-neutral.admonition-todo .wy-alert-title,.rst-content .wy-alert-neutral.admonition .admonition-title,.rst-content .wy-alert-neutral.admonition .wy-alert-title,.rst-content .wy-alert-neutral.attention .admonition-title,.rst-content .wy-alert-neutral.attention .wy-alert-title,.rst-content .wy-alert-neutral.caution .admonition-title,.rst-content .wy-alert-neutral.caution .wy-alert-title,.rst-content .wy-alert-neutral.danger .admonition-title,.rst-content .wy-alert-neutral.danger .wy-alert-title,.rst-content .wy-alert-neutral.error .admonition-title,.rst-content .wy-alert-neutral.error .wy-alert-title,.rst-content .wy-alert-neutral.hint .admonition-title,.rst-content .wy-alert-neutral.hint .wy-alert-title,.rst-content .wy-alert-neutral.important .admonition-title,.rst-content .wy-alert-neutral.important .wy-alert-title,.rst-content .wy-alert-neutral.note .admonition-title,.rst-content .wy-alert-neutral.note .wy-alert-title,.rst-content .wy-alert-neutral.seealso .admonition-title,.rst-content .wy-alert-neutral.seealso .wy-alert-title,.rst-content .wy-alert-neutral.tip .admonition-title,.rst-content .wy-alert-neutral.tip .wy-alert-title,.rst-content .wy-alert-neutral.warning .admonition-title,.rst-content .wy-alert-neutral.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-neutral .admonition-title,.wy-alert.wy-alert-neutral .rst-content .admonition-title,.wy-alert.wy-alert-neutral .wy-alert-title{color:#404040;background:#e1e4e5}.rst-content .wy-alert-neutral.admonition-todo a,.rst-content .wy-alert-neutral.admonition a,.rst-content .wy-alert-neutral.attention a,.rst-content .wy-alert-neutral.caution a,.rst-content .wy-alert-neutral.danger a,.rst-content .wy-alert-neutral.error a,.rst-content .wy-alert-neutral.hint a,.rst-content .wy-alert-neutral.important a,.rst-content .wy-alert-neutral.note a,.rst-content .wy-alert-neutral.seealso a,.rst-content .wy-alert-neutral.tip a,.rst-content .wy-alert-neutral.warning a,.wy-alert.wy-alert-neutral a{color:#2980b9}.rst-content .admonition-todo p:last-child,.rst-content .admonition p:last-child,.rst-content .attention p:last-child,.rst-content .caution p:last-child,.rst-content .danger p:last-child,.rst-content .error p:last-child,.rst-content .hint p:last-child,.rst-content .important p:last-child,.rst-content .note p:last-child,.rst-content .seealso p:last-child,.rst-content .tip p:last-child,.rst-content .warning p:last-child,.wy-alert p:last-child{margin-bottom:0}.wy-tray-container{position:fixed;bottom:0;left:0;z-index:600}.wy-tray-container li{display:block;width:300px;background:transparent;color:#fff;text-align:center;box-shadow:0 5px 5px 0 rgba(0,0,0,.1);padding:0 24px;min-width:20%;opacity:0;height:0;line-height:56px;overflow:hidden;-webkit-transition:all .3s ease-in;-moz-transition:all .3s ease-in;transition:all .3s ease-in}.wy-tray-container li.wy-tray-item-success{background:#27ae60}.wy-tray-container li.wy-tray-item-info{background:#2980b9}.wy-tray-container li.wy-tray-item-warning{background:#e67e22}.wy-tray-container li.wy-tray-item-danger{background:#e74c3c}.wy-tray-container li.on{opacity:1;height:56px}@media screen 
and (max-width:768px){.wy-tray-container{bottom:auto;top:0;width:100%}.wy-tray-container li{width:100%}}button{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle;cursor:pointer;line-height:normal;-webkit-appearance:button;*overflow:visible}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}button[disabled]{cursor:default}.btn{display:inline-block;border-radius:2px;line-height:normal;white-space:nowrap;text-align:center;cursor:pointer;font-size:100%;padding:6px 12px 8px;color:#fff;border:1px solid rgba(0,0,0,.1);background-color:#27ae60;text-decoration:none;font-weight:400;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 2px -1px hsla(0,0%,100%,.5),inset 0 -2px 0 0 rgba(0,0,0,.1);outline-none:false;vertical-align:middle;*display:inline;zoom:1;-webkit-user-drag:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;-webkit-transition:all .1s linear;-moz-transition:all .1s linear;transition:all .1s linear}.btn-hover{background:#2e8ece;color:#fff}.btn:hover{background:#2cc36b;color:#fff}.btn:focus{background:#2cc36b;outline:0}.btn:active{box-shadow:inset 0 -1px 0 0 rgba(0,0,0,.05),inset 0 2px 0 0 rgba(0,0,0,.1);padding:8px 12px 6px}.btn:visited{color:#fff}.btn-disabled,.btn-disabled:active,.btn-disabled:focus,.btn-disabled:hover,.btn:disabled{background-image:none;filter:progid:DXImageTransform.Microsoft.gradient(enabled = false);filter:alpha(opacity=40);opacity:.4;cursor:not-allowed;box-shadow:none}.btn::-moz-focus-inner{padding:0;border:0}.btn-small{font-size:80%}.btn-info{background-color:#2980b9!important}.btn-info:hover{background-color:#2e8ece!important}.btn-neutral{background-color:#f3f6f6!important;color:#404040!important}.btn-neutral:hover{background-color:#e5ebeb!important;color:#404040}.btn-neutral:visited{color:#404040!important}.btn-success{background-color:#27ae60!important}.btn-success:hover{background-color:#295!important}.btn-danger{background-color:#e74c3c!important}.btn-danger:hover{background-color:#ea6153!important}.btn-warning{background-color:#e67e22!important}.btn-warning:hover{background-color:#e98b39!important}.btn-invert{background-color:#222}.btn-invert:hover{background-color:#2f2f2f!important}.btn-link{background-color:transparent!important;color:#2980b9;box-shadow:none;border-color:transparent!important}.btn-link:active,.btn-link:hover{background-color:transparent!important;color:#409ad5!important;box-shadow:none}.btn-link:visited{color:#9b59b6}.wy-btn-group .btn,.wy-control .btn{vertical-align:middle}.wy-btn-group{margin-bottom:24px;*zoom:1}.wy-btn-group:after,.wy-btn-group:before{display:table;content:""}.wy-btn-group:after{clear:both}.wy-dropdown{position:relative;display:inline-block}.wy-dropdown-active .wy-dropdown-menu{display:block}.wy-dropdown-menu{position:absolute;left:0;display:none;float:left;top:100%;min-width:100%;background:#fcfcfc;z-index:100;border:1px solid #cfd7dd;box-shadow:0 2px 2px 0 rgba(0,0,0,.1);padding:12px}.wy-dropdown-menu>dd>a{display:block;clear:both;color:#404040;white-space:nowrap;font-size:90%;padding:0 12px;cursor:pointer}.wy-dropdown-menu>dd>a:hover{background:#2980b9;color:#fff}.wy-dropdown-menu>dd.divider{border-top:1px solid #cfd7dd;margin:6px 0}.wy-dropdown-menu>dd.search{padding-bottom:12px}.wy-dropdown-menu>dd.search 
input[type=search]{width:100%}.wy-dropdown-menu>dd.call-to-action{background:#e3e3e3;text-transform:uppercase;font-weight:500;font-size:80%}.wy-dropdown-menu>dd.call-to-action:hover{background:#e3e3e3}.wy-dropdown-menu>dd.call-to-action .btn{color:#fff}.wy-dropdown.wy-dropdown-up .wy-dropdown-menu{bottom:100%;top:auto;left:auto;right:0}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu{background:#fcfcfc;margin-top:2px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a{padding:6px 12px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a:hover{background:#2980b9;color:#fff}.wy-dropdown.wy-dropdown-left .wy-dropdown-menu{right:0;left:auto;text-align:right}.wy-dropdown-arrow:before{content:" ";border-bottom:5px solid #f5f5f5;border-left:5px solid transparent;border-right:5px solid transparent;position:absolute;display:block;top:-4px;left:50%;margin-left:-3px}.wy-dropdown-arrow.wy-dropdown-arrow-left:before{left:11px}.wy-form-stacked select{display:block}.wy-form-aligned .wy-help-inline,.wy-form-aligned input,.wy-form-aligned label,.wy-form-aligned select,.wy-form-aligned textarea{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-form-aligned .wy-control-group>label{display:inline-block;vertical-align:middle;width:10em;margin:6px 12px 0 0;float:left}.wy-form-aligned .wy-control{float:left}.wy-form-aligned .wy-control label{display:block}.wy-form-aligned .wy-control select{margin-top:6px}fieldset{margin:0}fieldset,legend{border:0;padding:0}legend{width:100%;white-space:normal;margin-bottom:24px;font-size:150%;*margin-left:-7px}label,legend{display:block}label{margin:0 0 .3125em;color:#333;font-size:90%}input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}.wy-control-group{margin-bottom:24px;max-width:1200px;margin-left:auto;margin-right:auto;*zoom:1}.wy-control-group:after,.wy-control-group:before{display:table;content:""}.wy-control-group:after{clear:both}.wy-control-group.wy-control-group-required>label:after{content:" *";color:#e74c3c}.wy-control-group .wy-form-full,.wy-control-group .wy-form-halves,.wy-control-group .wy-form-thirds{padding-bottom:12px}.wy-control-group .wy-form-full input[type=color],.wy-control-group .wy-form-full input[type=date],.wy-control-group .wy-form-full input[type=datetime-local],.wy-control-group .wy-form-full input[type=datetime],.wy-control-group .wy-form-full input[type=email],.wy-control-group .wy-form-full input[type=month],.wy-control-group .wy-form-full input[type=number],.wy-control-group .wy-form-full input[type=password],.wy-control-group .wy-form-full input[type=search],.wy-control-group .wy-form-full input[type=tel],.wy-control-group .wy-form-full input[type=text],.wy-control-group .wy-form-full input[type=time],.wy-control-group .wy-form-full input[type=url],.wy-control-group .wy-form-full input[type=week],.wy-control-group .wy-form-full select,.wy-control-group .wy-form-halves input[type=color],.wy-control-group .wy-form-halves input[type=date],.wy-control-group .wy-form-halves input[type=datetime-local],.wy-control-group .wy-form-halves input[type=datetime],.wy-control-group .wy-form-halves input[type=email],.wy-control-group .wy-form-halves input[type=month],.wy-control-group .wy-form-halves input[type=number],.wy-control-group .wy-form-halves input[type=password],.wy-control-group .wy-form-halves input[type=search],.wy-control-group .wy-form-halves input[type=tel],.wy-control-group .wy-form-halves input[type=text],.wy-control-group .wy-form-halves input[type=time],.wy-control-group 
.wy-form-halves input[type=url],.wy-control-group .wy-form-halves input[type=week],.wy-control-group .wy-form-halves select,.wy-control-group .wy-form-thirds input[type=color],.wy-control-group .wy-form-thirds input[type=date],.wy-control-group .wy-form-thirds input[type=datetime-local],.wy-control-group .wy-form-thirds input[type=datetime],.wy-control-group .wy-form-thirds input[type=email],.wy-control-group .wy-form-thirds input[type=month],.wy-control-group .wy-form-thirds input[type=number],.wy-control-group .wy-form-thirds input[type=password],.wy-control-group .wy-form-thirds input[type=search],.wy-control-group .wy-form-thirds input[type=tel],.wy-control-group .wy-form-thirds input[type=text],.wy-control-group .wy-form-thirds input[type=time],.wy-control-group .wy-form-thirds input[type=url],.wy-control-group .wy-form-thirds input[type=week],.wy-control-group .wy-form-thirds select{width:100%}.wy-control-group .wy-form-full{float:left;display:block;width:100%;margin-right:0}.wy-control-group .wy-form-full:last-child{margin-right:0}.wy-control-group .wy-form-halves{float:left;display:block;margin-right:2.35765%;width:48.82117%}.wy-control-group .wy-form-halves:last-child,.wy-control-group .wy-form-halves:nth-of-type(2n){margin-right:0}.wy-control-group .wy-form-halves:nth-of-type(odd){clear:left}.wy-control-group .wy-form-thirds{float:left;display:block;margin-right:2.35765%;width:31.76157%}.wy-control-group .wy-form-thirds:last-child,.wy-control-group .wy-form-thirds:nth-of-type(3n){margin-right:0}.wy-control-group .wy-form-thirds:nth-of-type(3n+1){clear:left}.wy-control-group.wy-control-group-no-input .wy-control,.wy-control-no-input{margin:6px 0 0;font-size:90%}.wy-control-no-input{display:inline-block}.wy-control-group.fluid-input input[type=color],.wy-control-group.fluid-input input[type=date],.wy-control-group.fluid-input input[type=datetime-local],.wy-control-group.fluid-input input[type=datetime],.wy-control-group.fluid-input input[type=email],.wy-control-group.fluid-input input[type=month],.wy-control-group.fluid-input input[type=number],.wy-control-group.fluid-input input[type=password],.wy-control-group.fluid-input input[type=search],.wy-control-group.fluid-input input[type=tel],.wy-control-group.fluid-input input[type=text],.wy-control-group.fluid-input input[type=time],.wy-control-group.fluid-input input[type=url],.wy-control-group.fluid-input input[type=week]{width:100%}.wy-form-message-inline{padding-left:.3em;color:#666;font-size:90%}.wy-form-message{display:block;color:#999;font-size:70%;margin-top:.3125em;font-style:italic}.wy-form-message p{font-size:inherit;font-style:italic;margin-bottom:6px}.wy-form-message p:last-child{margin-bottom:0}input{line-height:normal}input[type=button],input[type=reset],input[type=submit]{-webkit-appearance:button;cursor:pointer;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;*overflow:visible}input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week]{-webkit-appearance:none;padding:6px;display:inline-block;border:1px solid #ccc;font-size:80%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 3px #ddd;border-radius:0;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}input[type=datetime-local]{padding:.34375em 
.625em}input[disabled]{cursor:default}input[type=checkbox],input[type=radio]{padding:0;margin-right:.3125em;*height:13px;*width:13px}input[type=checkbox],input[type=radio],input[type=search]{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}input[type=search]::-webkit-search-cancel-button,input[type=search]::-webkit-search-decoration{-webkit-appearance:none}input[type=color]:focus,input[type=date]:focus,input[type=datetime-local]:focus,input[type=datetime]:focus,input[type=email]:focus,input[type=month]:focus,input[type=number]:focus,input[type=password]:focus,input[type=search]:focus,input[type=tel]:focus,input[type=text]:focus,input[type=time]:focus,input[type=url]:focus,input[type=week]:focus{outline:0;outline:thin dotted\9;border-color:#333}input.no-focus:focus{border-color:#ccc!important}input[type=checkbox]:focus,input[type=file]:focus,input[type=radio]:focus{outline:thin dotted #333;outline:1px auto #129fea}input[type=color][disabled],input[type=date][disabled],input[type=datetime-local][disabled],input[type=datetime][disabled],input[type=email][disabled],input[type=month][disabled],input[type=number][disabled],input[type=password][disabled],input[type=search][disabled],input[type=tel][disabled],input[type=text][disabled],input[type=time][disabled],input[type=url][disabled],input[type=week][disabled]{cursor:not-allowed;background-color:#fafafa}input:focus:invalid,select:focus:invalid,textarea:focus:invalid{color:#e74c3c;border:1px solid #e74c3c}input:focus:invalid:focus,select:focus:invalid:focus,textarea:focus:invalid:focus{border-color:#e74c3c}input[type=checkbox]:focus:invalid:focus,input[type=file]:focus:invalid:focus,input[type=radio]:focus:invalid:focus{outline-color:#e74c3c}input.wy-input-large{padding:12px;font-size:100%}textarea{overflow:auto;vertical-align:top;width:100%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif}select,textarea{padding:.5em .625em;display:inline-block;border:1px solid #ccc;font-size:80%;box-shadow:inset 0 1px 3px #ddd;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}select{border:1px solid #ccc;background-color:#fff}select[multiple]{height:auto}select:focus,textarea:focus{outline:0}input[readonly],select[disabled],select[readonly],textarea[disabled],textarea[readonly]{cursor:not-allowed;background-color:#fafafa}input[type=checkbox][disabled],input[type=radio][disabled]{cursor:not-allowed}.wy-checkbox,.wy-radio{margin:6px 0;color:#404040;display:block}.wy-checkbox input,.wy-radio input{vertical-align:baseline}.wy-form-message-inline{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-input-prefix,.wy-input-suffix{white-space:nowrap;padding:6px}.wy-input-prefix .wy-input-context,.wy-input-suffix .wy-input-context{line-height:27px;padding:0 8px;display:inline-block;font-size:80%;background-color:#f3f6f6;border:1px solid #ccc;color:#999}.wy-input-suffix .wy-input-context{border-left:0}.wy-input-prefix .wy-input-context{border-right:0}.wy-switch{position:relative;display:block;height:24px;margin-top:12px;cursor:pointer}.wy-switch:before{left:0;top:0;width:36px;height:12px;background:#ccc}.wy-switch:after,.wy-switch:before{position:absolute;content:"";display:block;border-radius:4px;-webkit-transition:all .2s ease-in-out;-moz-transition:all .2s ease-in-out;transition:all .2s ease-in-out}.wy-switch:after{width:18px;height:18px;background:#999;left:-3px;top:-3px}.wy-switch 
span{position:absolute;left:48px;display:block;font-size:12px;color:#ccc;line-height:1}.wy-switch.active:before{background:#1e8449}.wy-switch.active:after{left:24px;background:#27ae60}.wy-switch.disabled{cursor:not-allowed;opacity:.8}.wy-control-group.wy-control-group-error .wy-form-message,.wy-control-group.wy-control-group-error>label{color:#e74c3c}.wy-control-group.wy-control-group-error input[type=color],.wy-control-group.wy-control-group-error input[type=date],.wy-control-group.wy-control-group-error input[type=datetime-local],.wy-control-group.wy-control-group-error input[type=datetime],.wy-control-group.wy-control-group-error input[type=email],.wy-control-group.wy-control-group-error input[type=month],.wy-control-group.wy-control-group-error input[type=number],.wy-control-group.wy-control-group-error input[type=password],.wy-control-group.wy-control-group-error input[type=search],.wy-control-group.wy-control-group-error input[type=tel],.wy-control-group.wy-control-group-error input[type=text],.wy-control-group.wy-control-group-error input[type=time],.wy-control-group.wy-control-group-error input[type=url],.wy-control-group.wy-control-group-error input[type=week],.wy-control-group.wy-control-group-error textarea{border:1px solid #e74c3c}.wy-inline-validate{white-space:nowrap}.wy-inline-validate .wy-input-context{padding:.5em .625em;display:inline-block;font-size:80%}.wy-inline-validate.wy-inline-validate-success .wy-input-context{color:#27ae60}.wy-inline-validate.wy-inline-validate-danger .wy-input-context{color:#e74c3c}.wy-inline-validate.wy-inline-validate-warning .wy-input-context{color:#e67e22}.wy-inline-validate.wy-inline-validate-info .wy-input-context{color:#2980b9}.rotate-90{-webkit-transform:rotate(90deg);-moz-transform:rotate(90deg);-ms-transform:rotate(90deg);-o-transform:rotate(90deg);transform:rotate(90deg)}.rotate-180{-webkit-transform:rotate(180deg);-moz-transform:rotate(180deg);-ms-transform:rotate(180deg);-o-transform:rotate(180deg);transform:rotate(180deg)}.rotate-270{-webkit-transform:rotate(270deg);-moz-transform:rotate(270deg);-ms-transform:rotate(270deg);-o-transform:rotate(270deg);transform:rotate(270deg)}.mirror{-webkit-transform:scaleX(-1);-moz-transform:scaleX(-1);-ms-transform:scaleX(-1);-o-transform:scaleX(-1);transform:scaleX(-1)}.mirror.rotate-90{-webkit-transform:scaleX(-1) rotate(90deg);-moz-transform:scaleX(-1) rotate(90deg);-ms-transform:scaleX(-1) rotate(90deg);-o-transform:scaleX(-1) rotate(90deg);transform:scaleX(-1) rotate(90deg)}.mirror.rotate-180{-webkit-transform:scaleX(-1) rotate(180deg);-moz-transform:scaleX(-1) rotate(180deg);-ms-transform:scaleX(-1) rotate(180deg);-o-transform:scaleX(-1) rotate(180deg);transform:scaleX(-1) rotate(180deg)}.mirror.rotate-270{-webkit-transform:scaleX(-1) rotate(270deg);-moz-transform:scaleX(-1) rotate(270deg);-ms-transform:scaleX(-1) rotate(270deg);-o-transform:scaleX(-1) rotate(270deg);transform:scaleX(-1) rotate(270deg)}@media only screen and (max-width:480px){.wy-form button[type=submit]{margin:.7em 0 0}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=text],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week],.wy-form label{margin-bottom:.3em;display:block}.wy-form input[type=color],.wy-form input[type=date],.wy-form 
input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week]{margin-bottom:0}.wy-form-aligned .wy-control-group label{margin-bottom:.3em;text-align:left;display:block;width:100%}.wy-form-aligned .wy-control{margin:1.5em 0 0}.wy-form-message,.wy-form-message-inline,.wy-form .wy-help-inline{display:block;font-size:80%;padding:6px 0}}@media screen and (max-width:768px){.tablet-hide{display:none}}@media screen and (max-width:480px){.mobile-hide{display:none}}.float-left{float:left}.float-right{float:right}.full-width{width:100%}.rst-content table.docutils,.rst-content table.field-list,.wy-table{border-collapse:collapse;border-spacing:0;empty-cells:show;margin-bottom:24px}.rst-content table.docutils caption,.rst-content table.field-list caption,.wy-table caption{color:#000;font:italic 85%/1 arial,sans-serif;padding:1em 0;text-align:center}.rst-content table.docutils td,.rst-content table.docutils th,.rst-content table.field-list td,.rst-content table.field-list th,.wy-table td,.wy-table th{font-size:90%;margin:0;overflow:visible;padding:8px 16px}.rst-content table.docutils td:first-child,.rst-content table.docutils th:first-child,.rst-content table.field-list td:first-child,.rst-content table.field-list th:first-child,.wy-table td:first-child,.wy-table th:first-child{border-left-width:0}.rst-content table.docutils thead,.rst-content table.field-list thead,.wy-table thead{color:#000;text-align:left;vertical-align:bottom;white-space:nowrap}.rst-content table.docutils thead th,.rst-content table.field-list thead th,.wy-table thead th{font-weight:700;border-bottom:2px solid #e1e4e5}.rst-content table.docutils td,.rst-content table.field-list td,.wy-table td{background-color:transparent;vertical-align:middle}.rst-content table.docutils td p,.rst-content table.field-list td p,.wy-table td p{line-height:18px}.rst-content table.docutils td p:last-child,.rst-content table.field-list td p:last-child,.wy-table td p:last-child{margin-bottom:0}.rst-content table.docutils .wy-table-cell-min,.rst-content table.field-list .wy-table-cell-min,.wy-table .wy-table-cell-min{width:1%;padding-right:0}.rst-content table.docutils .wy-table-cell-min input[type=checkbox],.rst-content table.field-list .wy-table-cell-min input[type=checkbox],.wy-table .wy-table-cell-min input[type=checkbox]{margin:0}.wy-table-secondary{color:grey;font-size:90%}.wy-table-tertiary{color:grey;font-size:80%}.rst-content table.docutils:not(.field-list) tr:nth-child(2n-1) td,.wy-table-backed,.wy-table-odd td,.wy-table-striped tr:nth-child(2n-1) td{background-color:#f3f6f6}.rst-content table.docutils,.wy-table-bordered-all{border:1px solid #e1e4e5}.rst-content table.docutils td,.wy-table-bordered-all td{border-bottom:1px solid #e1e4e5;border-left:1px solid #e1e4e5}.rst-content table.docutils tbody>tr:last-child td,.wy-table-bordered-all tbody>tr:last-child td{border-bottom-width:0}.wy-table-bordered{border:1px solid #e1e4e5}.wy-table-bordered-rows td{border-bottom:1px solid #e1e4e5}.wy-table-bordered-rows tbody>tr:last-child td{border-bottom-width:0}.wy-table-horizontal td,.wy-table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #e1e4e5}.wy-table-horizontal tbody>tr:last-child td{border-bottom-width:0}.wy-table-responsive{margin-bottom:24px;max-width:100%;overflow:auto}.wy-table-responsive 
table{margin-bottom:0!important}.wy-table-responsive table td,.wy-table-responsive table th{white-space:nowrap}a{color:#2980b9;text-decoration:none;cursor:pointer}a:hover{color:#3091d1}a:visited{color:#9b59b6}html{height:100%}body,html{overflow-x:hidden}body{font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;font-weight:400;color:#404040;min-height:100%;background:#edf0f2}.wy-text-left{text-align:left}.wy-text-center{text-align:center}.wy-text-right{text-align:right}.wy-text-large{font-size:120%}.wy-text-normal{font-size:100%}.wy-text-small,small{font-size:80%}.wy-text-strike{text-decoration:line-through}.wy-text-warning{color:#e67e22!important}a.wy-text-warning:hover{color:#eb9950!important}.wy-text-info{color:#2980b9!important}a.wy-text-info:hover{color:#409ad5!important}.wy-text-success{color:#27ae60!important}a.wy-text-success:hover{color:#36d278!important}.wy-text-danger{color:#e74c3c!important}a.wy-text-danger:hover{color:#ed7669!important}.wy-text-neutral{color:#404040!important}a.wy-text-neutral:hover{color:#595959!important}.rst-content .toctree-wrapper>p.caption,h1,h2,h3,h4,h5,h6,legend{margin-top:0;font-weight:700;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif}p{line-height:24px;font-size:16px;margin:0 0 24px}h1{font-size:175%}.rst-content .toctree-wrapper>p.caption,h2{font-size:150%}h3{font-size:125%}h4{font-size:115%}h5{font-size:110%}h6{font-size:100%}hr{display:block;height:1px;border:0;border-top:1px solid #e1e4e5;margin:24px 0;padding:0}.rst-content code,.rst-content tt,code{white-space:nowrap;max-width:100%;background:#fff;border:1px solid #e1e4e5;font-size:75%;padding:0 5px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#e74c3c;overflow-x:auto}.rst-content tt.code-large,code.code-large{font-size:90%}.rst-content .section ul,.rst-content .toctree-wrapper ul,.rst-content section ul,.wy-plain-list-disc,article ul{list-style:disc;line-height:24px;margin-bottom:24px}.rst-content .section ul li,.rst-content .toctree-wrapper ul li,.rst-content section ul li,.wy-plain-list-disc li,article ul li{list-style:disc;margin-left:24px}.rst-content .section ul li p:last-child,.rst-content .section ul li ul,.rst-content .toctree-wrapper ul li p:last-child,.rst-content .toctree-wrapper ul li ul,.rst-content section ul li p:last-child,.rst-content section ul li ul,.wy-plain-list-disc li p:last-child,.wy-plain-list-disc li ul,article ul li p:last-child,article ul li ul{margin-bottom:0}.rst-content .section ul li li,.rst-content .toctree-wrapper ul li li,.rst-content section ul li li,.wy-plain-list-disc li li,article ul li li{list-style:circle}.rst-content .section ul li li li,.rst-content .toctree-wrapper ul li li li,.rst-content section ul li li li,.wy-plain-list-disc li li li,article ul li li li{list-style:square}.rst-content .section ul li ol li,.rst-content .toctree-wrapper ul li ol li,.rst-content section ul li ol li,.wy-plain-list-disc li ol li,article ul li ol li{list-style:decimal}.rst-content .section ol,.rst-content .section ol.arabic,.rst-content .toctree-wrapper ol,.rst-content .toctree-wrapper ol.arabic,.rst-content section ol,.rst-content section ol.arabic,.wy-plain-list-decimal,article ol{list-style:decimal;line-height:24px;margin-bottom:24px}.rst-content .section ol.arabic li,.rst-content .section ol li,.rst-content .toctree-wrapper ol.arabic li,.rst-content .toctree-wrapper ol li,.rst-content section ol.arabic li,.rst-content section ol li,.wy-plain-list-decimal li,article ol 
li{list-style:decimal;margin-left:24px}.rst-content .section ol.arabic li ul,.rst-content .section ol li p:last-child,.rst-content .section ol li ul,.rst-content .toctree-wrapper ol.arabic li ul,.rst-content .toctree-wrapper ol li p:last-child,.rst-content .toctree-wrapper ol li ul,.rst-content section ol.arabic li ul,.rst-content section ol li p:last-child,.rst-content section ol li ul,.wy-plain-list-decimal li p:last-child,.wy-plain-list-decimal li ul,article ol li p:last-child,article ol li ul{margin-bottom:0}.rst-content .section ol.arabic li ul li,.rst-content .section ol li ul li,.rst-content .toctree-wrapper ol.arabic li ul li,.rst-content .toctree-wrapper ol li ul li,.rst-content section ol.arabic li ul li,.rst-content section ol li ul li,.wy-plain-list-decimal li ul li,article ol li ul li{list-style:disc}.wy-breadcrumbs{*zoom:1}.wy-breadcrumbs:after,.wy-breadcrumbs:before{display:table;content:""}.wy-breadcrumbs:after{clear:both}.wy-breadcrumbs>li{display:inline-block;padding-top:5px}.wy-breadcrumbs>li.wy-breadcrumbs-aside{float:right}.rst-content .wy-breadcrumbs>li code,.rst-content .wy-breadcrumbs>li tt,.wy-breadcrumbs>li .rst-content tt,.wy-breadcrumbs>li code{all:inherit;color:inherit}.breadcrumb-item:before{content:"/";color:#bbb;font-size:13px;padding:0 6px 0 3px}.wy-breadcrumbs-extra{margin-bottom:0;color:#b3b3b3;font-size:80%;display:inline-block}@media screen and (max-width:480px){.wy-breadcrumbs-extra,.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}@media print{.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}html{font-size:16px}.wy-affix{position:fixed;top:1.618em}.wy-menu a:hover{text-decoration:none}.wy-menu-horiz{*zoom:1}.wy-menu-horiz:after,.wy-menu-horiz:before{display:table;content:""}.wy-menu-horiz:after{clear:both}.wy-menu-horiz li,.wy-menu-horiz ul{display:inline-block}.wy-menu-horiz li:hover{background:hsla(0,0%,100%,.1)}.wy-menu-horiz li.divide-left{border-left:1px solid #404040}.wy-menu-horiz li.divide-right{border-right:1px solid #404040}.wy-menu-horiz a{height:32px;display:inline-block;line-height:32px;padding:0 16px}.wy-menu-vertical{width:300px}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#55a5d9;height:32px;line-height:32px;padding:0 1.618em;margin:12px 0 0;display:block;font-weight:700;text-transform:uppercase;font-size:85%;white-space:nowrap}.wy-menu-vertical ul{margin-bottom:0}.wy-menu-vertical li.divide-top{border-top:1px solid #404040}.wy-menu-vertical li.divide-bottom{border-bottom:1px solid #404040}.wy-menu-vertical li.current{background:#e3e3e3}.wy-menu-vertical li.current a{color:grey;border-right:1px solid #c9c9c9;padding:.4045em 2.427em}.wy-menu-vertical li.current a:hover{background:#d6d6d6}.rst-content .wy-menu-vertical li tt,.wy-menu-vertical li .rst-content tt,.wy-menu-vertical li code{border:none;background:inherit;color:inherit;padding-left:0;padding-right:0}.wy-menu-vertical li button.toctree-expand{display:block;float:left;margin-left:-1.2em;line-height:18px;color:#4d4d4d;border:none;background:none;padding:0}.wy-menu-vertical li.current>a,.wy-menu-vertical li.on a{color:#404040;font-weight:700;position:relative;background:#fcfcfc;border:none;padding:.4045em 1.618em}.wy-menu-vertical li.current>a:hover,.wy-menu-vertical li.on a:hover{background:#fcfcfc}.wy-menu-vertical li.current>a:hover button.toctree-expand,.wy-menu-vertical li.on a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a 
button.toctree-expand{display:block;line-height:18px;color:#333}.wy-menu-vertical li.toctree-l1.current>a{border-bottom:1px solid #c9c9c9;border-top:1px solid #c9c9c9}.wy-menu-vertical .toctree-l1.current .toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .toctree-l11>ul{display:none}.wy-menu-vertical .toctree-l1.current .current.toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .current.toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .current.toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .current.toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .current.toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .current.toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .current.toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .current.toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .current.toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .current.toctree-l11>ul{display:block}.wy-menu-vertical li.toctree-l3,.wy-menu-vertical li.toctree-l4{font-size:.9em}.wy-menu-vertical li.toctree-l2 a,.wy-menu-vertical li.toctree-l3 a,.wy-menu-vertical li.toctree-l4 a,.wy-menu-vertical li.toctree-l5 a,.wy-menu-vertical li.toctree-l6 a,.wy-menu-vertical li.toctree-l7 a,.wy-menu-vertical li.toctree-l8 a,.wy-menu-vertical li.toctree-l9 a,.wy-menu-vertical li.toctree-l10 a{color:#404040}.wy-menu-vertical li.toctree-l2 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l3 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l4 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l5 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l6 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l7 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l8 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l9 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l10 a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a,.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a,.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a,.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a,.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a,.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a,.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a,.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{display:block}.wy-menu-vertical li.toctree-l2.current>a{padding:.4045em 2.427em}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{padding:.4045em 1.618em .4045em 4.045em}.wy-menu-vertical li.toctree-l3.current>a{padding:.4045em 4.045em}.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{padding:.4045em 1.618em .4045em 5.663em}.wy-menu-vertical li.toctree-l4.current>a{padding:.4045em 5.663em}.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a{padding:.4045em 1.618em .4045em 7.281em}.wy-menu-vertical li.toctree-l5.current>a{padding:.4045em 7.281em}.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a{padding:.4045em 1.618em .4045em 8.899em}.wy-menu-vertical li.toctree-l6.current>a{padding:.4045em 
8.899em}.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a{padding:.4045em 1.618em .4045em 10.517em}.wy-menu-vertical li.toctree-l7.current>a{padding:.4045em 10.517em}.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a{padding:.4045em 1.618em .4045em 12.135em}.wy-menu-vertical li.toctree-l8.current>a{padding:.4045em 12.135em}.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a{padding:.4045em 1.618em .4045em 13.753em}.wy-menu-vertical li.toctree-l9.current>a{padding:.4045em 13.753em}.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a{padding:.4045em 1.618em .4045em 15.371em}.wy-menu-vertical li.toctree-l10.current>a{padding:.4045em 15.371em}.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{padding:.4045em 1.618em .4045em 16.989em}.wy-menu-vertical li.toctree-l2.current>a,.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{background:#c9c9c9}.wy-menu-vertical li.toctree-l2 button.toctree-expand{color:#a3a3a3}.wy-menu-vertical li.toctree-l3.current>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{background:#bdbdbd}.wy-menu-vertical li.toctree-l3 button.toctree-expand{color:#969696}.wy-menu-vertical li.current ul{display:block}.wy-menu-vertical li ul{margin-bottom:0;display:none}.wy-menu-vertical li ul li a{margin-bottom:0;color:#d9d9d9;font-weight:400}.wy-menu-vertical a{line-height:18px;padding:.4045em 1.618em;display:block;position:relative;font-size:90%;color:#d9d9d9}.wy-menu-vertical a:hover{background-color:#4e4a4a;cursor:pointer}.wy-menu-vertical a:hover button.toctree-expand{color:#d9d9d9}.wy-menu-vertical a:active{background-color:#2980b9;cursor:pointer;color:#fff}.wy-menu-vertical a:active button.toctree-expand{color:#fff}.wy-side-nav-search{display:block;width:300px;padding:.809em;margin-bottom:.809em;z-index:200;background-color:#2980b9;text-align:center;color:#fcfcfc}.wy-side-nav-search input[type=text]{width:100%;border-radius:50px;padding:6px 12px;border-color:#2472a4}.wy-side-nav-search img{display:block;margin:auto auto .809em;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-side-nav-search .wy-dropdown>a,.wy-side-nav-search>a{color:#fcfcfc;font-size:100%;font-weight:700;display:inline-block;padding:4px 6px;margin-bottom:.809em;max-width:100%}.wy-side-nav-search .wy-dropdown>a:hover,.wy-side-nav-search>a:hover{background:hsla(0,0%,100%,.1)}.wy-side-nav-search .wy-dropdown>a img.logo,.wy-side-nav-search>a img.logo{display:block;margin:0 auto;height:auto;width:auto;border-radius:0;max-width:100%;background:transparent}.wy-side-nav-search .wy-dropdown>a.icon img.logo,.wy-side-nav-search>a.icon img.logo{margin-top:.85em}.wy-side-nav-search>div.version{margin-top:-.4045em;margin-bottom:.809em;font-weight:400;color:hsla(0,0%,100%,.3)}.wy-nav .wy-menu-vertical header{color:#2980b9}.wy-nav .wy-menu-vertical a{color:#b3b3b3}.wy-nav .wy-menu-vertical a:hover{background-color:#2980b9;color:#fff}[data-menu-wrap]{-webkit-transition:all .2s ease-in;-moz-transition:all .2s ease-in;transition:all .2s 
ease-in;position:absolute;opacity:1;width:100%;opacity:0}[data-menu-wrap].move-center{left:0;right:auto;opacity:1}[data-menu-wrap].move-left{right:auto;left:-100%;opacity:0}[data-menu-wrap].move-right{right:-100%;left:auto;opacity:0}.wy-body-for-nav{background:#fcfcfc}.wy-grid-for-nav{position:absolute;width:100%;height:100%}.wy-nav-side{position:fixed;top:0;bottom:0;left:0;padding-bottom:2em;width:300px;overflow-x:hidden;overflow-y:hidden;min-height:100%;color:#9b9b9b;background:#343131;z-index:200}.wy-side-scroll{width:320px;position:relative;overflow-x:hidden;overflow-y:scroll;height:100%}.wy-nav-top{display:none;background:#2980b9;color:#fff;padding:.4045em .809em;position:relative;line-height:50px;text-align:center;font-size:100%;*zoom:1}.wy-nav-top:after,.wy-nav-top:before{display:table;content:""}.wy-nav-top:after{clear:both}.wy-nav-top a{color:#fff;font-weight:700}.wy-nav-top img{margin-right:12px;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-nav-top i{font-size:30px;float:left;cursor:pointer;padding-top:inherit}.wy-nav-content-wrap{margin-left:300px;background:#fcfcfc;min-height:100%}.wy-nav-content{padding:1.618em 3.236em;height:100%;max-width:800px;margin:auto}.wy-body-mask{position:fixed;width:100%;height:100%;background:rgba(0,0,0,.2);display:none;z-index:499}.wy-body-mask.on{display:block}footer{color:grey}footer p{margin-bottom:12px}.rst-content footer span.commit tt,footer span.commit .rst-content tt,footer span.commit code{padding:0;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:1em;background:none;border:none;color:grey}.rst-footer-buttons{*zoom:1}.rst-footer-buttons:after,.rst-footer-buttons:before{width:100%;display:table;content:""}.rst-footer-buttons:after{clear:both}.rst-breadcrumbs-buttons{margin-top:12px;*zoom:1}.rst-breadcrumbs-buttons:after,.rst-breadcrumbs-buttons:before{display:table;content:""}.rst-breadcrumbs-buttons:after{clear:both}#search-results .search li{margin-bottom:24px;border-bottom:1px solid #e1e4e5;padding-bottom:24px}#search-results .search li:first-child{border-top:1px solid #e1e4e5;padding-top:24px}#search-results .search li a{font-size:120%;margin-bottom:12px;display:inline-block}#search-results .context{color:grey;font-size:90%}.genindextable li>ul{margin-left:24px}@media screen and (max-width:768px){.wy-body-for-nav{background:#fcfcfc}.wy-nav-top{display:block}.wy-nav-side{left:-300px}.wy-nav-side.shift{width:85%;left:0}.wy-menu.wy-menu-vertical,.wy-side-nav-search,.wy-side-scroll{width:auto}.wy-nav-content-wrap{margin-left:0}.wy-nav-content-wrap .wy-nav-content{padding:1.618em}.wy-nav-content-wrap.shift{position:fixed;min-width:100%;left:85%;top:0;height:100%;overflow:hidden}}@media screen and (min-width:1100px){.wy-nav-content-wrap{background:rgba(0,0,0,.05)}.wy-nav-content{margin:0;background:#fcfcfc}}@media print{.rst-versions,.wy-nav-side,footer{display:none}.wy-nav-content-wrap{margin-left:0}}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60;*zoom:1}.rst-versions .rst-current-version:after,.rst-versions .rst-current-version:before{display:table;content:""}.rst-versions 
.rst-current-version:after{clear:both}.rst-content .code-block-caption .rst-versions .rst-current-version .headerlink,.rst-content .eqno .rst-versions .rst-current-version .headerlink,.rst-content .rst-versions .rst-current-version .admonition-title,.rst-content code.download .rst-versions .rst-current-version span:first-child,.rst-content dl dt .rst-versions .rst-current-version .headerlink,.rst-content h1 .rst-versions .rst-current-version .headerlink,.rst-content h2 .rst-versions .rst-current-version .headerlink,.rst-content h3 .rst-versions .rst-current-version .headerlink,.rst-content h4 .rst-versions .rst-current-version .headerlink,.rst-content h5 .rst-versions .rst-current-version .headerlink,.rst-content h6 .rst-versions .rst-current-version .headerlink,.rst-content p .rst-versions .rst-current-version .headerlink,.rst-content table>caption .rst-versions .rst-current-version .headerlink,.rst-content tt.download .rst-versions .rst-current-version span:first-child,.rst-versions .rst-current-version .fa,.rst-versions .rst-current-version .icon,.rst-versions .rst-current-version .rst-content .admonition-title,.rst-versions .rst-current-version .rst-content .code-block-caption .headerlink,.rst-versions .rst-current-version .rst-content .eqno .headerlink,.rst-versions .rst-current-version .rst-content code.download span:first-child,.rst-versions .rst-current-version .rst-content dl dt .headerlink,.rst-versions .rst-current-version .rst-content h1 .headerlink,.rst-versions .rst-current-version .rst-content h2 .headerlink,.rst-versions .rst-current-version .rst-content h3 .headerlink,.rst-versions .rst-current-version .rst-content h4 .headerlink,.rst-versions .rst-current-version .rst-content h5 .headerlink,.rst-versions .rst-current-version .rst-content h6 .headerlink,.rst-versions .rst-current-version .rst-content p .headerlink,.rst-versions .rst-current-version .rst-content table>caption .headerlink,.rst-versions .rst-current-version .rst-content tt.download span:first-child,.rst-versions .rst-current-version .wy-menu-vertical li button.toctree-expand,.wy-menu-vertical li .rst-versions .rst-current-version button.toctree-expand{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and 
(max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}}.rst-content .toctree-wrapper>p.caption,.rst-content h1,.rst-content h2,.rst-content h3,.rst-content h4,.rst-content h5,.rst-content h6{margin-bottom:24px}.rst-content img{max-width:100%;height:auto}.rst-content div.figure,.rst-content figure{margin-bottom:24px}.rst-content div.figure .caption-text,.rst-content figure .caption-text{font-style:italic}.rst-content div.figure p:last-child.caption,.rst-content figure p:last-child.caption{margin-bottom:0}.rst-content div.figure.align-center,.rst-content figure.align-center{text-align:center}.rst-content .section>a>img,.rst-content .section>img,.rst-content section>a>img,.rst-content section>img{margin-bottom:24px}.rst-content abbr[title]{text-decoration:none}.rst-content.style-external-links a.reference.external:after{font-family:FontAwesome;content:"\f08e";color:#b3b3b3;vertical-align:super;font-size:60%;margin:0 .2em}.rst-content blockquote{margin-left:24px;line-height:24px;margin-bottom:24px}.rst-content pre.literal-block{white-space:pre;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;display:block;overflow:auto}.rst-content div[class^=highlight],.rst-content pre.literal-block{border:1px solid #e1e4e5;overflow-x:auto;margin:1px 0 24px}.rst-content div[class^=highlight] div[class^=highlight],.rst-content pre.literal-block div[class^=highlight]{padding:0;border:none;margin:0}.rst-content div[class^=highlight] td.code{width:100%}.rst-content .linenodiv pre{border-right:1px solid #e6e9ea;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;user-select:none;pointer-events:none}.rst-content div[class^=highlight] pre{white-space:pre;margin:0;padding:12px;display:block;overflow:auto}.rst-content div[class^=highlight] pre .hll{display:block;margin:0 -12px;padding:0 12px}.rst-content .linenodiv pre,.rst-content div[class^=highlight] pre,.rst-content pre.literal-block{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:12px;line-height:1.4}.rst-content div.highlight .gp,.rst-content div.highlight span.linenos{user-select:none;pointer-events:none}.rst-content div.highlight span.linenos{display:inline-block;padding-left:0;padding-right:12px;margin-right:12px;border-right:1px solid #e6e9ea}.rst-content .code-block-caption{font-style:italic;font-size:85%;line-height:1;padding:1em 0;text-align:center}@media print{.rst-content .codeblock,.rst-content div[class^=highlight],.rst-content div[class^=highlight] pre{white-space:pre-wrap}}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning{clear:both}.rst-content .admonition-todo .last,.rst-content .admonition-todo>:last-child,.rst-content .admonition .last,.rst-content .admonition>:last-child,.rst-content .attention .last,.rst-content .attention>:last-child,.rst-content .caution .last,.rst-content .caution>:last-child,.rst-content .danger .last,.rst-content .danger>:last-child,.rst-content .error .last,.rst-content .error>:last-child,.rst-content .hint .last,.rst-content .hint>:last-child,.rst-content .important .last,.rst-content .important>:last-child,.rst-content .note .last,.rst-content .note>:last-child,.rst-content .seealso 
.last,.rst-content .seealso>:last-child,.rst-content .tip .last,.rst-content .tip>:last-child,.rst-content .warning .last,.rst-content .warning>:last-child{margin-bottom:0}.rst-content .admonition-title:before{margin-right:4px}.rst-content .admonition table{border-color:rgba(0,0,0,.1)}.rst-content .admonition table td,.rst-content .admonition table th{background:transparent!important;border-color:rgba(0,0,0,.1)!important}.rst-content .section ol.loweralpha,.rst-content .section ol.loweralpha>li,.rst-content .toctree-wrapper ol.loweralpha,.rst-content .toctree-wrapper ol.loweralpha>li,.rst-content section ol.loweralpha,.rst-content section ol.loweralpha>li{list-style:lower-alpha}.rst-content .section ol.upperalpha,.rst-content .section ol.upperalpha>li,.rst-content .toctree-wrapper ol.upperalpha,.rst-content .toctree-wrapper ol.upperalpha>li,.rst-content section ol.upperalpha,.rst-content section ol.upperalpha>li{list-style:upper-alpha}.rst-content .section ol li>*,.rst-content .section ul li>*,.rst-content .toctree-wrapper ol li>*,.rst-content .toctree-wrapper ul li>*,.rst-content section ol li>*,.rst-content section ul li>*{margin-top:12px;margin-bottom:12px}.rst-content .section ol li>:first-child,.rst-content .section ul li>:first-child,.rst-content .toctree-wrapper ol li>:first-child,.rst-content .toctree-wrapper ul li>:first-child,.rst-content section ol li>:first-child,.rst-content section ul li>:first-child{margin-top:0}.rst-content .section ol li>p,.rst-content .section ol li>p:last-child,.rst-content .section ul li>p,.rst-content .section ul li>p:last-child,.rst-content .toctree-wrapper ol li>p,.rst-content .toctree-wrapper ol li>p:last-child,.rst-content .toctree-wrapper ul li>p,.rst-content .toctree-wrapper ul li>p:last-child,.rst-content section ol li>p,.rst-content section ol li>p:last-child,.rst-content section ul li>p,.rst-content section ul li>p:last-child{margin-bottom:12px}.rst-content .section ol li>p:only-child,.rst-content .section ol li>p:only-child:last-child,.rst-content .section ul li>p:only-child,.rst-content .section ul li>p:only-child:last-child,.rst-content .toctree-wrapper ol li>p:only-child,.rst-content .toctree-wrapper ol li>p:only-child:last-child,.rst-content .toctree-wrapper ul li>p:only-child,.rst-content .toctree-wrapper ul li>p:only-child:last-child,.rst-content section ol li>p:only-child,.rst-content section ol li>p:only-child:last-child,.rst-content section ul li>p:only-child,.rst-content section ul li>p:only-child:last-child{margin-bottom:0}.rst-content .section ol li>ol,.rst-content .section ol li>ul,.rst-content .section ul li>ol,.rst-content .section ul li>ul,.rst-content .toctree-wrapper ol li>ol,.rst-content .toctree-wrapper ol li>ul,.rst-content .toctree-wrapper ul li>ol,.rst-content .toctree-wrapper ul li>ul,.rst-content section ol li>ol,.rst-content section ol li>ul,.rst-content section ul li>ol,.rst-content section ul li>ul{margin-bottom:12px}.rst-content .section ol.simple li>*,.rst-content .section ol.simple li ol,.rst-content .section ol.simple li ul,.rst-content .section ul.simple li>*,.rst-content .section ul.simple li ol,.rst-content .section ul.simple li ul,.rst-content .toctree-wrapper ol.simple li>*,.rst-content .toctree-wrapper ol.simple li ol,.rst-content .toctree-wrapper ol.simple li ul,.rst-content .toctree-wrapper ul.simple li>*,.rst-content .toctree-wrapper ul.simple li ol,.rst-content .toctree-wrapper ul.simple li ul,.rst-content section ol.simple li>*,.rst-content section ol.simple li ol,.rst-content section ol.simple li 
ul,.rst-content section ul.simple li>*,.rst-content section ul.simple li ol,.rst-content section ul.simple li ul{margin-top:0;margin-bottom:0}.rst-content .line-block{margin-left:0;margin-bottom:24px;line-height:24px}.rst-content .line-block .line-block{margin-left:24px;margin-bottom:0}.rst-content .topic-title{font-weight:700;margin-bottom:12px}.rst-content .toc-backref{color:#404040}.rst-content .align-right{float:right;margin:0 0 24px 24px}.rst-content .align-left{float:left;margin:0 24px 24px 0}.rst-content .align-center{margin:auto}.rst-content .align-center:not(table){display:block}.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink{opacity:0;font-size:14px;font-family:FontAwesome;margin-left:.5em}.rst-content .code-block-caption .headerlink:focus,.rst-content .code-block-caption:hover .headerlink,.rst-content .eqno .headerlink:focus,.rst-content .eqno:hover .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink:focus,.rst-content .toctree-wrapper>p.caption:hover .headerlink,.rst-content dl dt .headerlink:focus,.rst-content dl dt:hover .headerlink,.rst-content h1 .headerlink:focus,.rst-content h1:hover .headerlink,.rst-content h2 .headerlink:focus,.rst-content h2:hover .headerlink,.rst-content h3 .headerlink:focus,.rst-content h3:hover .headerlink,.rst-content h4 .headerlink:focus,.rst-content h4:hover .headerlink,.rst-content h5 .headerlink:focus,.rst-content h5:hover .headerlink,.rst-content h6 .headerlink:focus,.rst-content h6:hover .headerlink,.rst-content p.caption .headerlink:focus,.rst-content p.caption:hover .headerlink,.rst-content p .headerlink:focus,.rst-content p:hover .headerlink,.rst-content table>caption .headerlink:focus,.rst-content table>caption:hover .headerlink{opacity:1}.rst-content p a{overflow-wrap:anywhere}.rst-content .wy-table td p,.rst-content .wy-table td ul,.rst-content .wy-table th p,.rst-content .wy-table th ul,.rst-content table.docutils td p,.rst-content table.docutils td ul,.rst-content table.docutils th p,.rst-content table.docutils th ul,.rst-content table.field-list td p,.rst-content table.field-list td ul,.rst-content table.field-list th p,.rst-content table.field-list th ul{font-size:inherit}.rst-content .btn:focus{outline:2px solid}.rst-content table>caption .headerlink:after{font-size:12px}.rst-content .centered{text-align:center}.rst-content .sidebar{float:right;width:40%;display:block;margin:0 0 24px 24px;padding:24px;background:#f3f6f6;border:1px solid #e1e4e5}.rst-content .sidebar dl,.rst-content .sidebar p,.rst-content .sidebar ul{font-size:90%}.rst-content .sidebar .last,.rst-content .sidebar>:last-child{margin-bottom:0}.rst-content .sidebar .sidebar-title{display:block;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif;font-weight:700;background:#e1e4e5;padding:6px 12px;margin:-24px -24px 24px;font-size:100%}.rst-content .highlighted{background:#f1c40f;box-shadow:0 0 0 2px #f1c40f;display:inline;font-weight:700}.rst-content .citation-reference,.rst-content .footnote-reference{vertical-align:baseline;position:relative;top:-.4em;line-height:0;font-size:90%}.rst-content .citation-reference>span.fn-bracket,.rst-content 
.footnote-reference>span.fn-bracket{display:none}.rst-content .hlist{width:100%}.rst-content dl dt span.classifier:before{content:" : "}.rst-content dl dt span.classifier-delimiter{display:none!important}html.writer-html4 .rst-content table.docutils.citation,html.writer-html4 .rst-content table.docutils.footnote{background:none;border:none}html.writer-html4 .rst-content table.docutils.citation td,html.writer-html4 .rst-content table.docutils.citation tr,html.writer-html4 .rst-content table.docutils.footnote td,html.writer-html4 .rst-content table.docutils.footnote tr{border:none;background-color:transparent!important;white-space:normal}html.writer-html4 .rst-content table.docutils.citation td.label,html.writer-html4 .rst-content table.docutils.footnote td.label{padding-left:0;padding-right:0;vertical-align:top}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{display:grid;grid-template-columns:auto minmax(80%,95%)}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{display:inline-grid;grid-template-columns:max-content auto}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{display:grid;grid-template-columns:auto auto minmax(.65rem,auto) minmax(40%,95%)}html.writer-html5 .rst-content aside.citation>span.label,html.writer-html5 .rst-content aside.footnote>span.label,html.writer-html5 .rst-content div.citation>span.label{grid-column-start:1;grid-column-end:2}html.writer-html5 .rst-content aside.citation>span.backrefs,html.writer-html5 .rst-content aside.footnote>span.backrefs,html.writer-html5 .rst-content div.citation>span.backrefs{grid-column-start:2;grid-column-end:3;grid-row-start:1;grid-row-end:3}html.writer-html5 .rst-content aside.citation>p,html.writer-html5 .rst-content aside.footnote>p,html.writer-html5 .rst-content div.citation>p{grid-column-start:4;grid-column-end:5}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{margin-bottom:24px}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{padding-left:1rem}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dd,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dd,html.writer-html5 .rst-content dl.footnote>dt{margin-bottom:0}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{font-size:.9rem}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.footnote>dt{margin:0 .5rem .5rem 0;line-height:1.2rem;word-break:break-all;font-weight:400}html.writer-html5 .rst-content dl.citation>dt>span.brackets:before,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:before{content:"["}html.writer-html5 .rst-content dl.citation>dt>span.brackets:after,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:after{content:"]"}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a,html.writer-html5 
.rst-content dl.footnote>dt>span.fn-backref>a{word-break:keep-all}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a:not(:first-child):before,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.footnote>dd{margin:0 0 .5rem;line-height:1.2rem}html.writer-html5 .rst-content dl.citation>dd p,html.writer-html5 .rst-content dl.footnote>dd p{font-size:.9rem}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{padding-left:1rem;padding-right:1rem;font-size:.9rem;line-height:1.2rem}html.writer-html5 .rst-content aside.citation p,html.writer-html5 .rst-content aside.footnote p,html.writer-html5 .rst-content div.citation p{font-size:.9rem;line-height:1.2rem;margin-bottom:12px}html.writer-html5 .rst-content aside.citation span.backrefs,html.writer-html5 .rst-content aside.footnote span.backrefs,html.writer-html5 .rst-content div.citation span.backrefs{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content aside.citation span.backrefs>a,html.writer-html5 .rst-content aside.footnote span.backrefs>a,html.writer-html5 .rst-content div.citation span.backrefs>a{word-break:keep-all}html.writer-html5 .rst-content aside.citation span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content aside.footnote span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content div.citation span.backrefs>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content aside.citation span.label,html.writer-html5 .rst-content aside.footnote span.label,html.writer-html5 .rst-content div.citation span.label{line-height:1.2rem}html.writer-html5 .rst-content aside.citation-list,html.writer-html5 .rst-content aside.footnote-list,html.writer-html5 .rst-content div.citation-list{margin-bottom:24px}html.writer-html5 .rst-content dl.option-list kbd{font-size:.9rem}.rst-content table.docutils.footnote,html.writer-html4 .rst-content table.docutils.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content aside.footnote-list aside.footnote,html.writer-html5 .rst-content div.citation-list>div.citation,html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{color:grey}.rst-content table.docutils.footnote code,.rst-content table.docutils.footnote tt,html.writer-html4 .rst-content table.docutils.citation code,html.writer-html4 .rst-content table.docutils.citation tt,html.writer-html5 .rst-content aside.footnote-list aside.footnote code,html.writer-html5 .rst-content aside.footnote-list aside.footnote tt,html.writer-html5 .rst-content aside.footnote code,html.writer-html5 .rst-content aside.footnote tt,html.writer-html5 .rst-content div.citation-list>div.citation code,html.writer-html5 .rst-content div.citation-list>div.citation tt,html.writer-html5 .rst-content dl.citation code,html.writer-html5 .rst-content dl.citation tt,html.writer-html5 .rst-content dl.footnote code,html.writer-html5 .rst-content dl.footnote tt{color:#555}.rst-content .wy-table-responsive.citation,.rst-content .wy-table-responsive.footnote{margin-bottom:0}.rst-content .wy-table-responsive.citation+:not(.citation),.rst-content .wy-table-responsive.footnote+:not(.footnote){margin-top:24px}.rst-content .wy-table-responsive.citation:last-child,.rst-content 
.wy-table-responsive.footnote:last-child{margin-bottom:24px}.rst-content table.docutils th{border-color:#e1e4e5}html.writer-html5 .rst-content table.docutils th{border:1px solid #e1e4e5}html.writer-html5 .rst-content table.docutils td>p,html.writer-html5 .rst-content table.docutils th>p{line-height:1rem;margin-bottom:0;font-size:.9rem}.rst-content table.docutils td .last,.rst-content table.docutils td .last>:last-child{margin-bottom:0}.rst-content table.field-list,.rst-content table.field-list td{border:none}.rst-content table.field-list td p{line-height:inherit}.rst-content table.field-list td>strong{display:inline-block}.rst-content table.field-list .field-name{padding-right:10px;text-align:left;white-space:nowrap}.rst-content table.field-list .field-body{text-align:left}.rst-content code,.rst-content tt{color:#000;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;padding:2px 5px}.rst-content code big,.rst-content code em,.rst-content tt big,.rst-content tt em{font-size:100%!important;line-height:normal}.rst-content code.literal,.rst-content tt.literal{color:#e74c3c;white-space:normal}.rst-content code.xref,.rst-content tt.xref,a .rst-content code,a .rst-content tt{font-weight:700;color:#404040;overflow-wrap:normal}.rst-content kbd,.rst-content pre,.rst-content samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace}.rst-content a code,.rst-content a tt{color:#2980b9}.rst-content dl{margin-bottom:24px}.rst-content dl dt{font-weight:700;margin-bottom:12px}.rst-content dl ol,.rst-content dl p,.rst-content dl table,.rst-content dl ul{margin-bottom:12px}.rst-content dl dd{margin:0 0 12px 24px;line-height:24px}.rst-content dl dd>ol:last-child,.rst-content dl dd>p:last-child,.rst-content dl dd>table:last-child,.rst-content dl dd>ul:last-child{margin-bottom:0}html.writer-html4 .rst-content dl:not(.docutils),html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple){margin-bottom:24px}html.writer-html4 .rst-content dl:not(.docutils)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980b9;border-top:3px solid #6ab0de;padding:6px;position:relative}html.writer-html4 .rst-content dl:not(.docutils)>dt:before,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:before{color:#6ab0de}html.writer-html4 .rst-content dl:not(.docutils)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{margin-bottom:6px;border:none;border-left:3px solid #ccc;background:#f0f0f0;color:#555}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink,html.writer-html5 .rst-content 
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils)>dt:first-child,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:first-child{margin-top:0}html.writer-html4 .rst-content dl:not(.docutils) code.descclassname,html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descclassname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{background-color:transparent;border:none;padding:0;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .optional,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .property,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .property{display:inline-block;padding-right:8px;max-width:100%}html.writer-html4 .rst-content dl:not(.docutils) .k,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .k{font-style:italic}html.writer-html4 .rst-content dl:not(.docutils) .descclassname,html.writer-html4 .rst-content dl:not(.docutils) .descname,html.writer-html4 .rst-content dl:not(.docutils) .sig-name,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .sig-name{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#000}.rst-content .viewcode-back,.rst-content .viewcode-link{display:inline-block;color:#27ae60;font-size:80%;padding-left:24px}.rst-content .viewcode-back{display:block;float:right}.rst-content p.rubric{margin-bottom:12px;font-weight:700}.rst-content 
code.download,.rst-content tt.download{background:inherit;padding:inherit;font-weight:400;font-family:inherit;font-size:inherit;color:inherit;border:inherit;white-space:inherit}.rst-content code.download span:first-child,.rst-content tt.download span:first-child{-webkit-font-smoothing:subpixel-antialiased}.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{margin-right:4px}.rst-content .guilabel{border:1px solid #7fbbe3;background:#e7f2fa;font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>.kbd,.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>kbd{color:inherit;font-size:80%;background-color:#fff;border:1px solid #a6a6a6;border-radius:4px;box-shadow:0 2px grey;padding:2.4px 6px;margin:auto 0}.rst-content .versionmodified{font-style:italic}@media screen and (max-width:480px){.rst-content .sidebar{width:100%}}span[id*=MathJax-Span]{color:#404040}.math{text-align:center}@font-face{font-family:Lato;src:url(fonts/lato-normal.woff2?bd03a2cc277bbbc338d464e679fe9942) format("woff2"),url(fonts/lato-normal.woff?27bd77b9162d388cb8d4c4217c7c5e2a) format("woff");font-weight:400;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold.woff2?cccb897485813c7c256901dbca54ecf2) format("woff2"),url(fonts/lato-bold.woff?d878b6c29b10beca227e9eef4246111b) format("woff");font-weight:700;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold-italic.woff2?0b6bb6725576b072c5d0b02ecdd1900d) format("woff2"),url(fonts/lato-bold-italic.woff?9c7e4e9eb485b4a121c760e61bc3707c) format("woff");font-weight:700;font-style:italic;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-normal-italic.woff2?4eb103b4d12be57cb1d040ed5e162e9d) format("woff2"),url(fonts/lato-normal-italic.woff?f28f2d6482446544ef1ea1ccc6dd5892) format("woff");font-weight:400;font-style:italic;font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:400;src:url(fonts/Roboto-Slab-Regular.woff2?7abf5b8d04d26a2cafea937019bca958) format("woff2"),url(fonts/Roboto-Slab-Regular.woff?c1be9284088d487c5e3ff0a10a92e58c) format("woff");font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:700;src:url(fonts/Roboto-Slab-Bold.woff2?9984f4a9bda09be08e83f2506954adbe) format("woff2"),url(fonts/Roboto-Slab-Bold.woff?bed5564a116b05148e3b3bea6fb1162a) format("woff");font-display:block} diff --git a/css/theme_extra.css b/css/theme_extra.css new file mode 100644 index 00000000..9f4b063c --- /dev/null +++ b/css/theme_extra.css @@ -0,0 +1,191 @@ +/* + * Wrap inline code samples otherwise they shoot of the side and + * can't be read at all. + * + * https://github.com/mkdocs/mkdocs/issues/313 + * https://github.com/mkdocs/mkdocs/issues/233 + * https://github.com/mkdocs/mkdocs/issues/834 + */ +.rst-content code { + white-space: pre-wrap; + word-wrap: break-word; + padding: 2px 5px; +} + +/** + * Make code blocks display as blocks and give them the appropriate + * font size and padding. 
+ * + * https://github.com/mkdocs/mkdocs/issues/855 + * https://github.com/mkdocs/mkdocs/issues/834 + * https://github.com/mkdocs/mkdocs/issues/233 + */ +.rst-content pre code { + white-space: pre; + word-wrap: normal; + display: block; + padding: 12px; + font-size: 12px; +} + +/** + * Fix code colors + * + * https://github.com/mkdocs/mkdocs/issues/2027 + */ +.rst-content code { + color: #E74C3C; +} + +.rst-content pre code { + color: #000; + background: #f8f8f8; +} + +/* + * Fix link colors when the link text is inline code. + * + * https://github.com/mkdocs/mkdocs/issues/718 + */ +a code { + color: #2980B9; +} +a:hover code { + color: #3091d1; +} +a:visited code { + color: #9B59B6; +} + +/* + * The CSS classes from highlight.js seem to clash with the + * ReadTheDocs theme causing some code to be incorrectly made + * bold and italic. + * + * https://github.com/mkdocs/mkdocs/issues/411 + */ +pre .cs, pre .c { + font-weight: inherit; + font-style: inherit; +} + +/* + * Fix some issues with the theme and non-highlighted code + * samples. Without and highlighting styles attached the + * formatting is broken. + * + * https://github.com/mkdocs/mkdocs/issues/319 + */ +.rst-content .no-highlight { + display: block; + padding: 0.5em; + color: #333; +} + + +/* + * Additions specific to the search functionality provided by MkDocs + */ + +.search-results { + margin-top: 23px; +} + +.search-results article { + border-top: 1px solid #E1E4E5; + padding-top: 24px; +} + +.search-results article:first-child { + border-top: none; +} + +form .search-query { + width: 100%; + border-radius: 50px; + padding: 6px 12px; /* csslint allow: box-model */ + border-color: #D1D4D5; +} + +/* + * Improve inline code blocks within admonitions. + * + * https://github.com/mkdocs/mkdocs/issues/656 + */ + .rst-content .admonition code { + color: #404040; + border: 1px solid #c7c9cb; + border: 1px solid rgba(0, 0, 0, 0.2); + background: #f8fbfd; + background: rgba(255, 255, 255, 0.7); +} + +/* + * Account for wide tables which go off the side. + * Override borders to avoid weirdness on narrow tables. + * + * https://github.com/mkdocs/mkdocs/issues/834 + * https://github.com/mkdocs/mkdocs/pull/1034 + */ +.rst-content .section .docutils { + width: 100%; + overflow: auto; + display: block; + border: none; +} + +td, th { + border: 1px solid #e1e4e5 !important; /* csslint allow: important */ + border-collapse: collapse; +} + +/* + * Without the following amendments, the navigation in the theme will be + * slightly cut off. This is due to the fact that the .wy-nav-side has a + * padding-bottom of 2em, which must not necessarily align with the font-size of + * 90 % on the .rst-current-version container, combined with the padding of 12px + * above and below. These amendments fix this in two steps: First, make sure the + * .rst-current-version container has a fixed height of 40px, achieved using + * line-height, and then applying a padding-bottom of 40px to this container. In + * a second step, the items within that container are re-aligned using flexbox. + * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ + .wy-nav-side { + padding-bottom: 40px; +} + +/* + * The second step of above amendment: Here we make sure the items are aligned + * correctly within the .rst-current-version container. 
Using flexbox, we + * achieve it in such a way that it will look like the following: + * + * [No repo_name] + * Next >> // On the first page + * << Previous Next >> // On all subsequent pages + * + * [With repo_name] + * Next >> // On the first page + * << Previous Next >> // On all subsequent pages + * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ +.rst-versions .rst-current-version { + padding: 0 12px; + display: flex; + font-size: initial; + justify-content: space-between; + align-items: center; + line-height: 40px; +} + +/* + * Please note that this amendment also involves removing certain inline-styles + * from the file ./mkdocs/themes/readthedocs/versions.html. + * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ +.rst-current-version span { + flex: 1; + text-align: center; +} diff --git a/faq/index.html b/faq/index.html new file mode 100644 index 00000000..c35d6a3d --- /dev/null +++ b/faq/index.html @@ -0,0 +1,460 @@ + + + + + + + + FAQ - OVOS User Documentation + + + + + + + + + + + + + +
Frequently Asked Questions

What is OVOS?

OVOS aims to be a full operating system that is free and open source. The Open Voice Operating System consists of OVOS packages (programs specifically released by the OVOS Project) as well as free software released by third parties, such as skills and plugins. OVOS makes it possible to voice-enable technology without software that would trample your freedom.

Historically, "OVOS" has been used to refer to several things: the team, the GitHub organization, and the reference buildroot implementation.
How did OVOS start?

OVOS started as MycroftOS; you can find the original mycroft forums thread here.

Over time more mycroft community members joined the project, and it was renamed to OpenVoiceOS to avoid trademark issues.

Initially OVOS focused on bundling mycroft-core and creating only companion software, but because contributions were not being accepted upstream, we now maintain an enhanced reference fork of mycroft-core with extra functionality, while keeping all companion software compatible with mycroft-core (dev branch).

You can think of OVOS as the unsanctioned "Mycroft Community Edition".
Who is behind OVOS?

Everyone on the OVOS team is a long-term mycroft community member and has experience working with the mycroft code base.

Meet the team:
What is the relationship between OVOS and Mycroft?

Both projects are fully independent. Initially OVOS focused on wrapping mycroft-core with a minimal OS, but as both projects matured, ovos-core was created to include extra functionality and make OVOS development faster and more efficient. OVOS is committed to keeping our components compatible with Mycroft, and many of our changes are submitted to Mycroft to include in their projects at their discretion.
How does OVOS make money?

We don't. OVOS is a volunteer project with no source of income or business model.

However, we want to acknowledge Blue Systems and NeonGeckoCom; a lot of the work in OVOS is done on paid company time from these projects.
Where is your website?
Does OVOS have any default skills?

We provide essential skills, and those are bundled in all our reference images.

ovos-core does not manage your skills; unlike mycroft-core, it won't install or update anything by itself. If you installed ovos-core manually, you also need to install skills manually, as sketched below.
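For illustration only, a rough sketch of a manual skill install with pip; the package name and repository URL below are hypothetical examples, so check each skill's README for its actual install instructions:

# hypothetical example: install a skill into the same Python environment as ovos-core
$ pip install ovos-skill-hello-world
# skills not published on PyPI can usually be installed straight from their git repository
$ pip install git+https://github.com/OpenVoiceOS/skill-ovos-hello-world

After installing or removing a skill manually, you will likely need to restart the skills service for the change to take effect.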
Does OVOS work offline?

By default ovos-core does not require a backend internet server to operate. Some skills can be used (via the command line) entirely offline. The default speech-to-text (STT) engine currently requires an internet connection, though some self-hosted, offline options are available. Individual skills and plugins may require internet access, and most of the time you will want to use those. A hedged configuration sketch follows.
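As a sketch only: offline operation is mostly a matter of plugin choice in mycroft.conf (which accepts // comments). The module name below is an assumption; the plugin must be installed separately:

{
  // assumption: ovos-stt-plugin-vosk is installed; it transcribes speech fully offline
  "stt": {
    "module": "ovos-stt-plugin-vosk"
  }
}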
Does OVOS depend on any servers?

No! You can integrate ovos-core with Selene or a personal backend, but that is fully optional.

We provide some microservices for some of our skills, but you can also use your own API keys.
How many voices does OVOS support?

Hundreds! Nearly everything in OVOS is modular and configurable, and that includes text-to-speech (TTS).

Voices depend on the language and the plugins you have installed; you can find a non-exhaustive list of plugins in the ovos plugins awesome list. A hedged configuration sketch follows.
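For illustration, a minimal mycroft.conf sketch for swapping the voice; the module name and voice below are example assumptions, and the plugin must be installed:

{
  "tts": {
    // assumption: ovos-tts-plugin-mimic3 is installed
    "module": "ovos-tts-plugin-mimic3",
    // plugin-specific settings conventionally live under a key named after the module
    "ovos-tts-plugin-mimic3": {"voice": "en_UK/apope_low"}
  }
}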
Can I change the wake word?

Yes, ovos-core supports several wake word plugins.

Additionally, OVOS allows you to load any number of hot words in parallel and trigger different actions when they are detected.

Each hot word can do one or more of the following (a hedged configuration sketch follows this list):

- trigger listening, also called a wake_word
- play a sound
- emit a bus event
- take ovos-core out of sleep mode, also called a wakeup_word or standup_word
- take ovos-core out of recording mode, also called a stop_word
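A minimal mycroft.conf sketch, for illustration only; the module name is an example assumption (the plugin must be installed) and the exact keys can vary between versions:

{
  "listener": {"wake_word": "hey mycroft"},
  "hotwords": {
    "hey mycroft": {
      // assumption: ovos-ww-plugin-precise is installed; settings are plugin-specific
      "module": "ovos-ww-plugin-precise",
      "listen": true,
      "sound": "snd/start_listening.wav"
    }
  }
}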
Can OVOS run without a wake word?

Mostly yes, depending on exactly what you mean by this question.

OVOS can run without any wake word configured; in this case you will only be able to interact via the CLI or a button press. This is best for privacy, but not so great for a smart speaker.

ovos-core also provides a couple of experimental settings. If you enable continuous listening, VAD (voice activity detection) is used to detect speech and no wake word is needed; just speak to mycroft and it should answer! However, this setting is experimental for a reason: you may find that mycroft answers your TV, or even tries to answer itself, if your hardware does not have AEC (acoustic echo cancellation).

Another experimental setting is hybrid mode. With hybrid mode you can ask follow-up questions up to 45 seconds after the last mycroft interaction; if you do not interact with mycroft, it will go back to waiting for a wake word. A hedged sketch of both settings follows.
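For illustration only; the key names below are assumptions inferred from the description above and may differ between ovos-core versions, so verify them against your installed defaults before relying on them:

{
  "listener": {
    // assumption: key names vary by version; check your packaged mycroft.conf defaults
    "continuous_listen": false,
    "hybrid_listen": false
  }
}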
How fast can OVOS respond?

By default, to answer a request, OVOS:

1. Detects the wake word
2. Records 3 - 10 seconds of audio
3. Transcribes the audio and returns the text transcription, either locally or remotely, depending on the speech-to-text (STT) engine in use
4. Parses the text to understand the intent
5. Sends the text to the intent handler with the highest confidence
6. Allows the skill to perform some action and provide the text to be spoken
7. Synthesizes audio from the given text, either locally or remotely, depending on the text-to-speech (TTS) engine in use
8. Plays the synthesized spoken audio

Throughout this process there are a number of factors that can affect the perceived speed of responses:

- System resources - more processing power and memory never hurts!
- Network latency - depending on the configured plugins, network latency and connection speed can play a significant role in slowing down response times.
- Streaming STT - we have been experimenting with the use of streaming services, which transcribe audio as it is received rather than waiting for the entire utterance to finish and sending the resulting audio file to a server to be processed in its entirety. It is possible to switch to a streaming STT service; see STT Plugins for a list of available options.
- Dialog structure - a long sentence will always take more time to synthesize than a short one. Skill developers can help provide quicker response times by considering the structure of their dialog and breaking that dialog up.
- TTS caching - synthesized audio is cached, meaning common, recently generated phrases don't need to be generated again; they can be returned immediately.
How do I run OVOS behind a proxy?

Many schools, universities, and workplaces run a proxy on their network. If you need to type in a username and password to access the external internet, then you are likely behind a proxy.

If you plan to use OVOS behind a proxy, then you will need to do an additional configuration step.

NOTE: In order to complete this step, you will need to know the hostname and port for the proxy server. Your network administrator will be able to provide these details. Your network administrator may also want to know what type of traffic OVOS will be using. We use HTTPS traffic on port 443, primarily for accessing REST-based APIs.

Using OVOS behind a proxy without authentication

If you are using OVOS behind a proxy without authentication, add the following environment variables, changing proxy_hostname.com and proxy_port to the values for your network. These commands are executed from the Linux command line interface (CLI).

$ export http_proxy=http://proxy_hostname.com:proxy_port
$ export https_proxy=http://proxy_hostname.com:proxy_port
$ export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,0.0.0.0,::1"

Using OVOS behind an authenticated proxy

If you are behind a proxy which requires authentication, add the following environment variables, changing user, password, proxy_hostname.com, and proxy_port to the values for your network. These commands are executed from the Linux command line interface (CLI).

$ export http_proxy=http://user:password@proxy_hostname.com:proxy_port
$ export https_proxy=http://user:password@proxy_hostname.com:proxy_port
$ export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,0.0.0.0,::1"
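One hedged follow-up: variables exported this way only last for the current shell session. To make them persistent you could, for example, append them to /etc/environment (most distributions read this file) and log in again:

# example only: persist the proxy settings system-wide
$ echo 'http_proxy=http://proxy_hostname.com:proxy_port' | sudo tee -a /etc/environment
$ echo 'https_proxy=http://proxy_hostname.com:proxy_port' | sudo tee -a /etc/environment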
diff --git a/friends/index.html b/friends/index.html
new file mode 100644
index 00000000..026e6e09
--- /dev/null
+++ b/friends/index.html
@@ -0,0 +1,334 @@
Friends - OVOS User Documentation
OVOS Friends

OpenVoiceOS is part of a larger ecosystem of FOSS voice technology; we work closely with the following projects.
HiveMind

HiveMind is a community-developed superset, or extension, of OpenVoiceOS.

With HiveMind, you can extend one (or more, but usually just one!) instance of Mycroft to as many devices as you want, including devices that can't ordinarily run Mycroft!

HiveMind's developers have successfully connected to Mycroft from a PinePhone, a 2009 MacBook, and a Raspberry Pi 0, among other devices. Mycroft itself usually runs on our desktop computers or our home servers, but you can use any Mycroft-branded device, or OpenVoiceOS, as your central unit.

You can find the website here and the source code here.
Plasma Bigscreen

Plasma Bigscreen integrates OpenVoiceOS as its voice framework stack to serve voice queries and voice applications (skills with a home screen). You can easily enable Mycroft/OVOS integration in the Bigscreen launcher by installing ovos-core and the required services and enabling the integration switch in the Bigscreen KCM.

You can find the website here and the source code here.
NeonGecko

Neon was one of the first projects ever to adopt ovos-core as a library to build its own voice assistant. Neon works closely with OVOS, and both projects are mostly compatible.

You can find the website here and the source code here.
Mycroft

Mycroft AI started it all; it was one of the first FOSS voice assistants and is the project OVOS descends from.

Most applications made for Mycroft will work in OVOS, and vice versa.

You can find the website here and the source code here.
Secret Sauce AI

+

Secret Sauce AI is a coordinated community of tech minded AI enthusiasts working together on projects to identify +blockers and improve the basic open source tools and pipeline components in the AI (voice) assistant pipeline (wakeword, +ASR, NLU, NLG, TTS). The focus is mostly geared toward deployment on edge devices and self-hosted solutions. This is not +a voice assistant project in and of itself, rather Secret Sauce AI helps AI (voice) assistant projects come together as +individuals and solve basic problems faced by the entire community.

+

You can find the website here and the source code here

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/glossary/index.html b/glossary/index.html new file mode 100644 index 00000000..c554498d --- /dev/null +++ b/glossary/index.html @@ -0,0 +1,485 @@ + + + + + + + + Glossary - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Appendix »
  • + +
  • +
  • +
+
+
+
+
+ +

Glossary

+

Editor's Note: some of the more detailed definitions will be moved to other pages; they are kept here for now to keep track of the information.

+

The Project

+

The OpenVoiceOS Project (OVOS)

+

All the repositories under OpenVoiceOS organization

+

The OpenVoiceOS Team

+

The team behind OVOS

+

Terms

+

Confirmations

+

Confirmation approaches can also be defined by Statements or Prompts, but when we talk about them in the context of confirmations we call them Implicit and Explicit.

+

Implicit Confirmation

+

This type of confirmation is also a statement. The idea is to parrot the information back to the user to confirm that it +was correct, but not require additional input from the user. The implicit confirmation can be used in a majority of +situations.

+

Explicit Confirmation

+

This type of confirmation requires an input from the user to verify everything is correct.

+

Conversations

+

Any time the user needs to input a lot of information or sort through a variety of options, a conversation will be needed. Users may be used to systems that require them to separate input into different chunks.

+

Context

+

Allows for natural conversation by having skills set a "context" that can be used by subsequent handlers. Context could be anything from person to location. Context can also create "bubbles" of available intent handlers, to make sure certain Intents can't be triggered unless some previous stage in a conversation has occurred.

+

You can find an example Tea Skill using conversational context on Github.

+

As you can see, Conversational Context lends itself well to implementing a dialog tree or conversation tree.

+

Grapheme

+

All of the letters and letter combinations that represent a phoneme.

+

Home Screen

+

The OpenVoiceOS home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know like the current date, time and weather. The home screen contains various features and integrations which you can learn more about in the following sections.

+

Intent

+

When an utterance is classified for its action and entities (e.g. 'turn on the kitchen lights' -> skill: home assistant, action: turn on/off, entity: kitchen lights)

+

MPRIS

+

(Media Player Remote Interfacing Specification) is a standard D-Bus interface which aims to provide a common programmatic API for controlling media players. More Information

+

mycroft.conf

+

Primary configuration file for the voice assistant. Possible locations: +- /home/ovos/.local/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf +- /etc/mycroft/mycroft.conf +- /home/ovos/.config/mycroft/mycroft.conf +- /etc/xdg/mycroft/mycroft.conf +- /home/ovos/.mycroft/mycroft.conf +More Information

+
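
User changes normally go in the user-level file (~/.config/mycroft/mycroft.conf), which takes precedence over the packaged defaults. A minimal override might look like this sketch (both keys exist in the default config):

+
{
+    "lang": "en-us",
+    "time_format": "half"
+}
+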

OCP

+

OCP stands for OpenVoiceOS Common Play; it is a full-fledged media player

+

OCP is an OVOSAbstractApplication, which means it is a standalone but native OVOS application with full voice integration

+

OCP differs from mycroft-core in several aspects:

+
    +
  • Can run standalone, only needs a bus connection
  • +
  • OCP provides its own intents as if it was a skill
  • +
  • OCP provides its own GUI as if it was a skill
  • +
  • mycroft-core CommonPlay skill framework is disabled when OCP loads
  • +
  • OCP skills have a dedicated MycroftSkill class and decorators in ovos-workshop
  • +
  • OCP skills act as media providers, they do not (usually) handle playback
  • +
  • mycroft-core CommonPlay skills have an imperfect compatibility layer and are given lower priority over OCP skills
  • +
  • OCP handles several kinds of playback, including video
  • +
  • OCP has a sub-intent parser for matching requested media types
  • +
  • AudioService becomes a subsystem for OCP
  • +
  • OCP also has AudioService plugin component introducing a compatibility layer for skills using "old style audioservice + api"
  • +
  • OCP integrates with MPRIS, it can be controlled from external apps, e.g. KdeConnect in your phone
  • +
  • OCP manages external MPRIS enabled players, you can voice control 3rd party apps without writing a skill for it via + OCP
  • +
+

ovos-core

+

The central repository where the voice assistant "brain" is developed

+

OPM

+

OPM is the OVOS Plugin Manager; this base package provides arbitrary plugins to the OVOS ecosystem

+

OPM plugins import their base classes from OPM, making them portable and independent from core; plugins can be used in your standalone projects

+

By using OPM you can ensure a standard interface to plugins and easily make them configurable in your project; plugin code and example configurations are mapped to a string via Python entrypoints in setup.py
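
+

As a sketch of what that mapping looks like, a plugin's setup.py exposes its class via an entrypoint; the plugin and module names below are hypothetical, and the entrypoint group shown is the one used for STT plugins:

+
# setup.py of a hypothetical STT plugin
+from setuptools import setup
+
+setup(
+    name="ovos-stt-plugin-example",
+    version="0.1.0",
+    packages=["ovos_stt_plugin_example"],
+    entry_points={
+        # OPM discovers STT plugins through this entrypoint group
+        "mycroft.plugin.stt": [
+            "ovos-stt-plugin-example=ovos_stt_plugin_example:ExampleSTT"
+        ]
+    },
+)
+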

+

Some projects using OPM are ovos-core, hivemind-voice-sat, ovos-personal-backend, ovos-stt-server and ovos-tts-server

+

OVOS-shell

+

The gui service in ovos-core will expose a websocket to +the GUI client following the protocol +outlined here

+

The GUI library which implements the protocol lives in the mycroft-gui +repository, The repository also hosts a development client for skill developers wanting to develop on the desktop.

+

OVOS-shell is the OpenVoiceOS client implementation of the mycroft-gui +library used in our embedded device images, other distributions may offer alternative implementations such +as plasma-bigscreen* +or mycroft mark2

+

OVOS-shell is tightly coupled to PHAL, the following companion plugins should be installed if you are +using ovos-shell

+ +

PHAL

+

Physical Hardware Abstraction Layer
+PHAL is our Platform/Hardware Abstraction Layer; it completely replaces the concept of the hardcoded "enclosure" from mycroft-core

+

Any number of plugins providing functionality can be loaded and validated at runtime. Plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the mycroft mark2 plugin

+

PHAL plugins can perform actions such as hardware detection before loading; e.g., the mark2 plugin will not load if it does not detect the SJ201 hat. This makes plugins safe to install and bundle by default in our base images

+

Phoneme

+

The smallest phonetic unit in a language that is capable of conveying a distinction in meaning, as the m of mat and the b of bat in English.

+

Service

+

Snapcast

+

Snapcast is a multiroom client-server audio player, where all clients are time synchronized with the server to play perfectly synced audio. It's not a standalone player, but an extension that turns your existing audio player into a Sonos-like multiroom solution. +More Information

+

Prompts and Statements

+

You can think of Prompts as questions and Statements as providing information to the user that does not need a follow-up response.

+

QML

+

Qt Markup Language, the language for Qt Quick UIs. More Information

+

The Mycroft GUI Framework uses QML.

+

STT

+

Speech To Text +Also known as ASR, automated speech recognition, the process of converting audio into words

+

TTS

+

Text To Speech
+The process of generating spoken audio from the text responses

+

Utterance

+

Command, question, or query from a user (eg 'turn on the kitchen lights')

+

Wake Word

+

A specific word or phrase trained to activate the STT (eg 'hey mycroft')

+

XDG

+

XDG stands for "Cross-Desktop Group", and it's a way to help with compatibility between systems. More Information

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/hardware/index.html b/hardware/index.html new file mode 100644 index 00000000..fb89f148 --- /dev/null +++ b/hardware/index.html @@ -0,0 +1,319 @@ + + + + + + + + Introduction - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Home »
  • +
  • Supported Hardware »
  • + +
  • +
  • +
+
+
+
+
+ +

Supported Hardware

+

OVOS has been confirmed to run on several devices, with more to come.

+ + +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/hardware_audio/index.html b/hardware_audio/index.html new file mode 100644 index 00000000..5339b2b1 --- /dev/null +++ b/hardware_audio/index.html @@ -0,0 +1,363 @@ + + + + + + + + Audio - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Home »
  • +
  • Supported Hardware »
  • + +
  • +
  • +
+
+
+
+
+ +

Audio Hardware

+

Recommendations and notes on speakers and microphones

+

Most audio devices are available to use with the help of plugins and should, for the most part, work by default.

+

If your device does not work, pop into our Matrix support channel, create an issue, or start a discussion about your device.

+

USB

+

Most USB devices should work without any issues, but not all devices are created equal.

+

Tested USB devices

+
    +
  • Microphones
  • +
  • BlueSnowball (Works well without issues)
  • +
  • Generic webcam with mic (Works, but sound quality can be lacking. Makes wakeword detection and STT processing less accurate)
  • +
  • PS3 Eye (Works without issue)
  • +
  • Kinect V1 (Works, but may need a firmware upgrade)
  • +
  • Speakers
  • +
  • Generic USB Speakers (Works without issue, but sound quality varies)
  • +
  • Cameras
  • +
  • Generic webcam (Works, but is not guaranteed to work with some camera skills)
  • +
  • PS3 Eye (Same as Generic webcam)
  • +
  • Kinect V1 (Same as Generic webcam)
  • +
  • Andrea Electronics C1-1028100-3
  • +
+

Audio Troubleshooting - USB

+

HDMI

+

HDMI audio should work without issues if your device supports it.

+

Audio Troubleshooting - HDMI

+

Analog

+

Analog output to headphones or external speakers should also work. Some devices may need additional configuration.

+

Audio Troubleshooting - Analog

+

Raspberry Pi HATs

+

There are several HATs available, some with just a microphone, others that also play audio out. Several are supported and tested; others should work with the proper configuration.

+

Tested RPi HATs

+
    +
  • Respeaker
  • +
  • 2/4/6/8 mic boards (Works natively with the Buildroot image. Others need configuration)
  • +
  • AIY VoiceHat V1 (Works with /boot/config.txt modification)
  • +
  • AIY VoiceBonnet V2 (Works with custom driver update and /boot/config.txt modification)
  • +
  • Audio Troubleshooting - HATs
  • +
+

Specialty Hardware

+

Some special sound boards are also supported.

+
    +
  • SJ201 sound board (Mark 2 sound board)
  • Buildroot native support
  • Supported on other devices with manual install of drivers
  • Audio Troubleshooting - SJ201
  • Mark 1 custom sound board
  • Native support - raspbian-ovos mark1 image
  • Other device support with /boot/config.txt modification
  • Audio Troubleshooting - Mark 1
  • +
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/hardware_video/index.html b/hardware_video/index.html new file mode 100644 index 00000000..8aa4b1d3 --- /dev/null +++ b/hardware_video/index.html @@ -0,0 +1,301 @@ + + + + + + + + Video - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Home »
  • +
  • Supported Hardware »
  • + +
  • +
  • +
+
+
+
+
+ +

Supported Hardware - Video

+

Display

+

If your device supports video out, you can use a screen on your device. (RPi 3/3b/3b+ will not run the OVOS GUI, ovos-shell, due to lack of processing power, but you can access a command prompt on a locally connected screen)

+

Touchscreens

+

OVOS supports touchscreen interaction, but not all touchscreens are created equal. It has been noted that on some USB touchscreens the touch matrix is not synced with the OVOS display, requiring an X11 setup with a window manager to adjust the settings.

+

Troubleshooting Video

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_change_voice/index.html b/ht_change_voice/index.html new file mode 100644 index 00000000..95ab9522 --- /dev/null +++ b/ht_change_voice/index.html @@ -0,0 +1,293 @@ + + + + + + + + Change Your Assistant's Voice - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • TTS »
  • + +
  • +
  • +
+
+
+
+
+ +

How To Change Your Assistant's Voice

+

WIP
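
+

In short: the voice is determined by your TTS plugin configuration in ~/.config/mycroft/mycroft.conf (see the TTS Configuration page for details). A minimal sketch using the default ovos-tts-server-plugin, with the voice name as an example value:

+
{
+    "tts": {
+        "module": "ovos-tts-server-plugin",
+        "ovos-tts-server-plugin": {
+            "voice": "alan-low"
+        }
+    }
+}
+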

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_config_homescreen/index.html b/ht_config_homescreen/index.html new file mode 100644 index 00000000..3cd7b08f --- /dev/null +++ b/ht_config_homescreen/index.html @@ -0,0 +1,374 @@ + + + + + + + + Configuration - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • Homescreen »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Home Screen

+

The home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know like the current date, time and weather. The home screen contains various features and integrations which you can learn more about in the following sections.

+

+

Features

+

Night Mode Feature

+

The Night Mode feature lets you quickly switch your home screen into a dark standby clock, reducing the amount of light emitted by your device. This is especially useful if you are using your device in a dark room or at night. You can enable the night mode feature by tapping on the left edge pill button on the home screen.

+

+

Quick Actions Dashboard

+

The Quick Actions Dashboard provides you with a card-based interface to quickly access and add your most used actions. The Quick Actions dashboard comes with a variety of pre-defined actions, like the ability to quickly add a new alarm, start a new timer or add a new note. You can also add your own custom actions to the dashboard by tapping on the plus button in the top right corner of the dashboard. The Quick Actions dashboard is accessible by tapping on the right edge pill button on the home screen.

+

+

Application Launcher

+

OpenVoiceOS comes with support for dedicated voice applications. Voice Applications can be dedicated skills or PHAL plugins, providing their own dedicated user interface. The application launcher will show you a list of all available voice applications. You can access the application launcher by tapping on the center pill button on the bottom of the home screen.

+

+

Wallpapers

+

The home screen supports custom wallpapers and comes with a bunch of wallpapers to choose from. You can easily change your custom wallpaper by swiping from right to left on the home screen.

+

+

Widgets

+

+

Notifications Widget

+

The notifications widget provides you with a quick overview of all your notifications. The notifications bell icon will be displayed in the top left corner of the home screen. You can access the notifications overview by tapping on the bell icon when it is displayed.

+

Timer Widget

+

The timer widget is displayed in the top left corner, after the notifications bell icon. It will show up when you have an active timer running. Clicking on the timer widget will open the timers overview.

+

Alarm Widget

+

The alarm widget is displayed in the top left corner, after the timer widget. It will show up when you have an active alarm set. Clicking on the alarm widget will open the alarms overview.

+

Media Player Widget

+

The media player widget is displayed at the bottom of the home screen. It replaces the examples widget when a media player is active. The media player widget will show you the currently playing media and provide you with a quick way to pause, resume or skip the current media. You can also quickly access the media player by tapping the quick display media player button on the right side of the media player widget.

+

+

Configuration

+

Settings

+

The homescreen has several customizations available. This is a sample settings.json file with all of the options explained

+
{
+    "__mycroft_skill_firstrun": false,
+    "weather_skill": "skill-weather.openvoiceos",
+    "datetime_skill": "skill-date-time.mycroftai",
+    "examples_skill": "ovos-skills-info.openvoiceos",
+    "wallpaper": "default.jpg",
+    "persistent_menu_hint": false,
+    "examples_enabled": true,
+    "randomize_examples": true,
+    "examples_prefix": true
+}
+
+
    +
  • __mycroft_skill_firstrun: This is automatically set on first load of skill
  • +
  • weather_skill: DEPRECATED and has no effect - PR pending
  • +
  • datetime_skill: Allows you to use a custom skill to display the date and time. Defaults to skill-ovos-date-time.openvoiceos
  • +
  • examples_skill: Allows use of a custom skill for the displayed examples. Defaults to ovos_skills_manager.utils.get_skills_example() function
  • +
  • wallpaper: Allows a custom wallpaper to be displayed. Use a complete url without any tilde ~
  • +
  • persistent_menu_hint: When true, displays a hint of the pull-down menu at the top of the page
  • +
  • examples_enabled: When false, the examples at the bottom of the screen will be hidden
  • +
  • randomize_examples: When false, the rotation of the examples will follow the way they are loaded
  • +
  • examples_prefix: When false, the prefix 'Ask Me' will NOT be displayed with the examples
  • +
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_config_security/index.html b/ht_config_security/index.html new file mode 100644 index 00000000..d8772f5f --- /dev/null +++ b/ht_config_security/index.html @@ -0,0 +1,326 @@ + + + + + + + + OpenVoiceOS Security - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Security

+

Securing SSH

+

Most of our guides have you create a user called ovos with a password of ovos. While this makes installation easy, it's VERY insecure. As soon as possible, you should secure SSH using a key and disable password authentication.

+

When connecting from a Linux or MacOS client

+

Create a keyfile (you can change ovos to whatever you want)

+
ssh-keygen -t ed25519 -f ~/.ssh/ovos
+
+

Copy to host (use the same filename as above, specify the user and hostname you are using)

+
ssh-copy-id -i ~/.ssh/ovos  ovos@mycroft
+
+

On your desktop, edit ~/.ssh/config and add the following lines

+
Host rp2
+  user ovos
+  IdentityFile ~/.ssh/ovos
+
+

On your ovos system, edit /etc/ssh/sshd_config and add or uncomment the following line:

+
PasswordAuthentication no
+
+

Restart sshd or reboot.

+
sudo systemctl restart sshd
+
+

Message Bus Security

+

Anything connected to the bus can fully control OVOS, and OVOS usually has full control over the whole system!

+

You can read more about the security issues over at Nhoya/MycroftAI-RCE

+

In mycroft-core all skills share a bus connection; this allows malicious skills to manipulate the bus and affect other skills.

+

You can see a demonstration of this problem with BusBrickerSkill.

+

"shared_connection": false ensures each skill gets its own websocket connection and avoids this problem

+

Additionally, it is recommended you change "host": "127.0.0.1", this will ensure no outside world connections are allowed

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_intro/index.html b/ht_intro/index.html new file mode 100644 index 00000000..7dd5a0f7 --- /dev/null +++ b/ht_intro/index.html @@ -0,0 +1,299 @@ + + + + + + + + Introduction - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • + +
  • +
  • +
+
+
+
+
+ +

How Do I???

+

This section is provided as a basic Q&A for common questions.

+

How do I - Skills

+

How do I - TTS

+

How do I - Listener

+

How do I - WakeWord

+

How do I - HotWords

+

How do I - PHAL

+

How do I - Custom Services

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_listener_intro/index.html b/ht_listener_intro/index.html new file mode 100644 index 00000000..b20ced10 --- /dev/null +++ b/ht_listener_intro/index.html @@ -0,0 +1,295 @@ + + + + + + + + Introduction - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • Listener »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS Listener

+

The listener is responsible for loading STT, VAD and Wake Word plugins

+

Speech is transcribed into text and forwarded to the skills service.

+

The newest listener that OVOS uses is ovos-dinkum-listener. It is a version of the listener from the Mycroft Dinkum software for the Mark 2, modified for use with OVOS.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_mic/index.html b/ht_mic/index.html new file mode 100644 index 00000000..16f75a98 --- /dev/null +++ b/ht_mic/index.html @@ -0,0 +1,357 @@ + + + + + + + + Microphone - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • Listener »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS Listener - Microphone

+

OVOS uses microphone plugins to support different setups and devices.

+

NOTE: only ovos-dinkum-listener has this support.

+

The default plugin that OVOS uses is ovos-microphone-plugin-alsa, which should work fine for most cases.

+

If you are running OVOS on a Mac, you need a different plugin to access the audio: ovos-microphone-plugin-sounddevice.

+

OVOS microphone plugins are available on PyPi

+

pip install ovos-microphone-plugin-sounddevice

+

or

+

pip install --pre ovos-microphone-plugin-sounddevice

+

for the latest alpha versions.

+

NOTE: The alpha versions may be needed until the release of ovos-core 0.1.0

+

List of OVOS microphone plugins

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin | Usage
ovos-microphone-plugin-alsa | Default plugin - should work in most cases
ovos-microphone-plugin-sounddevice | This plugin is needed when running OVOS on a Mac but also works on other platforms
ovos-microphone-plugin-socket | Used to connect a websocket microphone for remote usage
ovos-microphone-plugin-files | Will use a file as the voice input instead of a microphone
ovos-microphone-plugin-pyaudio | Uses PyAudio for audio processing
ovos-microphone-plugin-arecord | Uses arecord to get input from the microphone. In some cases this may be faster than the default alsa
+

Configuration

+

Microphone plugin configuration is located under the top level listener value.

+
{
+    "listener": {
+        "microphone": {
+            "module": "ovos-microphone-plugin-alsa",
+            "ovos-microphone-plugin-alsa": {
+                "device": "default"
+            }
+        }
+    }
+}
+
+

The only required section is "module". The plugin will then use the default values.

+

The "device" entry is used if you have several microphones attached; it specifies which one to use.

+

Specific plugins may have other values that can be set. Check the GitHub repo of each plugin for more details.
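
+

If you are unsure what to put in "device", you can list the capture hardware ALSA sees; a device string such as "hw:1,0" (card 1, device 0) is a typical alternative to "default":

+
arecord -l
+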

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_phal/index.html b/ht_phal/index.html new file mode 100644 index 00000000..b776fe65 --- /dev/null +++ b/ht_phal/index.html @@ -0,0 +1,396 @@ + + + + + + + + PHAL - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS PHAL

+

PHAL is our Platform/Hardware Abstraction Layer; it completely replaces the concept of the hardcoded "enclosure" from mycroft-core.

+

Any number of plugins providing functionality can be loaded and validated at runtime. Plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the Mycroft Mark 2 plugin.

+

PHAL plugins can perform actions such as hardware detection before loading; e.g., the mark2 plugin will not load if it does not detect the SJ201 hat. This makes plugins safe to install and bundle by default in our base images.

+

Plugins

+

Platform/Hardware-specific integrations are loaded by PHAL; these plugins can handle all sorts of system activities.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin | Description
ovos-PHAL-plugin-alsa | volume control
ovos-PHAL-plugin-system | reboot / shutdown / factory reset
ovos-PHAL-plugin-mk1 | mycroft mark1 integration
ovos-PHAL-plugin-mk2 | mycroft mark2 integration
ovos-PHAL-plugin-respeaker-2mic | respeaker 2mic hat integration
ovos-PHAL-plugin-respeaker-4mic | respeaker 4mic hat integration
ovos-PHAL-plugin-wifi-setup | wifi setup (central plugin)
ovos-PHAL-plugin-gui-network-client | wifi setup (GUI interface)
ovos-PHAL-plugin-balena-wifi | wifi setup (hotspot)
ovos-PHAL-plugin-network-manager | wifi setup (network manager)
ovos-PHAL-plugin-brightness-control-rpi | brightness control
ovos-PHAL-plugin-ipgeo | automatic geolocation (IP address)
ovos-PHAL-plugin-gpsd | automatic geolocation (GPS)
ovos-PHAL-plugin-dashboard | dashboard control (ovos-shell)
ovos-PHAL-plugin-notification-widgets | system notifications (ovos-shell)
ovos-PHAL-plugin-color-scheme-manager | GUI color schemes (ovos-shell)
ovos-PHAL-plugin-configuration-provider | UI to edit mycroft.conf (ovos-shell)
ovos-PHAL-plugin-analog-media-devices | video/audio capture devices (OCP)
+

flow

+

Admin PHAL

+

AdminPHAL performs the exact same function as PHAL, but plugins it loads will have root privileges.

+

This service is intended for handling any OS-level interactions requiring escalation of privileges. Be very careful when installing Admin plugins and scrutinize them closely.

+

NOTE: Because this service runs as root, plugins it loads are responsible for not writing configuration changes which would result in breaking config file permissions.

+

Admin Plugins

+

AdminPlugins are just like regular PHAL plugins that run with root privileges.

+

Admin plugins will only load if their configuration contains "enabled": true. All admin plugins need to be explicitly enabled.
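
+

As a sketch, enabling an admin plugin in mycroft.conf might look like the following; the exact section layout should be verified against the PHAL documentation for your version:

+
{
+    "PHAL": {
+        "admin": {
+            "ovos-PHAL-plugin-system": {
+                "enabled": true
+            }
+        }
+    }
+}
+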

+

You can find plugin packaging documentation here.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_skills/index.html b/ht_skills/index.html new file mode 100644 index 00000000..1b2175c7 --- /dev/null +++ b/ht_skills/index.html @@ -0,0 +1,374 @@ + + + + + + + + Introduction - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • Skills »
  • + +
  • +
  • +
+
+
+
+
+ +

How do I - Skills

+

What can a Skill do?

+

Skills give OVOS the ability to perform a variety of functions. They can be installed or removed by the user, and can be easily updated to expand functionality. To get a good idea of what skills to build, let’s talk about the best use cases for a voice assistant, and what types of things OVOS can do.

+

OVOS can run on a variety of platforms from the Linux Desktop to Single Board Computers (SBCs) like the Raspberry Pi. Different devices will have slightly different use cases. Devices in the home are generally located in the living room or kitchen and are ideal for listening to the news, playing music, general information, using timers while cooking, checking the weather, and other similar activities that are easily accomplished hands-free.

+

Basic functions

+

We cover a lot of the basics with our Default Skills, things like Timers, Alarms, Weather, Time and Date, and more.

+

Information

+

We also call this General Question and Answer, and it covers all of those factual questions someone might think to ask a voice assistant. Questions like “who was the 32nd President of the United States?” or “how tall is the Eiffel Tower?” Although the Default Skills cover a great deal of questions, there is room for more. There are many topics that could use a specific skill, such as Science, Academics, Movie Info, TV Info, and Music Info.

+

OVOS Fallback Skills

+

Media

+

One of the biggest use cases for Smart Speakers is playing media. The reason media playback is so popular is that it makes playing a song so easy: all you have to do is say “Hey Mycroft, play the Beatles,” and you can be enjoying music without having to reach for a phone or remote. In addition to listening to music, there are skills that handle videos as well.

+

News

+

Much like listening to music, getting the latest news with a simple voice interaction is extremely convenient. OVOS supports multiple news feeds, and has the ability to support multiple news skills.

+

Smart Home

+

Another popular use case for Voice Assistants is to control Smart Home and IoT products. Within the OVOS ecosystem there are skills for Home Assistant, Wink IoT, Lifx and more, but there are many products that we do not have skills for yet. The open source community has been enthusiastically expanding OVOS's ability to voice control all kinds of smart home products.

+

Games

+

Voice games are becoming more and more popular, especially those that allow multiple users to play together. Trivia games are some of the most popular types of games to develop for voice assistants. There are several games already available for OVOS. There are native voice adventure games, ports of the popular text adventure games from infocom, a Crystal Ball game, a Number Guessing game and much more!

+

OpenVoiceOS Standard Skills

+

Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills; more about that at a later stage.

+

Date / Time skill

+

You can ask your device what time or date it is just in case you lost your watch.

+
+

Hey Mycroft, what time is it?

+
+

Time

+
+

Hey Mycroft, what is the date?

+
+

Date

+

Setting an alarm

+

Having your OpenVoiceOS device know and show the time is great, but it is even better to be woken up in the morning by your device.

+
+

Hey Mycroft, set an alarm for 8 AM.

+
+

Alarm

+

Setting of timers

+

Sometimes you are just busy but want to be alerted after a certain time. For that you can use timers.

+
+

Hey Mycroft, set a timer for 5 minutes.

+
+

Timer

+

You can always set more timers and even name them, so you know which timer is for what.

+
+

Hey, Mycroft, set another timer called rice cooking for 7 minutes.

+
+

Timers

+

Asking the weather

+

You can ask your device what the weather is or would be at any given time or place.

+
+

Hey Mycroft, what is the weather like today?

+
+

Weather 1

+

The weather skill actually uses multiple pages indicated by the small dots at the bottom of the screen.

+

Weather 2

+

There are more skills installed; just try them. If you don't get the response you expected, see the section on installing new skills.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_skills_config/index.html b/ht_skills_config/index.html new file mode 100644 index 00000000..9773f94c --- /dev/null +++ b/ht_skills_config/index.html @@ -0,0 +1,302 @@ + + + + + + + + Configuring - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • Skills »
  • + +
  • +
  • +
+
+
+
+
+ +

Skill Configuration

+

Each skill will have its own config file usually located at ~/.local/share/mycroft/skills/<skill_id>/settings.json

+

Skill settings provide the ability for users to configure a Skill using the command line or a web-based interface.

+

This is often used to:

+
    +
  • Change default behaviors - such as the sound used for users alarms.
  • +
  • Authenticate with external services - such as Spotify
  • +
  • Enter longer data as text rather than by voice - such as the IP address and token of the user's Home Assistant server.
  • +
+

Skill settings are completely optional.

+

Refer to each skill repository for valid configuration values.
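
+

As an illustration, settings.json is a flat JSON file whose keys are defined by each skill; the keys below are made-up examples (only __mycroft_skill_firstrun is common to all skills):

+
{
+    "__mycroft_skill_firstrun": false,
+    "alarm_sound": "chimes",
+    "api_token": "xxxx"
+}
+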

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_skills_install/index.html b/ht_skills_install/index.html new file mode 100644 index 00000000..d24e2ed5 --- /dev/null +++ b/ht_skills_install/index.html @@ -0,0 +1,367 @@ + + + + + + + + Installing Skills - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • Skills »
  • + +
  • +
  • +
+
+
+
+
+ +

How do I - Installing Skills

+

This section will help you to understand what a skill is and how to install and use skills with OVOS.

+

Finding Skills

+

OVOS official skills can be found on PyPi and the latest stable version can be installed with a pip install command.

+

pip install ovos-skill-naptime

+

If you have issues installing with this command, you may need to use the alpha versions. Pip has a command line flag for this: --pre.

+

pip install --pre ovos-skill-naptime

+

will install the latest alpha version. This should fix dependency issues with the stable versions.

+

Most skills are found throughout GitHub. The official skills can be found with a simple search of the OVOS GitHub page. There are a few other places they can be found: Neon AI has several skills, and a search through GitHub will surely find more.

+

Installing a found skill

+

There are a few ways to install skills in OVOS. The preferred way is with pip and a setup.py file.

+

pip install

+

The preferred method is with pip. If a skill has a setup.py file, it can be installed this way.

+

The syntax is pip install git+<github/repository.git>.

+

ex. pip install git+https://github.com/OpenVoiceOS/skill-ovos-date-time.git should install the ovos-date-time skill.

+

Skills can be installed from a local file also.

+

Clone the repository.

+

git clone https://github.com/OpenVoiceOS/skill-ovos-date-time

+

pip install ./skill-ovos-date-time

+

After installing skills this way, the ovos-skills service needs to be restarted.

+

systemctl --user restart ovos-skills

+

git install

+

This is NOT the preferred method and is here for backward compatibility with the original mycroft-core skills.

+

Skills can also be directly cloned to the skill directory, usually located at ~/.local/share/mycroft/skills/.

+

Enter the skills directory.

+

cd ~/.local/share/mycroft/skills

+

and clone the found skill here with git.

+

git clone <github/repository.git>

+

ex. git clone https://github.com/OpenVoiceOS/skill-ovos-date-time.git will install the ovos-date-time skill.

+

A restart of the ovos-skills service is not required when installing this way.

+

Deprecated

+

The OVOS skills manager is in need of some love, and when official skill stores are created, this will be updated to use the new methods. Until then, this method is NOT recommended, and NOT supported. The following is included for reference only.

+

OVOS skills manager

+

Install skills from any appstore!

+

A mycroft-skills-manager alternative that is not vendor locked; this means you must use it responsibly!

+

Do not install random skills; different appstores have different policies!

+

Keep in mind any skill you install can modify mycroft-core at runtime, and very likely has root access if you are running on a Raspberry Pi.

+

Supported stores

+
    +
  • OVOS - this one is really a proof of concept for now, stay tuned!
  • +
  • Mycroft Marketplace - the official mycroft skills store, all skills are reviewed by humans!
  • +
  • Pling - the official plasma bigscreen skills store, skills are accepted by default and only removed if flagged as malicious
  • +
  • Andlo's skill list (https://github.com/andlo/mycroft-skills-list-gitbook/tree/master/skills) - not a real appstore; this is a web-scraped, automatically generated list of 900+ skills from all over GitHub. There is no review at all, so it may include malicious skills
  • +
+

OpenVoiceOS Skill Manager

+
pip install ovos-skills-manager
+
+

Enable a skill store

+
osm enable --appstore [ovos|mycroft|pling|andlo|all]
+
+

Search for a skill and install it

+
osm install --search
+
+

See more osm commands

+
osm --help
+osm install --help
+
+

More Information

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_stt/index.html b/ht_stt/index.html new file mode 100644 index 00000000..69d75296 --- /dev/null +++ b/ht_stt/index.html @@ -0,0 +1,359 @@ + + + + + + + + STT - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • Listener »
  • + +
  • +
  • +
+
+
+
+
+ +

How do I - STT

+

STT (Speech to Text) is what converts your voice into text that OVOS recognizes, which is then converted to an intent used to activate skills.

+

There are several STT engines available and OVOS uses ovos-stt-plugin-server and a list of public servers hosted by OVOS community members by default.
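
+

Switching engines is just a configuration change; a minimal sketch for ~/.config/mycroft/mycroft.conf is shown below (see STT Configuration for a fuller example):

+
{
+    "stt": {
+        "module": "ovos-stt-plugin-server"
+    }
+}
+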

+

List of STT plugins

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin | Offline | Type
ovos-stt-plugin-vosk | yes | FOSS
ovos-stt-plugin-chromium | no | API (free)
neon-stt-plugin-google_cloud_streaming | no | API (key)
neon-stt-plugin-scribosermo | yes | FOSS
neon-stt-plugin-silero | yes | FOSS
neon-stt-plugin-polyglot | yes | FOSS
neon-stt-plugin-deepspeech_stream_local | yes | FOSS
ovos-stt-plugin-selene | no | API (free)
ovos-stt-plugin-http-server | no | API (self hosted)
ovos-stt-plugin-pocketsphinx | yes | FOSS
+

Advanced Usage

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_stt_config/index.html b/ht_stt_config/index.html new file mode 100644 index 00000000..299a5f88 --- /dev/null +++ b/ht_stt_config/index.html @@ -0,0 +1,303 @@ + + + + + + + + STT Configuration - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

STT Configuration

+

Several STT engines have different configuration settings for optimizing their use; for example, you can specify a specific STT server to use.

+

We will cover basic configuration of the default STT engine ovos-stt-plugin-server

+

All changes will be made in the User configuration file, eg. ~/.config/mycroft/mycroft.conf

+

Open the file for editing. nano ~/.config/mycroft/mycroft.conf

+

If your file is empty, or does not have a "stt" section, you need to create it. Add this to your config

+
{
+    "stt": {
+        "module": "ovos-stt-plugin-server",
+        "fallback_module": "ovos-stt-plugin-vosk",
+        "ovos-stt-plugin-server": {
+            "url": "https://fasterwhisper.ziggyai.online/stt"
+        },
+        "ovos-stt-plugin-vosk": {}
+    }
+}
+
+

By default, the language configured in OVOS will be used; eventually it should (WIP) detect the spoken language and convert it as necessary.

+

Sections explained

+

"module" - This is where you specify what STT plugin to use.

+

"fallback_module" - If by chance your first STT engine fails, OVOS will try to use this one. It is usually configured to use an on-device engine so that you always have some output even if you are disconnected from the internet.

+

"ovos-stt-plugin-server"

+

"ovos-stt-plugin-vosk" - Specify specific plugin settings here. Multiple entries are allowed. If an empty dict is provided, {}, the plugin will use its default values.

+

Refer to the STT plugin GitHub repository for specifications on each plugin

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/ht_tts/index.html b/ht_tts/index.html new file mode 100644 index 00000000..ea9f60d9 --- /dev/null +++ b/ht_tts/index.html @@ -0,0 +1,418 @@ + + + + + + + + Introduction - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • TTS »
  • + +
  • +
  • +
+
+
+
+
+ +

How do I - TTS

+

TTS plugins are responsible for converting text into audio for playback. Several options are available each with different attributes and supported languages. Some can be run on device, others need an internet connection to work.

+

As with most OVOS packages, the TTS plugins are available on PyPi and can be installed with pip install

+

pip install ovos-tts-plugin-piper

+

will install the latest stable version. If there are installation errors, you can install the latest alpha versions of the plugins.

+

pip install --pre ovos-tts-plugin-piper

+

By default, OVOS uses ovos-tts-server-plugin and a series of public TTS servers, provided by OVOS community members, to send speech to your device. If you host your own TTS server, or this option is not acceptable to you, there are many other options to use.

+

List of TTS plugins

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin | Offline | Type
ovos-tts-server-plugin | no | API (self hosted)
ovos-tts-plugin-mimic3 | yes | FOSS
ovos-tts-plugin-mimic | yes | FOSS
ovos-tts-plugin-mimic2 | no | API (free)
ovos-tts-plugin-marytts | no | API (self hosted)
neon-tts-plugin-larynx_server | no | API (self hosted)
ovos-tts-plugin-pico | yes | FOSS
neon-tts-plugin-glados | yes | FOSS
neon-tts-plugin-mozilla_local | yes | FOSS
neon-tts-plugin-polly | no | API (key)
ovos-tts-plugin-voicerss | no | API (key)
ovos-tts-plugin-google-TX | no | API (free)
ovos-tts-plugin-responsivevoice | no | API (free)
neon-tts-plugin-mozilla_remote | no | API (self hosted)
neon-tts-plugin-tacotron2 | yes | FOSS
ovos-tts-plugin-espeakNG | yes | FOSS
ovos-tts-plugin-cotovia | yes | FOSS
ovos-tts-plugin-catotron | no | API (self hosted)
ovos-tts-plugin-softcatala | no | API (self hosted)
ovos-tts-plugin-SAM | yes | Abandonware
ovos-tts-plugin-beepspeak | yes | Fun
+

Advanced Usage

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_tts_config/index.html b/ht_tts_config/index.html new file mode 100644 index 00000000..ff3427c6 --- /dev/null +++ b/ht_tts_config/index.html @@ -0,0 +1,450 @@ + + + + + + + + Configuring - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • TTS »
  • + +
  • +
  • +
+
+
+
+
+ +

TTS Configuration

+

TTS plugins are responsible for converting text into audio for playback.

+

List of TTS Plugins

+

Advanced TTS Plugin Documentation

+

Several TTS engines have different configuration settings for optimizing their use. Several have different voices to use, or you can specify a TTS server to use.

+

We will cover basic configuration of the default TTS engine ovos-tts-server-plugin.

+

All changes will be made in the User configuration file, eg. ~/.config/mycroft/mycroft.conf.

+

Open the file for editing. nano ~/.config/mycroft/mycroft.conf.

+

If your file is empty, or does not have a "tts" section, you need to create it. Add this to your config

+
{
+    "tts": {
+        "module": "ovos-tts-server-plugin",
+        "fallback_module": "ovos-tts-plugin-piper",
+        "ovos-tts-server-plugin": {
+            "host": "https://pipertts.ziggyai.online",
+            "voice": "alan-low"
+        },
+        "ovos-tts-plugin-piper": {}
+    }
+}
+
+

Sections explained

+

"module" - This is where you specify what TTS plugin to use.
+- ovos-tts-server-plugin in this example.
+- This plugin, by default, uses a random selection of public TTS servers provided by the OVOS community. With no "host" provided, one of those will be used.
+- You can still change your voice without changing the "host". The default voice is "alan-low", the original Mycroft voice (Alan Pope).

+

Changing your assistant's voice

+

"fallback_module" +- If by chance your first TTS engine fails, OVOS will try to use this one. It is usually configured to use an on device engine so that you always have some output even if you are disconnected from the internet.

+

"ovos-tts-server-plugin"

+

"ovos-tts-plugin-piper" +- Specify specific plugin settings here. Multiple entries are allowed. If an empty dict is provided, {}, the plugin will use its default values.

+

Refer to each TTS plugin's GitHub repository for specifications on each plugin

+

List of TTS plugins

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin | Offline | Type
ovos-tts-server-plugin | no | API (self hosted)
ovos-tts-plugin-piper | yes | API (self hosted)
ovos-tts-plugin-mimic3 | yes | FOSS
ovos-tts-plugin-mimic | yes | FOSS
ovos-tts-plugin-mimic2 | no | API (free)
ovos-tts-plugin-marytts | no | API (self hosted)
neon-tts-plugin-larynx_server | no | API (self hosted)
ovos-tts-plugin-pico | yes | FOSS
neon-tts-plugin-glados | yes | FOSS
neon-tts-plugin-mozilla_local | yes | FOSS
neon-tts-plugin-polly | no | API (key)
ovos-tts-plugin-voicerss | no | API (key)
ovos-tts-plugin-google-TX | no | API (free)
ovos-tts-plugin-responsivevoice | no | API (free)
neon-tts-plugin-mozilla_remote | no | API (self hosted)
neon-tts-plugin-tacotron2 | yes | FOSS
ovos-tts-plugin-espeakNG | yes | FOSS
ovos-tts-plugin-cotovia | yes | FOSS
ovos-tts-plugin-catotron | no | API (self hosted)
ovos-tts-plugin-softcatala | no | API (self hosted)
ovos-tts-plugin-SAM | yes | Abandonware
ovos-tts-plugin-beepspeak | yes | Fun
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_vad/index.html b/ht_vad/index.html new file mode 100644 index 00000000..7cd75e6b --- /dev/null +++ b/ht_vad/index.html @@ -0,0 +1,292 @@ + + + + + + + + VAD - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • Listener »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS Listener - VAD

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/ht_ww/index.html b/ht_ww/index.html new file mode 100644 index 00000000..11b849e7 --- /dev/null +++ b/ht_ww/index.html @@ -0,0 +1,383 @@ + + + + + + + + WakeWord/HotWords - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • How To »
  • +
  • Listener »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS Listener - WakeWords / HotWords

+

OVOS uses "wakewords" to activate the system. This is what "hey Google" or "Alexa" is on proprietary devices. By default, OVOS uses the WakeWord "hey Mycroft".

+

OVOS "hotwords" is the configuration section that specifies what each WakeWord does. Multiple "hotwords" can be used to do a variety of things, from putting OVOS into active listening mode (a WakeWord like "hey Mycroft") to issuing a command such as "stop" or "wake up".

+

As with everything else, this too can be changed, and several plugins are available. Some work better than others.

+

List of OVOS WakeWord Plugins

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin | Type | Description
ovos-ww-plugin-precise-lite | Model | The most accurate plugin available, as it uses pretrained models; community models are also available
ovos-ww-plugin-openWakeWord | Model | Uses openWakeWord for detection
ovos-ww-plugin-vosk | Full Word | Uses full word detection from a loaded model
ovos-ww-plugin-pocketsphinx | Phonemes | Probably the least accurate, but can be used on almost any device
ovos-ww-plugin-hotkeys | Model | Uses an input from a keyboard or button to emulate a wakeword being said. Useful for privacy, but not so much for a smart speaker
ovos-ww-plugin-snowboy | Model | Uses the snowboy wakeword engine
ovos-ww-plugin-nyumaya | Model | WakeWord plugin using Nyumaya
+

Configuration

+

The configuration for wakewords is in the "listener" section of mycroft.conf, and the configuration of hotwords is in the "hotwords" section of the same file.

+

This example will use the vosk plugin and change the wake word to "hey Ziggy".

+

Add the following to your ~/.config/mycroft/mycroft.conf file.

+
{
+    "listener": {
+        "wake_word": "hey_ziggy"
+    },
+    "hotwords": {
+        "hey_ziggy": {
+            "module": "ovos-ww-plugin-vosk",
+            "listen": true,
+            "active": true,
+            "sound": "snd/start_listening.wav",
+            "debug": false,
+            "rule": "fuzzy",
+            "lang": "en",
+            "samples": [
+                "hey ziggy",
+                "hay ziggy"
+                ]
+        }
+    }
+}
+
+

Sections explained

+

The most important section is "wake_word": "hey_ziggy" in the "listener" section.

+

This tells OVOS what the default wakeword should be.

+

In the "hotwords" section, "active": true is only used if multiple wakewords are being used. By default, whatever wake_word is set in the listener section is automatically set to true.

+

If you want to disable a wakeword, you can set this to false.

+

If enabling a wakeword, be sure to also set "listen": true.

+

Multiple hotwords can be configured at the same time, even the same word with different plugins. This allows more accurate plugins to be tried before the less accurate ones, but only if the plugin is installed; see the sketch below.
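
+

For example, the same wake word could be declared twice, once with the more accurate precise-lite plugin and once with vosk as a fallback. This sketch assumes both plugins are installed and that you have a trained model file for precise-lite (the path is a placeholder):

+
{
+    "hotwords": {
+        "hey_ziggy_precise": {
+            "module": "ovos-ww-plugin-precise-lite",
+            "model": "~/hey_ziggy.tflite",
+            "listen": true
+        },
+        "hey_ziggy": {
+            "module": "ovos-ww-plugin-vosk",
+            "samples": ["hey ziggy", "hay ziggy"],
+            "listen": true
+        }
+    }
+}
+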

+

Advanced WakeWords/HotWords

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/images/index.html b/images/index.html new file mode 100644 index 00000000..b17dd8bf --- /dev/null +++ b/images/index.html @@ -0,0 +1,688 @@ + + + + + + + + OpenVoiceOS vs Neon A.I. vs Mycroft A.I. - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS vs Neon A.I. vs Mycroft A.I.

+

Ready to go images compared

+

OpenVoiceOS ready-to-use images come in two flavours: the Buildroot version, being the minimal consumer type of image, and the Manjaro version, being the full distribution that is easier to develop on.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Feature | OpenVoiceOS (Buildroot) | OpenVoiceOS (Manjaro) | Neon AI | Mark II (Dinkum) | Mycroft A.I. (PiCroft)

Software - Architecture
Core | ovos-core | ovos-core | neon-core | Dinkum | mycroft-core
GUI | ovos-shell (mycroft-gui based) | ovos-shell (mycroft-gui based) | ovos-shell (mycroft-gui based) | plasma-nano (mycroft-gui based) | N/A
Services | systemd user session | systemd system session | systemd system session | systemd system session | N/A

Hardware - Compatibility
Raspberry Pi | 3/3b/3b+/4 | 4 | 4 | Mark II (only) | 3/3b/3b+/4
X86_64 | planned | No | WIP | No | No
Virtual Appliance | planned | No | Unknown | No | No
Docker | No (possibly in future) | Yes | Yes | No | No
Mark-1 | Yes (WIP) | No | No | No | No
Mark-2 | Yes (Dev-Kit, Retail WIP) | Yes (Dev-Kit, Retail) | Yes (Dev-Kit, Retail) | Yes (Retail ONLY) | No

Hardware - Peripherals
ReSpeaker | 2-mic, 4-mic squared, 4-mic linear, 6-mic | 2-mic, 4-mic squared, 4-mic linear, 6-mic | Unknown | No | Yes (manual installation?)
USB | Yes | Yes | Unknown | No | Yes (manual installation)
SJ-201 | Yes | Yes | Yes | Yes | No (sandbox image maybe)
Google AIY v1 | Yes (manual configuration) | Yes (manual installation) | Unknown | No | No (manual installation?)
Google AIY v2 | No (perhaps in the future) | Yes (manual installation) | Unknown | No | No (manual installation?)

Screen - GUI
GUI supported (showing a GUI if a screen is attached) | Yes (ovos-shell on eglfs) | Yes (ovos-shell on eglfs) | Yes (ovos-shell on eglfs) | Yes (plasma-nano on X11) | No

Network Setup - Options
Mobile WiFi Setup (easy device "hotspot" to connect to a preset network from phone or pad) | Yes | No | No | Yes | No
On device WiFi Setup (configure the WiFi connection on the device itself) | Yes | Yes | Yes | No | No
On screen keyboard | Yes | Yes | Yes | Yes | No
Reconfigure network (easy way to change the network settings) | Yes | Yes | Yes | No | No

Configuration - Options
Data privacy | Yes | Yes | Yes | Partial | Partial
Offline mode | Yes | Yes | Yes | No | No
Color theming | Yes | Yes | Yes | No | No
Non-Pairing mode | Yes | Yes | Yes | No | No
API Access w/o pairing | Yes | Yes | Yes | No | No
On-Device configuration | Yes | Yes | Yes | No | No
Online configuration | Dashboard (WIP) | Dashboard (WIP) | WIP | Yes | Yes

Customization
Open Build System | Yes | Yes | Yes | Partial (build tools are not public) | Yes
Package manager | No (no buildtools available; perhaps opkg in the future) | Yes (pacman) | Yes | Yes (limited because of read-only filesystem) | Yes

Updating
Update mechanism(s) | pip (in the future: firmware updates, on-device and Over The Air) | pip, package manager | plugin-based update mechanism, OS updates WIP | OTA (controlled by Mycroft) | pip, package manager

Voice Assistant - Functionality
STT - On device | Yes (Kaldi/Vosk-API, WhisperCPP WIP, Whisper TFlite WIP) | Yes (Kaldi/Vosk-API) | Yes (Vosk, Deepspeech) | Yes (Vosk, Coqui) | No
STT - On premises | Yes (OVOS STT Server, any plugin) | Yes (OVOS STT Server, any plugin) | Yes (OVOS STT Server, any plugin) | No | No
STT - Cloud | Yes (OVOS Server Proxy, Google, more...?) | Yes (OVOS Server Proxy, Google) | Yes (Google) | Yes (Selene Google Cloud Proxy) | Yes (Selene Google (Chromium) Proxy)
TTS - On device | Yes (Mimic 1, more...?) | Yes (Mimic 1, more...?) | Yes (Mimic 1, Mimic 3, Coqui) | Yes (Mimic 3) | Yes (Mimic 1)
TTS - On premises | Yes (?) | Yes (?) | Yes (Coqui, Mozilla, Larynx) | No | No
TTS - Cloud | Yes (Google, Mimic 2, Mimic 3, more...?) | Yes (Google, Mimic 2, Mimic 3, more...?) | Yes (Amazon Polly) | No | No

Smart Speaker - Functionality
Music player connectivity (the use of external applications on other devices to connect to your device) | Yes (Airplay, Spotifyd, Bluetooth, Snapcast, KDE Connect) | Unknown | Unknown | Yes (MPD, local files) | No (manual installation?)
Music player sync | Yes (OCP, MPRIS) | Yes (OCP, MPRIS) | Yes (OCP, MPRIS) | No | No
HomeAssistant integration | unknown | Yes (HomeAssistant PHAL Plugin, WIP) | Mycroft skill reported working | unknown | unknown
Camera support | Yes | WIP | Yes | unknown | unknown
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/images_buildroot/index.html b/images_buildroot/index.html new file mode 100644 index 00000000..223c10a9 --- /dev/null +++ b/images_buildroot/index.html @@ -0,0 +1,422 @@ + + + + + + + + Buildroot - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Images »
  • + +
  • +
  • +
+
+
+
+
+ +

Open Voice Operating System - Buildroot Edition

+

Auto detection and configuration of HATs

+

The Buildroot OpenVoiceOS edition is considered to be a consumer-friendly type of device, or as Mycroft A.I. would call it, a retail version. However, as we do not target a specific hardware platform and would like to support custom-made systems, we are implementing a smart way to detect and configure different types of Raspberry Pi HATs.

+

At boot the system scans the I2C bus for known and supported HATs and, if found, configures the underlying Linux sound system. At the moment this is still very much in development; however, the HATs below are, or should soon be, supported by this system:
+- ReSpeaker 2-mic HAT
+- ReSpeaker 4-mic Square HAT
+- ReSpeaker 4-mic linear / 6-mic HAT
+- USB devices such as the PS3 Eye
+- SJ-201 Dev Kits
+- SJ-201 Mark2 retail device

+

Snapcast Client & Server

+

TODO - write docs

+

Remote shared folder access (SMB - Windows)

+

Your OpenVoiceOS device is accessible over the network from your Windows computer. This is still a work in progress, but you can open a file explorer and navigate to your OpenVoiceOS device.
+
+At the moment the following directories within the user's home directory are shared over the network:
+- Documents
+- Music
+- Pictures
+These folders are also used by the KDE Connect file transfer plugins and, for instance, the Camera skill (Hey Mycroft, take a selfie) and/or the Homescreen skill (Hey Mycroft, take a screenshot)

+

Remote shared folder access (NFS - Linux)

+

In the near future the above Windows network shares will also be made available over NFS for Linux clients. This is still a Work In Progress / To Do item.

+

Development.

+

At this moment development is in very early stages and focused on the Raspberry Pi 3B & 4. As soon as an initial workable version is created, other hardware might be added.

+

Source code: https://github.com/OpenVoiceOS/ovos-buildroot

+

Build Environment

+

Only use x86_64-based architecture / hardware to build the image.

+

The following example build environment has been tested:

+
    +
  • Architecture: x86_64
  • +
• Hardware: Intel Core i5 processor, 8GB RAM, 240GB SSD (you can build with less RAM (2GB) and slower storage, but more RAM and faster storage mean quicker image building)
  • +
  • OS: Ubuntu 22.04 LTS desktop
  • +
+

Installing System Build Dependencies

+

The following system packages are required to build the image (an example install command follows the list):

+
    +
  • gcc
  • +
  • subversion
  • +
  • qttools5-dev
  • +
  • qttools5-dev-tools
  • +
  • python
  • +
  • git
  • +
  • make
  • +
  • g++
  • +
  • curl
  • +
  • wget
  • +
  • qtdeclarative5-dev
  • +
+
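On a Debian/Ubuntu build host this could look like the following (a sketch; package names vary by distro, and on Ubuntu 22.04 the 'python' entry is typically provided by python3 / python-is-python3):

sudo apt install gcc g++ make git curl wget subversion python3 qttools5-dev qttools5-dev-tools qtdeclarative5-dev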

The following firewall ports need to be allowed to the internet.

+

In addition to the usual http/https ports (tcp 80, tcp 443), a couple of other ports need to be allowed to the internet: +- tcp 9418 (git). +- tcp 21 (ftp PASV) and random ports for the DATA channel. This can be optional, but it is better to have this allowed along with the corresponding random data channel ports (knowledge of firewalls required).
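With ufw, for example, the outbound git port could be opened like this (a sketch; adapt to whatever firewall you actually run):

sudo ufw allow out 9418/tcp comment 'git protocol'
sudo ufw allow out 21/tcp comment 'ftp control channel'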

+

Getting the code.

+

First, get the code on your system! The simplest method is via git. +
+- cd ~/ +- git clone --recurse-submodules https://github.com/OpenVoiceOS/OpenVoiceOS.git +- cd OpenVoiceOS

+

Patching Buildroot.

+

(ONLY at the first clean checkout/clone) If this is the very first time you are going to build an image, you need to execute the following command once; +
+- ./scripts/br-patches.sh +
+This will patch the Buildroot packages.

+

Building the image.

+

Building the image(s) can be done by utilizing a proper Makefile; +
+To see the available commands, just run: 'make help' +
+As an example, to build the rpi4 version:
+- make clean +- make rpi4_64-gui-config +- make rpi4_64-gui

+

Now grab a cup of coffee, go for a walk, sleep, and repeat, as the build process takes a long time, pulling everything from source and cross-compiling everything for the device. The qtwebengine package in particular takes a LONG time. +
+(At the moment there is an outstanding issue which prevents the build from running completely to the end. The plasma-workspace package will error out, not finding libGLESv2 properly linked within Qt5Gui. When the build stops because of this error, edit the following file; +

+buildroot/output/host/aarch64-buildroot-linux-gnu/sysroot/usr/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake +

+at the bottom of the file replace this line; +

+_qt5gui_find_extra_libs(OPENGL "GLESv2" "" "") +

And replace it with this line:

+_qt5gui_find_extra_libs(OPENGL "${CMAKE_SYSROOT}/usr/lib/libGLESv2.so" "" "${CMAKE_SYSROOT}/usr/include/libdrm") +

+Then you can continue the build process by re-running the "make rpi4_64-gui" command. (DO NOT run "make clean" and/or "make rpi4_64-gui-config" again, or you will start from scratch !!!) +
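If you prefer to script the fix, the same substitution could be applied with sed (a sketch using the exact strings above; run it from the OpenVoiceOS checkout and double-check the file afterwards):

sed -i 's|_qt5gui_find_extra_libs(OPENGL "GLESv2" "" "")|_qt5gui_find_extra_libs(OPENGL "${CMAKE_SYSROOT}/usr/lib/libGLESv2.so" "" "${CMAKE_SYSROOT}/usr/include/libdrm")|' buildroot/output/host/aarch64-buildroot-linux-gnu/sysroot/usr/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake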

+When everything goes well, the xz-compressed image will be available within the release directory.

+

Booting image from sd card for the first time (setting up Wi-Fi and backend).

+

1. Ensure all required peripherals (mic, speakers, HDMI, USB mouse, etc.) are plugged in before powering on your RPI4 for the first time. +
+2. Skip this step if your RPI4 is using an ethernet cable. Once powered on, the screen will present the Wifi setup screen (a Wifi HotSpot is created). Connect to the Wifi HotSpot (ssid OVOS) from another device and follow the on-screen instructions to set up Wifi. +
+3. Once Wifi is set up, a choice between the Mycroft backend and the Local backend is presented. Choose the Mycroft backend for now and follow the on-screen instructions; the Local backend is not ready to use yet. After the pairing process has completed and skills have downloaded, it's time to test / use it.

+

Accessing the CLI.

+
    +
• SSH to the IP address of the RPI4 (an example follows below)
  • +
  • default credentials 'mycroft/mycroft'
  • +
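For example, assuming your router handed the device 192.168.1.50 (this address is only an illustration; check your DHCP leases for the real one):

ssh mycroft@192.168.1.50   # password: mycroft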
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/images_intro/index.html b/images_intro/index.html new file mode 100644 index 00000000..f819ff8d --- /dev/null +++ b/images_intro/index.html @@ -0,0 +1,292 @@ + + + + + + + + Introduction - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Images »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Image Options

+

** Coming soon **

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/img/favicon.ico b/img/favicon.ico new file mode 100644 index 00000000..e85006a3 Binary files /dev/null and b/img/favicon.ico differ diff --git a/img/phal_or_skill.png b/img/phal_or_skill.png new file mode 100644 index 00000000..3a3123bc Binary files /dev/null and b/img/phal_or_skill.png differ diff --git a/index.html b/index.html new file mode 100644 index 00000000..d629e8e8 --- /dev/null +++ b/index.html @@ -0,0 +1,320 @@ + + + + + + + + Introduction - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Home »
  • + +
  • +
  • +
+
+
+
+
+ +

The OpenVoiceOS Project Documentation

+

+

The OVOS project documentation is written and maintained by users just like you!

+

These documents are your starting point for installing and using OpenVoiceOS software

+

Note: some sections may be incomplete or outdated

+

Please open Issues and Pull Requests!

+

Just want to get started with OVOS?

+

Check out our Quick Start Guide for help with installing an image, your first boot, and basic configuration.

+

Getting Involved

+

If this is your first experience with OpenVoiceOS, or you're not sure where to get started, +say hi in OpenVoiceOS Chat and a team member would be happy to mentor you. +Join the Discussions for questions and answers.

+ + +

The below links are in the process of being deprecated.

+ + +
+
+ +
+
+ +
+ +
+ +
+ + + + + Next » + + +
+ + + + + + + + + diff --git a/install_gui/index.html b/install_gui/index.html new file mode 100644 index 00000000..ef47c1c5 --- /dev/null +++ b/install_gui/index.html @@ -0,0 +1,327 @@ + + + + + + + + Install GUI - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Installation Options »
  • + +
  • +
  • +
+
+
+
+
+ +

Installing OVOS-GUI

+

The GUI is a totally optional component of OVOS, but it adds a ton more functionality to the device. Some skills will not work without one.

+

About the GUI

+

The OVOS GUI is an independent component of OVOS which uses QT5/6 to display information on your device's screen. It is touchscreen compliant 1, and has an on-screen keyboard for entering data. On a Raspberry Pi, the GUI runs in a framebuffer, so it does not need a full window manager. This saves resources on underpowered devices.

+

mycroft-gui-qt5 is a fork of the original mycroft-gui

+

mycroft-gui-qt6 is in the works, but not all skills support it yet.

+

Installing the GUI

+

The GUI software comes with a nice script which will install the needed packages for you.

+

To get the software we will use git and the dev_setup.sh script that is provided.

+
cd ~
+git clone https://github.com/OpenVoiceOS/mycroft-gui-qt5
+cd mycroft-gui-qt5
+bash dev_setup.sh
+
+

NOTE The mycroft-gui is NOT a python script, therefore it will not run in the venv created for the rest of the software stack.

+

That's it !!

+

That is all it takes to install the GUI for OVOS. Invoke the GUI with the command:

+

ovos-gui-app

+

You can refer to the README in the mycroft-gui-qt5 repository for more information

+

Starting the GUI

+

Footnotes

+

1

+

It has been my experience that while the touchscreen will work with OVOS, some screens have the touch matrix inverted relative to what is displayed. With one of these screens, it is still possible to use it, but you will need a full window manager installed instead of the GUI running in a framebuffer.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/install_intro/index.html b/install_intro/index.html new file mode 100644 index 00000000..e5a25bb7 --- /dev/null +++ b/install_intro/index.html @@ -0,0 +1,324 @@ + + + + + + + + Introduction - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Installation Options »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS

+

OpenVoiceOS is an open source platform for smart speakers and other voice-centric devices.

+

ovos-core is a backwards-compatible descendant of Mycroft-core, the central component of Mycroft. It contains extensions and features not present upstream.

+

All Mycroft Skills and Plugins should work normally with OVOS-core.

+

ovos-core is fully modular. Furthermore, common components have been repackaged as plugins. That means it isn't just a great assistant on its own, but also a pretty small library!

+

Getting Started

+

There are a couple of ways to install and use the OVOS ecosystem.

+

Prebuilt Images

+

The easiest and fastest way to experience what OVOS has to offer is to use one of the prebuilt images that the OVOS team has provided.

+

NOTE Images are currently only available for the RPi 3b/b+/4. More may be on the way.

+
    +
  • Buildroot-ovos
  • +
  • The most complete and advanced image OVOS provides, complete with a default set of skills and a GUI.
  • +
  • raspbian-ovos
  • +
• The newest image from the OVOS team. This is a "headless" image (NO GUI), but it also comes with a preinstalled set of skills. This image will provide you with an experience similar to the original picroft
  • +
+

Get started with an image

+

From source as a library

+

Images are not the only way to use OVOS. It can be installed on almost any system as a set of Python libraries. ovos-core is very modular; depending on where you are running ovos-core, you may want to run only a subset of the services.

+

This is an advanced setup; it requires access to a command shell and can take more effort to get working.

+

Get started with OVOS libraries

+

Docker

+

Docker images are also available and have been tested and working on Linux, Windows, and even Mac.

+

Get started with OVOS Docker

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/install_ovos_core/index.html b/install_ovos_core/index.html new file mode 100644 index 00000000..05348227 --- /dev/null +++ b/install_ovos_core/index.html @@ -0,0 +1,407 @@ + + + + + + + + Install as a Library - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Installation Options »
  • + +
  • +
  • +
+
+
+
+
+ +

Installing OVOS

+

The OVOS ecosystem is very modular, depending on where you are running ovos-core you may want to run only a subset of the services.

+

By default ovos-core only installs the minimum components common to all services, for the purposes of this document we will assume you want a full install with a GUI.

+

NOTE The GUI requires separate packages in addition to what is required by ovos-core. The GUI installation is covered in its own section.

+

Host system

+

OVOS requires some system dependencies; how to install them will depend on your distro.

+

Ubuntu/Debian based images.

+
sudo apt install build-essential python3-dev python3-pip swig libssl-dev libfann-dev portaudio19-dev libpulse-dev
+
+

A few packages are not strictly necessary, but are needed to install from source and may be required by some plugins. To add these packages, run this command.

+
sudo apt install git libpulse-dev cmake libncurses-dev pulseaudio-utils pulseaudio
+
+

NOTE: MycroftAI's dev_setup.sh does not exist in OVOS-core. See the community-provided, WIP manual_user_install for a minimal, almost complete, replacement.

+

Get started

+

We suggest you do this in a virtualenv.

+

Create and activate the virtual environment.

+
python -m venv .venv
+. .venv/bin/activate
+
+

Update pip and install wheel

+

pip install -U pip wheel

+

From PyPi

+

ovos-core

+

To install a full OVOS software stack with enough skills and plugins to have a working system, the OVOS team includes a subset of packages that can be installed automatically with pip.

+

It is recommended to use the latest alpha versions until the 0.1.0 release, as they contain all of the latest bug fixes and improvements.

+

latest stable

+

ovos-core 0.0.7 does not include the new extras [mycroft], so we use [all].

+

pip install ovos-core[all]

+

alpha version

+

pip install --pre ovos-core[mycroft]

+

This should install everything needed for a basic OVOS software stack.

+

There are additional extras options available other than [mycroft] and can be found in the ovos-core setup.py file.

+

Starting OVOS

+

Individual Modules

+

Each module can be installed independently to only include the parts needed or wanted for a specific system.

+

ovos-core

+

pip install --pre ovos-core

+

Starting Modules - core

+

ovos-messagebus

+

pip install --pre ovos-messagebus

+

Starting Modules - messagebus

+

ovos-audio

+

pip install --pre ovos-audio

+

Starting Modules - audio

+

dinkum-listener

+

pip install --pre ovos-dinkum-listener

+

Starting Modules - listener

+

ovos-phal

+

pip install --pre ovos-phal

+

Starting Modules - phal

+

From Source

+

We will use git to clone the repositories to a local directory. While not strictly necessary, we assume this to be the user's HOME directory.

+

ovos-core

+

Install ovos-core from github source files.

+

git clone https://github.com/OpenVoiceOS/ovos-core

+

The ovos-core repository provides extra requirements files. For the complete stack, we will use the mycroft.txt file.

+

pip install ~/ovos-core[mycroft]

+

This should install everything needed to use the basic OVOS software stack.

+

NOTE this also installs LGPL-licensed software.

+

Starting OVOS

+

Install individual modules

+

Some systems may not require a full install of OVOS. Luckily, it can be installed as individual modules.

+

core library

+

git clone https://github.com/OpenVoiceOS/ovos-core

+

pip install ~/ovos-core

+

This is the minimal library needed as the brain of the system. There are no skills, no messagebus, and no plugins installed yet.

+

Starting Core

+

messagebus

+

git clone https://github.com/OpenVoiceOS/ovos-messagebus

+

pip install ~/ovos-messagebus

+

This is the nervous system of OVOS needed for modules to talk to each other.

+

Starting the Messagebus

+

listener

+

OVOS has updated its listener to use ovos-dinkum-listener instead of ovos-listener. It is code from mycroft-dinkum adapted for use with the OVOS ecosystem. Previous listeners are still available, but not recommended.

+

git clone https://github.com/OpenVoiceOS/ovos-dinkum-listener

+

pip install ~/ovos-dinkum-listener

+

You now have what is needed for OVOS to use a microphone and its associated services: WakeWords, HotWords, and STT

+

Starting the listener

+

PHAL

+

The OVOS Plugin-based Hardware Abstraction Layer is used to allow the OVOS software to communicate with the underlying system and hardware, for example OS-level integrations or the Mycroft Mark 1 device.

+

The PHAL system consists of two interfaces.

+

ovos-phal is the basic interface that normal plugins would use.

+

ovos-admin-phal is used where superuser privileges are needed.

+

Be extremely careful when installing admin-phal plugins as they provide full control over the host system.

+

git clone https://github.com/OpenVoiceOS/ovos-PHAL

+

pip install ~/ovos-PHAL

+

This just installs the basic system that allows the plugins to work.

+

Starting PHAL

+

audio

+

This is the service used by OVOS to play all of the audio, whether a voice response or a stream from somewhere, such as music or a podcast.

+

It also installs OVOS Common Play, which can be used as a standalone media player and is required for OVOS audio playback.

+

git clone https://github.com/OpenVoiceOS/ovos-audio

+

pip install ~/ovos-audio

+

This will enable the default TTS (Text To Speech) engine for voice feedback from your OVOS device. However, plenty of alternative TTS engines are available.

+

Starting Audio

+

You now should have all of the separate components needed to run a full OVOS software stack.

+

Starting the OVOS software stack
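As a sketch of what that can look like (the console-script names below are assumptions — check each repository's setup.py for the exact entry points your installed versions provide), each service runs as its own process:

# each in its own terminal, or under a process supervisor such as systemd
ovos-messagebus        # the bus must come up first
ovos-audio             # audio playback / TTS
ovos-dinkum-listener   # microphone, wake word, STT
ovos-phal              # hardware abstraction
ovos-core              # skills service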

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/install_ovos_docker/index.html b/install_ovos_docker/index.html new file mode 100644 index 00000000..f399f692 --- /dev/null +++ b/install_ovos_docker/index.html @@ -0,0 +1,293 @@ + + + + + + + + Install with Docker - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Installation Options »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS Docker

+

WIP Check back soon

+

see ovos-docker/README

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/js/html5shiv.min.js b/js/html5shiv.min.js new file mode 100644 index 00000000..1a01c94b diff --git a/js/jquery-3.6.0.min.js b/js/jquery-3.6.0.min.js new file mode 100644 index 00000000..c4c6022f diff --git a/kde/index.html b/kde/index.html new file mode 100644 + + + + + + + + KDE Connect - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Additional Services »
  • + +
  • +
  • +
+
+
+
+
+ +

KDE Connect

+

KDE Connect is a multi-platform application developed by KDE which facilitates wireless communication and data transfer between devices over local networks; it is installed and configured by default on the Buildroot-based image.

+

A couple of features of KDE Connect are:

+
    +
  • Shared clipboard: copy and paste between your phone, computer and/or OpenVoiceOS device.
  • +
  • Share files and URLs instantly from one device to another including your OpenVoiceOS device.
  • +
  • Multimedia remote control: Use your phone, tablet or computer as a remote for what is playing on your OpenVoiceOS device.
  • +
  • Auto mute your OpenVoiceOS device when your mobile phone rings.
  • +
• Virtual touchpad / keyboard: Use your phone/tablet screen as a mouse and keyboard for your OpenVoiceOS device.
  • +
+

For the sake of simplicity, the screenshots below were made using the iPhone KDE Connect client; however, as it is not yet fully feature complete and / or stable, it is recommended to use the Android and / or Linux client, especially if you would like full MPRIS control of your OpenVoiceOS device.

+

On your mobile device, open the KDE Connect app and it will automatically see the advertised OpenVoiceOS KDE Connect device. +{ width=50% } +Click / Tap on "OpenVoiceOS-*" to start the pairing process. + +By clicking / tapping the pair button, a similar pop-up will appear on the screen of the OpenVoiceOS device. Clicking / tapping the pair button there as well finalises the pairing process, allowing your mobile device to automatically connect with your OpenVoiceOS device and make use of all the extra functionality that KDE Connect brings. +

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/license/index.html b/license/index.html new file mode 100644 index 00000000..6a8015d0 --- /dev/null +++ b/license/index.html @@ -0,0 +1,338 @@ + + + + + + + + License - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Appendix »
  • + +
  • +
  • +
+
+
+
+
+ +

License

+

We have a universal donor policy: our code should be able to be used anywhere by anyone, no ifs or conditions attached.

+

OVOS is predominantly Apache2 or BSD licensed. There are only a few exceptions to this, which are all licensed under other compatible open source licenses.

+

Individual plugins or skills may have their own license; for example, mimic3 is AGPL, so we can not change the license of our plugin.

+

We are committed to maintaining all core components fully free; any code whose license we do not control will live in an optional plugin and be flagged as such.

+

This includes avoiding LGPL code for reasons explained here.

+

Our license policy has the following properties:

+
    +
  • It gives you, the user of the software, complete and unrestrained access to the software, such that you may inspect, modify, and redistribute your changes
      +
    • Inspection - Anyone may inspect the software for security vulnerabilities
    • +
    • Modification - Anyone may modify the software to fix issues or add features
    • +
    • Redistribution - Anyone may redistribute the software on their terms
    • +
    +
  • +
  • It is compatible with GPL licenses - Projects licensed as GPL can be distributed with OVOS
  • +
  • It allows for the incorporation of GPL-incompatible free software, such as software that is CDDL licensed
  • +
+

The license does not restrict the software that may run on OVOS, however. Thanks to the plugin architecture, even traditionally tightly-coupled components such as drivers can be distributed separately, so maintainers are free to choose whatever license they like for their projects.

+

Notable licensing exceptions

+

The following repositories do not respect our universal donor policy; please ensure their licenses are compatible before you use them

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Repository | License | Reason
ovos-intent-plugin-padatious | Apache2.0 | padatious license might not be valid, depends on libfann2 (LGPL)
ovos-tts-plugin-mimic3 | AGPL | depends on mimic3 (AGPL)
ovos-tts-plugin-SAM | ? | reverse engineered abandonware
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/members/index.html b/members/index.html new file mode 100644 index 00000000..8025049c --- /dev/null +++ b/members/index.html @@ -0,0 +1,331 @@ + + + + + + + + OVOS Members - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Appendix »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS Team Members

+

JarbasAI
+Daniel McKnight
+j1nx
+forslund
+ChanceNCounter
+5trongthany
+builderjer
+goldyfruit
+mikejgray
+emphasize
+dscripka

+

Members hosting services

+

goldyfruit

+

use with ovos-tts-server-plugin

+

Mimic1 TTS
+Mimic3 TTS
+Piper TTS

+

builderjer

+

use with ovos-tts-server-plugin

+

Mimic3 TTS
+Piper TTS

+

use with ovos-stt-server-plugin

+

Faster Whisper STT

+

use with ovos-tts-plugin-mimic3-server

+

Mimic3 TTS

+

OVOS

+

use with ovos-stt-server-plugin

+

Faster Whisper STT

+

BlueSystems

+

use with ovos-tts-plugin-mimic3-server

+

Mimic3 TTS

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + +
+ + + + + + + + + diff --git a/mycroft/index.html b/mycroft/index.html new file mode 100644 index 00000000..5b232bd7 --- /dev/null +++ b/mycroft/index.html @@ -0,0 +1,596 @@ + + + + + + + + Mycroft - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Other Builds »
  • + +
  • +
  • +
+
+
+
+
+ +

ovos-core vs mycroft-core

+

Speech Client

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Feature | Mycroft | OVOS | Description
Wake Word (listen) | yes | yes | Only transcribe speech (STT) after a certain word is spoken
Wake Up Word (sleep mode) | yes | yes | When in sleep mode only listen for "wake up" (no STT)
Hotword (bus event) | no | yes | Emit bus events when a hotword is detected (no STT)
Multiple Wake Words | no | yes | Load multiple hotword engines/models simultaneously
Fallback STT | no | yes | fallback STT if the main one fails (eg, internet outage)
Instant Listen | no | yes | Do not pause between wake word detection and recording start
Hybrid Listen | no | WIP | Do not require wake word for follow up questions
Continuous Listen | no | WIP | Do not require wake word, always listen using VAD
Recording mode | no | WIP | Save audio instead of processing speech
Wake Word Plugins | yes | yes | Supports 3rd party integrations for hotword detection
STT Plugins | yes | yes | Supports 3rd party integrations for STT
VAD plugins | no * | yes | Supports 3rd party integrations for voice activity detection
+

NOTES:

+
    +
  • HiveMind Voice Satellite uses ovos-core and supports the same + features
  • +
  • Pyaudio has a bug in python 3.10, you may need to use this fork (ovos-core and + mk2 only)
  • +
  • VAD is supported in mycroft mark2 branch, but is hardcoded for silero
  • +
  • Sleep mode loop has been rewritten in ovos-core and is much more + responsive than mycroft
  • +
  • Mic handling logic has been ported from mk2 branch and is much + more responsive than mycroft dev branch
  • +
  • Instant / Hybrid / Continuous listen settings are experimental, good microphone and AEC are highly recommended (such + as a mark2)
  • +
• in ovos-core this functionality has + been refactored and moved to the + new mycroft.listener module
  • +
+

Audio

+ + + + + + + + + + + + + + + + + +
Feature | Mycroft | OVOS | Description
MPRIS integration | no | yes | Integrate with MPRIS protocol
+

NOTES:

+
    +
  • OCP can be used with mycroft-core, but not mk2
  • +
  • OCP can be controlled via MPRIS, e.g. KDEConnect
  • +
  • OCP can control MPRIS enabled players, e.g. spotify
  • +
+

Skills

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Feature | Mycroft | OVOS | Description
Skill Plugins | no | yes | skills can be packaged like standard python projects and installed via setup.py (eg. with pip or your package manager)
User Resources | no | yes | Users can override resource files, eg. customize dialogs for installed skills
Skill permissions | no | WIP | Users can limit converse and fallback functionality per skill and configure the order in which skills are executed
Intent Plugins | no | WIP | Supports 3rd party integrations for Intent Matching
+

Hardware

+ + + + + + + + + + + + + + + + + +
Feature | Mycroft | OVOS | Description
System Plugins | no | yes | Support for 3rd party hardware (eg. mk2-plugin) and OS level integrations (eg. wifi-setup)
+

NOTES:

+
    +
  • PHAL can be used with mycroft-core
  • +
+

Misc

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Feature | Mycroft | OVOS | Description
Offline usage | no | yes | Can be configured to work without internet connectivity
MultiLingual | no | WIP | Can be configured to work in multiple languages at the same time
HiveMind support | WIP | WIP | Supports HiveMind for a distributed/remote mycroft experience
XDG compliance | WIP | yes | All resources respect XDG standards and support multiple users
Usage as a lib | no | yes | Packaged as a library, supports derivative voice assistants
+

NOTES:

+ +

Dinkum

+

What is Dinkum

+

Mycroft Mark2 shipped with a new version of mycroft called "dinkum"; this is a total overhaul of mycroft-core and is +incompatible with it

+

mycroft-core is now referred to as "Classic Core" by MycroftAI

+

MycroftAI now provides what they call sandbox images. To add to the confusion, those only work on the mark 2, and "Classic +Core" means the mark-ii/latest branch of mycroft-core; this is a derivative version of the branch that was used in the dev kits (mark-ii/qa) and is also +backwards incompatible. Changes in this branch were not done via PRs and had no review or community input

+

Mark2 useful links:

+ +

Dinkum vs ovos-core

+

You can find mycroft's guide to porting skills to dinkum here: https://mycroft-ai.gitbook.io/mark-ii/differences-to-classic-core/porting-classic-core-skills

+

mark2/qa brought some changes to mycroft-core, not all of them backwards compatible and some that were contentious +within the community.

+
    +
• VAD - VAD has been added to the mark-ii, but it is hardcoded to silero. This feature has been adopted via OPM and is an important part of the ovos-core listening modes introduced in version 0.0.5
  • +
  • self.resources - resource file loading was overhauled, this feature has been + improved (ovos-core/pull/130 + ovos-core/pull/131 + ovos-core/pull/135 + ovos-core/pull/170) + and ported to OVOS and is also available in OVOSkill class (OVOS-workshop/pull/30) for usage in classic core
  • +
  • audio hal - audio playback was rewritten from scratch, audio plugin support has been removed, OVOS will not adopt this new approach but keep improving the previous one
  • +
  • skill states - converse method introduced skill states, this changed some core assumptions behind converse method and + active skills, OVOS will not adopt skill states, see community discussion + here mycroft-core/pull/2901 + mycroft-core/pull/2906
  • +
  • pure regex intents - pure regex intents have been introduced, we strongly recommend you use padatious instead if this is desired, regex makes language support really hard, let the intent engines do their jobs
  • +
  • adapt fork - a fork of adapt is used in the mark2, it introduces the exactly and excludes methods. excludes will be added upstream in adapt/pull/156. Any skill using these new methods will be incompatible with most core versions
  • +
  • activities - an activity is just a set of bus messages to indicate something started and ended, it is a reimplementation of an already existing feature, in ovos we use the native events from the self.add_event skill method
  • +
+

dinkum contains all of the changes above and also brought further changes to the table

+
    +
• sessions - in dinkum, session handling is done by skills; it completely ignores the message.context mechanism and the existing session_id. In ovos we believe the session should come in the message and be handled by the clients (eg, a chat user or a hivemind client....); in ovos we are expanding the original session concept ovos-core/pull/160
  • +
  • dbus-hal - a dbus service specific to the mk2 has been introduced, in ovos we have a generic PHAL service and companion plugins to interface with mk2 hardware instead, this component is mark2 specific and should be ignored in the ovos ecosystem
  • +
+

Any skills using these new "features" will not work outside the mark2

+

FAQ

+

Do OVOS skills run in dinkum?

+

No, not even classic core skills run in dinkum. We have no plans to support this

+

Do Dinkum skills run in ovos?

+

No. Dinkum is designed in a very incompatible way; the mycroft module is not always mycroft-core and the MycroftSkill class is not always a MycroftSkill. We have no intention of transparently loading dinkum skills in ovos-core

+

We have a small proof of concept tool to convert a dinkum skill into an ovos/classic core compatible skill, see https://github.com/OpenVoiceOS/undinkumfier

+

Does OCP work in dinkum?

+

No. Audio plugin support has been removed; you can run OCP standalone, but it will be missing the compatibility layers and can't load OCP skills anyway

+

It could be made to work but this is not in the roadmap, PRs will be accepted and reviewed

+

Does PHAL work in dinkum?

+

It should! We don't explicitly target or test it with dinkum, but it is a fairly standalone component

+

Does OPM work in dinkum?

+

STT, TTS, and WW plugins should work. We don't explicitly target or test compatibility; PRs will be accepted and reviewed

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/neonos/index.html b/neonos/index.html new file mode 100644 index 00000000..03b671f7 --- /dev/null +++ b/neonos/index.html @@ -0,0 +1,292 @@ + + + + + + + + NeonOS - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Other Builds »
  • + +
  • +
  • +
+
+
+
+
+ +

NeonOS

+

** Coming soon **

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/personal_backend/index.html b/personal_backend/index.html new file mode 100644 index 00000000..ea3b4c8b --- /dev/null +++ b/personal_backend/index.html @@ -0,0 +1,387 @@ + + + + + + + + Personal Backend - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Additional Services »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS Personal Backend

+

Personal mycroft backend alternative to mycroft.home, written in flask

+

This repo is an alternative to the backend meant for personal usage; it allows you to run without mycroft servers

+

:warning: there are no user accounts :warning:

+

This is NOT meant to provision third party devices, but rather to run on the mycroft devices directly or on a private network

+

For a full backend experience, the official mycroft backend has been open sourced; read the blog post

+

NOTE: There is no pairing; devices will just activate themselves and work

+

Install

+

from pip

+
pip install ovos-local-backend
+
+

Mycroft Setup

+

There are 2 main intended ways to run local backend with mycroft

+
    +
  • on same device as mycroft-core, tricking it to run without mycroft servers
  • +
  • on a private network, to manage all your devices locally
  • +
+

NOTE: you can not fully run mycroft-core offline; it refuses to launch without an internet connection. You can only replace the calls to use this backend instead of mycroft.home

+

We recommend you use ovos-core instead

+

Update your mycroft config to use this backend, delete identity2.json, and restart mycroft (an example of removing the identity file follows the config below)

+
{
+  "server": {
+    "url": "http://0.0.0.0:6712",
+    "version": "v1",
+    "update": true,
+    "metrics": true
+  },
+  "listener": {
+    "wake_word_upload": {
+      "url": "http://0.0.0.0:6712/precise/upload"
+    }
+  }
+}
+
+
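The identity file location varies by version; it is commonly one of the following (these paths are assumptions — check your installation):

rm ~/.config/mycroft/identity/identity2.json   # XDG-style location (ovos / newer cores)
rm ~/.mycroft/identity/identity2.json          # classic mycroft-core location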

Companion projects

+ +

Usage

+

start backend

+
$ ovos-local-backend -h
+usage: ovos-local-backend [-h] [--flask-port FLASK_PORT] [--flask-host FLASK_HOST]
+
+optional arguments:
+  -h, --help            show this help message and exit
+  --flask-port FLASK_PORT
+                        Mock backend port number
+  --flask-host FLASK_HOST
+                        Mock backend host
+
+
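For example, to expose the backend on all interfaces at the port used in the sample config above:

ovos-local-backend --flask-host 0.0.0.0 --flask-port 6712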
+

Docker

+

There is also a docker container you can use

+
docker run -p 8086:6712 -d --restart always --name local_backend ghcr.io/openvoiceos/local-backend:dev
+
+

a docker-compose.yml could look like this

+
version: '3.6'
+services:
+    # ...
+    ovosbackend:
+        container_name: ovos_backend
+        image: ghcr.io/openvoiceos/local-backend:dev
+        # or build from local source (relative to docker-compose.yml)
+        # build: ../ovos/ovos-personal-backend/.
+        restart: unless-stopped
+        ports:
+          - "6712:6712"                                              # default port backend API
+          - "36535:36535"                                            # default port backend-manager
+        volumes:                                                     # <host>:<guest>:<SELinux flag>
+          - ./ovos/backend/config:/root/.config/json_database:z      # shared config directory
+          - ./ovos/backend/data:/root/.local/share/ovos_backend:Z    # shared data directory
+                                                                     # set `data_path` to `/root/.local/share/ovos_backend`
+
+

About SELinux flags: omit the :z/:Z volume suffixes if you don't deal with SELinux.

+

How it works

+

Configuration

+

WIP Coming Soon

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/qs_first_boot/index.html b/qs_first_boot/index.html new file mode 100644 index 00000000..2d0bc53c --- /dev/null +++ b/qs_first_boot/index.html @@ -0,0 +1,363 @@ + + + + + + + + First Boot - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Quick Start Guide »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS Quick Start - First Boot

+

You now have a flashed boot medium with OVOS installed. Now what?

+

Insert your boot medium, and power on your device.

+

NOTE If using a Raspberry Pi 4 with a device connected via USB-A, the top-right USB 3.2 Gen 1 port is recommended.

+

Each image has a different experience when you first boot your device.

+

Buildroot - GUI

+

Raspbian-ovos - Headless

+

Notes on internet: While OVOS can run fully offline, with on-device STT and TTS, your device will lack many of the things you expect from a smart assistant. On an RPi, this includes the date and time, because the Pi lacks a Real Time Clock and therefore needs to connect to a server to set those on your device.

+

Buildroot - GUI

+

When you first boot the Buildroot image, you will be greeted with an OVOS splash screen as shown below.

+

Splash Screen

+

As this is the first time you have booted your device, it might take a bit longer than normal as the system prepares its local filesystem and extends it over the full size of the SD card/USB device.
+Eventually the progress bar will fill up, indicating the Operating System has fully booted, after which the ovos-shell animated loading screen will be shown.

+

Loading Spinner

+

Again, since this is your first boot, this might take a bit longer as the ovos-core configuration is populated and skills are being set up.

+

Ethernet

+

There should be no issues connecting automatically if your router accepts DHCP requests.

+

WiFi

+

If you do not have an internet connection, you will be prompted with a screen with options to connect.

+

Network Setup Image

+

You can also skip this step to configure it later, or choose to never be asked again if you want your device to run fully offline. (Bear in mind you will need to configure your device to use local STT and TTS engines, and obviously asking your device things that require internet will not work. This includes the date and time, as a Raspberry Pi does not have a Real Time Clock and therefore does not know this data without being told.)

+

On Device Setup

+

If this option is selected the next screen will have a list of available WiFi connections.

+

Network On Device Image

+

Select your desired network, and enter a password if needed.

+

Network WiFi Password Image

+

If everything went correctly, you should be connected to the internet, and after a short period while OVOS loads the skills that require internet, you will be presented with the homescreen.

+

Horizontal Homescreen Image

+

Configure WiFi after skipping initial setup

+

If you skipped the Wi-Fi network setup earlier, or you just want to reconfigure your network, swipe down the top menu on the homescreen of your OpenVoiceOS device and click the "Wi-Fi" icon. This brings you to the same on-device configuration screen.

+

Network Connections Image

+

From here you can select another network, or click the configuration icon to the right of the connected network for details or to remove it from the configured networks.

+

Network Connection Details Image

+

Mobile Setup

+

If this option is chosen, you will be prompted to connect your mobile device or computer to the OVOS hotspot.

+

Mobile HotSpot Image

+

The rest of this option coincides with the headless image WiFi setup. Continue with Mobile WiFi Setup.

+

Raspbian-ovos - Headless

+

If you are trying to run OVOS on an RPi3, this is the image to use. It DOES NOT provide a GUI, so some things that are available on the Buildroot image are not available here.

+

Once again, it may take several minutes for this first boot to complete. OVOS resizes the partition on the drive, and loads the required plugins.

+

Ethernet

+

There should be no issues connecting automatically if your router accepts DHCP requests.

+

WiFi

+

The Raspbian image will create a HotSpot which is a temporary access point that allows you to configure your WiFi credentials.

+

On first boot, you should hear a voice prompt instructing you to connect to the OVOS hotspot.

+
+

"Open a browser to 'start dot openvoiceos dot com'"

+
+

Connect your mobile device or computer to the OVOS HotSpot and open the webpage http://start.openvoiceos.com. NOTE This is NOT the official OVOS website, but rather a page served by the "HotSpot" created by OVOS; it will be removed after the device is connected to WiFi.

+

Choose your WiFi access point from the list shown on your mobile device.

+

WiFi Setup Image

+

Enter your password if needed and enjoy your OVOS smart assistant.

+

NOTE There is a known bug in balena-wifi when connecting to WPA3 security. You must use on-device setup, or use raspi-config from a command prompt, if your WiFi security is not supported with the mobile setup option.
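
+

As a sketch, a non-interactive raspi-config invocation from an SSH or console session might look like the following; the SSID and passphrase are placeholders, and this assumes the nonint mode available in raspi-config on Raspberry Pi OS based images:

+
# replace the placeholder SSID and passphrase with your own network details
+sudo raspi-config nonint do_wifi_ssid_passphrase "MyNetwork" "MyPassphrase"
+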

+

That's it!! You should have a working OVOS device, QUICK!!

+

While this is the fastest and easiest way to get OVOS, it is not the only way.

+

Installing OVOS with libraries

+

Installing OVOS Docker

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/qs_intro/index.html b/qs_intro/index.html new file mode 100644 index 00000000..264dfaae --- /dev/null +++ b/qs_intro/index.html @@ -0,0 +1,351 @@ + + + + + + + + Installing an Image - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Installation Options »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS Quick Start Guide - Get an image

+

So you just want to give OVOS a try? This quick start will help get an OVOS image installed and running on your Raspberry Pi.

+

NOTE The GUI will not reliably run on an RPi3 and is therefore not recommended for that device.

+

Download an OVOS image

+

OVOS provides a couple of different images specifically for the Raspberry Pi.

+

Buildroot Latest Image

+

The most advanced and featureful is the Buildroot image. If you want a GUI, this is currently your only choice.

+ +

Raspbian Latest Images

+

OVOS also provides a "Headless" image that is similar to the original picroft software from MycroftAI. It runs without a screen and works with an RPi3b/b+.

+ +

Flashing your image

+

Once you have an image downloaded, it needs to be flashed to a boot device.

+

NOTE If you have a Raspberry Pi 4, we recommend using a good USB 3.1 device or, better, a USB3 SSD. If you have a Raspberry Pi 3, use a proper SD card. (From fast to slow: SSD - USB3.1 - SD card - USB2)

+

Decompress the image

+

Some image-writing methods, such as dd, may require your file to be decompressed. Others, BalenaEtcher for example, can use a compressed image. +The Buildroot image is compressed in .xz format and the Raspbian image is in .zip format.

+

Windows

+

Use WinZip or 7-Zip to decompress the image.

+

Linux

+

Use unxz (or xz -d) to decompress .xz compressed images and unzip to decompress .zip images.
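
+

For example (the file names are placeholders for whatever you downloaded):

+
unxz ovos-buildroot-latest.img.xz    # Buildroot image (.xz)
+unzip raspbian-ovos-latest.zip       # Raspbian headless image (.zip)
+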

+

The resulting file should end in .img and is now ready to flash to a device.

+

Flashing Software

+

Flashing your image to your SD card or USB drive is no different from flashing any other image. For non-technical users, we advise using the Raspberry Pi Imager flashing utility from the Raspberry Pi Foundation. It is available for Windows, Mac OS, and Linux.

+
    +
  • Start up the Raspberry Pi Imager. On Linux, start it with "sudo rpi-imager".
  • +
  • For "Choose OS", select "Use custom" and select the OVOS/PI image file downloaded from the OpenVoiceOS Downloads Site.
  • +
  • For "Choose Storage", select your removable boot media, probably something like "Internal SD Card Reader".
  • +
  • Then select "Write". Do not click the cog. Do not change the username; the username is ovos and is built in. Do not enter WiFi credentials; you will add them at first boot.
  • +
+

Upon completion, you should have a bootable SD card or USB drive.

+

Warning EXTREME CARE needs to be taken while using the dd command-line method below.

+

Be careful with the dd command; you can easily render your computer useless by writing to the wrong disk.

+
    +
  • Check where your SD card or USB drive is located using the lsblk command.
  • +
  • It should be visible as /dev/sdX
  • +
  • Write the unzipped image to the disk: sudo dd if=<path-to-unzipped-image> of=<path-to-sd-card> bs=4M status=progress (a complete example follows this list)
  • +
+
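
Putting the steps above together, a complete command-line flash might look like this sketch; /dev/sdX and the image name are placeholders, so triple-check the device name, as dd will silently overwrite whatever you point it at:

+
lsblk                                    # identify the target device
+sudo umount /dev/sdX?*                   # unmount any auto-mounted partitions, if present
+sudo dd if=ovos-buildroot-latest.img of=/dev/sdX bs=4M status=progress
+sync                                     # flush caches before removing the media
+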

No matter what method you used, upon completion, you should have a bootable SD card or USB drive.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/qs_next_steps/index.html b/qs_next_steps/index.html new file mode 100644 index 00000000..0dccce0b --- /dev/null +++ b/qs_next_steps/index.html @@ -0,0 +1,315 @@ + + + + + + + + Next Steps - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Quick Start Guide »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Quickstart - Next Steps

+

Woo Woo!! You Have A Running OVOS Device!! Now what?

+

Prebuilt images come with a default set of skills installed, including, but not limited to, date/time and weather. Give them a shot.

+

Speak these commands and enjoy the spoils:

+

Hey Mycroft, what time is it?

+

Hey Mycroft, what is today's date?

+

Hey Mycroft, what is the weather today?

+

Hey Mycroft, will it rain today?

+

While there are several default skills installed, there are many more available to be used. The link below will show you how to find and install more skills.

+

Installing Skills

+

But wait, there's more!!

+

OVOS is highly configurable and uses a file in either JSON or YAML format to provide these options. While in most cases OVOS should just work, sometimes you either need to, or want to, change some options.

+

OVOS Configuration
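
+

As a sketch, user overrides typically live in ~/.config/mycroft/mycroft.conf; a minimal JSON override (the keys shown are common examples, see the configuration docs above for the full set) might look like:

+
{
+  "lang": "en-us",
+  "time_format": "half",
+  "system_unit": "metric"
+}
+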

+

OVOS ships with a default TTS (Text to Speech) engine which speaks in the original Alan Pope voice that Mycroft used. There are MANY more to choose from. The following link will help you choose and configure a different voice for your assistant.

+

Configuring TTS
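
+

Switching voices follows this general pattern in mycroft.conf; the plugin name and voice below are illustrative examples from the OVOS plugin ecosystem, so consult the link above for the values supported on your image:

+
{
+  "tts": {
+    "module": "ovos-tts-plugin-mimic3",
+    "ovos-tts-plugin-mimic3": {
+      "voice": "en_UK/apope_low"
+    }
+  }
+}
+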

+

Does your device not understand your voice when you speak? There are options for different STT (Speech To Text) engines as well. Some work better than others, but may provide less privacy.

+

Changing STT
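
+

The STT engine is selected the same way, via an "stt" section in mycroft.conf; the module shown here (a local Vosk plugin) is just one example:

+
{
+  "stt": {
+    "module": "ovos-stt-plugin-vosk"
+  }
+}
+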

+

Your OVOS assistant uses a "wake word" which lets it know it is time to start listening to your commands. By default, the phrase to wake OVOS is Hey Mycroft. This, like most things in OVOS, is totally configurable. Follow the link to learn more.

+

Changing the Wake Word
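
+

A sketch of a wake word override; the plugin module and model path are illustrative placeholders, so see the link above for the fields your listener plugin actually expects:

+
{
+  "listener": {
+    "wake_word": "hey_mycroft"
+  },
+  "hotwords": {
+    "hey_mycroft": {
+      "module": "ovos-ww-plugin-precise-lite",
+      "model": "/path/to/hey_mycroft.tflite"
+    }
+  }
+}
+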

+

PHAL plugins allow OVOS to interact with the underlying hardware and operating system. Several are available, and may be installed and run together.

+

Configuring PHAL

+

OVOS ships with default services available to the public to use. These include public TTS and STT servers, a weather API provided by OpenMeteo, access to Wolfram, and more. Since OVOS is an open and private system, you can also change these to your own preferences.

+

Install your own Services WIP

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/search.html b/search.html new file mode 100644 index 00000000..752dcdc0 --- /dev/null +++ b/search.html @@ -0,0 +1,283 @@ + + + + + + + + OVOS User Documentation + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • +
  • +
+
+
+
+
+ + +

Search Results

+ + + +
+ Searching... +
+ + +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/search/lunr.js b/search/lunr.js new file mode 100644 index 00000000..aca0a167 --- /dev/null +++ b/search/lunr.js @@ -0,0 +1,3475 @@ +/** + * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.9 + * Copyright (C) 2020 Oliver Nightingale + * @license MIT + */ + +;(function(){ + +/** + * A convenience function for configuring and constructing + * a new lunr Index. + * + * A lunr.Builder instance is created and the pipeline setup + * with a trimmer, stop word filter and stemmer. + * + * This builder object is yielded to the configuration function + * that is passed as a parameter, allowing the list of fields + * and other builder parameters to be customised. + * + * All documents _must_ be added within the passed config function. + * + * @example + * var idx = lunr(function () { + * this.field('title') + * this.field('body') + * this.ref('id') + * + * documents.forEach(function (doc) { + * this.add(doc) + * }, this) + * }) + * + * @see {@link lunr.Builder} + * @see {@link lunr.Pipeline} + * @see {@link lunr.trimmer} + * @see {@link lunr.stopWordFilter} + * @see {@link lunr.stemmer} + * @namespace {function} lunr + */ +var lunr = function (config) { + var builder = new lunr.Builder + + builder.pipeline.add( + lunr.trimmer, + lunr.stopWordFilter, + lunr.stemmer + ) + + builder.searchPipeline.add( + lunr.stemmer + ) + + config.call(builder, builder) + return builder.build() +} + +lunr.version = "2.3.9" +/*! + * lunr.utils + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A namespace containing utils for the rest of the lunr library + * @namespace lunr.utils + */ +lunr.utils = {} + +/** + * Print a warning message to the console. + * + * @param {String} message The message to be printed. + * @memberOf lunr.utils + * @function + */ +lunr.utils.warn = (function (global) { + /* eslint-disable no-console */ + return function (message) { + if (global.console && console.warn) { + console.warn(message) + } + } + /* eslint-enable no-console */ +})(this) + +/** + * Convert an object to a string. + * + * In the case of `null` and `undefined` the function returns + * the empty string, in all other cases the result of calling + * `toString` on the passed object is returned. + * + * @param {Any} obj The object to convert to a string. + * @return {String} string representation of the passed object. + * @memberOf lunr.utils + */ +lunr.utils.asString = function (obj) { + if (obj === void 0 || obj === null) { + return "" + } else { + return obj.toString() + } +} + +/** + * Clones an object. + * + * Will create a copy of an existing object such that any mutations + * on the copy cannot affect the original. + * + * Only shallow objects are supported, passing a nested object to this + * function will cause a TypeError. + * + * Objects with primitives, and arrays of primitives are supported. + * + * @param {Object} obj The object to clone. + * @return {Object} a clone of the passed object. + * @throws {TypeError} when a nested object is passed. 
+ * @memberOf Utils + */ +lunr.utils.clone = function (obj) { + if (obj === null || obj === undefined) { + return obj + } + + var clone = Object.create(null), + keys = Object.keys(obj) + + for (var i = 0; i < keys.length; i++) { + var key = keys[i], + val = obj[key] + + if (Array.isArray(val)) { + clone[key] = val.slice() + continue + } + + if (typeof val === 'string' || + typeof val === 'number' || + typeof val === 'boolean') { + clone[key] = val + continue + } + + throw new TypeError("clone is not deep and does not support nested objects") + } + + return clone +} +lunr.FieldRef = function (docRef, fieldName, stringValue) { + this.docRef = docRef + this.fieldName = fieldName + this._stringValue = stringValue +} + +lunr.FieldRef.joiner = "/" + +lunr.FieldRef.fromString = function (s) { + var n = s.indexOf(lunr.FieldRef.joiner) + + if (n === -1) { + throw "malformed field ref string" + } + + var fieldRef = s.slice(0, n), + docRef = s.slice(n + 1) + + return new lunr.FieldRef (docRef, fieldRef, s) +} + +lunr.FieldRef.prototype.toString = function () { + if (this._stringValue == undefined) { + this._stringValue = this.fieldName + lunr.FieldRef.joiner + this.docRef + } + + return this._stringValue +} +/*! + * lunr.Set + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A lunr set. + * + * @constructor + */ +lunr.Set = function (elements) { + this.elements = Object.create(null) + + if (elements) { + this.length = elements.length + + for (var i = 0; i < this.length; i++) { + this.elements[elements[i]] = true + } + } else { + this.length = 0 + } +} + +/** + * A complete set that contains all elements. + * + * @static + * @readonly + * @type {lunr.Set} + */ +lunr.Set.complete = { + intersect: function (other) { + return other + }, + + union: function () { + return this + }, + + contains: function () { + return true + } +} + +/** + * An empty set that contains no elements. + * + * @static + * @readonly + * @type {lunr.Set} + */ +lunr.Set.empty = { + intersect: function () { + return this + }, + + union: function (other) { + return other + }, + + contains: function () { + return false + } +} + +/** + * Returns true if this set contains the specified object. + * + * @param {object} object - Object whose presence in this set is to be tested. + * @returns {boolean} - True if this set contains the specified object. + */ +lunr.Set.prototype.contains = function (object) { + return !!this.elements[object] +} + +/** + * Returns a new set containing only the elements that are present in both + * this set and the specified set. + * + * @param {lunr.Set} other - set to intersect with this set. + * @returns {lunr.Set} a new set that is the intersection of this and the specified set. + */ + +lunr.Set.prototype.intersect = function (other) { + var a, b, elements, intersection = [] + + if (other === lunr.Set.complete) { + return this + } + + if (other === lunr.Set.empty) { + return other + } + + if (this.length < other.length) { + a = this + b = other + } else { + a = other + b = this + } + + elements = Object.keys(a.elements) + + for (var i = 0; i < elements.length; i++) { + var element = elements[i] + if (element in b.elements) { + intersection.push(element) + } + } + + return new lunr.Set (intersection) +} + +/** + * Returns a new set combining the elements of this and the specified set. + * + * @param {lunr.Set} other - set to union with this set. + * @return {lunr.Set} a new set that is the union of this and the specified set. 
+ */ + +lunr.Set.prototype.union = function (other) { + if (other === lunr.Set.complete) { + return lunr.Set.complete + } + + if (other === lunr.Set.empty) { + return this + } + + return new lunr.Set(Object.keys(this.elements).concat(Object.keys(other.elements))) +} +/** + * A function to calculate the inverse document frequency for + * a posting. This is shared between the builder and the index + * + * @private + * @param {object} posting - The posting for a given term + * @param {number} documentCount - The total number of documents. + */ +lunr.idf = function (posting, documentCount) { + var documentsWithTerm = 0 + + for (var fieldName in posting) { + if (fieldName == '_index') continue // Ignore the term index, its not a field + documentsWithTerm += Object.keys(posting[fieldName]).length + } + + var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5) + + return Math.log(1 + Math.abs(x)) +} + +/** + * A token wraps a string representation of a token + * as it is passed through the text processing pipeline. + * + * @constructor + * @param {string} [str=''] - The string token being wrapped. + * @param {object} [metadata={}] - Metadata associated with this token. + */ +lunr.Token = function (str, metadata) { + this.str = str || "" + this.metadata = metadata || {} +} + +/** + * Returns the token string that is being wrapped by this object. + * + * @returns {string} + */ +lunr.Token.prototype.toString = function () { + return this.str +} + +/** + * A token update function is used when updating or optionally + * when cloning a token. + * + * @callback lunr.Token~updateFunction + * @param {string} str - The string representation of the token. + * @param {Object} metadata - All metadata associated with this token. + */ + +/** + * Applies the given function to the wrapped string token. + * + * @example + * token.update(function (str, metadata) { + * return str.toUpperCase() + * }) + * + * @param {lunr.Token~updateFunction} fn - A function to apply to the token string. + * @returns {lunr.Token} + */ +lunr.Token.prototype.update = function (fn) { + this.str = fn(this.str, this.metadata) + return this +} + +/** + * Creates a clone of this token. Optionally a function can be + * applied to the cloned token. + * + * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token. + * @returns {lunr.Token} + */ +lunr.Token.prototype.clone = function (fn) { + fn = fn || function (s) { return s } + return new lunr.Token (fn(this.str, this.metadata), this.metadata) +} +/*! + * lunr.tokenizer + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A function for splitting a string into tokens ready to be inserted into + * the search index. Uses `lunr.tokenizer.separator` to split strings, change + * the value of this property to change how strings are split into tokens. + * + * This tokenizer will convert its parameter to a string by calling `toString` and + * then will split this string on the character in `lunr.tokenizer.separator`. + * Arrays will have their elements converted to strings and wrapped in a lunr.Token. + * + * Optional metadata can be passed to the tokenizer, this metadata will be cloned and + * added as metadata to every token that is created from the object to be tokenized. 
+ * + * @static + * @param {?(string|object|object[])} obj - The object to convert into tokens + * @param {?object} metadata - Optional metadata to associate with every token + * @returns {lunr.Token[]} + * @see {@link lunr.Pipeline} + */ +lunr.tokenizer = function (obj, metadata) { + if (obj == null || obj == undefined) { + return [] + } + + if (Array.isArray(obj)) { + return obj.map(function (t) { + return new lunr.Token( + lunr.utils.asString(t).toLowerCase(), + lunr.utils.clone(metadata) + ) + }) + } + + var str = obj.toString().toLowerCase(), + len = str.length, + tokens = [] + + for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) { + var char = str.charAt(sliceEnd), + sliceLength = sliceEnd - sliceStart + + if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) { + + if (sliceLength > 0) { + var tokenMetadata = lunr.utils.clone(metadata) || {} + tokenMetadata["position"] = [sliceStart, sliceLength] + tokenMetadata["index"] = tokens.length + + tokens.push( + new lunr.Token ( + str.slice(sliceStart, sliceEnd), + tokenMetadata + ) + ) + } + + sliceStart = sliceEnd + 1 + } + + } + + return tokens +} + +/** + * The separator used to split a string into tokens. Override this property to change the behaviour of + * `lunr.tokenizer` behaviour when tokenizing strings. By default this splits on whitespace and hyphens. + * + * @static + * @see lunr.tokenizer + */ +lunr.tokenizer.separator = /[\s\-]+/ +/*! + * lunr.Pipeline + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.Pipelines maintain an ordered list of functions to be applied to all + * tokens in documents entering the search index and queries being ran against + * the index. + * + * An instance of lunr.Index created with the lunr shortcut will contain a + * pipeline with a stop word filter and an English language stemmer. Extra + * functions can be added before or after either of these functions or these + * default functions can be removed. + * + * When run the pipeline will call each function in turn, passing a token, the + * index of that token in the original list of all tokens and finally a list of + * all the original tokens. + * + * The output of functions in the pipeline will be passed to the next function + * in the pipeline. To exclude a token from entering the index the function + * should return undefined, the rest of the pipeline will not be called with + * this token. + * + * For serialisation of pipelines to work, all functions used in an instance of + * a pipeline should be registered with lunr.Pipeline. Registered functions can + * then be loaded. If trying to load a serialised pipeline that uses functions + * that are not registered an error will be thrown. + * + * If not planning on serialising the pipeline then registering pipeline functions + * is not necessary. + * + * @constructor + */ +lunr.Pipeline = function () { + this._stack = [] +} + +lunr.Pipeline.registeredFunctions = Object.create(null) + +/** + * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token + * string as well as all known metadata. A pipeline function can mutate the token string + * or mutate (or add) metadata for a given token. + * + * A pipeline function can indicate that the passed token should be discarded by returning + * null, undefined or an empty string. This token will not be passed to any downstream pipeline + * functions and will not be added to the index. + * + * Multiple tokens can be returned by returning an array of tokens. 
Each token will be passed + * to any downstream pipeline functions and all will returned tokens will be added to the index. + * + * Any number of pipeline functions may be chained together using a lunr.Pipeline. + * + * @interface lunr.PipelineFunction + * @param {lunr.Token} token - A token from the document being processed. + * @param {number} i - The index of this token in the complete list of tokens for this document/field. + * @param {lunr.Token[]} tokens - All tokens for this document/field. + * @returns {(?lunr.Token|lunr.Token[])} + */ + +/** + * Register a function with the pipeline. + * + * Functions that are used in the pipeline should be registered if the pipeline + * needs to be serialised, or a serialised pipeline needs to be loaded. + * + * Registering a function does not add it to a pipeline, functions must still be + * added to instances of the pipeline for them to be used when running a pipeline. + * + * @param {lunr.PipelineFunction} fn - The function to check for. + * @param {String} label - The label to register this function with + */ +lunr.Pipeline.registerFunction = function (fn, label) { + if (label in this.registeredFunctions) { + lunr.utils.warn('Overwriting existing registered function: ' + label) + } + + fn.label = label + lunr.Pipeline.registeredFunctions[fn.label] = fn +} + +/** + * Warns if the function is not registered as a Pipeline function. + * + * @param {lunr.PipelineFunction} fn - The function to check for. + * @private + */ +lunr.Pipeline.warnIfFunctionNotRegistered = function (fn) { + var isRegistered = fn.label && (fn.label in this.registeredFunctions) + + if (!isRegistered) { + lunr.utils.warn('Function is not registered with pipeline. This may cause problems when serialising the index.\n', fn) + } +} + +/** + * Loads a previously serialised pipeline. + * + * All functions to be loaded must already be registered with lunr.Pipeline. + * If any function from the serialised data has not been registered then an + * error will be thrown. + * + * @param {Object} serialised - The serialised pipeline to load. + * @returns {lunr.Pipeline} + */ +lunr.Pipeline.load = function (serialised) { + var pipeline = new lunr.Pipeline + + serialised.forEach(function (fnName) { + var fn = lunr.Pipeline.registeredFunctions[fnName] + + if (fn) { + pipeline.add(fn) + } else { + throw new Error('Cannot load unregistered function: ' + fnName) + } + }) + + return pipeline +} + +/** + * Adds new functions to the end of the pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline. + */ +lunr.Pipeline.prototype.add = function () { + var fns = Array.prototype.slice.call(arguments) + + fns.forEach(function (fn) { + lunr.Pipeline.warnIfFunctionNotRegistered(fn) + this._stack.push(fn) + }, this) +} + +/** + * Adds a single function after a function that already exists in the + * pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. + * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. 
+ */ +lunr.Pipeline.prototype.after = function (existingFn, newFn) { + lunr.Pipeline.warnIfFunctionNotRegistered(newFn) + + var pos = this._stack.indexOf(existingFn) + if (pos == -1) { + throw new Error('Cannot find existingFn') + } + + pos = pos + 1 + this._stack.splice(pos, 0, newFn) +} + +/** + * Adds a single function before a function that already exists in the + * pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. + * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. + */ +lunr.Pipeline.prototype.before = function (existingFn, newFn) { + lunr.Pipeline.warnIfFunctionNotRegistered(newFn) + + var pos = this._stack.indexOf(existingFn) + if (pos == -1) { + throw new Error('Cannot find existingFn') + } + + this._stack.splice(pos, 0, newFn) +} + +/** + * Removes a function from the pipeline. + * + * @param {lunr.PipelineFunction} fn The function to remove from the pipeline. + */ +lunr.Pipeline.prototype.remove = function (fn) { + var pos = this._stack.indexOf(fn) + if (pos == -1) { + return + } + + this._stack.splice(pos, 1) +} + +/** + * Runs the current list of functions that make up the pipeline against the + * passed tokens. + * + * @param {Array} tokens The tokens to run through the pipeline. + * @returns {Array} + */ +lunr.Pipeline.prototype.run = function (tokens) { + var stackLength = this._stack.length + + for (var i = 0; i < stackLength; i++) { + var fn = this._stack[i] + var memo = [] + + for (var j = 0; j < tokens.length; j++) { + var result = fn(tokens[j], j, tokens) + + if (result === null || result === void 0 || result === '') continue + + if (Array.isArray(result)) { + for (var k = 0; k < result.length; k++) { + memo.push(result[k]) + } + } else { + memo.push(result) + } + } + + tokens = memo + } + + return tokens +} + +/** + * Convenience method for passing a string through a pipeline and getting + * strings out. This method takes care of wrapping the passed string in a + * token and mapping the resulting tokens back to strings. + * + * @param {string} str - The string to pass through the pipeline. + * @param {?object} metadata - Optional metadata to associate with the token + * passed to the pipeline. + * @returns {string[]} + */ +lunr.Pipeline.prototype.runString = function (str, metadata) { + var token = new lunr.Token (str, metadata) + + return this.run([token]).map(function (t) { + return t.toString() + }) +} + +/** + * Resets the pipeline by removing any existing processors. + * + */ +lunr.Pipeline.prototype.reset = function () { + this._stack = [] +} + +/** + * Returns a representation of the pipeline ready for serialisation. + * + * Logs a warning if the function has not been registered. + * + * @returns {Array} + */ +lunr.Pipeline.prototype.toJSON = function () { + return this._stack.map(function (fn) { + lunr.Pipeline.warnIfFunctionNotRegistered(fn) + + return fn.label + }) +} +/*! + * lunr.Vector + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A vector is used to construct the vector space of documents and queries. These + * vectors support operations to determine the similarity between two documents or + * a document and a query. + * + * Normally no parameters are required for initializing a vector, but in the case of + * loading a previously dumped vector the raw elements can be provided to the constructor. 
+ * + * For performance reasons vectors are implemented with a flat array, where an elements + * index is immediately followed by its value. E.g. [index, value, index, value]. This + * allows the underlying array to be as sparse as possible and still offer decent + * performance when being used for vector calculations. + * + * @constructor + * @param {Number[]} [elements] - The flat list of element index and element value pairs. + */ +lunr.Vector = function (elements) { + this._magnitude = 0 + this.elements = elements || [] +} + + +/** + * Calculates the position within the vector to insert a given index. + * + * This is used internally by insert and upsert. If there are duplicate indexes then + * the position is returned as if the value for that index were to be updated, but it + * is the callers responsibility to check whether there is a duplicate at that index + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @returns {Number} + */ +lunr.Vector.prototype.positionForIndex = function (index) { + // For an empty vector the tuple can be inserted at the beginning + if (this.elements.length == 0) { + return 0 + } + + var start = 0, + end = this.elements.length / 2, + sliceLength = end - start, + pivotPoint = Math.floor(sliceLength / 2), + pivotIndex = this.elements[pivotPoint * 2] + + while (sliceLength > 1) { + if (pivotIndex < index) { + start = pivotPoint + } + + if (pivotIndex > index) { + end = pivotPoint + } + + if (pivotIndex == index) { + break + } + + sliceLength = end - start + pivotPoint = start + Math.floor(sliceLength / 2) + pivotIndex = this.elements[pivotPoint * 2] + } + + if (pivotIndex == index) { + return pivotPoint * 2 + } + + if (pivotIndex > index) { + return pivotPoint * 2 + } + + if (pivotIndex < index) { + return (pivotPoint + 1) * 2 + } +} + +/** + * Inserts an element at an index within the vector. + * + * Does not allow duplicates, will throw an error if there is already an entry + * for this index. + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @param {Number} val - The value to be inserted into the vector. + */ +lunr.Vector.prototype.insert = function (insertIdx, val) { + this.upsert(insertIdx, val, function () { + throw "duplicate index" + }) +} + +/** + * Inserts or updates an existing index within the vector. + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @param {Number} val - The value to be inserted into the vector. + * @param {function} fn - A function that is called for updates, the existing value and the + * requested value are passed as arguments + */ +lunr.Vector.prototype.upsert = function (insertIdx, val, fn) { + this._magnitude = 0 + var position = this.positionForIndex(insertIdx) + + if (this.elements[position] == insertIdx) { + this.elements[position + 1] = fn(this.elements[position + 1], val) + } else { + this.elements.splice(position, 0, insertIdx, val) + } +} + +/** + * Calculates the magnitude of this vector. + * + * @returns {Number} + */ +lunr.Vector.prototype.magnitude = function () { + if (this._magnitude) return this._magnitude + + var sumOfSquares = 0, + elementsLength = this.elements.length + + for (var i = 1; i < elementsLength; i += 2) { + var val = this.elements[i] + sumOfSquares += val * val + } + + return this._magnitude = Math.sqrt(sumOfSquares) +} + +/** + * Calculates the dot product of this vector and another vector. + * + * @param {lunr.Vector} otherVector - The vector to compute the dot product with. 
+ * @returns {Number} + */ +lunr.Vector.prototype.dot = function (otherVector) { + var dotProduct = 0, + a = this.elements, b = otherVector.elements, + aLen = a.length, bLen = b.length, + aVal = 0, bVal = 0, + i = 0, j = 0 + + while (i < aLen && j < bLen) { + aVal = a[i], bVal = b[j] + if (aVal < bVal) { + i += 2 + } else if (aVal > bVal) { + j += 2 + } else if (aVal == bVal) { + dotProduct += a[i + 1] * b[j + 1] + i += 2 + j += 2 + } + } + + return dotProduct +} + +/** + * Calculates the similarity between this vector and another vector. + * + * @param {lunr.Vector} otherVector - The other vector to calculate the + * similarity with. + * @returns {Number} + */ +lunr.Vector.prototype.similarity = function (otherVector) { + return this.dot(otherVector) / this.magnitude() || 0 +} + +/** + * Converts the vector to an array of the elements within the vector. + * + * @returns {Number[]} + */ +lunr.Vector.prototype.toArray = function () { + var output = new Array (this.elements.length / 2) + + for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) { + output[j] = this.elements[i] + } + + return output +} + +/** + * A JSON serializable representation of the vector. + * + * @returns {Number[]} + */ +lunr.Vector.prototype.toJSON = function () { + return this.elements +} +/* eslint-disable */ +/*! + * lunr.stemmer + * Copyright (C) 2020 Oliver Nightingale + * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt + */ + +/** + * lunr.stemmer is an english language stemmer, this is a JavaScript + * implementation of the PorterStemmer taken from http://tartarus.org/~martin + * + * @static + * @implements {lunr.PipelineFunction} + * @param {lunr.Token} token - The string to stem + * @returns {lunr.Token} + * @see {@link lunr.Pipeline} + * @function + */ +lunr.stemmer = (function(){ + var step2list = { + "ational" : "ate", + "tional" : "tion", + "enci" : "ence", + "anci" : "ance", + "izer" : "ize", + "bli" : "ble", + "alli" : "al", + "entli" : "ent", + "eli" : "e", + "ousli" : "ous", + "ization" : "ize", + "ation" : "ate", + "ator" : "ate", + "alism" : "al", + "iveness" : "ive", + "fulness" : "ful", + "ousness" : "ous", + "aliti" : "al", + "iviti" : "ive", + "biliti" : "ble", + "logi" : "log" + }, + + step3list = { + "icate" : "ic", + "ative" : "", + "alize" : "al", + "iciti" : "ic", + "ical" : "ic", + "ful" : "", + "ness" : "" + }, + + c = "[^aeiou]", // consonant + v = "[aeiouy]", // vowel + C = c + "[^aeiouy]*", // consonant sequence + V = v + "[aeiou]*", // vowel sequence + + mgr0 = "^(" + C + ")?" + V + C, // [C]VC... is m>0 + meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$", // [C]VC[V] is m=1 + mgr1 = "^(" + C + ")?" + V + C + V + C, // [C]VCVC... is m>1 + s_v = "^(" + C + ")?" 
+ v; // vowel in stem + + var re_mgr0 = new RegExp(mgr0); + var re_mgr1 = new RegExp(mgr1); + var re_meq1 = new RegExp(meq1); + var re_s_v = new RegExp(s_v); + + var re_1a = /^(.+?)(ss|i)es$/; + var re2_1a = /^(.+?)([^s])s$/; + var re_1b = /^(.+?)eed$/; + var re2_1b = /^(.+?)(ed|ing)$/; + var re_1b_2 = /.$/; + var re2_1b_2 = /(at|bl|iz)$/; + var re3_1b_2 = new RegExp("([^aeiouylsz])\\1$"); + var re4_1b_2 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + + var re_1c = /^(.+?[^aeiou])y$/; + var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + + var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + + var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + var re2_4 = /^(.+?)(s|t)(ion)$/; + + var re_5 = /^(.+?)e$/; + var re_5_1 = /ll$/; + var re3_5 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + + var porterStemmer = function porterStemmer(w) { + var stem, + suffix, + firstch, + re, + re2, + re3, + re4; + + if (w.length < 3) { return w; } + + firstch = w.substr(0,1); + if (firstch == "y") { + w = firstch.toUpperCase() + w.substr(1); + } + + // Step 1a + re = re_1a + re2 = re2_1a; + + if (re.test(w)) { w = w.replace(re,"$1$2"); } + else if (re2.test(w)) { w = w.replace(re2,"$1$2"); } + + // Step 1b + re = re_1b; + re2 = re2_1b; + if (re.test(w)) { + var fp = re.exec(w); + re = re_mgr0; + if (re.test(fp[1])) { + re = re_1b_2; + w = w.replace(re,""); + } + } else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = re_s_v; + if (re2.test(stem)) { + w = stem; + re2 = re2_1b_2; + re3 = re3_1b_2; + re4 = re4_1b_2; + if (re2.test(w)) { w = w + "e"; } + else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,""); } + else if (re4.test(w)) { w = w + "e"; } + } + } + + // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say) + re = re_1c; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + w = stem + "i"; + } + + // Step 2 + re = re_2; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = re_mgr0; + if (re.test(stem)) { + w = stem + step2list[suffix]; + } + } + + // Step 3 + re = re_3; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = re_mgr0; + if (re.test(stem)) { + w = stem + step3list[suffix]; + } + } + + // Step 4 + re = re_4; + re2 = re2_4; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = re_mgr1; + if (re.test(stem)) { + w = stem; + } + } else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = re_mgr1; + if (re2.test(stem)) { + w = stem; + } + } + + // Step 5 + re = re_5; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = re_mgr1; + re2 = re_meq1; + re3 = re3_5; + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) { + w = stem; + } + } + + re = re_5_1; + re2 = re_mgr1; + if (re.test(w) && re2.test(w)) { + re = re_1b_2; + w = w.replace(re,""); + } + + // and turn initial Y back to y + + if (firstch == "y") { + w = firstch.toLowerCase() + w.substr(1); + } + + return w; + }; + + return function (token) { + return token.update(porterStemmer); + } +})(); + +lunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer') +/*! + * lunr.stopWordFilter + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.generateStopWordFilter builds a stopWordFilter function from the provided + * list of stop words. 
+ * + * The built in lunr.stopWordFilter is built using this generator and can be used + * to generate custom stopWordFilters for applications or non English languages. + * + * @function + * @param {Array} token The token to pass through the filter + * @returns {lunr.PipelineFunction} + * @see lunr.Pipeline + * @see lunr.stopWordFilter + */ +lunr.generateStopWordFilter = function (stopWords) { + var words = stopWords.reduce(function (memo, stopWord) { + memo[stopWord] = stopWord + return memo + }, {}) + + return function (token) { + if (token && words[token.toString()] !== token.toString()) return token + } +} + +/** + * lunr.stopWordFilter is an English language stop word list filter, any words + * contained in the list will not be passed through the filter. + * + * This is intended to be used in the Pipeline. If the token does not pass the + * filter then undefined will be returned. + * + * @function + * @implements {lunr.PipelineFunction} + * @params {lunr.Token} token - A token to check for being a stop word. + * @returns {lunr.Token} + * @see {@link lunr.Pipeline} + */ +lunr.stopWordFilter = lunr.generateStopWordFilter([ + 'a', + 'able', + 'about', + 'across', + 'after', + 'all', + 'almost', + 'also', + 'am', + 'among', + 'an', + 'and', + 'any', + 'are', + 'as', + 'at', + 'be', + 'because', + 'been', + 'but', + 'by', + 'can', + 'cannot', + 'could', + 'dear', + 'did', + 'do', + 'does', + 'either', + 'else', + 'ever', + 'every', + 'for', + 'from', + 'get', + 'got', + 'had', + 'has', + 'have', + 'he', + 'her', + 'hers', + 'him', + 'his', + 'how', + 'however', + 'i', + 'if', + 'in', + 'into', + 'is', + 'it', + 'its', + 'just', + 'least', + 'let', + 'like', + 'likely', + 'may', + 'me', + 'might', + 'most', + 'must', + 'my', + 'neither', + 'no', + 'nor', + 'not', + 'of', + 'off', + 'often', + 'on', + 'only', + 'or', + 'other', + 'our', + 'own', + 'rather', + 'said', + 'say', + 'says', + 'she', + 'should', + 'since', + 'so', + 'some', + 'than', + 'that', + 'the', + 'their', + 'them', + 'then', + 'there', + 'these', + 'they', + 'this', + 'tis', + 'to', + 'too', + 'twas', + 'us', + 'wants', + 'was', + 'we', + 'were', + 'what', + 'when', + 'where', + 'which', + 'while', + 'who', + 'whom', + 'why', + 'will', + 'with', + 'would', + 'yet', + 'you', + 'your' +]) + +lunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter') +/*! + * lunr.trimmer + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.trimmer is a pipeline function for trimming non word + * characters from the beginning and end of tokens before they + * enter the index. + * + * This implementation may not work correctly for non latin + * characters and should either be removed or adapted for use + * with languages with non-latin characters. + * + * @static + * @implements {lunr.PipelineFunction} + * @param {lunr.Token} token The token to pass through the filter + * @returns {lunr.Token} + * @see lunr.Pipeline + */ +lunr.trimmer = function (token) { + return token.update(function (s) { + return s.replace(/^\W+/, '').replace(/\W+$/, '') + }) +} + +lunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer') +/*! + * lunr.TokenSet + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A token set is used to store the unique list of all tokens + * within an index. Token sets are also used to represent an + * incoming query to the index, this query token set and index + * token set are then intersected to find which tokens to look + * up in the inverted index. 
+ * + * A token set can hold multiple tokens, as in the case of the + * index token set, or it can hold a single token as in the + * case of a simple query token set. + * + * Additionally token sets are used to perform wildcard matching. + * Leading, contained and trailing wildcards are supported, and + * from this edit distance matching can also be provided. + * + * Token sets are implemented as a minimal finite state automata, + * where both common prefixes and suffixes are shared between tokens. + * This helps to reduce the space used for storing the token set. + * + * @constructor + */ +lunr.TokenSet = function () { + this.final = false + this.edges = {} + this.id = lunr.TokenSet._nextId + lunr.TokenSet._nextId += 1 +} + +/** + * Keeps track of the next, auto increment, identifier to assign + * to a new tokenSet. + * + * TokenSets require a unique identifier to be correctly minimised. + * + * @private + */ +lunr.TokenSet._nextId = 1 + +/** + * Creates a TokenSet instance from the given sorted array of words. + * + * @param {String[]} arr - A sorted array of strings to create the set from. + * @returns {lunr.TokenSet} + * @throws Will throw an error if the input array is not sorted. + */ +lunr.TokenSet.fromArray = function (arr) { + var builder = new lunr.TokenSet.Builder + + for (var i = 0, len = arr.length; i < len; i++) { + builder.insert(arr[i]) + } + + builder.finish() + return builder.root +} + +/** + * Creates a token set from a query clause. + * + * @private + * @param {Object} clause - A single clause from lunr.Query. + * @param {string} clause.term - The query clause term. + * @param {number} [clause.editDistance] - The optional edit distance for the term. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.fromClause = function (clause) { + if ('editDistance' in clause) { + return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance) + } else { + return lunr.TokenSet.fromString(clause.term) + } +} + +/** + * Creates a token set representing a single string with a specified + * edit distance. + * + * Insertions, deletions, substitutions and transpositions are each + * treated as an edit distance of 1. + * + * Increasing the allowed edit distance will have a dramatic impact + * on the performance of both creating and intersecting these TokenSets. + * It is advised to keep the edit distance less than 3. + * + * @param {string} str - The string to create the token set from. + * @param {number} editDistance - The allowed edit distance to match. 
+ * @returns {lunr.Vector} + */ +lunr.TokenSet.fromFuzzyString = function (str, editDistance) { + var root = new lunr.TokenSet + + var stack = [{ + node: root, + editsRemaining: editDistance, + str: str + }] + + while (stack.length) { + var frame = stack.pop() + + // no edit + if (frame.str.length > 0) { + var char = frame.str.charAt(0), + noEditNode + + if (char in frame.node.edges) { + noEditNode = frame.node.edges[char] + } else { + noEditNode = new lunr.TokenSet + frame.node.edges[char] = noEditNode + } + + if (frame.str.length == 1) { + noEditNode.final = true + } + + stack.push({ + node: noEditNode, + editsRemaining: frame.editsRemaining, + str: frame.str.slice(1) + }) + } + + if (frame.editsRemaining == 0) { + continue + } + + // insertion + if ("*" in frame.node.edges) { + var insertionNode = frame.node.edges["*"] + } else { + var insertionNode = new lunr.TokenSet + frame.node.edges["*"] = insertionNode + } + + if (frame.str.length == 0) { + insertionNode.final = true + } + + stack.push({ + node: insertionNode, + editsRemaining: frame.editsRemaining - 1, + str: frame.str + }) + + // deletion + // can only do a deletion if we have enough edits remaining + // and if there are characters left to delete in the string + if (frame.str.length > 1) { + stack.push({ + node: frame.node, + editsRemaining: frame.editsRemaining - 1, + str: frame.str.slice(1) + }) + } + + // deletion + // just removing the last character from the str + if (frame.str.length == 1) { + frame.node.final = true + } + + // substitution + // can only do a substitution if we have enough edits remaining + // and if there are characters left to substitute + if (frame.str.length >= 1) { + if ("*" in frame.node.edges) { + var substitutionNode = frame.node.edges["*"] + } else { + var substitutionNode = new lunr.TokenSet + frame.node.edges["*"] = substitutionNode + } + + if (frame.str.length == 1) { + substitutionNode.final = true + } + + stack.push({ + node: substitutionNode, + editsRemaining: frame.editsRemaining - 1, + str: frame.str.slice(1) + }) + } + + // transposition + // can only do a transposition if there are edits remaining + // and there are enough characters to transpose + if (frame.str.length > 1) { + var charA = frame.str.charAt(0), + charB = frame.str.charAt(1), + transposeNode + + if (charB in frame.node.edges) { + transposeNode = frame.node.edges[charB] + } else { + transposeNode = new lunr.TokenSet + frame.node.edges[charB] = transposeNode + } + + if (frame.str.length == 1) { + transposeNode.final = true + } + + stack.push({ + node: transposeNode, + editsRemaining: frame.editsRemaining - 1, + str: charA + frame.str.slice(2) + }) + } + } + + return root +} + +/** + * Creates a TokenSet from a string. + * + * The string may contain one or more wildcard characters (*) + * that will allow wildcard matching when intersecting with + * another TokenSet. + * + * @param {string} str - The string to create a TokenSet from. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.fromString = function (str) { + var node = new lunr.TokenSet, + root = node + + /* + * Iterates through all characters within the passed string + * appending a node for each character. + * + * When a wildcard character is found then a self + * referencing edge is introduced to continually match + * any number of any characters. 
+ */ + for (var i = 0, len = str.length; i < len; i++) { + var char = str[i], + final = (i == len - 1) + + if (char == "*") { + node.edges[char] = node + node.final = final + + } else { + var next = new lunr.TokenSet + next.final = final + + node.edges[char] = next + node = next + } + } + + return root +} + +/** + * Converts this TokenSet into an array of strings + * contained within the TokenSet. + * + * This is not intended to be used on a TokenSet that + * contains wildcards, in these cases the results are + * undefined and are likely to cause an infinite loop. + * + * @returns {string[]} + */ +lunr.TokenSet.prototype.toArray = function () { + var words = [] + + var stack = [{ + prefix: "", + node: this + }] + + while (stack.length) { + var frame = stack.pop(), + edges = Object.keys(frame.node.edges), + len = edges.length + + if (frame.node.final) { + /* In Safari, at this point the prefix is sometimes corrupted, see: + * https://github.com/olivernn/lunr.js/issues/279 Calling any + * String.prototype method forces Safari to "cast" this string to what + * it's supposed to be, fixing the bug. */ + frame.prefix.charAt(0) + words.push(frame.prefix) + } + + for (var i = 0; i < len; i++) { + var edge = edges[i] + + stack.push({ + prefix: frame.prefix.concat(edge), + node: frame.node.edges[edge] + }) + } + } + + return words +} + +/** + * Generates a string representation of a TokenSet. + * + * This is intended to allow TokenSets to be used as keys + * in objects, largely to aid the construction and minimisation + * of a TokenSet. As such it is not designed to be a human + * friendly representation of the TokenSet. + * + * @returns {string} + */ +lunr.TokenSet.prototype.toString = function () { + // NOTE: Using Object.keys here as this.edges is very likely + // to enter 'hash-mode' with many keys being added + // + // avoiding a for-in loop here as it leads to the function + // being de-optimised (at least in V8). From some simple + // benchmarks the performance is comparable, but allowing + // V8 to optimize may mean easy performance wins in the future. + + if (this._str) { + return this._str + } + + var str = this.final ? '1' : '0', + labels = Object.keys(this.edges).sort(), + len = labels.length + + for (var i = 0; i < len; i++) { + var label = labels[i], + node = this.edges[label] + + str = str + label + node.id + } + + return str +} + +/** + * Returns a new TokenSet that is the intersection of + * this TokenSet and the passed TokenSet. + * + * This intersection will take into account any wildcards + * contained within the TokenSet. + * + * @param {lunr.TokenSet} b - An other TokenSet to intersect with. 
+ * @returns {lunr.TokenSet} + */ +lunr.TokenSet.prototype.intersect = function (b) { + var output = new lunr.TokenSet, + frame = undefined + + var stack = [{ + qNode: b, + output: output, + node: this + }] + + while (stack.length) { + frame = stack.pop() + + // NOTE: As with the #toString method, we are using + // Object.keys and a for loop instead of a for-in loop + // as both of these objects enter 'hash' mode, causing + // the function to be de-optimised in V8 + var qEdges = Object.keys(frame.qNode.edges), + qLen = qEdges.length, + nEdges = Object.keys(frame.node.edges), + nLen = nEdges.length + + for (var q = 0; q < qLen; q++) { + var qEdge = qEdges[q] + + for (var n = 0; n < nLen; n++) { + var nEdge = nEdges[n] + + if (nEdge == qEdge || qEdge == '*') { + var node = frame.node.edges[nEdge], + qNode = frame.qNode.edges[qEdge], + final = node.final && qNode.final, + next = undefined + + if (nEdge in frame.output.edges) { + // an edge already exists for this character + // no need to create a new node, just set the finality + // bit unless this node is already final + next = frame.output.edges[nEdge] + next.final = next.final || final + + } else { + // no edge exists yet, must create one + // set the finality bit and insert it + // into the output + next = new lunr.TokenSet + next.final = final + frame.output.edges[nEdge] = next + } + + stack.push({ + qNode: qNode, + output: next, + node: node + }) + } + } + } + } + + return output +} +lunr.TokenSet.Builder = function () { + this.previousWord = "" + this.root = new lunr.TokenSet + this.uncheckedNodes = [] + this.minimizedNodes = {} +} + +lunr.TokenSet.Builder.prototype.insert = function (word) { + var node, + commonPrefix = 0 + + if (word < this.previousWord) { + throw new Error ("Out of order word insertion") + } + + for (var i = 0; i < word.length && i < this.previousWord.length; i++) { + if (word[i] != this.previousWord[i]) break + commonPrefix++ + } + + this.minimize(commonPrefix) + + if (this.uncheckedNodes.length == 0) { + node = this.root + } else { + node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child + } + + for (var i = commonPrefix; i < word.length; i++) { + var nextNode = new lunr.TokenSet, + char = word[i] + + node.edges[char] = nextNode + + this.uncheckedNodes.push({ + parent: node, + char: char, + child: nextNode + }) + + node = nextNode + } + + node.final = true + this.previousWord = word +} + +lunr.TokenSet.Builder.prototype.finish = function () { + this.minimize(0) +} + +lunr.TokenSet.Builder.prototype.minimize = function (downTo) { + for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) { + var node = this.uncheckedNodes[i], + childKey = node.child.toString() + + if (childKey in this.minimizedNodes) { + node.parent.edges[node.char] = this.minimizedNodes[childKey] + } else { + // Cache the key for this node since + // we know it can't change anymore + node.child._str = childKey + + this.minimizedNodes[childKey] = node.child + } + + this.uncheckedNodes.pop() + } +} +/*! + * lunr.Index + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * An index contains the built index of all documents and provides a query interface + * to the index. + * + * Usually instances of lunr.Index will not be created using this constructor, instead + * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be + * used to load previously built and serialized indexes. + * + * @constructor + * @param {Object} attrs - The attributes of the built search index. 
+ * @param {Object} attrs.invertedIndex - An index of term/field to document reference. + * @param {Object} attrs.fieldVectors - Field vectors + * @param {lunr.TokenSet} attrs.tokenSet - An set of all corpus tokens. + * @param {string[]} attrs.fields - The names of indexed document fields. + * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms. + */ +lunr.Index = function (attrs) { + this.invertedIndex = attrs.invertedIndex + this.fieldVectors = attrs.fieldVectors + this.tokenSet = attrs.tokenSet + this.fields = attrs.fields + this.pipeline = attrs.pipeline +} + +/** + * A result contains details of a document matching a search query. + * @typedef {Object} lunr.Index~Result + * @property {string} ref - The reference of the document this result represents. + * @property {number} score - A number between 0 and 1 representing how similar this document is to the query. + * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match. + */ + +/** + * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple + * query language which itself is parsed into an instance of lunr.Query. + * + * For programmatically building queries it is advised to directly use lunr.Query, the query language + * is best used for human entered text rather than program generated text. + * + * At its simplest queries can just be a single term, e.g. `hello`, multiple terms are also supported + * and will be combined with OR, e.g `hello world` will match documents that contain either 'hello' + * or 'world', though those that contain both will rank higher in the results. + * + * Wildcards can be included in terms to match one or more unspecified characters, these wildcards can + * be inserted anywhere within the term, and more than one wildcard can exist in a single term. Adding + * wildcards will increase the number of documents that will be found but can also have a negative + * impact on query performance, especially with wildcards at the beginning of a term. + * + * Terms can be restricted to specific fields, e.g. `title:hello`, only documents with the term + * hello in the title field will match this query. Using a field not present in the index will lead + * to an error being thrown. + * + * Modifiers can also be added to terms, lunr supports edit distance and boost modifiers on terms. A term + * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported + * to provide fuzzy matching, e.g. 'hello~2' will match documents with hello with an edit distance of 2. + * Avoid large values for edit distance to improve query performance. + * + * Each term also supports a presence modifier. By default a term's presence in document is optional, however + * this can be changed to either required or prohibited. For a term's presence to be required in a document the + * term should be prefixed with a '+', e.g. `+foo bar` is a search for documents that must contain 'foo' and + * optionally contain 'bar'. Conversely a leading '-' sets the terms presence to prohibited, i.e. it must not + * appear in a document, e.g. `-foo bar` is a search for documents that do not contain 'foo' but may contain 'bar'. + * + * To escape special characters the backslash character '\' can be used, this allows searches to include + * characters that would normally be considered modifiers, e.g. 
`foo\~2` will search for a term "foo~2" instead + * of attempting to apply a boost of 2 to the search term "foo". + * + * @typedef {string} lunr.Index~QueryString + * @example Simple single term query + * hello + * @example Multiple term query + * hello world + * @example term scoped to a field + * title:hello + * @example term with a boost of 10 + * hello^10 + * @example term with an edit distance of 2 + * hello~2 + * @example terms with presence modifiers + * -foo +bar baz + */ + +/** + * Performs a search against the index using lunr query syntax. + * + * Results will be returned sorted by their score, the most relevant results + * will be returned first. For details on how the score is calculated, please see + * the {@link https://lunrjs.com/guides/searching.html#scoring|guide}. + * + * For more programmatic querying use lunr.Index#query. + * + * @param {lunr.Index~QueryString} queryString - A string containing a lunr query. + * @throws {lunr.QueryParseError} If the passed query string cannot be parsed. + * @returns {lunr.Index~Result[]} + */ +lunr.Index.prototype.search = function (queryString) { + return this.query(function (query) { + var parser = new lunr.QueryParser(queryString, query) + parser.parse() + }) +} + +/** + * A query builder callback provides a query object to be used to express + * the query to perform on the index. + * + * @callback lunr.Index~queryBuilder + * @param {lunr.Query} query - The query object to build up. + * @this lunr.Query + */ + +/** + * Performs a query against the index using the yielded lunr.Query object. + * + * If performing programmatic queries against the index, this method is preferred + * over lunr.Index#search so as to avoid the additional query parsing overhead. + * + * A query object is yielded to the supplied function which should be used to + * express the query to be run against the index. + * + * Note that although this function takes a callback parameter it is _not_ an + * asynchronous operation, the callback is just yielded a query object to be + * customized. + * + * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query. + * @returns {lunr.Index~Result[]} + */ +lunr.Index.prototype.query = function (fn) { + // for each query clause + // * process terms + // * expand terms from token set + // * find matching documents and metadata + // * get document vectors + // * score documents + + var query = new lunr.Query(this.fields), + matchingFields = Object.create(null), + queryVectors = Object.create(null), + termFieldCache = Object.create(null), + requiredMatches = Object.create(null), + prohibitedMatches = Object.create(null) + + /* + * To support field level boosts a query vector is created per + * field. An empty vector is eagerly created to support negated + * queries. + */ + for (var i = 0; i < this.fields.length; i++) { + queryVectors[this.fields[i]] = new lunr.Vector + } + + fn.call(query, query) + + for (var i = 0; i < query.clauses.length; i++) { + /* + * Unless the pipeline has been disabled for this term, which is + * the case for terms with wildcards, we need to pass the clause + * term through the search pipeline. A pipeline returns an array + * of processed terms. Pipeline functions may expand the passed + * term, which means we may end up performing multiple index lookups + * for a single query term. 
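+       * (Editorial example: with the default stemming pipeline a clause
+       * term such as "searching" is reduced to "search" before lookup.)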
+ */ + var clause = query.clauses[i], + terms = null, + clauseMatches = lunr.Set.empty + + if (clause.usePipeline) { + terms = this.pipeline.runString(clause.term, { + fields: clause.fields + }) + } else { + terms = [clause.term] + } + + for (var m = 0; m < terms.length; m++) { + var term = terms[m] + + /* + * Each term returned from the pipeline needs to use the same query + * clause object, e.g. the same boost and or edit distance. The + * simplest way to do this is to re-use the clause object but mutate + * its term property. + */ + clause.term = term + + /* + * From the term in the clause we create a token set which will then + * be used to intersect the indexes token set to get a list of terms + * to lookup in the inverted index + */ + var termTokenSet = lunr.TokenSet.fromClause(clause), + expandedTerms = this.tokenSet.intersect(termTokenSet).toArray() + + /* + * If a term marked as required does not exist in the tokenSet it is + * impossible for the search to return any matches. We set all the field + * scoped required matches set to empty and stop examining any further + * clauses. + */ + if (expandedTerms.length === 0 && clause.presence === lunr.Query.presence.REQUIRED) { + for (var k = 0; k < clause.fields.length; k++) { + var field = clause.fields[k] + requiredMatches[field] = lunr.Set.empty + } + + break + } + + for (var j = 0; j < expandedTerms.length; j++) { + /* + * For each term get the posting and termIndex, this is required for + * building the query vector. + */ + var expandedTerm = expandedTerms[j], + posting = this.invertedIndex[expandedTerm], + termIndex = posting._index + + for (var k = 0; k < clause.fields.length; k++) { + /* + * For each field that this query term is scoped by (by default + * all fields are in scope) we need to get all the document refs + * that have this term in that field. + * + * The posting is the entry in the invertedIndex for the matching + * term from above. + */ + var field = clause.fields[k], + fieldPosting = posting[field], + matchingDocumentRefs = Object.keys(fieldPosting), + termField = expandedTerm + "/" + field, + matchingDocumentsSet = new lunr.Set(matchingDocumentRefs) + + /* + * if the presence of this term is required ensure that the matching + * documents are added to the set of required matches for this clause. + * + */ + if (clause.presence == lunr.Query.presence.REQUIRED) { + clauseMatches = clauseMatches.union(matchingDocumentsSet) + + if (requiredMatches[field] === undefined) { + requiredMatches[field] = lunr.Set.complete + } + } + + /* + * if the presence of this term is prohibited ensure that the matching + * documents are added to the set of prohibited matches for this field, + * creating that set if it does not yet exist. + */ + if (clause.presence == lunr.Query.presence.PROHIBITED) { + if (prohibitedMatches[field] === undefined) { + prohibitedMatches[field] = lunr.Set.empty + } + + prohibitedMatches[field] = prohibitedMatches[field].union(matchingDocumentsSet) + + /* + * Prohibited matches should not be part of the query vector used for + * similarity scoring and no metadata should be extracted so we continue + * to the next field + */ + continue + } + + /* + * The query field vector is populated using the termIndex found for + * the term and a unit value with the appropriate boost applied. + * Using upsert because there could already be an entry in the vector + * for the term we are working with. In that case we just add the scores + * together. 
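+       * (Editorial example: in the query "run running" both terms stem
+       * to "run", so the same termIndex is upserted twice and the two
+       * boosts are summed.)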
+ */ + queryVectors[field].upsert(termIndex, clause.boost, function (a, b) { return a + b }) + + /** + * If we've already seen this term, field combo then we've already collected + * the matching documents and metadata, no need to go through all that again + */ + if (termFieldCache[termField]) { + continue + } + + for (var l = 0; l < matchingDocumentRefs.length; l++) { + /* + * All metadata for this term/field/document triple + * are then extracted and collected into an instance + * of lunr.MatchData ready to be returned in the query + * results + */ + var matchingDocumentRef = matchingDocumentRefs[l], + matchingFieldRef = new lunr.FieldRef (matchingDocumentRef, field), + metadata = fieldPosting[matchingDocumentRef], + fieldMatch + + if ((fieldMatch = matchingFields[matchingFieldRef]) === undefined) { + matchingFields[matchingFieldRef] = new lunr.MatchData (expandedTerm, field, metadata) + } else { + fieldMatch.add(expandedTerm, field, metadata) + } + + } + + termFieldCache[termField] = true + } + } + } + + /** + * If the presence was required we need to update the requiredMatches field sets. + * We do this after all fields for the term have collected their matches because + * the clause terms presence is required in _any_ of the fields not _all_ of the + * fields. + */ + if (clause.presence === lunr.Query.presence.REQUIRED) { + for (var k = 0; k < clause.fields.length; k++) { + var field = clause.fields[k] + requiredMatches[field] = requiredMatches[field].intersect(clauseMatches) + } + } + } + + /** + * Need to combine the field scoped required and prohibited + * matching documents into a global set of required and prohibited + * matches + */ + var allRequiredMatches = lunr.Set.complete, + allProhibitedMatches = lunr.Set.empty + + for (var i = 0; i < this.fields.length; i++) { + var field = this.fields[i] + + if (requiredMatches[field]) { + allRequiredMatches = allRequiredMatches.intersect(requiredMatches[field]) + } + + if (prohibitedMatches[field]) { + allProhibitedMatches = allProhibitedMatches.union(prohibitedMatches[field]) + } + } + + var matchingFieldRefs = Object.keys(matchingFields), + results = [], + matches = Object.create(null) + + /* + * If the query is negated (contains only prohibited terms) + * we need to get _all_ fieldRefs currently existing in the + * index. This is only done when we know that the query is + * entirely prohibited terms to avoid any cost of getting all + * fieldRefs unnecessarily. + * + * Additionally, blank MatchData must be created to correctly + * populate the results. + */ + if (query.isNegated()) { + matchingFieldRefs = Object.keys(this.fieldVectors) + + for (var i = 0; i < matchingFieldRefs.length; i++) { + var matchingFieldRef = matchingFieldRefs[i] + var fieldRef = lunr.FieldRef.fromString(matchingFieldRef) + matchingFields[matchingFieldRef] = new lunr.MatchData + } + } + + for (var i = 0; i < matchingFieldRefs.length; i++) { + /* + * Currently we have document fields that match the query, but we + * need to return documents. The matchData and scores are combined + * from multiple fields belonging to the same document. + * + * Scores are calculated by field, using the query vectors created + * above, and combined into a final document score using addition. 
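+     * (Editorial example: a document that matches the query in both its
+     * "title" and "body" fields appears once in the results, with a score
+     * of title-score plus body-score.)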
+ */ + var fieldRef = lunr.FieldRef.fromString(matchingFieldRefs[i]), + docRef = fieldRef.docRef + + if (!allRequiredMatches.contains(docRef)) { + continue + } + + if (allProhibitedMatches.contains(docRef)) { + continue + } + + var fieldVector = this.fieldVectors[fieldRef], + score = queryVectors[fieldRef.fieldName].similarity(fieldVector), + docMatch + + if ((docMatch = matches[docRef]) !== undefined) { + docMatch.score += score + docMatch.matchData.combine(matchingFields[fieldRef]) + } else { + var match = { + ref: docRef, + score: score, + matchData: matchingFields[fieldRef] + } + matches[docRef] = match + results.push(match) + } + } + + /* + * Sort the results objects by score, highest first. + */ + return results.sort(function (a, b) { + return b.score - a.score + }) +} + +/** + * Prepares the index for JSON serialization. + * + * The schema for this JSON blob will be described in a + * separate JSON schema file. + * + * @returns {Object} + */ +lunr.Index.prototype.toJSON = function () { + var invertedIndex = Object.keys(this.invertedIndex) + .sort() + .map(function (term) { + return [term, this.invertedIndex[term]] + }, this) + + var fieldVectors = Object.keys(this.fieldVectors) + .map(function (ref) { + return [ref, this.fieldVectors[ref].toJSON()] + }, this) + + return { + version: lunr.version, + fields: this.fields, + fieldVectors: fieldVectors, + invertedIndex: invertedIndex, + pipeline: this.pipeline.toJSON() + } +} + +/** + * Loads a previously serialized lunr.Index + * + * @param {Object} serializedIndex - A previously serialized lunr.Index + * @returns {lunr.Index} + */ +lunr.Index.load = function (serializedIndex) { + var attrs = {}, + fieldVectors = {}, + serializedVectors = serializedIndex.fieldVectors, + invertedIndex = Object.create(null), + serializedInvertedIndex = serializedIndex.invertedIndex, + tokenSetBuilder = new lunr.TokenSet.Builder, + pipeline = lunr.Pipeline.load(serializedIndex.pipeline) + + if (serializedIndex.version != lunr.version) { + lunr.utils.warn("Version mismatch when loading serialised index. Current version of lunr '" + lunr.version + "' does not match serialized index '" + serializedIndex.version + "'") + } + + for (var i = 0; i < serializedVectors.length; i++) { + var tuple = serializedVectors[i], + ref = tuple[0], + elements = tuple[1] + + fieldVectors[ref] = new lunr.Vector(elements) + } + + for (var i = 0; i < serializedInvertedIndex.length; i++) { + var tuple = serializedInvertedIndex[i], + term = tuple[0], + posting = tuple[1] + + tokenSetBuilder.insert(term) + invertedIndex[term] = posting + } + + tokenSetBuilder.finish() + + attrs.fields = serializedIndex.fields + + attrs.fieldVectors = fieldVectors + attrs.invertedIndex = invertedIndex + attrs.tokenSet = tokenSetBuilder.root + attrs.pipeline = pipeline + + return new lunr.Index(attrs) +} +/*! + * lunr.Builder + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.Builder performs indexing on a set of documents and + * returns instances of lunr.Index ready for querying. + * + * All configuration of the index is done via the builder, the + * fields to index, the document reference, the text processing + * pipeline and document scoring parameters are all set on the + * builder before indexing. + * + * @constructor + * @property {string} _ref - Internal reference to the document reference field. + * @property {string[]} _fields - Internal reference to the document fields to index. + * @property {object} invertedIndex - The inverted index maps terms to document fields. 
+ * @property {object} documentTermFrequencies - Keeps track of document term frequencies. + * @property {object} documentLengths - Keeps track of the length of documents added to the index. + * @property {lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing. + * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing. + * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index. + * @property {number} documentCount - Keeps track of the total number of documents indexed. + * @property {number} _b - A parameter to control field length normalization, setting this to 0 disabled normalization, 1 fully normalizes field lengths, the default value is 0.75. + * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2. + * @property {number} termIndex - A counter incremented for each unique term, used to identify a terms position in the vector space. + * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index. + */ +lunr.Builder = function () { + this._ref = "id" + this._fields = Object.create(null) + this._documents = Object.create(null) + this.invertedIndex = Object.create(null) + this.fieldTermFrequencies = {} + this.fieldLengths = {} + this.tokenizer = lunr.tokenizer + this.pipeline = new lunr.Pipeline + this.searchPipeline = new lunr.Pipeline + this.documentCount = 0 + this._b = 0.75 + this._k1 = 1.2 + this.termIndex = 0 + this.metadataWhitelist = [] +} + +/** + * Sets the document field used as the document reference. Every document must have this field. + * The type of this field in the document should be a string, if it is not a string it will be + * coerced into a string by calling toString. + * + * The default ref is 'id'. + * + * The ref should _not_ be changed during indexing, it should be set before any documents are + * added to the index. Changing it during indexing can lead to inconsistent results. + * + * @param {string} ref - The name of the reference field in the document. + */ +lunr.Builder.prototype.ref = function (ref) { + this._ref = ref +} + +/** + * A function that is used to extract a field from a document. + * + * Lunr expects a field to be at the top level of a document, if however the field + * is deeply nested within a document an extractor function can be used to extract + * the right field for indexing. + * + * @callback fieldExtractor + * @param {object} doc - The document being added to the index. + * @returns {?(string|object|object[])} obj - The object that will be indexed for this field. + * @example Extracting a nested field + * function (doc) { return doc.nested.field } + */ + +/** + * Adds a field to the list of document fields that will be indexed. Every document being + * indexed should have this field. Null values for this field in indexed documents will + * not cause errors but will limit the chance of that document being retrieved by searches. + * + * All fields should be added before adding documents to the index. Adding fields after + * a document has been indexed will have no effect on already indexed documents. + * + * Fields can be boosted at build time. This allows terms within that field to have more + * importance when ranking search results. Use a field boost to specify that matches within + * one field are more important than other fields. 
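+ * (Editorial example: builder.field('title', { boost: 10 }) makes a match in
+ * 'title' score ten times higher than the same match in an unboosted field.)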
+ * + * @param {string} fieldName - The name of a field to index in all documents. + * @param {object} attributes - Optional attributes associated with this field. + * @param {number} [attributes.boost=1] - Boost applied to all terms within this field. + * @param {fieldExtractor} [attributes.extractor] - Function to extract a field from a document. + * @throws {RangeError} fieldName cannot contain unsupported characters '/' + */ +lunr.Builder.prototype.field = function (fieldName, attributes) { + if (/\//.test(fieldName)) { + throw new RangeError ("Field '" + fieldName + "' contains illegal character '/'") + } + + this._fields[fieldName] = attributes || {} +} + +/** + * A parameter to tune the amount of field length normalisation that is applied when + * calculating relevance scores. A value of 0 will completely disable any normalisation + * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b + * will be clamped to the range 0 - 1. + * + * @param {number} number - The value to set for this tuning parameter. + */ +lunr.Builder.prototype.b = function (number) { + if (number < 0) { + this._b = 0 + } else if (number > 1) { + this._b = 1 + } else { + this._b = number + } +} + +/** + * A parameter that controls the speed at which a rise in term frequency results in term + * frequency saturation. The default value is 1.2. Setting this to a higher value will give + * slower saturation levels, a lower value will result in quicker saturation. + * + * @param {number} number - The value to set for this tuning parameter. + */ +lunr.Builder.prototype.k1 = function (number) { + this._k1 = number +} + +/** + * Adds a document to the index. + * + * Before adding fields to the index the index should have been fully setup, with the document + * ref and all fields to index already having been specified. + * + * The document must have a field name as specified by the ref (by default this is 'id') and + * it should have all fields defined for indexing, though null or undefined values will not + * cause errors. + * + * Entire documents can be boosted at build time. Applying a boost to a document indicates that + * this document should rank higher in search results than other documents. + * + * @param {object} doc - The document to add to the index. + * @param {object} attributes - Optional attributes associated with this document. + * @param {number} [attributes.boost=1] - Boost applied to all terms within this document. + */ +lunr.Builder.prototype.add = function (doc, attributes) { + var docRef = doc[this._ref], + fields = Object.keys(this._fields) + + this._documents[docRef] = attributes || {} + this.documentCount += 1 + + for (var i = 0; i < fields.length; i++) { + var fieldName = fields[i], + extractor = this._fields[fieldName].extractor, + field = extractor ? 
extractor(doc) : doc[fieldName], + tokens = this.tokenizer(field, { + fields: [fieldName] + }), + terms = this.pipeline.run(tokens), + fieldRef = new lunr.FieldRef (docRef, fieldName), + fieldTerms = Object.create(null) + + this.fieldTermFrequencies[fieldRef] = fieldTerms + this.fieldLengths[fieldRef] = 0 + + // store the length of this field for this document + this.fieldLengths[fieldRef] += terms.length + + // calculate term frequencies for this field + for (var j = 0; j < terms.length; j++) { + var term = terms[j] + + if (fieldTerms[term] == undefined) { + fieldTerms[term] = 0 + } + + fieldTerms[term] += 1 + + // add to inverted index + // create an initial posting if one doesn't exist + if (this.invertedIndex[term] == undefined) { + var posting = Object.create(null) + posting["_index"] = this.termIndex + this.termIndex += 1 + + for (var k = 0; k < fields.length; k++) { + posting[fields[k]] = Object.create(null) + } + + this.invertedIndex[term] = posting + } + + // add an entry for this term/fieldName/docRef to the invertedIndex + if (this.invertedIndex[term][fieldName][docRef] == undefined) { + this.invertedIndex[term][fieldName][docRef] = Object.create(null) + } + + // store all whitelisted metadata about this token in the + // inverted index + for (var l = 0; l < this.metadataWhitelist.length; l++) { + var metadataKey = this.metadataWhitelist[l], + metadata = term.metadata[metadataKey] + + if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) { + this.invertedIndex[term][fieldName][docRef][metadataKey] = [] + } + + this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata) + } + } + + } +} + +/** + * Calculates the average document length for this index + * + * @private + */ +lunr.Builder.prototype.calculateAverageFieldLengths = function () { + + var fieldRefs = Object.keys(this.fieldLengths), + numberOfFields = fieldRefs.length, + accumulator = {}, + documentsWithField = {} + + for (var i = 0; i < numberOfFields; i++) { + var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), + field = fieldRef.fieldName + + documentsWithField[field] || (documentsWithField[field] = 0) + documentsWithField[field] += 1 + + accumulator[field] || (accumulator[field] = 0) + accumulator[field] += this.fieldLengths[fieldRef] + } + + var fields = Object.keys(this._fields) + + for (var i = 0; i < fields.length; i++) { + var fieldName = fields[i] + accumulator[fieldName] = accumulator[fieldName] / documentsWithField[fieldName] + } + + this.averageFieldLength = accumulator +} + +/** + * Builds a vector space model of every document using lunr.Vector + * + * @private + */ +lunr.Builder.prototype.createFieldVectors = function () { + var fieldVectors = {}, + fieldRefs = Object.keys(this.fieldTermFrequencies), + fieldRefsLength = fieldRefs.length, + termIdfCache = Object.create(null) + + for (var i = 0; i < fieldRefsLength; i++) { + var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), + fieldName = fieldRef.fieldName, + fieldLength = this.fieldLengths[fieldRef], + fieldVector = new lunr.Vector, + termFrequencies = this.fieldTermFrequencies[fieldRef], + terms = Object.keys(termFrequencies), + termsLength = terms.length + + + var fieldBoost = this._fields[fieldName].boost || 1, + docBoost = this._documents[fieldRef.docRef].boost || 1 + + for (var j = 0; j < termsLength; j++) { + var term = terms[j], + tf = termFrequencies[term], + termIndex = this.invertedIndex[term]._index, + idf, score, scoreWithPrecision + + if (termIdfCache[term] === undefined) { + idf = 
lunr.idf(this.invertedIndex[term], this.documentCount) + termIdfCache[term] = idf + } else { + idf = termIdfCache[term] + } + + score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (fieldLength / this.averageFieldLength[fieldName])) + tf) + score *= fieldBoost + score *= docBoost + scoreWithPrecision = Math.round(score * 1000) / 1000 + // Converts 1.23456789 to 1.234. + // Reducing the precision so that the vectors take up less + // space when serialised. Doing it now so that they behave + // the same before and after serialisation. Also, this is + // the fastest approach to reducing a number's precision in + // JavaScript. + + fieldVector.insert(termIndex, scoreWithPrecision) + } + + fieldVectors[fieldRef] = fieldVector + } + + this.fieldVectors = fieldVectors +} + +/** + * Creates a token set of all tokens in the index using lunr.TokenSet + * + * @private + */ +lunr.Builder.prototype.createTokenSet = function () { + this.tokenSet = lunr.TokenSet.fromArray( + Object.keys(this.invertedIndex).sort() + ) +} + +/** + * Builds the index, creating an instance of lunr.Index. + * + * This completes the indexing process and should only be called + * once all documents have been added to the index. + * + * @returns {lunr.Index} + */ +lunr.Builder.prototype.build = function () { + this.calculateAverageFieldLengths() + this.createFieldVectors() + this.createTokenSet() + + return new lunr.Index({ + invertedIndex: this.invertedIndex, + fieldVectors: this.fieldVectors, + tokenSet: this.tokenSet, + fields: Object.keys(this._fields), + pipeline: this.searchPipeline + }) +} + +/** + * Applies a plugin to the index builder. + * + * A plugin is a function that is called with the index builder as its context. + * Plugins can be used to customise or extend the behaviour of the index + * in some way. A plugin is just a function, that encapsulated the custom + * behaviour that should be applied when building the index. + * + * The plugin function will be called with the index builder as its argument, additional + * arguments can also be passed when calling use. The function will be called + * with the index builder as its context. + * + * @param {Function} plugin The plugin to apply. + */ +lunr.Builder.prototype.use = function (fn) { + var args = Array.prototype.slice.call(arguments, 1) + args.unshift(this) + fn.apply(this, args) +} +/** + * Contains and collects metadata about a matching document. + * A single instance of lunr.MatchData is returned as part of every + * lunr.Index~Result. + * + * @constructor + * @param {string} term - The term this match data is associated with + * @param {string} field - The field in which the term was found + * @param {object} metadata - The metadata recorded about this term in this field + * @property {object} metadata - A cloned collection of metadata associated with this document. + * @see {@link lunr.Index~Result} + */ +lunr.MatchData = function (term, field, metadata) { + var clonedMetadata = Object.create(null), + metadataKeys = Object.keys(metadata || {}) + + // Cloning the metadata to prevent the original + // being mutated during match data combination. 
+ // Metadata is kept in an array within the inverted + // index so cloning the data can be done with + // Array#slice + for (var i = 0; i < metadataKeys.length; i++) { + var key = metadataKeys[i] + clonedMetadata[key] = metadata[key].slice() + } + + this.metadata = Object.create(null) + + if (term !== undefined) { + this.metadata[term] = Object.create(null) + this.metadata[term][field] = clonedMetadata + } +} + +/** + * An instance of lunr.MatchData will be created for every term that matches a + * document. However only one instance is required in a lunr.Index~Result. This + * method combines metadata from another instance of lunr.MatchData with this + * objects metadata. + * + * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one. + * @see {@link lunr.Index~Result} + */ +lunr.MatchData.prototype.combine = function (otherMatchData) { + var terms = Object.keys(otherMatchData.metadata) + + for (var i = 0; i < terms.length; i++) { + var term = terms[i], + fields = Object.keys(otherMatchData.metadata[term]) + + if (this.metadata[term] == undefined) { + this.metadata[term] = Object.create(null) + } + + for (var j = 0; j < fields.length; j++) { + var field = fields[j], + keys = Object.keys(otherMatchData.metadata[term][field]) + + if (this.metadata[term][field] == undefined) { + this.metadata[term][field] = Object.create(null) + } + + for (var k = 0; k < keys.length; k++) { + var key = keys[k] + + if (this.metadata[term][field][key] == undefined) { + this.metadata[term][field][key] = otherMatchData.metadata[term][field][key] + } else { + this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key]) + } + + } + } + } +} + +/** + * Add metadata for a term/field pair to this instance of match data. + * + * @param {string} term - The term this match data is associated with + * @param {string} field - The field in which the term was found + * @param {object} metadata - The metadata recorded about this term in this field + */ +lunr.MatchData.prototype.add = function (term, field, metadata) { + if (!(term in this.metadata)) { + this.metadata[term] = Object.create(null) + this.metadata[term][field] = metadata + return + } + + if (!(field in this.metadata[term])) { + this.metadata[term][field] = metadata + return + } + + var metadataKeys = Object.keys(metadata) + + for (var i = 0; i < metadataKeys.length; i++) { + var key = metadataKeys[i] + + if (key in this.metadata[term][field]) { + this.metadata[term][field][key] = this.metadata[term][field][key].concat(metadata[key]) + } else { + this.metadata[term][field][key] = metadata[key] + } + } +} +/** + * A lunr.Query provides a programmatic way of defining queries to be performed + * against a {@link lunr.Index}. + * + * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method + * so the query object is pre-initialized with the right index fields. + * + * @constructor + * @property {lunr.Query~Clause[]} clauses - An array of query clauses. + * @property {string[]} allFields - An array of all available fields in a lunr.Index. + */ +lunr.Query = function (allFields) { + this.clauses = [] + this.allFields = allFields +} + +/** + * Constants for indicating what kind of automatic wildcard insertion will be used when constructing a query clause. + * + * This allows wildcards to be added to the beginning and end of a term without having to manually do any string + * concatenation. 
+ * + * The wildcard constants can be bitwise combined to select both leading and trailing wildcards. + * + * @constant + * @default + * @property {number} wildcard.NONE - The term will have no wildcards inserted, this is the default behaviour + * @property {number} wildcard.LEADING - Prepend the term with a wildcard, unless a leading wildcard already exists + * @property {number} wildcard.TRAILING - Append a wildcard to the term, unless a trailing wildcard already exists + * @see lunr.Query~Clause + * @see lunr.Query#clause + * @see lunr.Query#term + * @example query term with trailing wildcard + * query.term('foo', { wildcard: lunr.Query.wildcard.TRAILING }) + * @example query term with leading and trailing wildcard + * query.term('foo', { + * wildcard: lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING + * }) + */ + +lunr.Query.wildcard = new String ("*") +lunr.Query.wildcard.NONE = 0 +lunr.Query.wildcard.LEADING = 1 +lunr.Query.wildcard.TRAILING = 2 + +/** + * Constants for indicating what kind of presence a term must have in matching documents. + * + * @constant + * @enum {number} + * @see lunr.Query~Clause + * @see lunr.Query#clause + * @see lunr.Query#term + * @example query term with required presence + * query.term('foo', { presence: lunr.Query.presence.REQUIRED }) + */ +lunr.Query.presence = { + /** + * Term's presence in a document is optional, this is the default value. + */ + OPTIONAL: 1, + + /** + * Term's presence in a document is required, documents that do not contain + * this term will not be returned. + */ + REQUIRED: 2, + + /** + * Term's presence in a document is prohibited, documents that do contain + * this term will not be returned. + */ + PROHIBITED: 3 +} + +/** + * A single clause in a {@link lunr.Query} contains a term and details on how to + * match that term against a {@link lunr.Index}. + * + * @typedef {Object} lunr.Query~Clause + * @property {string[]} fields - The fields in an index this clause should be matched against. + * @property {number} [boost=1] - Any boost that should be applied when matching this clause. + * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be. + * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline. + * @property {number} [wildcard=lunr.Query.wildcard.NONE] - Whether the term should have wildcards appended or prepended. + * @property {number} [presence=lunr.Query.presence.OPTIONAL] - The terms presence in any matching documents. + */ + +/** + * Adds a {@link lunr.Query~Clause} to this query. + * + * Unless the clause contains the fields to be matched all fields will be matched. In addition + * a default boost of 1 is applied to the clause. + * + * @param {lunr.Query~Clause} clause - The clause to add to this query. 
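+ * @example Illustrative sketch (editorial addition): a boosted fuzzy clause
+ *  query.clause({ term: "hello", fields: ["title"], editDistance: 1, boost: 5 })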
+ * @see lunr.Query~Clause + * @returns {lunr.Query} + */ +lunr.Query.prototype.clause = function (clause) { + if (!('fields' in clause)) { + clause.fields = this.allFields + } + + if (!('boost' in clause)) { + clause.boost = 1 + } + + if (!('usePipeline' in clause)) { + clause.usePipeline = true + } + + if (!('wildcard' in clause)) { + clause.wildcard = lunr.Query.wildcard.NONE + } + + if ((clause.wildcard & lunr.Query.wildcard.LEADING) && (clause.term.charAt(0) != lunr.Query.wildcard)) { + clause.term = "*" + clause.term + } + + if ((clause.wildcard & lunr.Query.wildcard.TRAILING) && (clause.term.slice(-1) != lunr.Query.wildcard)) { + clause.term = "" + clause.term + "*" + } + + if (!('presence' in clause)) { + clause.presence = lunr.Query.presence.OPTIONAL + } + + this.clauses.push(clause) + + return this +} + +/** + * A negated query is one in which every clause has a presence of + * prohibited. These queries require some special processing to return + * the expected results. + * + * @returns boolean + */ +lunr.Query.prototype.isNegated = function () { + for (var i = 0; i < this.clauses.length; i++) { + if (this.clauses[i].presence != lunr.Query.presence.PROHIBITED) { + return false + } + } + + return true +} + +/** + * Adds a term to the current query, under the covers this will create a {@link lunr.Query~Clause} + * to the list of clauses that make up this query. + * + * The term is used as is, i.e. no tokenization will be performed by this method. Instead conversion + * to a token or token-like string should be done before calling this method. + * + * The term will be converted to a string by calling `toString`. Multiple terms can be passed as an + * array, each term in the array will share the same options. + * + * @param {object|object[]} term - The term(s) to add to the query. + * @param {object} [options] - Any additional properties to add to the query clause. 
+ * @returns {lunr.Query} + * @see lunr.Query#clause + * @see lunr.Query~Clause + * @example adding a single term to a query + * query.term("foo") + * @example adding a single term to a query and specifying search fields, term boost and automatic trailing wildcard + * query.term("foo", { + * fields: ["title"], + * boost: 10, + * wildcard: lunr.Query.wildcard.TRAILING + * }) + * @example using lunr.tokenizer to convert a string to tokens before using them as terms + * query.term(lunr.tokenizer("foo bar")) + */ +lunr.Query.prototype.term = function (term, options) { + if (Array.isArray(term)) { + term.forEach(function (t) { this.term(t, lunr.utils.clone(options)) }, this) + return this + } + + var clause = options || {} + clause.term = term.toString() + + this.clause(clause) + + return this +} +lunr.QueryParseError = function (message, start, end) { + this.name = "QueryParseError" + this.message = message + this.start = start + this.end = end +} + +lunr.QueryParseError.prototype = new Error +lunr.QueryLexer = function (str) { + this.lexemes = [] + this.str = str + this.length = str.length + this.pos = 0 + this.start = 0 + this.escapeCharPositions = [] +} + +lunr.QueryLexer.prototype.run = function () { + var state = lunr.QueryLexer.lexText + + while (state) { + state = state(this) + } +} + +lunr.QueryLexer.prototype.sliceString = function () { + var subSlices = [], + sliceStart = this.start, + sliceEnd = this.pos + + for (var i = 0; i < this.escapeCharPositions.length; i++) { + sliceEnd = this.escapeCharPositions[i] + subSlices.push(this.str.slice(sliceStart, sliceEnd)) + sliceStart = sliceEnd + 1 + } + + subSlices.push(this.str.slice(sliceStart, this.pos)) + this.escapeCharPositions.length = 0 + + return subSlices.join('') +} + +lunr.QueryLexer.prototype.emit = function (type) { + this.lexemes.push({ + type: type, + str: this.sliceString(), + start: this.start, + end: this.pos + }) + + this.start = this.pos +} + +lunr.QueryLexer.prototype.escapeCharacter = function () { + this.escapeCharPositions.push(this.pos - 1) + this.pos += 1 +} + +lunr.QueryLexer.prototype.next = function () { + if (this.pos >= this.length) { + return lunr.QueryLexer.EOS + } + + var char = this.str.charAt(this.pos) + this.pos += 1 + return char +} + +lunr.QueryLexer.prototype.width = function () { + return this.pos - this.start +} + +lunr.QueryLexer.prototype.ignore = function () { + if (this.start == this.pos) { + this.pos += 1 + } + + this.start = this.pos +} + +lunr.QueryLexer.prototype.backup = function () { + this.pos -= 1 +} + +lunr.QueryLexer.prototype.acceptDigitRun = function () { + var char, charCode + + do { + char = this.next() + charCode = char.charCodeAt(0) + } while (charCode > 47 && charCode < 58) + + if (char != lunr.QueryLexer.EOS) { + this.backup() + } +} + +lunr.QueryLexer.prototype.more = function () { + return this.pos < this.length +} + +lunr.QueryLexer.EOS = 'EOS' +lunr.QueryLexer.FIELD = 'FIELD' +lunr.QueryLexer.TERM = 'TERM' +lunr.QueryLexer.EDIT_DISTANCE = 'EDIT_DISTANCE' +lunr.QueryLexer.BOOST = 'BOOST' +lunr.QueryLexer.PRESENCE = 'PRESENCE' + +lunr.QueryLexer.lexField = function (lexer) { + lexer.backup() + lexer.emit(lunr.QueryLexer.FIELD) + lexer.ignore() + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexTerm = function (lexer) { + if (lexer.width() > 1) { + lexer.backup() + lexer.emit(lunr.QueryLexer.TERM) + } + + lexer.ignore() + + if (lexer.more()) { + return lunr.QueryLexer.lexText + } +} + +lunr.QueryLexer.lexEditDistance = function (lexer) { + lexer.ignore() + 
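+  // (Editorial note) ignore() above skips the '~' that selected this state;
+  // the digit run consumed next becomes the EDIT_DISTANCE lexeme's value.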
lexer.acceptDigitRun() + lexer.emit(lunr.QueryLexer.EDIT_DISTANCE) + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexBoost = function (lexer) { + lexer.ignore() + lexer.acceptDigitRun() + lexer.emit(lunr.QueryLexer.BOOST) + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexEOS = function (lexer) { + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } +} + +// This matches the separator used when tokenising fields +// within a document. These should match otherwise it is +// not possible to search for some tokens within a document. +// +// It is possible for the user to change the separator on the +// tokenizer so it _might_ clash with any other of the special +// characters already used within the search string, e.g. :. +// +// This means that it is possible to change the separator in +// such a way that makes some words unsearchable using a search +// string. +lunr.QueryLexer.termSeparator = lunr.tokenizer.separator + +lunr.QueryLexer.lexText = function (lexer) { + while (true) { + var char = lexer.next() + + if (char == lunr.QueryLexer.EOS) { + return lunr.QueryLexer.lexEOS + } + + // Escape character is '\' + if (char.charCodeAt(0) == 92) { + lexer.escapeCharacter() + continue + } + + if (char == ":") { + return lunr.QueryLexer.lexField + } + + if (char == "~") { + lexer.backup() + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } + return lunr.QueryLexer.lexEditDistance + } + + if (char == "^") { + lexer.backup() + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } + return lunr.QueryLexer.lexBoost + } + + // "+" indicates term presence is required + // checking for length to ensure that only + // leading "+" are considered + if (char == "+" && lexer.width() === 1) { + lexer.emit(lunr.QueryLexer.PRESENCE) + return lunr.QueryLexer.lexText + } + + // "-" indicates term presence is prohibited + // checking for length to ensure that only + // leading "-" are considered + if (char == "-" && lexer.width() === 1) { + lexer.emit(lunr.QueryLexer.PRESENCE) + return lunr.QueryLexer.lexText + } + + if (char.match(lunr.QueryLexer.termSeparator)) { + return lunr.QueryLexer.lexTerm + } + } +} + +lunr.QueryParser = function (str, query) { + this.lexer = new lunr.QueryLexer (str) + this.query = query + this.currentClause = {} + this.lexemeIdx = 0 +} + +lunr.QueryParser.prototype.parse = function () { + this.lexer.run() + this.lexemes = this.lexer.lexemes + + var state = lunr.QueryParser.parseClause + + while (state) { + state = state(this) + } + + return this.query +} + +lunr.QueryParser.prototype.peekLexeme = function () { + return this.lexemes[this.lexemeIdx] +} + +lunr.QueryParser.prototype.consumeLexeme = function () { + var lexeme = this.peekLexeme() + this.lexemeIdx += 1 + return lexeme +} + +lunr.QueryParser.prototype.nextClause = function () { + var completedClause = this.currentClause + this.query.clause(completedClause) + this.currentClause = {} +} + +lunr.QueryParser.parseClause = function (parser) { + var lexeme = parser.peekLexeme() + + if (lexeme == undefined) { + return + } + + switch (lexeme.type) { + case lunr.QueryLexer.PRESENCE: + return lunr.QueryParser.parsePresence + case lunr.QueryLexer.FIELD: + return lunr.QueryParser.parseField + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expected either a field or a term, found " + lexeme.type + + if (lexeme.str.length >= 1) { + errorMessage += " with value '" + lexeme.str + "'" + } + + throw new lunr.QueryParseError (errorMessage, 
lexeme.start, lexeme.end) + } +} + +lunr.QueryParser.parsePresence = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + switch (lexeme.str) { + case "-": + parser.currentClause.presence = lunr.Query.presence.PROHIBITED + break + case "+": + parser.currentClause.presence = lunr.Query.presence.REQUIRED + break + default: + var errorMessage = "unrecognised presence operator'" + lexeme.str + "'" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + var errorMessage = "expecting term or field, found nothing" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.FIELD: + return lunr.QueryParser.parseField + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expecting term or field, found '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseField = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + if (parser.query.allFields.indexOf(lexeme.str) == -1) { + var possibleFields = parser.query.allFields.map(function (f) { return "'" + f + "'" }).join(', '), + errorMessage = "unrecognised field '" + lexeme.str + "', possible fields: " + possibleFields + + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.fields = [lexeme.str] + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + var errorMessage = "expecting term, found nothing" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expecting term, found '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseTerm = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + parser.currentClause.term = lexeme.str.toLowerCase() + + if (lexeme.str.indexOf("*") != -1) { + parser.currentClause.usePipeline = false + } + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseEditDistance = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + var editDistance = parseInt(lexeme.str, 10) + + if (isNaN(editDistance)) { + var errorMessage = "edit distance must be numeric" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.editDistance = editDistance + + var nextLexeme = parser.peekLexeme() + + if 
(nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseBoost = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + var boost = parseInt(lexeme.str, 10) + + if (isNaN(boost)) { + var errorMessage = "boost must be numeric" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.boost = boost + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + + /** + * export the module via AMD, CommonJS or as a browser global + * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js + */ + ;(function (root, factory) { + if (typeof define === 'function' && define.amd) { + // AMD. Register as an anonymous module. + define(factory) + } else if (typeof exports === 'object') { + /** + * Node. Does not work with strict CommonJS, but + * only CommonJS-like environments that support module.exports, + * like Node. + */ + module.exports = factory() + } else { + // Browser globals (root is window) + root.lunr = factory() + } + }(this, function () { + /** + * Just return a value to define the module export. + * This example returns an object, but the module + * can return a function as the exported value. + */ + return lunr + })) +})(); diff --git a/search/main.js b/search/main.js new file mode 100644 index 00000000..a5e469d7 --- /dev/null +++ b/search/main.js @@ -0,0 +1,109 @@ +function getSearchTermFromLocation() { + var sPageURL = window.location.search.substring(1); + var sURLVariables = sPageURL.split('&'); + for (var i = 0; i < sURLVariables.length; i++) { + var sParameterName = sURLVariables[i].split('='); + if (sParameterName[0] == 'q') { + return decodeURIComponent(sParameterName[1].replace(/\+/g, '%20')); + } + } +} + +function joinUrl (base, path) { + if (path.substring(0, 1) === "/") { + // path starts with `/`. Thus it is absolute. 
+    return path;
+  }
+  if (base.substring(base.length-1) === "/") {
+    // base ends with `/`
+    return base + path;
+  }
+  return base + "/" + path;
+}
+
+function escapeHtml (value) {
+  return value.replace(/&/g, '&amp;')
+    .replace(/"/g, '&quot;')
+    .replace(/</g, '&lt;')
+    .replace(/>/g, '&gt;');
+}
+
+function formatResult (location, title, summary) {
+  return '<article><h3><a href="' + joinUrl(base_url, location) + '">' + escapeHtml(title) + '</a></h3><p>' + escapeHtml(summary) + '</p></article>';
+}
+
+function displayResults (results) {
+  var search_results = document.getElementById("mkdocs-search-results");
+  while (search_results.firstChild) {
+    search_results.removeChild(search_results.firstChild);
+  }
+  if (results.length > 0){
+    for (var i=0; i < results.length; i++){
+      var result = results[i];
+      var html = formatResult(result.location, result.title, result.summary);
+      search_results.insertAdjacentHTML('beforeend', html);
+    }
+  } else {
+    var noResultsText = search_results.getAttribute('data-no-results-text');
+    if (!noResultsText) {
+      noResultsText = "No results found";
+    }
+    search_results.insertAdjacentHTML('beforeend', '

<p>' + noResultsText + '</p>

'); + } +} + +function doSearch () { + var query = document.getElementById('mkdocs-search-query').value; + if (query.length > min_search_length) { + if (!window.Worker) { + displayResults(search(query)); + } else { + searchWorker.postMessage({query: query}); + } + } else { + // Clear results for short queries + displayResults([]); + } +} + +function initSearch () { + var search_input = document.getElementById('mkdocs-search-query'); + if (search_input) { + search_input.addEventListener("keyup", doSearch); + } + var term = getSearchTermFromLocation(); + if (term) { + search_input.value = term; + doSearch(); + } +} + +function onWorkerMessage (e) { + if (e.data.allowSearch) { + initSearch(); + } else if (e.data.results) { + var results = e.data.results; + displayResults(results); + } else if (e.data.config) { + min_search_length = e.data.config.min_search_length-1; + } +} + +if (!window.Worker) { + console.log('Web Worker API not supported'); + // load index in main thread + $.getScript(joinUrl(base_url, "search/worker.js")).done(function () { + console.log('Loaded worker'); + init(); + window.postMessage = function (msg) { + onWorkerMessage({data: msg}); + }; + }).fail(function (jqxhr, settings, exception) { + console.error('Could not load worker.js'); + }); +} else { + // Wrap search in a web worker + var searchWorker = new Worker(joinUrl(base_url, "search/worker.js")); + searchWorker.postMessage({init: true}); + searchWorker.onmessage = onWorkerMessage; +} diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 00000000..1920c31f --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"The OpenVoiceOS Project Documentation The OVOS project documentation is written and maintained by users just like you! These documents are your starting point for installing and using OpenVoiceOS software Note some sections may be incomplete or outdated Please open Issues and Pull Requests ! Just want to get started with OVOS? Check out our Quick Start Guide for help with installing an image, your first boot, and basic configuration. Getting Involved If this is your first experience with OpenVoiceOS, or you're not sure where to get started, say hi in OpenVoiceOS Chat and a team member would be happy to mentor you. Join the Discussions for questions and answers. Links Latest Release OpenVoiceOS Chat OpenVoiceOS Website OpenVoiceOS Reddit The below links are in the process of being deprecated. Mycroft Chat Mycroft Forum Mycroft Blog Mycroft Documentation Mycroft API Docs","title":"Introduction"},{"location":"#the-openvoiceos-project-documentation","text":"The OVOS project documentation is written and maintained by users just like you! These documents are your starting point for installing and using OpenVoiceOS software Note some sections may be incomplete or outdated Please open Issues and Pull Requests !","title":"The OpenVoiceOS Project Documentation"},{"location":"#just-want-to-get-started-with-ovos","text":"Check out our Quick Start Guide for help with installing an image, your first boot, and basic configuration.","title":"Just want to get started with OVOS?"},{"location":"#getting-involved","text":"If this is your first experience with OpenVoiceOS, or you're not sure where to get started, say hi in OpenVoiceOS Chat and a team member would be happy to mentor you. 
Join the Discussions for questions and answers.","title":"Getting Involved"},{"location":"#links","text":"Latest Release OpenVoiceOS Chat OpenVoiceOS Website OpenVoiceOS Reddit The below links are in the process of being deprecated. Mycroft Chat Mycroft Forum Mycroft Blog Mycroft Documentation Mycroft API Docs","title":"Links"},{"location":"about/","text":"About OpenVoiceOS Introducing OpenVoiceOS - The Free and Open-Source Personal Assistant and Smart Speaker. OpenVoiceOS is a new player in the smart speaker market, offering a powerful and flexible alternative to proprietary solutions like Amazon Echo and Google Home. With OpenVoiceOS, you have complete control over your personal data and the ability to customize and extend the functionality of your smart speaker. Built on open-source software, OpenVoiceOS is designed to provide users with a seamless and intuitive voice interface for controlling their smart home devices, playing music, setting reminders, and much more. The platform leverages cutting-edge technology, including machine learning and natural language processing, to deliver a highly responsive and accurate experience. In addition to its voice capabilities, OpenVoiceOS features a touch-screen GUI made using QT5 and the KF5 framework. The GUI provides an intuitive, user-friendly interface that allows you to access the full range of OpenVoiceOS features and functionality. Whether you prefer voice commands or a more traditional touch interface, OpenVoiceOS has you covered. One of the key advantages of OpenVoiceOS is its open-source nature, which means that anyone with the technical skills can contribute to the platform and help shape its future. Whether you're a software developer, data scientist, someone with a passion for technology, or just a casual user that would like to experience what OVOS has to offer, you can get involved and help build the next generation of personal assistants and smart speakers. With OpenVoiceOS, you have the option to run the platform fully offline, giving you complete control over your data and ensuring that your information is never shared with third parties. This makes OpenVoiceOS the perfect choice for anyone who values privacy and security. So if you're looking for a personal assistant and smart speaker that gives you the freedom and control you deserve, be sure to check out OpenVoiceOS today! Disclaimer : This post was written in collaboration with ChatGPT","title":"About"},{"location":"about/#about-openvoiceos","text":"Introducing OpenVoiceOS - The Free and Open-Source Personal Assistant and Smart Speaker. OpenVoiceOS is a new player in the smart speaker market, offering a powerful and flexible alternative to proprietary solutions like Amazon Echo and Google Home. With OpenVoiceOS, you have complete control over your personal data and the ability to customize and extend the functionality of your smart speaker. Built on open-source software, OpenVoiceOS is designed to provide users with a seamless and intuitive voice interface for controlling their smart home devices, playing music, setting reminders, and much more. The platform leverages cutting-edge technology, including machine learning and natural language processing, to deliver a highly responsive and accurate experience. In addition to its voice capabilities, OpenVoiceOS features a touch-screen GUI made using QT5 and the KF5 framework. The GUI provides an intuitive, user-friendly interface that allows you to access the full range of OpenVoiceOS features and functionality. 
Whether you prefer voice commands or a more traditional touch interface, OpenVoiceOS has you covered. One of the key advantages of OpenVoiceOS is its open-source nature, which means that anyone with the technical skills can contribute to the platform and help shape its future. Whether you're a software developer, data scientist, someone with a passion for technology, or just a casual user that would like to experience what OVOS has to offer, you can get involved and help build the next generation of personal assistants and smart speakers. With OpenVoiceOS, you have the option to run the platform fully offline, giving you complete control over your data and ensuring that your information is never shared with third parties. This makes OpenVoiceOS the perfect choice for anyone who values privacy and security. So if you're looking for a personal assistant and smart speaker that gives you the freedom and control you deserve, be sure to check out OpenVoiceOS today! Disclaimer : This post was written in collaboration with ChatGPT","title":"About OpenVoiceOS"},{"location":"architecture/","text":"OpenVoiceOS Architecture This section can be a bit technical, but is included for reference. It is not necessary to read this section for day-to-day usage of OVOS. OVOS is a collection of modular services that work together to provide a seamless, private, open source voice assistant. The suggested way to start OVOS is with systemd service files. Most of the images run these services as a normal user instead of system wide. If you get an error when using the user ( --user ) services, try running them as system services instead. NOTE The ovos.service is just a wrapper to control the other OVOS services. It is used here as an example showing --user vs system . user service systemctl --user status ovos.service system service systemctl status ovos.service ovos-core ovos-core This service provides the main instance for OVOS and handles all of the skill loading and intent processing. All user queries are handled by the skills service. You can think of it as OVOS's brain. typical systemd command systemctl --user status ovos-skills systemctl --user restart ovos-skills Technical Docs on Skills Messagebus ovos-messagebus C++ version NOTE This is an alpha version and mostly Proof of Concept . It has been known to crash often. ovos-bus-service You can think of the bus service as OVOS's nervous system. The ovos-bus is considered an internal and private websocket; external clients should not connect directly to it. Please do not expose the messagebus to the outside world! Technical docs for messagebus typical systemd command systemctl --user start ovos-messagebus Listener ovos-dinkum-listener The listener service is used to detect your voice. It controls the WakeWord, STT (Speech To Text), and VAD (Voice Activity Detection) Plugins. You can modify microphone settings and enable additional features under the listener section of your mycroft.conf file, such as wake word / utterance recording / uploading. The ovos-dinkum-listener is the new OVOS listener that replaced the original ovos-listener and has many more options. Others still work, but are not recommended. Technical Listener Docs typical systemd command systemctl --user start ovos-dinkum-listener STT Plugins This is where speech is transcribed into text and forwarded to the skills service. Two STT plugins may be loaded at once. If the primary plugin fails, the second will be used. 
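As a concrete sketch of what that might look like in the user configuration (this assumes the stt section takes module and fallback_module keys, mirroring the tts section shown later in these docs, and uses ovos-stt-plugin-vosk purely as an example of an offline plugin you would need to install separately):

```bash
# Sketch only: pair the public-server STT plugin with an offline fallback.
# "fallback_module" is assumed here to mirror the tts section documented later;
# ovos-stt-plugin-vosk is just an example offline plugin and must be installed first.
# WARNING: this overwrites any existing user config; merge by hand if you already have one.
mkdir -p ~/.config/mycroft
cat > ~/.config/mycroft/mycroft.conf <<'EOF'
{
  "stt": {
    "module": "ovos-stt-plugin-server",
    "fallback_module": "ovos-stt-plugin-vosk"
  }
}
EOF
```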
Having a lower accuracy offline model as fallback will account for internet outages, which ensures your device never becomes fully unusable. Several different STT (Speech To Text) plugins are available for use. OVOS provides a number of public services using the ovos-stt-plugin-server plugin which are hosted by OVOS trusted members (Members hosting services) . No additional configuration is required. OVOS Supported STT Plugins Changing STT Plugin Hotwords OVOS uses \"Hotwords\" to trigger any number of actions. You can load any number of hotwords in parallel and trigger different actions when they are detected. Each Hotword can do one or more of the following: trigger listening, also called a Wake word play a sound emit a bus event take ovos-core out of sleep mode, also called a wakeup_word or standup_word take ovos-core out of recording mode, also called a stop_word Setting and adding Hotwords WakeWord Plugins A Wake word is what OVOS uses to activate the device. By default, Hey Mycroft is used by OVOS. Like other things in the OVOS ecosystem, this is configurable. Wake word plugins Changing the Wake word VAD Plugins VAD Plugins detect when you are actually speaking to the device, and when you quit talking. Most of the time, this will not need to be changed. If you are having trouble with your microphone hearing you, or stopping listening when you are done talking, you might change this and see if it helps your issue. Supported VAD Plugins Changing VAD Plugin Audio ovos-audio The audio service handles the output of all audio. It is how you hear the voice responses, music, or any other sound from your OVOS device. Configuring Audio TTS Plugins TTS (Text To Speech) is the verbal response from OVOS. There are several plugins available that support different engines. Multiple languages and voices are available to use. OVOS provides a set of public TTS servers hosted by OVOS trusted members (Members hosting services) . It uses the ovos-tts-server-plugin , and no additional configuration is needed. Supported TTS Plugins Changing TTS Plugin PHAL ovos-PHAL PHAL stands for Plugin-based Hardware Abstraction Layer. It is used to allow different hardware devices access to the OVOS software stack. It completely replaces the concept of hardcoded \"enclosure\" from mycroft-core . Any number of plugins providing functionality can be loaded and validated at runtime. Plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the mycroft mark 1 plugin Supported PHAL Plugins PHAL Plugins Admin PHAL Similar to regular PHAL, but is used when sudo or a privileged user is needed. Be extremely careful when adding admin-phal plugins . They give OVOS administrative privileges, or root privileges to your operating system. Admin PHAL GUI OVOS uses the standard mycroft-gui framework; you can find the official documentation here The GUI service provides a websocket for GUI clients to connect to; it is responsible for implementing the GUI protocol under ovos-core . You can find in-depth documentation here Other OVOS services OVOS provides a number of helper scripts to allow the user to control the device at the command line. ovos-say-to This provides a way to communicate an intent to ovos. ovos-say-to \"what time is it\" ovos-listen This opens the microphone for listening, just as if you had said the WakeWord. It is expecting a verbal command. 
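For example, to trigger listening by hand from a shell:

```bash
# Opens the microphone just as if the wake word had been detected
ovos-listen
```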
Continue by speaking to your device \"what time is it\" ovos-speak This takes your command and runs it through the TTS (Text To Speech) engine and speaks what was provided. ovos-speak \"hello world\" will output \"hello world\" in the configured TTS voice ovos-config is a command line interface that allows you to view and set configuration values.","title":"OpenVoiceOS Architecture"},{"location":"architecture/#openvoiceos-architecture","text":"This section can be a bit technical, but is included for reference. It is not necessary to read this section for day-to-day usage of OVOS. OVOS is a collection of modular services that work together to provide a seamless, private, open source voice assistant. The suggested way to start OVOS is with systemd service files. Most of the images run these services as a normal user instead of system wide. If you get an error when using the user ( --user ) services, try running them as system services instead. NOTE The ovos.service is just a wrapper to control the other OVOS services. It is used here as an example showing --user vs system . user service systemctl --user status ovos.service system service systemctl status ovos.service","title":"OpenVoiceOS Architecture"},{"location":"architecture/#ovos-core","text":"ovos-core This service provides the main instance for OVOS and handles all of the skill loading and intent processing. All user queries are handled by the skills service. You can think of it as OVOS's brain. typical systemd command systemctl --user status ovos-skills systemctl --user restart ovos-skills Technical Docs on Skills","title":"ovos-core"},{"location":"architecture/#messagebus","text":"ovos-messagebus C++ version NOTE This is an alpha version and mostly Proof of Concept . It has been known to crash often. ovos-bus-service You can think of the bus service as OVOS's nervous system. The ovos-bus is considered an internal and private websocket; external clients should not connect directly to it. Please do not expose the messagebus to the outside world! Technical docs for messagebus typical systemd command systemctl --user start ovos-messagebus","title":"Messagebus"},{"location":"architecture/#listener","text":"ovos-dinkum-listener The listener service is used to detect your voice. It controls the WakeWord, STT (Speech To Text), and VAD (Voice Activity Detection) Plugins. You can modify microphone settings and enable additional features under the listener section of your mycroft.conf file, such as wake word / utterance recording / uploading. The ovos-dinkum-listener is the new OVOS listener that replaced the original ovos-listener and has many more options. Others still work, but are not recommended. Technical Listener Docs typical systemd command systemctl --user start ovos-dinkum-listener","title":"Listener"},{"location":"architecture/#stt-plugins","text":"This is where speech is transcribed into text and forwarded to the skills service. Two STT plugins may be loaded at once. If the primary plugin fails, the second will be used. Having a lower accuracy offline model as fallback will account for internet outages, which ensures your device never becomes fully unusable. Several different STT (Speech To Text) plugins are available for use. OVOS provides a number of public services using the ovos-stt-plugin-server plugin which are hosted by OVOS trusted members (Members hosting services) . No additional configuration is required. 
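To see which STT plugins are actually in use on your device, the ovos-config tool covered later in these docs can print the merged configuration; the --section pattern is demonstrated there for tts, and the same should apply to the stt section:

```bash
# Show the merged stt section of the configuration stack
ovos-config show --section stt
```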
OVOS Supported STT Plugins Changing STT Plugin","title":"STT Plugins"},{"location":"architecture/#hotwords","text":"OVOS uses \"Hotwords\" to trigger any number of actions. You can load any number of hotwords in parallel and trigger different actions when they are detected. Each Hotword can do one or more of the following: trigger listening, also called a Wake word play a sound emit a bus event take ovos-core out of sleep mode, also called a wakeup_word or standup_word take ovos-core out of recording mode, also called a stop_word Setting and adding Hotwords","title":"Hotwords"},{"location":"architecture/#wakeword-plugins","text":"A Wake word is what OVOS uses to activate the device. By default, Hey Mycroft is used by OVOS. Like other things in the OVOS ecosystem, this is configurable. Wake word plugins Changing the Wake word","title":"WakeWord Plugins"},{"location":"architecture/#vad-plugins","text":"VAD Plugins detect when you are actually speaking to the device, and when you quit talking. Most of the time, this will not need to be changed. If you are having trouble with your microphone hearing you, or stopping listening when you are done talking, you might change this and see if it helps your issue. Supported VAD Plugins Changing VAD Plugin","title":"VAD Plugins"},{"location":"architecture/#audio","text":"ovos-audio The audio service handles the output of all audio. It is how you hear the voice responses, music, or any other sound from your OVOS device. Configuring Audio","title":"Audio"},{"location":"architecture/#tts-plugins","text":"TTS (Text To Speech) is the verbal response from OVOS. There are several plugins available that support different engines. Multiple languages and voices are available to use. OVOS provides a set of public TTS servers hosted by OVOS trusted members (Members hosting services) . It uses the ovos-tts-server-plugin , and no additional configuration is needed. Supported TTS Plugins Changing TTS Plugin","title":"TTS Plugins"},{"location":"architecture/#phal","text":"ovos-PHAL PHAL stands for Plugin-based Hardware Abstraction Layer. It is used to allow different hardware devices access to the OVOS software stack. It completely replaces the concept of hardcoded \"enclosure\" from mycroft-core . Any number of plugins providing functionality can be loaded and validated at runtime. Plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the mycroft mark 1 plugin Supported PHAL Plugins PHAL Plugins","title":"PHAL"},{"location":"architecture/#admin-phal","text":"Similar to regular PHAL, but is used when sudo or a privileged user is needed. Be extremely careful when adding admin-phal plugins . They give OVOS administrative privileges, or root privileges to your operating system. Admin PHAL","title":"Admin PHAL"},{"location":"architecture/#gui","text":"OVOS uses the standard mycroft-gui framework; you can find the official documentation here The GUI service provides a websocket for GUI clients to connect to; it is responsible for implementing the GUI protocol under ovos-core . You can find in-depth documentation here","title":"GUI"},{"location":"architecture/#other-ovos-services","text":"OVOS provides a number of helper scripts to allow the user to control the device at the command line. ovos-say-to This provides a way to communicate an intent to ovos. ovos-say-to \"what time is it\" ovos-listen This opens the microphone for listening, just as if you had said the WakeWord. It is expecting a verbal command. 
Continue by speaking to your device \"what time is it\" ovos-speak This takes your command and runs it through the TTS (Text To Speech) engine and speaks what was provided. ovos-speak \"hello world\" will output \"hello world\" in the configured TTS voice ovos-config is a command line interface that allows you to view and set configuration values.","title":"Other OVOS services"},{"location":"config/","text":"OVOS Configuration When you first start OVOS, there should not be any configuration needed to have a working device. NOTE To continue with the examples, you will need access to a shell on your device. This can be achieved with SSH. Connect to your device with the command ssh ovos@ and enter the password ovos . This password is EXTREMELY insecure and should be changed or use ssh keys for logging in. Securing SSH This section will explain how the configuration works, and how to do basic configuration changes. The rest of this section will assume you have shell access to your device. How it works OVOS will load configuration files from several locations and combine them into a single json file that is used throughout the software. The file that is loaded last, is what the user should use to modify any configuration values. This is usually located at `~/.config/mycroft/mycroft.conf {ovos-config-path}/mycroft.conf usually in /site-packages/ovos_config/mycroft.conf This is the default configuration file distributed with ovos-core os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/mycroft/mycroft.conf This is the default configuration file used by images, and may change specific values to correspond with how the image works. os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH /mycroft/web_cache.json ~/.mycroft/mycroft.conf (Deprecated) XDG_CONFIG_DIRS + /mycroft/mycroft.conf /etc/xdg/mycroft/mycroft.conf XDG_CONFIG_HOME (default ~/.config) + /mycroft/mycroft.conf This is the file that you should use to modify the configuration. When the configuration loader starts, it looks in these locations in this order, and loads ALL configurations. Keys that exist in multiple configuration files will be overridden by the last file to contain the value. This process results in a minimal amount being written for a specific device and user, without modifying default distribution files. Advanced Configuration Docs Included Tools OVOS provides a command line tool ovos-config for viewing and changing configuration values. Values can also be set manually in config files instead of using the CLI tool. These methods will be used later in the How To section of these Docs.","title":"Introduction"},{"location":"config/#ovos-configuration","text":"When you first start OVOS, there should not be any configuration needed to have a working device. NOTE To continue with the examples, you will need access to a shell on your device. This can be achieved with SSH. Connect to your device with the command ssh ovos@ and enter the password ovos . This password is EXTREMELY insecure and should be changed or use ssh keys for logging in. Securing SSH This section will explain how the configuration works, and how to do basic configuration changes. The rest of this section will assume you have shell access to your device.","title":"OVOS Configuration"},{"location":"config/#how-it-works","text":"OVOS will load configuration files from several locations and combine them into a single json file that is used throughout the software. The file that is loaded last, is what the user should use to modify any configuration values. 
This is usually located at `~/.config/mycroft/mycroft.conf {ovos-config-path}/mycroft.conf usually in /site-packages/ovos_config/mycroft.conf This is the default configuration file distributed with ovos-core os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/mycroft/mycroft.conf This is the default configuration file used by images, and may change specific values to correspond with how the image works. os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH /mycroft/web_cache.json ~/.mycroft/mycroft.conf (Deprecated) XDG_CONFIG_DIRS + /mycroft/mycroft.conf /etc/xdg/mycroft/mycroft.conf XDG_CONFIG_HOME (default ~/.config) + /mycroft/mycroft.conf This is the file that you should use to modify the configuration. When the configuration loader starts, it looks in these locations in this order, and loads ALL configurations. Keys that exist in multiple configuration files will be overridden by the last file to contain the value. This process results in a minimal amount being written for a specific device and user, without modifying default distribution files. Advanced Configuration Docs","title":"How it works"},{"location":"config/#included-tools","text":"OVOS provides a command line tool ovos-config for viewing and changing configuration values. Values can also be set manually in config files instead of using the CLI tool. These methods will be used later in the How To section of these Docs.","title":"Included Tools"},{"location":"config_manual_config/","text":"Configuration - Manually Change Files User configuration should be set in the XDG_CONFIG_HOME file. Usually located at ~/.config/mycroft/mycroft.conf . This file may or may not exist by default. If it does NOT exist, create it. mkdir -p ~/.config/mycroft touch ~/.config/mycroft/mycroft.conf Now you can edit that file. To continue with the previous example, we will change the host of the TTS server, then add the value manually to the user's mycroft.conf file. Open the file for editing. It is not uncommon for this file to exist, but be empty. nano ~/.config/mycroft/mycroft.conf Enter the following into the file. NOTE this file must be valid json or yaml format. OVOS knows how to read both { \"tts\": { \"module\": \"ovos-tts-plugin-server\", \"ovos-tts-plugin-server\": { \"host\": \"https://pipertts.ziggyai.online\" } } } You can check the formatting of your file with the jq command. cat ~/.config/mycroft/mycroft.conf | jq If your distribution does not include jq it can be installed with the command sudo apt install jq or the equivalent for your distro. If there are no errors, it will output the complete file. On error, it will output the line where the error is. You can use an online JSON checker if you want also. online json checker online yaml checker","title":"Manualy Editing Files"},{"location":"config_manual_config/#configuration-manually-change-files","text":"User configuration should be set in the XDG_CONFIG_HOME file. Usually located at ~/.config/mycroft/mycroft.conf . This file may or may not exist by default. If it does NOT exist, create it. mkdir -p ~/.config/mycroft touch ~/.config/mycroft/mycroft.conf Now you can edit that file. To continue with the previous example, we will change the host of the TTS server, then add the value manually to the user's mycroft.conf file. Open the file for editing. It is not uncommon for this file to exist, but be empty. nano ~/.config/mycroft/mycroft.conf Enter the following into the file. NOTE this file must be valid json or yaml format. 
OVOS knows how to read both { \"tts\": { \"module\": \"ovos-tts-plugin-server\", \"ovos-tts-plugin-server\": { \"host\": \"https://pipertts.ziggyai.online\" } } } You can check the formatting of your file with the jq command. cat ~/.config/mycroft/mycroft.conf | jq If your distribution does not include jq it can be installed with the command sudo apt install jq or the equivalent for your distro. If there are no errors, it will output the complete file. On error, it will output the line where the error is. You can use an online JSON checker if you want also. online json checker online yaml checker","title":"Configuration - Manually Change Files"},{"location":"config_ovos_config/","text":"Configuration - ovos-config OVOS provides a small command line tool, ovos-config , for viewing and setting configuration values in the OVOS ecosystem. NOTE The CLI of this script is new, and may contain some bugs. Please report issues to the ovos-config github page. Viewing Configuration Settings ovos-config --help will show a list of commands to use with this tool. ovos-config show will display a table representing all of the current configuration values. To get the values of a specific section: ovos-config show --section tts will show just the \"tts\" section of the configuration ovos-config show --section tts \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513 \u2503 Configuration keys (Configuration: Joined, Section: tts) \u2503 Value \u2503 \u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529 \u2502 pulse_duck \u2502 False \u2502 \u2502 module \u2502 ovos-tts-plugin-server \u2502 \u2502 fallback_module \u2502 ovos-tts-plugin-mimic \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 \u2502 ovos-tts-plugin-server \u2502 \u2502 \u2502 host \u2502 \u2502 
\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 Changing Configuration Values We will continue with the example above, TTS. Change the host of the TTS server: ovos-config set -k tts will show a table of values that can be edited set -k tts \u250f\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513 \u2503 # \u2503 Path \u2503 Value \u2503 \u2521\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529 \u2502 0 \u2502 tts/pulse_duck \u2502 False \u2502 \u2502 1 \u2502 tts/ovos-tts-plugin-server/host \u2502 \u2502 \u2514\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 Which value should be changed? (2='Exit') [0/1/2]: Enter 1 to change the value of tts/ovos-tts-plugin-server/host Please enter the value to be stored (type: str) : Enter the value for the tts server that you want ovos to use. 
https://pipertts.ziggyai.online Use ovos-config show --section tts to check your results ovos-config show --section tts \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513 \u2503 Configuration keys (Configuration: Joined, Section: tts) \u2503 Value \u2503 \u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529 \u2502 pulse_duck \u2502 False \u2502 \u2502 module \u2502 ovos-tts-plugin-server \u2502 \u2502 fallback_module \u2502 ovos-tts-plugin-mimic \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 \u2502 ovos-tts-plugin-server \u2502 \u2502 \u2502 host \u2502 https://pipertts.ziggyai.online \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 This can be done for any of the values in the configuration stack.","title":"Using ovos-config"},{"location":"config_ovos_config/#configuration-ovos-config","text":"OVOS provides a small command line tool, ovos-config , for viewing and setting configuration values in the OVOS ecosystem. NOTE The CLI of this script is new, and may contain some bugs. Please report issues to the ovos-config github page.","title":"Configuration - ovos-config"},{"location":"config_ovos_config/#viewing-configuration-settings","text":"ovos-config --help will show a list of commands to use with this tool. ovos-config show will display a table representing all of the current configuration values. 
To get the values of a specific section: ovos-config show --section tts will show just the \"tts\" section of the configuration ovos-config show --section tts \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513 \u2503 Configuration keys (Configuration: Joined, Section: tts) \u2503 Value \u2503 \u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529 \u2502 pulse_duck \u2502 False \u2502 \u2502 module \u2502 ovos-tts-plugin-server \u2502 \u2502 fallback_module \u2502 ovos-tts-plugin-mimic \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 \u2502 ovos-tts-plugin-server \u2502 \u2502 \u2502 host \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518","title":"Viewing Configuration Settings"},{"location":"config_ovos_config/#changing-configuration-values","text":"We will continue with the example above, TTS. 
Change the host of the TTS server: ovos-config set -k tts will show a table of values that can be edited set -k tts \u250f\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513 \u2503 # \u2503 Path \u2503 Value \u2503 \u2521\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529 \u2502 0 \u2502 tts/pulse_duck \u2502 False \u2502 \u2502 1 \u2502 tts/ovos-tts-plugin-server/host \u2502 \u2502 \u2514\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 Which value should be changed? (2='Exit') [0/1/2]: Enter 1 to change the value of tts/ovos-tts-plugin-server/host Please enter the value to be stored (type: str) : Enter the value for the tts server that you want ovos to use. https://pipertts.ziggyai.online Use ovos-config show --section tts to check your results ovos-config show --section tts \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513 \u2503 Configuration keys (Configuration: Joined, Section: tts) \u2503 Value \u2503 \u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529 \u2502 pulse_duck \u2502 False \u2502 \u2502 module \u2502 ovos-tts-plugin-server \u2502 \u2502 fallback_module \u2502 ovos-tts-plugin-mimic \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 \u2502 ovos-tts-plugin-server \u2502 \u2502 \u2502 host \u2502 
https://pipertts.ziggyai.online \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 This can be done for any of the values in the configuration stack.","title":"Changing Configuration Values"},{"location":"faq/","text":"Frequently Asked Questions What is OVOS? How did OVOS start? Who is behind OVOS? What is the relationship between OVOS and Mycroft? How does OVOS make money? Where is your website? Does OVOS have any default skills? Does OVOS work offline? Does OVOS depend on any servers? How many voices does OVOS support? Can I change the wake word? Can OVOS run without a wake word? How fast can OVOS respond? How do I run OVOS behind a proxy? What is OVOS? OVOS aims to be a full operating system that is free and open source. The Open Voice Operating System consists of OVOS packages (programs specifically released by the OVOS Project) as well as free software released by third parties such as skills and plugins. OVOS makes it possible to voice enable technology without software that would trample your freedom. Historically OVOS has been used to refer to several things, the team, the github organization and the reference buildroot implementation How did OVOS start? OVOS started as MycroftOS, you can find the original mycroft forums thread here . Over time more mycroft community members joined the project, and it was renamed to OpenVoiceOS to avoid trademark issues. Initially OVOS was focused on bundling mycroft-core and on creating only companion software, but due to contributions not being accepted upstream we now maintain an enhanced reference fork of mycroft-core with extra functionality, while keeping all companion software mycroft-core (dev branch) compatible You can think of OVOS as the unsanctioned \"Mycroft Community Edition\" Who is behind OVOS? Everyone in the OVOS team is a long term mycroft community member and has experience working with the mycroft code base Meet the team: Peter Steenbergen - mycroft community developer since 2018, founder of MycroftOS project Casimiro Ferreira - mycroft community developer since 2017, co-founder of HelloChatterbox Aditya Mehra - mycroft community developer since 2016, mycroft-gui lead developer Daniel McKnight - community developer since 2017, NeonGecko lead developer Parker Seaman - mycroft enthusiast since 2018 Chance - mycroft community developer since 2019, ex-maintainer of lingua_franca currently taking a break, he will be back! What is the relationship between OVOS and Mycroft? Both projects are fully independent, initially OVOS was focused on wrapping mycroft-core with a minimal OS, but as both projects matured, ovos-core was created to include extra functionality and make OVOS development faster and more efficient. OVOS has been committed to keeping our components compatible with Mycroft and many of our changes are submitted to Mycroft to include in their projects at their discretion. How does OVOS make money? 
We don't, OVOS is a volunteer project with no source of income or business model However, we want to acknowledge Blue Systems and NeonGeckoCom , a lot of the work in OVOS is done on paid company time from these projects Where is your website? website - openvoiceos.com chat - matrix forums - github discussions Does OVOS have any default skills? We provide essential skills and those are bundled in all our reference images. ovos-core does not manage your skills, unlike mycroft it won't install or update anything by itself. if you installed ovos-core manually you also need to install skills manually Does OVOS work offline? By default ovos-core does not require a backend internet server to operate. Some skills can be accessed (via command line) entirely offline. The default speech-to-text (STT) engine currently requires an internet connection, though some self-hosted, offline options are available. Individual skills and plugins may require internet, and most of the time you will want to use those. Does OVOS depend on any servers? no! you can integrate ovos-core with selene or personal backend but that is fully optional we provide some microservices for some of our skills, but you can also use your own api keys How many voices does OVOS support? hundreds! nearly everything in OVOS is modular and configurable, that includes Text To Speech. Voices depend on language and the plugins you have installed, you can find a non-exhaustive list of plugins in the ovos plugins awesome list Can I change the wake word? yes, ovos-core supports several wake word plugins . Additionally, OVOS allows you to load any number of hot words in parallel and trigger different actions when they are detected each hotword can do one or more of the following: trigger listening, also called a wake_word play a sound emit a bus event take ovos-core out of sleep mode, also called a wakeup_word or standup_word take ovos-core out of recording mode, also called a stop_word Can OVOS run without a wake word? mostly yes, depending on exactly what you mean by this question OVOS can run without any wake word configured, in this case you will only be able to interact via CLI or button press, best for privacy, not so great for a smart speaker ovos-core also provides a couple experimental settings, if you enable continuous listening then VAD will be used to detect speech and no wake word is needed, just speak to mycroft and it should answer! However, this setting is experimental for a reason, you may find that mycroft answers your TV or even tries to answer itself if your hardware does not have AEC Another experimental setting is hybrid mode, with hybrid mode you can ask follow-up questions, up to 45 seconds after the last mycroft interaction, if you do not interact with mycroft it will go back to waiting for a wake word How fast can OVOS respond? By default, to answer a request: Detects the wake word Records 3 - 10 seconds of audio Transcribes the audio and returns the text transcription , either locally or remotely, depending on the speech-to-text (STT) engine in use Parses the text to understand the intent Sends the text to the intent handler with the highest confidence Allows the Skill to perform some action and provide the text to be spoken Synthesizes audio from the given text, either locally or remotely, depending on the text-to-speech (TTS) engine in use Plays the synthesized spoken audio. 
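You can exercise the later stages of this pipeline from a shell with the helper scripts described in the architecture section, which is a handy way to get a feel for where time is spent:

```bash
# Skip wake word detection and STT (the first three steps) and go straight to intent parsing
ovos-say-to "what time is it"

# Exercise only the last two steps: TTS synthesis and audio playback
ovos-speak "hello world"
```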
Through this process there are a number of factors that can affect the perceived speed of responses: System resources - more processing power and memory never hurts! Network latency - depending on configured plugins, network latency and connection speed can play a significant role in slowing down response times. Streaming STT - we have been experimenting with the use of streaming services. This transcribes audio as it's received rather than waiting for the entire utterance to be finished and sending the resulting audio file to a server to be processed in its entirety. It is possible to switch to a streaming STT service. See STT Plugins for a list of options available. Dialog structure - a long sentence will always take more time to synthesize than a short one. Skill developers can help provide quicker response times by considering the structure of their dialog and breaking that dialog up. TTS Caching - synthesized audio is cached meaning common recently generated phrases don't need to be generated, they can be returned immediately. How do I run OVOS behind a proxy? Many schools, universities and workplaces run a proxy on their network. If you need to type in a username and password to access the external internet, then you are likely behind a proxy . If you plan to use OVOS behind a proxy, then you will need to do an additional configuration step. NOTE: In order to complete this step, you will need to know the hostname and port for the proxy server. Your network administrator will be able to provide these details. Your network administrator may want information on what type of traffic OVOS will be using. We use https traffic on port 443 , primarily for accessing ReST-based APIs. Using OVOS behind a proxy without authentication If you are using OVOS behind a proxy without authentication, add the following environment variables, changing the proxy_hostname.com and proxy_port to the values for your network. These commands are executed from the Linux command line interface (CLI). $ export http_proxy=http://proxy_hostname.com:proxy_port $ export https_proxy=http://proxy_hostname.com:proxy_port $ export no_proxy=\"localhost,127.0.0.1,localaddress,.localdomain.com,0.0.0.0,::1\" Using OVOS behind an authenticated proxy If you are behind a proxy which requires authentication, add the following environment variables, changing the proxy_hostname.com and proxy_port to the values for your network. These commands are executed from the Linux command line interface (CLI). $ export http_proxy=http://user:password@proxy_hostname.com:proxy_port $ export https_proxy=http://user:password@proxy_hostname.com:proxy_port $ export no_proxy=\"localhost,127.0.0.1,localaddress,.localdomain.com,0.0.0.0,::1\"","title":"FAQ"},{"location":"faq/#frequently-asked-questions","text":"What is OVOS? How did OVOS start? Who is behind OVOS? What is the relationship between OVOS and Mycroft? How does OVOS make money? Where is your website? Does OVOS have any default skills? Does OVOS work offline? Does OVOS depend on any servers? How many voices does OVOS support? Can I change the wake word? Can OVOS run without a wake word? How fast can OVOS respond? How do I run OVOS behind a proxy?","title":"Frequently Asked Questions"},{"location":"faq/#what-is-ovos","text":"OVOS aims to be a full operating system that is free and open source. The Open Voice Operating System consists of OVOS packages (programs specifically released by the OVOS Project) as well as free software released by third parties such as skills and plugins. 
OVOS makes it possible to voice enable technology without software that would trample your freedom. Historically OVOS has been used to refer to several things, the team, the github organization and the reference buildroot implementation","title":"What is OVOS?"},{"location":"faq/#how-did-ovos-start","text":"OVOS started as MycroftOS, you can find the original mycroft forums thread here . Over time more mycroft community members joined the project, and it was renamed to OpenVoiceOS to avoid trademark issues. Initially OVOS was focused on bundling mycroft-core and on creating only companion software, but due to contributions not being accepted upstream we now maintain an enhanced reference fork of mycroft-core with extra functionality, while keeping all companion software mycroft-core (dev branch) compatible You can think of OVOS as the unsanctioned \"Mycroft Community Edition\"","title":"How did OVOS start?"},{"location":"faq/#who-is-behind-ovos","text":"Everyone in the OVOS team is a long term mycroft community member and has experience working with the mycroft code base Meet the team: Peter Steenbergen - mycroft community developer since 2018, founder of MycroftOS project Casimiro Ferreira - mycroft community developer since 2017, co-founder of HelloChatterbox Aditya Mehra - mycroft community developer since 2016, mycroft-gui lead developer Daniel McKnight - community developer since 2017, NeonGecko lead developer Parker Seaman - mycroft enthusiast since 2018 Chance - mycroft community developer since 2019, ex-maintainer of lingua_franca currently taking a break, he will be back!","title":"Who is behind OVOS?"},{"location":"faq/#what-is-the-relationship-between-ovos-and-mycroft","text":"Both projects are fully independent, initially OVOS was focused on wrapping mycroft-core with a minimal OS, but as both projects matured, ovos-core was created to include extra functionality and make OVOS development faster and more efficient. OVOS has been committed to keeping our components compatible with Mycroft and many of our changes are submitted to Mycroft to include in their projects at their discretion.","title":"What is the relationship between OVOS and Mycroft?"},{"location":"faq/#how-does-ovos-make-money","text":"We don't, OVOS is a volunteer project with no source of income or business model However, we want to acknowledge Blue Systems and NeonGeckoCom , a lot of the work in OVOS is done on paid company time from these projects","title":"How does OVOS make money?"},{"location":"faq/#where-is-your-website","text":"website - openvoiceos.com chat - matrix forums - github discussions","title":"Where is your website?"},{"location":"faq/#does-ovos-have-any-default-skills","text":"We provide essential skills and those are bundled in all our reference images. ovos-core does not manage your skills, unlike mycroft it won't install or update anything by itself. if you installed ovos-core manually you also need to install skills manually","title":"Does OVOS have any default skills?"},{"location":"faq/#does-ovos-work-offline","text":"By default ovos-core does not require a backend internet server to operate. Some skills can be accessed (via command line) entirely offline. The default speech-to-text (STT) engine currently requires an internet connection, though some self-hosted, offline options are available. Individual skills and plugins may require internet, and most of the time you will want to use those.","title":"Does OVOS work offline?"},{"location":"faq/#does-ovos-depend-on-any-servers","text":"no! 
you can integrate ovos-core with selene or personal backend but that is fully optional we provide some microservices for some of our skills, but you can also use your own api keys","title":"Does OVOS depend on any servers?"},{"location":"faq/#how-many-voices-does-ovos-support","text":"hundreds! nearly everything in OVOS is modular and configurable, that includes Text To Speech. Voices depend on language and the plugins you have installed, you can find a non-exhaustive list of plugins in the ovos plugins awesome list","title":"How many voices does OVOS support?"},{"location":"faq/#can-i-change-the-wake-word","text":"yes, ovos-core supports several wake word plugins . Additionally, OVOS allows you to load any number of hot words in parallel and trigger different actions when they are detected each hotword can do one or more of the following: trigger listening, also called a wake_word play a sound emit a bus event take ovos-core out of sleep mode, also called a wakeup_word or standup_word take ovos-core out of recording mode, also called a stop_word","title":"Can I change the wake word?"},{"location":"faq/#can-ovos-run-without-a-wake-word","text":"mostly yes, depending on exactly what you mean by this question OVOS can run without any wake word configured, in this case you will only be able to interact via CLI or button press, best for privacy, not so great for a smart speaker ovos-core also provides a couple experimental settings, if you enable continuous listening then VAD will be used to detect speech and no wake word is needed, just speak to mycroft and it should answer! However, this setting is experimental for a reason, you may find that mycroft answers your TV or even tries to answer itself if your hardware does not have AEC Another experimental setting is hybrid mode, with hybrid mode you can ask follow-up questions, up to 45 seconds after the last mycroft interaction, if you do not interact with mycroft it will go back to waiting for a wake word","title":"Can OVOS run without a wake word?"},{"location":"faq/#how-fast-can-ovos-respond","text":"By default, to answer a request: Detects the wake word Records 3 - 10 seconds of audio Transcribes the audio and returns the text transcription , either locally or remotely, depending on the speech-to-text (STT) engine in use Parses the text to understand the intent Sends the text to the intent handler with the highest confidence Allows the Skill to perform some action and provide the text to be spoken Synthesizes audio from the given text, either locally or remotely, depending on the text-to-speech (TTS) engine in use Plays the synthesized spoken audio. Through this process there are a number of factors that can affect the perceived speed of responses: System resources - more processing power and memory never hurts! Network latency - depending on configured plugins, network latency and connection speed can play a significant role in slowing down response times. Streaming STT - we have been experimenting with the use of streaming services. This transcribes audio as it's received rather than waiting for the entire utterance to be finished and sending the resulting audio file to a server to be processed in its entirety. It is possible to switch to a streaming STT service. See STT Plugins for a list of options available. Dialog structure - a long sentence will always take more time to synthesize than a short one. Skill developers can help provide quicker response times by considering the structure of their dialog and breaking that dialog up. 
TTS Caching - synthesized audio is cached meaning common recently generated phrases don't need to be generated, they can be returned immediately.","title":"How fast can OVOS respond?"},{"location":"faq/#how-do-i-run-ovos-behind-a-proxy","text":"Many schools, universities and workplaces run a proxy on their network. If you need to type in a username and password to access the external internet, then you are likely behind a proxy . If you plan to use OVOS behind a proxy, then you will need to do an additional configuration step. NOTE: In order to complete this step, you will need to know the hostname and port for the proxy server. Your network administrator will be able to provide these details. Your network administrator may want information on what type of traffic OVOS will be using. We use https traffic on port 443 , primarily for accessing ReST-based APIs.","title":"How do I run OVOS behind a proxy?"},{"location":"faq/#using-ovos-behind-a-proxy-without-authentication","text":"If you are using OVOS behind a proxy without authentication, add the following environment variables, changing the proxy_hostname.com and proxy_port to the values for your network. These commands are executed from the Linux command line interface (CLI). $ export http_proxy=http://proxy_hostname.com:proxy_port $ export https_proxy=http://proxy_hostname.com:proxy_port $ export no_proxy=\"localhost,127.0.0.1,localaddress,.localdomain.com,0.0.0.0,::1\"","title":"Using OVOS behind a proxy without authentication"},{"location":"faq/#using-ovos-behind-an-authenticated-proxy","text":"If you are behind a proxy which requires authentication, add the following environment variables, changing the proxy_hostname.com and proxy_port to the values for your network. These commands are executed from the Linux command line interface (CLI). $ export http_proxy=http://user:password@proxy_hostname.com:proxy_port $ export https_proxy=http://user:password@proxy_hostname.com:proxy_port $ export no_proxy=\"localhost,127.0.0.1,localaddress,.localdomain.com,0.0.0.0,::1\"","title":"Using OVOS behind an authenticated proxy"},{"location":"friends/","text":"OVOS Friends OpenVoiceOS is part of a larger ecosystem of FOSS voice technology, we work closely with the following projects HiveMind HiveMind is a community-developed superset or extension of OpenVoiceOS With HiveMind, you can extend one (or more, but usually just one!) instance of Mycroft to as many devices as you want, including devices that can't ordinarily run Mycroft! HiveMind's developers have successfully connected to Mycroft from a PinePhone, a 2009 MacBook, and a Raspberry Pi 0, among other devices. Mycroft itself usually runs on our desktop computers or our home servers, but you can use any Mycroft-branded device, or OpenVoiceOS, as your central unit. 
You find the website here and the source code here Plasma Bigscreen Plasma Bigscreen integrates and uses OpenVoiceOS as voice framework stack to serve voice queries and voice applications (skills with a homescreen), one can easily enable mycroft / ovos integration in the bigscreen launcher by installing ovos core and required services and enabling the integration switch in the bigscreen KCM You find the website here and the source code here NeonGecko Neon was one of the first projects ever to adopt ovos-core as a library to build their own voice assistant, Neon works closely together with OVOS and both projects are mostly compatible You find the website here and the source code here Mycroft Mycroft AI started it all, it was one of the first ever FOSS voice assistants and is the project OVOS descends from. Most applications made for mycroft will work in OVOS and vice-versa You find the website here and the source code here Secret Sauce AI Secret Sauce AI is a coordinated community of tech minded AI enthusiasts working together on projects to identify blockers and improve the basic open source tools and pipeline components in the AI (voice) assistant pipeline (wakeword, ASR, NLU, NLG, TTS). The focus is mostly geared toward deployment on edge devices and self-hosted solutions. This is not a voice assistant project in and of itself, rather Secret Sauce AI helps AI (voice) assistant projects come together as individuals and solve basic problems faced by the entire community. You find the website here and the source code here","title":"Friends"},{"location":"friends/#ovos-friends","text":"OpenVoiceOS is part of a larger ecosystem of FOSS voice technology, we work closely with the following projects","title":"OVOS Friends"},{"location":"friends/#hivemind","text":"HiveMind is a community-developed superset or extension of OpenVoiceOS With HiveMind, you can extend one (or more, but usually just one!) instance of Mycroft to as many devices as you want, including devices that can't ordinarily run Mycroft! HiveMind's developers have successfully connected to Mycroft from a PinePhone, a 2009 MacBook, and a Raspberry Pi 0, among other devices. Mycroft itself usually runs on our desktop computers or our home servers, but you can use any Mycroft-branded device, or OpenVoiceOS, as your central unit. You find the website here and the source code here","title":" HiveMind"},{"location":"friends/#plasma-bigscreen","text":"Plasma Bigscreen integrates and uses OpenVoiceOS as voice framework stack to serve voice queries and voice applications (skills with a homescreen), one can easily enable mycroft / ovos integration in the bigscreen launcher by installing ovos core and required services and enabling the integration switch in the bigscreen KCM You find the website here and the source code here","title":" Plasma Bigscreen"},{"location":"friends/#neongecko","text":"Neon was one of the first projects ever to adopt ovos-core as a library to build their own voice assistant, Neon works closely together with OVOS and both projects are mostly compatible You find the website here and the source code here","title":" NeonGecko"},{"location":"friends/#mycroft","text":"Mycroft AI started it all, it was one of the first ever FOSS voice assistants and is the project OVOS descends from. 
Most applications made for mycroft will work in OVOS and vice-versa You find the website here and the source code here","title":" Mycroft"},{"location":"friends/#secret-sauce-ai","text":"Secret Sauce AI is a coordinated community of tech-minded AI enthusiasts working together on projects to identify blockers and improve the basic open source tools and pipeline components in the AI (voice) assistant pipeline (wakeword, ASR, NLU, NLG, TTS). The focus is mostly geared toward deployment on edge devices and self-hosted solutions. This is not a voice assistant project in and of itself; rather, Secret Sauce AI helps AI (voice) assistant projects come together as individuals and solve basic problems faced by the entire community. You find the website here and the source code here","title":" Secret Sauce AI"},{"location":"glossary/","text":"Glossary Editor's Note Some of the more detailed definitions will be moved to other pages; it's just here to keep track of the information for now. The Project The OpenVoiceOS Project (OVOS) All the repositories under the OpenVoiceOS organization The OpenVoiceOS Team The team behind OVOS Terms Confirmations Confirmation approaches can also be defined by Statements or Prompts , but when we talk about them in the context of confirmations we call them Implicit and Explicit. Implicit Confirmation This type of confirmation is also a statement. The idea is to parrot the information back to the user to confirm that it was correct, but not require additional input from the user. The implicit confirmation can be used in a majority of situations. Explicit Confirmation This type of confirmation requires an input from the user to verify everything is correct. Conversations Any time the user needs to input a lot of information or the user needs to sort through a variety of options, a conversation will be needed. Users may be used to systems that require them to separate input into different chunks. Context Allows for natural conversation by having skills set a \"context\" that can be used by subsequent handlers. Context could be anything from person to location. Context can also create \"bubbles\" of available intent handlers, to make sure certain Intents can't be triggered unless some previous stage in a conversation has occurred. You can find an example Tea Skill using conversational context on Github . As you can see, Conversational Context lends itself well to implementing a dialog tree or conversation tree . Grapheme All of the letters and letter combinations that represent a phoneme. Home Screen The OpenVoiceOS home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know like the current date, time and weather. The home screen contains various features and integrations which you can learn more about in the following sections. Intent When an utterance is classified for its action and entities (e.g. 'turn on the kitchen lights' -> skill: home assistant, action: turn on/off, entity: kitchen lights) MPRIS (Media Player Remote Interfacing Specification) is a standard D-Bus interface which aims to provide a common programmatic API for controlling media players. More Information mycroft.conf Primary configuration file for the voice assistant.
Possible locations: - /home/ovos/.local/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf - /etc/mycroft/mycroft.conf - /home/ovos/.config/mycroft/mycroft.conf - /etc/xdg/mycroft/mycroft.conf - /home/ovos/.mycroft/mycroft.conf More Information OCP OCP stands for OpenVoiceOS Common Play, it is a full-fledged media player OCP is an OVOSAbstractApplication , this means it is a standalone but native OVOS application with full voice integration OCP differs from mycroft-core in several aspects: Can run standalone, only needs a bus connection OCP provides its own intents as if it was a skill OCP provides its own GUI as if it was a skill mycroft-core CommonPlay skill framework is disabled when OCP loads OCP skills have a dedicated MycroftSkill class and decorators in ovos-workshop OCP skills act as media providers, they do not (usually) handle playback mycroft-core CommonPlay skills have an imperfect compatibility layer and are given lower priority than OCP skills OCP handles several kinds of playback, including video OCP has a sub-intent parser for matching requested media types AudioService becomes a subsystem for OCP OCP also has an AudioService plugin component introducing a compatibility layer for skills using \"old style audioservice api\" OCP integrates with MPRIS, it can be controlled from external apps, e.g. KdeConnect on your phone OCP manages external MPRIS enabled players, you can voice control 3rd party apps without writing a skill for it via OCP ovos-core The central repository where the voice assistant \"brain\" is developed OPM OPM is the OVOS Plugin Manager , this base package provides arbitrary plugins to the ovos ecosystem OPM plugins import their base classes from OPM making them portable and independent from core, plugins can be used in your standalone projects By using OPM you can ensure a standard interface to plugins and easily make them configurable in your project, plugin code and example configurations are mapped to a string via python entrypoints in setup.py Some projects using OPM are ovos-core , hivemind-voice-sat , ovos-personal-backend , ovos-stt-server and ovos-tts-server OVOS-shell The gui service in ovos-core will expose a websocket to the GUI client following the protocol outlined here The GUI library which implements the protocol lives in the mycroft-gui repository. The repository also hosts a development client for skill developers wanting to develop on the desktop.
OVOS-shell is the OpenVoiceOS client implementation of the mycroft-gui library used in our embedded device images, other distributions may offer alternative implementations such as plasma-bigscreen or mycroft mark2 OVOS-shell is tightly coupled to PHAL , the following companion plugins should be installed if you are using ovos-shell ovos-PHAL-plugin-notification-widgets ovos-PHAL-plugin-network-manager ovos-PHAL-plugin-gui-network-client ovos-PHAL-plugin-wifi-setup ovos-PHAL-plugin-alsa ovos-PHAL-plugin-system ovos-PHAL-plugin-dashboard ovos-PHAL-plugin-brightness-control-rpi ovos-PHAL-plugin-color-scheme-manager ovos-PHAL-plugin-configuration-provider PHAL Physical Hardware Abstraction Layer PHAL is our Platform/Hardware Abstraction Layer, it completely replaces the concept of hardcoded \"enclosure\" from mycroft-core Any number of plugins providing functionality can be loaded and validated at runtime, plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as mycroft mark2 plugin PHAL plugins can perform actions such as hardware detection before loading, eg, the mark2 plugin will not load if it does not detect the sj201 hat. This makes plugins safe to install and bundle by default in our base images Phoneme The smallest phonetic unit in a language that is capable of conveying a distinction in meaning, as the m of mat and the b of bat in English. Service Snapcast Snapcast is a multiroom client-server audio player, where all clients are time synchronized with the server to play perfectly synced audio. It's not a standalone player, but an extension that turns your existing audio player into a Sonos-like multiroom solution. More Information Prompts and Statements You can think of Prompts as questions and Statements as providing information to the user that does not need a follow-up response. QML Qt Markup Language, the language for Qt Quick UIs. More Information The Mycroft GUI Framework uses QML. STT Speech To Text Also known as ASR, automated speech recognition, the process of converting audio into words TTS Text To Speech The process of generating the audio with the responses Utterance Command, question, or query from a user (eg 'turn on the kitchen lights') Wake Word A specific word or phrase used to activate the STT (eg 'hey mycroft') XDG XDG stands for \"Cross-Desktop Group\", and it's a way to help with compatibility between systems. More Information","title":"Glossary"},{"location":"glossary/#glossary","text":"Editor's Note Some of the more detailed definitions will be moved to other pages; it's just here to keep track of the information for now.","title":"Glossary"},{"location":"glossary/#the-project","text":"","title":"The Project"},{"location":"glossary/#the-openvoiceos-project-ovos","text":"All the repositories under the OpenVoiceOS organization","title":"The OpenVoiceOS Project (OVOS)"},{"location":"glossary/#the-openvoiceos-team","text":"The team behind OVOS","title":"The OpenVoiceOS Team"},{"location":"glossary/#terms","text":"","title":"Terms"},{"location":"glossary/#confirmations","text":"Confirmation approaches can also be defined by Statements or Prompts , but when we talk about them in the context of confirmations we call them Implicit and Explicit.","title":"Confirmations"},{"location":"glossary/#implicit-confirmation","text":"This type of confirmation is also a statement. The idea is to parrot the information back to the user to confirm that it was correct, but not require additional input from the user.
The implicit confirmation can be used in a majority of situations.","title":"Implicit Confirmation"},{"location":"glossary/#explicit-confirmation","text":"This type of confirmation requires an input from the user to verify everything is correct.","title":"Explicit Confirmation"},{"location":"glossary/#conversations","text":"Any time the user needs to input a lot of information or the user needs to sort through a variety of options, a conversation will be needed. Users may be used to systems that require them to separate input into different chunks.","title":"Conversations"},{"location":"glossary/#context","text":"Allows for natural conversation by having skills set a \"context\" that can be used by subsequent handlers. Context could be anything from person to location. Context can also create \"bubbles\" of available intent handlers, to make sure certain Intents can't be triggered unless some previous stage in a conversation has occurred. You can find an example Tea Skill using conversational context on Github . As you can see, Conversational Context lends itself well to implementing a dialog tree or conversation tree .","title":"Context"},{"location":"glossary/#grapheme","text":"All of the letters and letter combinations that represent a phoneme.","title":"Grapheme"},{"location":"glossary/#home-screen","text":"The OpenVoiceOS home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know like the current date, time and weather. The home screen contains various features and integrations which you can learn more about in the following sections.","title":"Home Screen"},{"location":"glossary/#intent","text":"When an utterance is classified for its action and entities (e.g. 'turn on the kitchen lights' -> skill: home assistant, action: turn on/off, entity: kitchen lights)","title":"Intent"},{"location":"glossary/#mpris","text":"(Media Player Remote Interfacing Specification) is a standard D-Bus interface which aims to provide a common programmatic API for controlling media players. More Information","title":"MPRIS"},{"location":"glossary/#mycroftconf","text":"Primary configuration file for the voice assistant.
Possible locations: - /home/ovos/.local/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf - /etc/mycroft/mycroft.conf - /home/ovos/.config/mycroft/mycroft.conf - /etc/xdg/mycroft/mycroft.conf - /home/ovos/.mycroft/mycroft.conf More Information","title":"mycroft.conf"},{"location":"glossary/#ocp","text":"OCP stands for OpenVoiceOS Common Play, it is a full-fledged media player OCP is an OVOSAbstractApplication , this means it is a standalone but native OVOS application with full voice integration OCP differs from mycroft-core in several aspects: Can run standalone, only needs a bus connection OCP provides its own intents as if it was a skill OCP provides its own GUI as if it was a skill mycroft-core CommonPlay skill framework is disabled when OCP loads OCP skills have a dedicated MycroftSkill class and decorators in ovos-workshop OCP skills act as media providers, they do not (usually) handle playback mycroft-core CommonPlay skills have an imperfect compatibility layer and are given lower priority than OCP skills OCP handles several kinds of playback, including video OCP has a sub-intent parser for matching requested media types AudioService becomes a subsystem for OCP OCP also has an AudioService plugin component introducing a compatibility layer for skills using \"old style audioservice api\" OCP integrates with MPRIS, it can be controlled from external apps, e.g. KdeConnect on your phone OCP manages external MPRIS enabled players, you can voice control 3rd party apps without writing a skill for it via OCP","title":"OCP"},{"location":"glossary/#ovos-core","text":"The central repository where the voice assistant \"brain\" is developed","title":"ovos-core"},{"location":"glossary/#opm","text":"OPM is the OVOS Plugin Manager , this base package provides arbitrary plugins to the ovos ecosystem OPM plugins import their base classes from OPM making them portable and independent from core, plugins can be used in your standalone projects By using OPM you can ensure a standard interface to plugins and easily make them configurable in your project, plugin code and example configurations are mapped to a string via python entrypoints in setup.py Some projects using OPM are ovos-core , hivemind-voice-sat , ovos-personal-backend , ovos-stt-server and ovos-tts-server","title":"OPM"},{"location":"glossary/#ovos-shell","text":"The gui service in ovos-core will expose a websocket to the GUI client following the protocol outlined here The GUI library which implements the protocol lives in the mycroft-gui repository. The repository also hosts a development client for skill developers wanting to develop on the desktop.
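The mycroft.conf locations listed above are layered, with user-level files typically overriding system-level ones. A minimal, hedged sketch of a user override at ~/.config/mycroft/mycroft.conf (the TTS plugin name is only an illustration; any installed OPM plugin can be referenced the same way):

```json
{
  "lang": "en-us",
  "tts": {
    "module": "ovos-tts-plugin-mimic3"
  }
}
```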
OVOS-shell is the OpenVoiceOS client implementation of the mycroft-gui library used in our embedded device images, other distributions may offer alternative implementations such as plasma-bigscreen or mycroft mark2 OVOS-shell is tightly coupled to PHAL , the following companion plugins should be installed if you are using ovos-shell ovos-PHAL-plugin-notification-widgets ovos-PHAL-plugin-network-manager ovos-PHAL-plugin-gui-network-client ovos-PHAL-plugin-wifi-setup ovos-PHAL-plugin-alsa ovos-PHAL-plugin-system ovos-PHAL-plugin-dashboard ovos-PHAL-plugin-brightness-control-rpi ovos-PHAL-plugin-color-scheme-manager ovos-PHAL-plugin-configuration-provider","title":"OVOS-shell"},{"location":"glossary/#phal","text":"Physical Hardware Abstraction Layer PHAL is our Platform/Hardware Abstraction Layer, it completely replaces the concept of hardcoded \"enclosure\" from mycroft-core Any number of plugins providing functionality can be loaded and validated at runtime, plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as mycroft mark2 plugin PHAL plugins can perform actions such as hardware detection before loading, eg, the mark2 plugin will not load if it does not detect the sj201 hat. This makes plugins safe to install and bundle by default in our base images","title":"PHAL"},{"location":"glossary/#phoneme","text":"The smallest phonetic unit in a language that is capable of conveying a distinction in meaning, as the m of mat and the b of bat in English.","title":"Phoneme"},{"location":"glossary/#service","text":"","title":"Service"},{"location":"glossary/#snapcast","text":"Snapcast is a multiroom client-server audio player, where all clients are time synchronized with the server to play perfectly synced audio. It's not a standalone player, but an extension that turns your existing audio player into a Sonos-like multiroom solution. More Information","title":"Snapcast"},{"location":"glossary/#prompts-and-statements","text":"You can think of Prompts as questions and Statements as providing information to the user that does not need a follow-up response.","title":"Prompts and Statements"},{"location":"glossary/#qml","text":"Qt Markup Language, the language for Qt Quick UIs. More Information The Mycroft GUI Framework uses QML.","title":"QML"},{"location":"glossary/#stt","text":"Speech To Text Also known as ASR, automated speech recognition, the process of converting audio into words","title":"STT"},{"location":"glossary/#tts","text":"Text To Speech The process of generating the audio with the responses","title":"TTS"},{"location":"glossary/#utterance","text":"Command, question, or query from a user (eg 'turn on the kitchen lights')","title":"Utterance"},{"location":"glossary/#wake-word","text":"A specific word or phrase used to activate the STT (eg 'hey mycroft')","title":"Wake Word"},{"location":"glossary/#xdg","text":"XDG stands for \"Cross-Desktop Group\", and it's a way to help with compatibility between systems. More Information","title":"XDG"},{"location":"hardware/","text":"Supported Hardware OVOS has been confirmed to run on several devices, with more to come. RPI3b / RPI3b+ (Headless only) Headless Image Mark 1 image Mycroft Mark 1 device. ( ovos-PHAL-plugin-mk1 plugin required ) RPI4 (GUI and Headless) Buildroot GUI image Will run natively on a Mark 2 device. Mycroft Mark 2 device.
(Multiple plugins required) neon-phal-plugin-linear_led neon-phal-plugin-switches neon-phal-plugin-fan x86-64 Linux OS (Python packages or Docker) Windows (Docker) MacOS (Docker) Docker GitHub OrangePi (Possible, but not yet tested) More to come","title":"Introduction"},{"location":"hardware/#supported-hardware","text":"OVOS has been confirmed to run on several devices, with more to come. RPI3b / RPI3b+ (Headless only) Headless Image Mark 1 image Mycroft Mark 1 device. ( ovos-PHAL-plugin-mk1 plugin required ) RPI4 (GUI and Headless) Buildroot GUI image Will run natively on a Mark 2 device. Mycroft Mark 2 device. (Multiple plugins required) neon-phal-plugin-linear_led neon-phal-plugin-switches neon-phal-plugin-fan x86-64 Linux OS (Python packages or Docker) Windows (Docker) MacOS (Docker) Docker GitHub OrangePi (Possible, but not yet tested) More to come","title":"Supported Hardware"},{"location":"hardware_audio/","text":"Audio Hardware Recommendations and notes on speakers and microphones Most audio devices are available to use with the help of Plugins and should for the most part work by default. If your device does not work, pop in to our Matrix support channel , create an issue, or start a discussion about your device. USB Most USB devices should work without any issues. But, not all devices are created equally. Tested USB devices Microphones Blue Snowball (Works well without issues) Generic webcam with mic (Works, but sound quality can be lacking. Makes wakeword detection and STT processing less accurate) PS3 Eye (Works without issue) Kinect V1 (Works, but may need a firmware upgrade) Speakers Generic USB Speakers (Works without issue, but sound quality varies) Cameras Generic webcam (Works, but is not guaranteed to work with some camera skills) PS3 Eye (Same as Generic webcam) Kinect V1 (Same as Generic webcam) Andrea Electronics C1-1028100-3 Audio Troubleshooting - USB HDMI HDMI audio should work without issues if your device supports it. Audio Troubleshooting - HDMI Analog Analog output to headphones, or external speakers should work also. There may be some configuration needed on some devices. Audio Troubleshooting - Analog Raspberry Pi HATs There are several HATs available, some with just a microphone, others that play audio out also. Several are supported and tested, others should work with the proper configuration. Tested RPi HATs Respeaker 2/4/6/8 mic boards (Works natively with the Buildroot image. Others need configuration) AIY VoiceHat V1 (Works with /boot/config.txt modification) AIY VoiceBonnet V2 (Works with custom driver update and /boot/config.txt modification) Audio Troubleshooting - HATs Specialty Hardware Some special sound boards are also supported. SJ201 sound board (Mark 2 sound board) Buildroot Native support Supported on other devices with manual install of drivers Audio Troubleshooting - SJ201 Mark 1 custom sound board (Native support - raspbian-ovos mark1 image) Other device support with /boot/config.txt modification Audio Troubleshooting - Mark 1","title":"Audio"},{"location":"hardware_audio/#audio-hardware","text":"Recommendations and notes on speakers and microphones Most audio devices are available to use with the help of Plugins and should for the most part work by default. If your device does not work, pop in to our Matrix support channel , create an issue, or start a discussion about your device.","title":"Audio Hardware"},{"location":"hardware_audio/#usb","text":"Most USB devices should work without any issues.
But, not all devices are created equally.","title":"USB"},{"location":"hardware_audio/#tested-usb-devices","text":"Microphones Blue Snowball (Works well without issues) Generic webcam with mic (Works, but sound quality can be lacking. Makes wakeword detection and STT processing less accurate) PS3 Eye (Works without issue) Kinect V1 (Works, but may need a firmware upgrade) Speakers Generic USB Speakers (Works without issue, but sound quality varies) Cameras Generic webcam (Works, but is not guaranteed to work with some camera skills) PS3 Eye (Same as Generic webcam) Kinect V1 (Same as Generic webcam) Andrea Electronics C1-1028100-3 Audio Troubleshooting - USB","title":"Tested USB devices"},{"location":"hardware_audio/#hdmi","text":"HDMI audio should work without issues if your device supports it. Audio Troubleshooting - HDMI","title":"HDMI"},{"location":"hardware_audio/#analog","text":"Analog output to headphones, or external speakers should work also. There may be some configuration needed on some devices. Audio Troubleshooting - Analog","title":"Analog"},{"location":"hardware_audio/#raspberry-pi-hats","text":"There are several HATs available, some with just a microphone, others that play audio out also. Several are supported and tested, others should work with the proper configuration.","title":"Raspberry Pi HATs"},{"location":"hardware_audio/#tested-rpi-hats","text":"Respeaker 2/4/6/8 mic boards (Works natively with the Buildroot image. Others need configuration) AIY VoiceHat V1 (Works with /boot/config.txt modification) AIY VoiceBonnet V2 (Works with custom driver update and /boot/config.txt modification) Audio Troubleshooting - HATs","title":"Tested RPi HATs"},{"location":"hardware_audio/#specialty-hardware","text":"Some special sound boards are also supported. SJ201 sound board (Mark 2 sound board) Buildroot Native support Supported on other devices with manual install of drivers Audio Troubleshooting - SJ201 Mark 1 custom sound board (Native support - raspbian-ovos mark1 image) Other device support with /boot/config.txt modification Audio Troubleshooting - Mark 1","title":"Specialty Hardware"},{"location":"hardware_video/","text":"Supported Hardware - Video Display If your device supports video out, you can use a screen on your device. (RPI3/3b/3b+ will not run the OVOS GUI, ovos-shell , due to lack of processing power, but you can access a command prompt on a locally connected screen) Touchscreens OVOS supports touchscreen interaction, but not all are created equally. It has been noted that on some USB touchscreens, the touch matrix is not synced with the OVOS display and requires an x11 setup with a window manager to adjust the settings to work. Troubleshooting Video","title":"Video"},{"location":"hardware_video/#supported-hardware-video","text":"","title":"Supported Hardware - Video"},{"location":"hardware_video/#display","text":"If your device supports video out, you can use a screen on your device. (RPI3/3b/3b+ will not run the OVOS GUI, ovos-shell , due to lack of processing power, but you can access a command prompt on a locally connected screen)","title":"Display"},{"location":"hardware_video/#touchscreens","text":"OVOS supports touchscreen interaction, but not all are created equally. It has been noted that on some USB touchscreens, the touch matrix is not synced with the OVOS display and requires an x11 setup with a window manager to adjust the settings to work.
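Before following the audio troubleshooting links above, a quick ALSA sanity check can confirm the hardware is visible to the system at all. These are standard alsa-utils commands and should work on any of the images:

```bash
arecord -l                    # list capture devices (microphones)
aplay -l                      # list playback devices (speakers)
speaker-test -t wav -c 2      # play a short test sound on the default output
arecord -d 5 /tmp/test.wav && aplay /tmp/test.wav   # record 5 seconds, then play it back
```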
Troubleshooting Video","title":"Touchscreens"},{"location":"ht_change_voice/","text":"How To Change Your Assistant's Voice WIP","title":"Change Your Assistant's Voice"},{"location":"ht_change_voice/#how-to-change-your-assistants-voice","text":"WIP","title":"How To Change Your Assistant's Voice"},{"location":"ht_config_homescreen/","text":"OpenVoiceOS Home Screen The home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know like the current date, time and weather. The home screen contains various features and integrations which you can learn more about in the following sections. Features Night Mode Feature The Night Mode feature lets you quickly switch your home screen into a dark standby clock, reducing the amount of light emitted by your device. This is especially useful if you are using your device in a dark room or at night. You can enable the night mode feature by tapping on the left edge pill button on the home screen. Quick Actions Dashboard The Quick Actions Dashboard provides you with a card-based interface to quickly access and add your most used action. The Quick Actions dashboard comes with a variety of pre-defined actions like the ability to quickly add a new alarm, start a new timer or add a new note. You can also add your own custom actions to the dashboard by tapping on the plus button in the top right corner of the dashboard. The Quick Actions dashboard is accessible by tapping on the right edge pill button on the home screen. Application Launcher OpenVoiceOS comes with support for dedicated voice applications. Voice Applications can be dedicated skills or PHAL plugins, providing their own dedicated user interface. The application launcher will show you a list of all available voice applications. You can access the application launcher by tapping on the center pill button on the bottom of the home screen. Wallpapers The home screen supports custom wallpapers and comes with a bunch of wallpapers to choose from. You can easily change your custom wallpaper by swiping from right to left on the home screen. Widgets Notifications Widget The notifications widget provides you with a quick overview of all your notifications. The notifications bell icon will be displayed in the top left corner of the home screen. You can access the notifications overview by tapping on the bell icon when it is displayed. Timer Widget The timer widget is displayed in top left corner after the notifications bell icon. It will show up when you have an active timer running. Clicking on the timer widget will open the timers overview. Alarm Widget The alarm widget is displayed in top left corner after the timer widget. It will show up when you have an active alarm set. Clicking on the alarm widget will open the alarms overview. Media Player Widget The media player widget is displayed in the bottom of the home screen, It replaces the examples widget when a media player is active. The media player widget will show you the currently playing media and provide you with a quick way to pause, resume or skip the current media. You can also quickly access the media player by tapping the quick display media player button on the right side of the media player widget. Configuration Settings The homescreen has several customizations available. 
This is a sample settings.json file with all of the options explained { \"__mycroft_skill_firstrun\": false, \"weather_skill\": \"skill-weather.openvoiceos\", \"datetime_skill\": \"skill-date-time.mycroftai\", \"examples_skill\": \"ovos-skills-info.openvoiceos\", \"wallpaper\": \"default.jpg\", \"persistent_menu_hint\": false, \"examples_enabled\": true, \"randomize_examples\": true, \"examples_prefix\": true } __mycroft_skill_firstrun: This is automatically set on first load of the skill weather_skill: DEPRECATED and has no effect - PR pending datetime_skill: Allows you to use a custom skill to display the date and time. Defaults to skill-ovos-date-time.openvoiceos examples_skill: Allows use of a custom skill for the displayed examples. Defaults to the ovos_skills_manager.utils.get_skills_example() function wallpaper: Allows a custom wallpaper to be displayed. Use a complete url without any tilde ~ persistent_menu_hint: When true, displays a hint of the pull-down menu at the top of the page examples_enabled: When false, the examples at the bottom of the screen will be hidden randomize_examples: When false, the rotation of the examples will follow the way they are loaded examples_prefix: When false, the prefix 'Ask Me' will NOT be displayed with the examples","title":"Configuration"},{"location":"ht_config_homescreen/#openvoiceos-home-screen","text":"The home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know like the current date, time and weather. The home screen contains various features and integrations which you can learn more about in the following sections.","title":"OpenVoiceOS Home Screen"},{"location":"ht_config_homescreen/#features","text":"","title":"Features"},{"location":"ht_config_homescreen/#night-mode-feature","text":"The Night Mode feature lets you quickly switch your home screen into a dark standby clock, reducing the amount of light emitted by your device. This is especially useful if you are using your device in a dark room or at night. You can enable the night mode feature by tapping on the left edge pill button on the home screen.","title":"Night Mode Feature"},{"location":"ht_config_homescreen/#quick-actions-dashboard","text":"The Quick Actions Dashboard provides you with a card-based interface to quickly access and add your most used actions. The Quick Actions dashboard comes with a variety of pre-defined actions like the ability to quickly add a new alarm, start a new timer or add a new note. You can also add your own custom actions to the dashboard by tapping on the plus button in the top right corner of the dashboard. The Quick Actions dashboard is accessible by tapping on the right edge pill button on the home screen.","title":"Quick Actions Dashboard"},{"location":"ht_config_homescreen/#application-launcher","text":"OpenVoiceOS comes with support for dedicated voice applications. Voice Applications can be dedicated skills or PHAL plugins, providing their own dedicated user interface. The application launcher will show you a list of all available voice applications. You can access the application launcher by tapping on the center pill button on the bottom of the home screen.","title":"Application Launcher"},{"location":"ht_config_homescreen/#wallpapers","text":"The home screen supports custom wallpapers and comes with a bunch of wallpapers to choose from.
You can easily change your custom wallpaper by swiping from right to left on the home screen.","title":"Wallpapers"},{"location":"ht_config_homescreen/#widgets","text":"","title":"Widgets"},{"location":"ht_config_homescreen/#notifications-widget","text":"The notifications widget provides you with a quick overview of all your notifications. The notifications bell icon will be displayed in the top left corner of the home screen. You can access the notifications overview by tapping on the bell icon when it is displayed.","title":"Notifications Widget"},{"location":"ht_config_homescreen/#timer-widget","text":"The timer widget is displayed in the top left corner after the notifications bell icon. It will show up when you have an active timer running. Clicking on the timer widget will open the timers overview.","title":"Timer Widget"},{"location":"ht_config_homescreen/#alarm-widget","text":"The alarm widget is displayed in the top left corner after the timer widget. It will show up when you have an active alarm set. Clicking on the alarm widget will open the alarms overview.","title":"Alarm Widget"},{"location":"ht_config_homescreen/#media-player-widget","text":"The media player widget is displayed at the bottom of the home screen. It replaces the examples widget when a media player is active. The media player widget will show you the currently playing media and provide you with a quick way to pause, resume or skip the current media. You can also quickly access the media player by tapping the quick display media player button on the right side of the media player widget.","title":"Media Player Widget"},{"location":"ht_config_homescreen/#configuration","text":"","title":"Configuration"},{"location":"ht_config_homescreen/#settings","text":"The homescreen has several customizations available. This is a sample settings.json file with all of the options explained { \"__mycroft_skill_firstrun\": false, \"weather_skill\": \"skill-weather.openvoiceos\", \"datetime_skill\": \"skill-date-time.mycroftai\", \"examples_skill\": \"ovos-skills-info.openvoiceos\", \"wallpaper\": \"default.jpg\", \"persistent_menu_hint\": false, \"examples_enabled\": true, \"randomize_examples\": true, \"examples_prefix\": true } __mycroft_skill_firstrun: This is automatically set on first load of the skill weather_skill: DEPRECATED and has no effect - PR pending datetime_skill: Allows you to use a custom skill to display the date and time. Defaults to skill-ovos-date-time.openvoiceos examples_skill: Allows use of a custom skill for the displayed examples. Defaults to the ovos_skills_manager.utils.get_skills_example() function wallpaper: Allows a custom wallpaper to be displayed. Use a complete url without any tilde ~ persistent_menu_hint: When true, displays a hint of the pull-down menu at the top of the page examples_enabled: When false, the examples at the bottom of the screen will be hidden randomize_examples: When false, the rotation of the examples will follow the way they are loaded examples_prefix: When false, the prefix 'Ask Me' will NOT be displayed with the examples","title":"Settings"},{"location":"ht_config_security/","text":"OpenVoiceOS Security Securing SSH Most of our guides have you create a user called ovos with a password of ovos; while this makes install easy, it's VERY insecure. As soon as possible, you should secure ssh using a key and disable password authentication.
When connecting from a Linux or MacOS client Create a keyfile (you can change ovos to whatever you want) ssh-keygen -t ed25519 -f ~/.ssh/ovos Copy to host (use the same filename as above, specify the user and hostname you are using) ssh-copy-id -i ~/.ssh/ovos ovos@mycroft On your desktop, edit ~/.ssh/config and add the following lines Host rp2 user ovos IdentityFile ~/.ssh/ovos On your ovos system, edit /etc/ssh/sshd_config and add or uncomment the following line: PasswordAuthentication no restart sshd or reboot sudo systemctl restart sshd Message Bus Security Anything connected to the bus can fully control OVOS, and OVOS usually has full control over the whole system! You can read more about the security issues over at Nhoya/MycroftAI-RCE in mycroft-core all skills share a bus connection, this allows malicious skills to manipulate it and affect other skills you can see a demonstration of this problem with BusBrickerSkill \"shared_connection\": false ensures each skill gets its own websocket connection and avoids this problem Additionally, it is recommended you change \"host\": \"127.0.0.1\" , this will ensure no outside world connections are allowed","title":"OpenVoiceOS Security"},{"location":"ht_config_security/#openvoiceos-security","text":"","title":"OpenVoiceOS Security"},{"location":"ht_config_security/#securing-ssh","text":"Most of our guides have you create a user called ovos with a password of ovos; while this makes install easy, it's VERY insecure. As soon as possible, you should secure ssh using a key and disable password authentication.","title":"Securing SSH"},{"location":"ht_config_security/#when-connecting-from-a-linux-or-macos-client","text":"Create a keyfile (you can change ovos to whatever you want) ssh-keygen -t ed25519 -f ~/.ssh/ovos Copy to host (use the same filename as above, specify the user and hostname you are using) ssh-copy-id -i ~/.ssh/ovos ovos@mycroft On your desktop, edit ~/.ssh/config and add the following lines Host rp2 user ovos IdentityFile ~/.ssh/ovos On your ovos system, edit /etc/ssh/sshd_config and add or uncomment the following line: PasswordAuthentication no restart sshd or reboot sudo systemctl restart sshd","title":"When connecting from a Linux or MacOS client"},{"location":"ht_config_security/#message-bus-security","text":"Anything connected to the bus can fully control OVOS, and OVOS usually has full control over the whole system! You can read more about the security issues over at Nhoya/MycroftAI-RCE in mycroft-core all skills share a bus connection, this allows malicious skills to manipulate it and affect other skills you can see a demonstration of this problem with BusBrickerSkill \"shared_connection\": false ensures each skill gets its own websocket connection and avoids this problem Additionally, it is recommended you change \"host\": \"127.0.0.1\" , this will ensure no outside world connections are allowed","title":"Message Bus Security"},{"location":"ht_intro/","text":"How Do I??? This section is provided as a basic Q&A for common questions.
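The two settings named in the Message Bus Security section above live in mycroft.conf. A minimal, hedged sketch of the relevant block, assuming the websocket key layout used by ovos-core:

```json
{
  "websocket": {
    "host": "127.0.0.1",
    "shared_connection": false
  }
}
```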
How do I - Skills How do I - TTS How do I - Listener How do I - WakeWord How do I - HotWords How do I - PHAL How do I - Custom Services","title":"How Do I???"},{"location":"ht_listener_intro/","text":"OVOS Listener The listener is responsible for loading STT, VAD and Wake Word plugins Speech is transcribed into text and forwarded to the skills service. The newest listener that OVOS uses is ovos-dinkum-listener . It is a version of the listener from the Mycroft Dinkum software for the Mark 2 modified for use with OVOS.","title":"Introduction"},{"location":"ht_listener_intro/#ovos-listener","text":"The listener is responsible for loading STT, VAD and Wake Word plugins Speech is transcribed into text and forwarded to the skills service. The newest listener that OVOS uses is ovos-dinkum-listener . It is a version of the listener from the Mycroft Dinkum software for the Mark 2 modified for use with OVOS.","title":"OVOS Listener"},{"location":"ht_mic/","text":"OVOS Listener - Microphone OVOS uses microphone plugins to support different setups and devices. NOTE only ovos-dinkum-listener has this support. The default plugin that OVOS uses is ovos-microphone-plugin-alsa and for most cases should work fine. If you are running OVOS on a Mac, you need a different plugin to access the audio. ovos-microphone-plugin-sounddevice OVOS microphone plugins are available on PyPi pip install ovos-microphone-plugin-sounddevice or pip install --pre ovos-microphone-plugin-sounddevice for the latest alpha versions. NOTE The alpha versions may be needed until the release of ovos-core 0.1.0 List of OVOS microphone plugins Plugin Usage ovos-microphone-plugin-alsa Default plugin - should work in most cases ovos-microphone-plugin-sounddevice This plugin is needed when running OVOS on a Mac but also works on other platforms ovos-microphone-plugin-socket Used to connect a websocket microphone for remote usage ovos-microphone-plugin-files Will use a file as the voice input instead of a microphone ovos-microphone-plugin-pyaudio Uses PyAudio for audio processing ovos-microphone-plugin-arecord Uses arecord to get input from the microphone. In some cases this may be faster than the default alsa Configuration Microphone plugin configuration is located under the top level listener value. { \"listener\": { \"microphone\": { \"module\": \"ovos-microphone-plugin-alsa\", \"ovos-microphone-plugin-alsa\": { \"device\": \"default\" } } } } The only required section is \"module\" . The plugin will then use the default values. The \"device\" section is used if you have several microphones attached, this can be used to specify which one to use. Specific plugins may have other values that can be set. Check the GitHub repo of each plugin for more details.","title":"Microphone"},{"location":"ht_mic/#ovos-listener-microphone","text":"OVOS uses microphone plugins to support different setups and devices. NOTE only ovos-dinkum-listener has this support. The default plugin that OVOS uses is ovos-microphone-plugin-alsa and for most cases should work fine. If you are running OVOS on a Mac, you need a different plugin to access the audio. ovos-microphone-plugin-sounddevice OVOS microphone plugins are available on PyPi pip install ovos-microphone-plugin-sounddevice or pip install --pre ovos-microphone-plugin-sounddevice for the latest alpha versions. 
NOTE The alpha versions may be needed until the release of ovos-core 0.1.0","title":"OVOS Listener - Microphone"},{"location":"ht_mic/#list-of-ovos-microphone-plugins","text":"Plugin Usage ovos-microphone-plugin-alsa Default plugin - should work in most cases ovos-microphone-plugin-sounddevice This plugin is needed when running OVOS on a Mac but also works on other platforms ovos-microphone-plugin-socket Used to connect a websocket microphone for remote usage ovos-microphone-plugin-files Will use a file as the voice input instead of a microphone ovos-microphone-plugin-pyaudio Uses PyAudio for audio processing ovos-microphone-plugin-arecord Uses arecord to get input from the microphone. In some cases this may be faster than the default alsa","title":"List of OVOS microphone plugins"},{"location":"ht_mic/#configuration","text":"Microphone plugin configuration is located under the top level listener value. { \"listener\": { \"microphone\": { \"module\": \"ovos-microphone-plugin-alsa\", \"ovos-microphone-plugin-alsa\": { \"device\": \"default\" } } } } The only required section is \"module\" . The plugin will then use the default values. The \"device\" section is used if you have several microphones attached, this can be used to specify which one to use. Specific plugins may have other values that can be set. Check the GitHub repo of each plugin for more details.","title":"Configuration"},{"location":"ht_phal/","text":"OVOS PHAL PHAL is our Platform/Hardware Abstraction Layer, it completely replaces the concept of hardcoded \"enclosure\" from mycroft-core . Any number of plugins providing functionality can be loaded and validated at runtime, plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as Mycroft Mark 2 plugin . PHAL plugins can perform actions such as hardware detection before loading, eg, the mark2 plugin will not load if it does not detect the sj201 hat. This makes plugins safe to install and bundle by default in our base images. Plugins Platform/Hardware specific integrations are loaded by PHAL, these plugins can handle all sorts of system activities. Plugin Description ovos-PHAL-plugin-alsa volume control ovos-PHAL-plugin-system reboot / shutdown / factory reset ovos-PHAL-plugin-mk1 mycroft mark1 integration ovos-PHAL-plugin-mk2 mycroft mark2 integration ovos-PHAL-plugin-respeaker-2mic respeaker 2mic hat integration ovos-PHAL-plugin-respeaker-4mic respeaker 4mic hat integration ovos-PHAL-plugin-wifi-setup wifi setup (central plugin) ovos-PHAL-plugin-gui-network-client wifi setup (GUI interface) ovos-PHAL-plugin-balena-wifi wifi setup (hotspot) ovos-PHAL-plugin-network-manager wifi setup (network manager) ovos-PHAL-plugin-brightness-control-rpi brightness control ovos-PHAL-plugin-ipgeo automatic geolocation (IP address) ovos-PHAL-plugin-gpsd automatic geolocation (GPS) ovos-PHAL-plugin-dashboard dashboard control (ovos-shell) ovos-PHAL-plugin-notification-widgets system notifications (ovos-shell) ovos-PHAL-plugin-color-scheme-manager GUI color schemes (ovos-shell) ovos-PHAL-plugin-configuration-provider UI to edit mycroft.conf (ovos-shell) ovos-PHAL-plugin-analog-media-devices video/audio capture devices (OCP) Admin PHAL AdminPHAL performs the exact same function as PHAL, but plugins it loads will have root privileges. This service is intended for handling any OS-level interactions requiring escalation of privileges. 
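To pick a value for the \"device\" field shown in the microphone configuration above, the ALSA tooling can enumerate the names the plugin will accept (assuming ovos-microphone-plugin-alsa and alsa-utils are installed):

```bash
# List ALSA PCM names; any of these can be tried as the "device" value
arecord -L
# Example entry in mycroft.conf (the card name here is hypothetical):
#   "device": "plughw:CARD=Snowball,DEV=0"
```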
Be very careful when installing Admin plugins and scrutinize them closely NOTE: Because this service runs as root, plugins it loads are responsible for not writing configuration changes which would result in breaking config file permissions. Admin Plugins AdminPlugins are just like regular PHAL plugins that run with root privileges. Admin plugins will only load if their configuration contains \"enabled\": true . All admin plugins need to be explicitly enabled. You can find plugin packaging documentation here .","title":"PHAL"},{"location":"ht_phal/#ovos-phal","text":"PHAL is our Platform/Hardware Abstraction Layer, it completely replaces the concept of hardcoded \"enclosure\" from mycroft-core . Any number of plugins providing functionality can be loaded and validated at runtime, plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as Mycroft Mark 2 plugin . PHAL plugins can perform actions such as hardware detection before loading, eg, the mark2 plugin will not load if it does not detect the sj201 hat. This makes plugins safe to install and bundle by default in our base images.","title":"OVOS PHAL"},{"location":"ht_phal/#plugins","text":"Platform/Hardware specific integrations are loaded by PHAL, these plugins can handle all sorts of system activities. Plugin Description ovos-PHAL-plugin-alsa volume control ovos-PHAL-plugin-system reboot / shutdown / factory reset ovos-PHAL-plugin-mk1 mycroft mark1 integration ovos-PHAL-plugin-mk2 mycroft mark2 integration ovos-PHAL-plugin-respeaker-2mic respeaker 2mic hat integration ovos-PHAL-plugin-respeaker-4mic respeaker 4mic hat integration ovos-PHAL-plugin-wifi-setup wifi setup (central plugin) ovos-PHAL-plugin-gui-network-client wifi setup (GUI interface) ovos-PHAL-plugin-balena-wifi wifi setup (hotspot) ovos-PHAL-plugin-network-manager wifi setup (network manager) ovos-PHAL-plugin-brightness-control-rpi brightness control ovos-PHAL-plugin-ipgeo automatic geolocation (IP address) ovos-PHAL-plugin-gpsd automatic geolocation (GPS) ovos-PHAL-plugin-dashboard dashboard control (ovos-shell) ovos-PHAL-plugin-notification-widgets system notifications (ovos-shell) ovos-PHAL-plugin-color-scheme-manager GUI color schemes (ovos-shell) ovos-PHAL-plugin-configuration-provider UI to edit mycroft.conf (ovos-shell) ovos-PHAL-plugin-analog-media-devices video/audio capture devices (OCP)","title":"Plugins"},{"location":"ht_phal/#admin-phal","text":"AdminPHAL performs the exact same function as PHAL, but plugins it loads will have root privileges. This service is intended for handling any OS-level interactions requiring escalation of privileges. Be very careful when installing Admin plugins and scrutinize them closely NOTE: Because this service runs as root, plugins it loads are responsible for not writing configuration changes which would result in breaking config file permissions.","title":"Admin PHAL"},{"location":"ht_phal/#admin-plugins","text":"AdminPlugins are just like regular PHAL plugins that run with root privileges. Admin plugins will only load if their configuration contains \"enabled\": true . All admin plugins need to be explicitly enabled. You can find plugin packaging documentation here .","title":"Admin Plugins"},{"location":"ht_skills/","text":"How do I - Skills What can a Skill do? Skills give OVOS the ability to perform a variety of functions. They can be installed or removed by the user, and can be easily updated to expand functionality. 
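As noted above, admin plugins must be explicitly enabled before AdminPHAL will load them. A hedged sketch of what that looks like in mycroft.conf, assuming the PHAL admin layout used by ovos-core (the plugin name is only an example):

```json
{
  "PHAL": {
    "admin": {
      "ovos-PHAL-plugin-system": {
        "enabled": true
      }
    }
  }
}
```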
To get a good idea of what skills to build, let\u2019s talk about the best use cases for a voice assistant, and what types of things OVOS can do. OVOS can run on a variety of platforms from the Linux Desktop to Single Board Computers (SBCs) like the Raspberry Pi. Different devices will have slightly different use cases. Devices in the home are generally located in the living room or kitchen and are ideal for listening to the news, playing music, general information, using timers while cooking, checking the weather, and other similar activities that are easily accomplished hands-free. Basic functions We cover a lot of the basics with our Default Skills, things like Timers, Alarms, Weather, Time and Date, and more. Information We also call this General Question and Answer, and it covers all of those factual questions someone might think to ask a voice assistant. Questions like \u201cwho was the 32nd President of the United States?\u201d, or \u201chow tall is the Eiffel Tower?\u201d Although the Default Skills cover a great deal of questions, there is room for more. There are many topics that could use a specific skill such as Science, Academics, Movie Info, TV info, and Music info, etc. OVOS Fallback Skills Media One of the biggest use cases for Smart Speakers is playing media. The reason media playback is so popular is that it makes playing a song so easy, all you have to do is say \u201cHey Mycroft play the Beatles,\u201d and you can be enjoying music without having to reach for a phone or remote. In addition to listening to music, there are skills that handle videos as well. News Much like listening to music, getting the latest news with a simple voice interaction is extremely convenient. OVOS supports multiple news feeds, and has the ability to support multiple news skills. Smart Home Another popular use case for Voice Assistants is to control Smart Home and IoT products. Within the OVOS ecosystem there are skills for Home Assistant, Wink IoT, Lifx and more, but there are many products that we do not have skills for yet. The open source community has been enthusiastically expanding OVOS's ability to voice control all kinds of smart home products. Games Voice games are becoming more and more popular, especially those that allow multiple users to play together. Trivia games are some of the most popular types of games to develop for voice assistants. There are several games already available for OVOS. There are native voice adventure games, ports of the popular text adventure games from infocom, a Crystal Ball game, a Number Guessing game and much more! OpenVoiceOS Standard Skills Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills; more about that at a later stage. Date / Time skill You can ask your device what time or date it is just in case you lost your watch. Hey Mycroft, what time is it? Hey Mycroft, what is the date? Setting an alarm Having your OpenVoiceOS device know and show the time is great, but it is even better to be woken up in the morning by your device. Hey Mycroft, set an alarm for 8 AM. Setting of timers Sometimes you are just busy but want to be alerted after a certain time. For that you can use timers. Hey Mycroft, set a timer for 5 minutes. You can always set more timers and even name them, so you know which timer is for what. Hey, Mycroft, set another timer called rice cooking for 7 minutes.
Asking the weather You can ask your device what the weather is or will be at any given time or place. Hey Mycroft, what is the weather like today? The weather skill actually uses multiple pages indicated by the small dots at the bottom of the screen. There are more installed, just try. If you don't get the response you expected, see the section on installing new skills","title":"Introduction"},{"location":"ht_skills/#how-do-i-skills","text":"","title":"How do I - Skills"},{"location":"ht_skills/#what-can-a-skill-do","text":"Skills give OVOS the ability to perform a variety of functions. They can be installed or removed by the user, and can be easily updated to expand functionality. To get a good idea of what skills to build, let\u2019s talk about the best use cases for a voice assistant, and what types of things OVOS can do. OVOS can run on a variety of platforms from the Linux Desktop to Single Board Computers (SBCs) like the Raspberry Pi. Different devices will have slightly different use cases. Devices in the home are generally located in the living room or kitchen and are ideal for listening to the news, playing music, general information, using timers while cooking, checking the weather, and other similar activities that are easily accomplished hands-free.","title":"What can a Skill do?"},{"location":"ht_skills/#basic-functions","text":"We cover a lot of the basics with our Default Skills, things like Timers, Alarms, Weather, Time and Date, and more.","title":"Basic functions"},{"location":"ht_skills/#information","text":"We also call this General Question and Answer, and it covers all of those factual questions someone might think to ask a voice assistant. Questions like \u201cwho was the 32nd President of the United States?\u201d, or \u201chow tall is the Eiffel Tower?\u201d Although the Default Skills cover a great deal of questions, there is room for more. There are many topics that could use a specific skill such as Science, Academics, Movie Info, TV info, and Music info, etc. OVOS Fallback Skills","title":"Information"},{"location":"ht_skills/#media","text":"One of the biggest use cases for Smart Speakers is playing media. The reason media playback is so popular is that it makes playing a song so easy, all you have to do is say \u201cHey Mycroft play the Beatles,\u201d and you can be enjoying music without having to reach for a phone or remote. In addition to listening to music, there are skills that handle videos as well.","title":"Media"},{"location":"ht_skills/#news","text":"Much like listening to music, getting the latest news with a simple voice interaction is extremely convenient. OVOS supports multiple news feeds, and has the ability to support multiple news skills.","title":"News"},{"location":"ht_skills/#smart-home","text":"Another popular use case for Voice Assistants is to control Smart Home and IoT products. Within the OVOS ecosystem there are skills for Home Assistant, Wink IoT, Lifx and more, but there are many products that we do not have skills for yet. The open source community has been enthusiastically expanding OVOS's ability to voice control all kinds of smart home products.","title":"Smart Home"},{"location":"ht_skills/#games","text":"Voice games are becoming more and more popular, especially those that allow multiple users to play together. Trivia games are some of the most popular types of games to develop for voice assistants. There are several games already available for OVOS.
There are native voice adventure games, ports of the popular text adventure games from infocom, a Crystal Ball game, a Number Guessing game and much more!","title":"Games"},{"location":"ht_skills/#openvoiceos-standard-skills","text":"Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills; more about that at a later stage.","title":"OpenVoiceOS Standard Skills"},{"location":"ht_skills/#date-time-skill","text":"You can ask your device what time or date it is just in case you lost your watch. Hey Mycroft, what time is it? Hey Mycroft, what is the date?","title":"Date / Time skill"},{"location":"ht_skills/#setting-an-alarm","text":"Having your OpenVoiceOS device know and show the time is great, but it is even better to be woken up in the morning by your device. Hey Mycroft, set an alarm for 8 AM.","title":"Setting an alarm"},{"location":"ht_skills/#setting-of-timers","text":"Sometimes you are just busy but want to be alerted after a certain time. For that you can use timers. Hey Mycroft, set a timer for 5 minutes. You can always set more timers and even name them, so you know which timer is for what. Hey, Mycroft, set another timer called rice cooking for 7 minutes.","title":"Setting of timers"},{"location":"ht_skills/#asking-the-weather","text":"You can ask your device what the weather is or will be at any given time or place. Hey Mycroft, what is the weather like today? The weather skill actually uses multiple pages indicated by the small dots at the bottom of the screen. There are more installed, just try. If you don't get the response you expected, see the section on installing new skills","title":"Asking the weather"},{"location":"ht_skills_config/","text":"Skill Configuration Each skill will have its own config file usually located at ~/.local/share/mycroft/skills/<skill_id>/settings.json Skill settings provide the ability for users to configure a Skill using the command line or a web-based interface. This is often used to: Change default behaviors - such as the sound used for users' alarms. Authenticate with external services - such as Spotify Enter longer data as text rather than by voice - such as the IP address and token of the user's Home Assistant server. Skill settings are completely optional. Refer to each skill repository for valid configuration values.","title":"Configuring"},{"location":"ht_skills_config/#skill-configuration","text":"Each skill will have its own config file usually located at ~/.local/share/mycroft/skills/<skill_id>/settings.json Skill settings provide the ability for users to configure a Skill using the command line or a web-based interface. This is often used to: Change default behaviors - such as the sound used for users' alarms. Authenticate with external services - such as Spotify Enter longer data as text rather than by voice - such as the IP address and token of the user's Home Assistant server. Skill settings are completely optional. Refer to each skill repository for valid configuration values.","title":"Skill Configuration"},{"location":"ht_skills_install/","text":"How do I - Installing Skills This section will help you to understand what a skill is and how to install and use skills with OVOS. Finding Skills OVOS official skills can be found on PyPi and the latest stable version can be installed with a pip install command. pip install ovos-skill-naptime If you have issues installing with this command, you may need to use the alpha versions.
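A hedged sketch of the edit workflow for the settings.json described above; the skill directory name depends on the skill id, so list the folder first (restarting the skills service afterwards is the safe generic step):

```bash
ls ~/.local/share/mycroft/skills/                               # find the skill's directory name
nano ~/.local/share/mycroft/skills/<skill_id>/settings.json    # edit its settings
systemctl --user restart ovos-skills                            # restart so changes are picked up
```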
Pip has a command line flag for this --pre . pip install --pre ovos-skill-naptime will install the latest alpha version. This should fix dependency issues with the stable versions. Most skills are found throughout GitHub. The official skills can be found with a simple search in the OVOS GitHub page . There are a few other places they can be found. Neon AI has several skills, and a search through GitHub will surely find more. Installing a found skill There are a few ways to install skills in ovos. The preferred way is with pip and a setup.py file. pip install The preferred method is with pip . If a skill has a setup.py file, it can be installed this way. The syntax is pip install git+ . ex. pip install git+https://github.com/OpenVoiceOS/skill-ovos-date-time.git should install the ovos-date-time skill. Skills can be installed from a local file also. Clone the repository. git clone https://github.com/OpenVoiceOS/skill-ovos-date-time pip install ./skill-ovos-date-time After installing skills this way, the ovos skills service needs to be restarted. systemctl --user restart ovos-skills git install This is NOT the preferred method and is here for backward compatibility with the original mycroft-core skills. Skills can also be directly cloned to the skill directory, usually located at ~/.local/share/mycroft/skills/ . Enter the skill directory. cd ~/.local/share/mycroft/skills and clone the found skill here with git. git clone ex. git clone https://github.com/OpenVoiceOS/skill-ovos-date-time.git will install the ovos-date-time skill. A restart of the ovos-skills service is not required when installing this way. Deprecated The OVOS skills manager is in need of some love, and when official skill stores are created, this will be updated to use the new methods. Until then, this method is NOT recommended, and NOT supported. The following is included just as reference. OVOS skills manager Install skills from any appstore! The mycroft-skills-manager alternative that is not vendor locked ; this means you must use it responsibly! Do not install random skills, different appstores have different policies! Keep in mind any skill you install can modify mycroft-core at runtime , and very likely has root access if you are running on a Raspberry Pi. Supported stores OVOS - this one is really a proof of concept for now, stay tuned! Mycroft Marketplace - the official mycroft skills store, all skills are reviewed by humans! Pling - the official plasma bigscreen skills store, skills are accepted by default and only removed if flagged as malicious [Andlo's skill list](https://github.com/andlo/mycroft-skills-list-gitbook/tree/master/skills) - not a real appstore, this is a web-scraped, automatically generated list of 900+ skills from all over GitHub; there is no review at all, so it may include malicious skills OpenVoiceOS Skill Manager pip install ovos-skills-manager Enable a skill store osm enable --appstore [ovos|mycroft|pling|andlo|all] Search for a skill and install it osm install --search See more osm commands osm --help osm install --help More Information","title":"Installing Skills"},{"location":"ht_skills_install/#how-do-i-installing-skills","text":"This section will help you to understand what a skill is and how to install and use skills with OVOS.","title":"How do I - Installing Skills"},{"location":"ht_skills_install/#finding-skills","text":"OVOS official skills can be found on PyPi and the latest stable version can be installed with a pip install command. 
pip install ovos-skill-naptime If you have issues installing with this command, you may need to use the alpha versions. Pip has a command line flag for this --pre . pip install --pre ovos-skill-naptime will install the latest alpha version. This should fix dependency issues with the stable versions. Most skills are found throughout GitHub. The official skills can be found with a simple search in the OVOS GitHub page . There are a few other places they can be found. Neon AI has several skills, and a search through GitHub will surely find more.","title":"Finding Skills"},{"location":"ht_skills_install/#installing-a-found-skill","text":"There are a few ways to install skills in ovos. The preferred way is with pip and a setup.py file.","title":"Installing a found skill"},{"location":"ht_skills_install/#pip-install","text":"The preferred method is with pip . If a skill has a setup.py file, it can be installed this way. The syntax is pip install git+ . ex. pip install git+https://github.com/OpenVoiceOS/skill-ovos-date-time.git should install the ovos-date-time skill. Skills can be installed from a local file also. Clone the repository. git clone https://github.com/OpenVoiceOS/skill-ovos-date-time pip install ./skill-ovos-date-time After installing skills this way, the ovos skills service needs to be restarted. systemctl --user restart ovos-skills","title":"pip install"},{"location":"ht_skills_install/#git-install","text":"This is NOT the preferred method and is here for backward compatibility with the original mycroft-core skills. Skills can also be directly cloned to the skill directory, usually located at ~/.local/share/mycroft/skills/ . Enter the skill directory. cd ~/.local/share/mycroft/skills and clone the found skill here with git. git clone ex. git clone https://github.com/OpenVoiceOS/skill-ovos-date-time.git will install the ovos-date-time skill. A restart of the ovos-skills service is not required when installing this way.","title":"git install"},{"location":"ht_skills_install/#deprecated","text":"The OVOS skills manager is in need of some love, and when official skill stores are created, this will be updated to use the new methods. Until then, this method is NOT recommended, and NOT supported. The following is included just as reference.","title":"Deprecated"},{"location":"ht_skills_install/#ovos-skills-manager","text":"Install skills from any appstore! The mycroft-skills-manager alternative that is not vendor locked ; this means you must use it responsibly! Do not install random skills, different appstores have different policies! Keep in mind any skill you install can modify mycroft-core at runtime , and very likely has root access if you are running on a Raspberry Pi.","title":"OVOS skills manager"},{"location":"ht_skills_install/#supported-stores","text":"OVOS - this one is really a proof of concept for now, stay tuned! Mycroft Marketplace - the official mycroft skills store, all skills are reviewed by humans! 
Pling - the official plasma bigscreen skills store, skills are accepted by default and only removed if flagged as malicious [Andlo's skill list](https://github.com/andlo/mycroft-skills-list-gitbook/tree/master/skills) - not a real appstore, this is a web-scraped, automatically generated list of 900+ skills from all over GitHub; there is no review at all, so it may include malicious skills","title":"Supported stores"},{"location":"ht_skills_install/#openvoiceos-skill-manager","text":"pip install ovos-skills-manager Enable a skill store osm enable --appstore [ovos|mycroft|pling|andlo|all] Search for a skill and install it osm install --search See more osm commands osm --help osm install --help More Information","title":"OpenVoiceOS Skill Manager"},{"location":"ht_stt/","text":"How do I - STT STT (Speech to Text) is what converts your voice into commands that OVOS recognizes, then converts to an intent that is used to activate skills. There are several STT engines available and OVOS uses ovos-stt-plugin-server and a list of public servers hosted by OVOS community members by default. List of STT plugins Plugin Offline Type ovos-stt-plugin-vosk yes FOSS ovos-stt-plugin-chromium no API (free) neon-stt-plugin-google_cloud_streaming no API (key) neon-stt-plugin-scribosermo yes FOSS neon-stt-plugin-silero yes FOSS neon-stt-plugin-polyglot yes FOSS neon-stt-plugin-deepspeech_stream_local yes FOSS ovos-stt-plugin-selene no API (free) ovos-stt-plugin-http-server no API (self hosted) ovos-stt-plugin-pocketsphinx yes FOSS Advanced Usage","title":"STT"},{"location":"ht_stt/#how-do-i-stt","text":"STT (Speech to Text) is what converts your voice into commands that OVOS recognizes, then converts to an intent that is used to activate skills. There are several STT engines available and OVOS uses ovos-stt-plugin-server and a list of public servers hosted by OVOS community members by default.","title":"How do I - STT"},{"location":"ht_stt/#list-of-stt-plugins","text":"Plugin Offline Type ovos-stt-plugin-vosk yes FOSS ovos-stt-plugin-chromium no API (free) neon-stt-plugin-google_cloud_streaming no API (key) neon-stt-plugin-scribosermo yes FOSS neon-stt-plugin-silero yes FOSS neon-stt-plugin-polyglot yes FOSS neon-stt-plugin-deepspeech_stream_local yes FOSS ovos-stt-plugin-selene no API (free) ovos-stt-plugin-http-server no API (self hosted) ovos-stt-plugin-pocketsphinx yes FOSS Advanced Usage","title":"List of STT plugins"},{"location":"ht_stt_config/","text":"STT Configuration Several STT engines have different configuration settings for optimizing their use. Several have different models to use, or you can specify a specific STT server to use. We will cover basic configuration of the default STT engine ovos-stt-plugin-server All changes will be made in the User configuration file, eg. ~/.config/mycroft/mycroft.conf Open the file for editing. nano ~/.config/mycroft/mycroft.conf If your file is empty, or does not have a \"stt\" section, you need to create it. Add this to your config { \"stt\": { \"module\": \"ovos-stt-plugin-server\", \"fallback_module\": \"ovos-stt-plugin-vosk\", \"ovos-stt-plugin-server\": { \"url\": \"https://fasterwhisper.ziggyai.online/stt\" }, \"ovos-stt-plugin-vosk\": {} } } By default, the language that is configured with OVOS will be used, but it should (WIP) detect the spoken language and convert it as necessary. Sections explained \"module\" - This is where you specify what STT module to use. \"fallback_module\" - If by chance your first STT engine fails, OVOS will try to use this one. 
It is usually configured to use an on device engine so that you always have some output even if you are disconnected from the internet. \"ovos-stt-plugin-server\" \"ovos-stt-plugin-vosk\" - Specify specific plugin settings here. Multiple entries are allowed. If an empty dict is provided, {} , the plugin will use its default values. Refer to the STT plugin GitHub repository for specifications on each plugin","title":"STT Configuration"},{"location":"ht_stt_config/#stt-configuration","text":"Several STT engines have different configuration settings for optimizing their use. Several have different models to use, or you can specify a specific STT server to use. We will cover basic configuration of the default STT engine ovos-stt-plugin-server All changes will be made in the User configuration file, eg. ~/.config/mycroft/mycroft.conf Open the file for editing. nano ~/.config/mycroft/mycroft.conf If your file is empty, or does not have a \"stt\" section, you need to create it. Add this to your config { \"stt\": { \"module\": \"ovos-stt-plugin-server\", \"fallback_module\": \"ovos-stt-plugin-vosk\", \"ovos-stt-plugin-server\": { \"url\": \"https://fasterwhisper.ziggyai.online/stt\" }, \"ovos-stt-plugin-vosk\": {} } } By default, the language that is configured with OVOS will be used, but it should (WIP) detect the spoken language and convert it as necessary.","title":"STT Configuration"},{"location":"ht_stt_config/#sections-explained","text":"\"module\" - This is where you specify what STT module to use. \"fallback_module\" - If by chance your first STT engine fails, OVOS will try to use this one. It is usually configured to use an on device engine so that you always have some output even if you are disconnected from the internet. \"ovos-stt-plugin-server\" \"ovos-stt-plugin-vosk\" - Specify specific plugin settings here. Multiple entries are allowed. If an empty dict is provided, {} , the plugin will use its default values. Refer to the STT plugin GitHub repository for specifications on each plugin","title":"Sections explained"},{"location":"ht_tts/","text":"How do I - TTS TTS plugins are responsible for converting text into audio for playback. Several options are available, each with different attributes and supported languages. Some can be run on device, others need an internet connection to work. As with most OVOS packages, the TTS plugins are available on PyPi and can be installed with pip install pip install ovos-tts-plugin-piper will install the latest stable version. If there are installation errors, you can install the latest alpha versions of the plugins. pip install --pre ovos-tts-plugin-piper By default, OVOS uses ovos-tts-server-plugin and a series of public TTS servers, provided by OVOS community members, to send speech to your device. If you host your own TTS server, or this option is not acceptable to you, there are many other options to use. 
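(For reference before moving on to the TTS plugin list: the \"stt\" block from the STT Configuration section above, written out as it would actually appear in ~/.config/mycroft/mycroft.conf — same values, just formatted as valid JSON; note the comma between the two plugin entries.)

{
  "stt": {
    "module": "ovos-stt-plugin-server",
    "fallback_module": "ovos-stt-plugin-vosk",
    "ovos-stt-plugin-server": {
      "url": "https://fasterwhisper.ziggyai.online/stt"
    },
    "ovos-stt-plugin-vosk": {}
  }
}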
List of TTS plugins Plugin Offline Type ovos-tts-server-plugin no API (self hosted) ovos-tts-plugin-mimic3 yes FOSS ovos-tts-plugin-mimic yes FOSS ovos-tts-plugin-mimic2 no API (free) ovos-tts-plugin-marytts no API (self hosted) neon-tts-plugin-larynx_server no API (self hosted) ovos-tts-plugin-pico yes FOSS neon-tts-plugin-glados yes FOSS neon-tts-plugin-mozilla_local yes FOSS neon-tts-plugin-polly no API (key) ovos-tts-plugin-voicerss no API (key) ovos-tts-plugin-google-TX no API (free) ovos-tts-plugin-responsivevoice no API (free) neon-tts-plugin-mozilla_remote no API (self hosted) neon-tts-plugin-tacotron2 yes FOSS ovos-tts-plugin-espeakNG yes FOSS ovos-tts-plugin-cotovia yes FOSS ovos-tts-plugin-catotron no API (self hosted) ovos-tts-plugin-softcatala no API (self hosted) ovos-tts-plugin-SAM yes Abandonware ovos-tts-plugin-beepspeak yes Fun Advanced Usage","title":"Introduction"},{"location":"ht_tts/#how-do-i-tts","text":"TTS plugins are responsible for converting text into audio for playback. Several options are available, each with different attributes and supported languages. Some can be run on device, others need an internet connection to work. As with most OVOS packages, the TTS plugins are available on PyPi and can be installed with pip install pip install ovos-tts-plugin-piper will install the latest stable version. If there are installation errors, you can install the latest alpha versions of the plugins. pip install --pre ovos-tts-plugin-piper By default, OVOS uses ovos-tts-server-plugin and a series of public TTS servers, provided by OVOS community members, to send speech to your device. If you host your own TTS server, or this option is not acceptable to you, there are many other options to use.","title":"How do I - TTS"},{"location":"ht_tts/#list-of-tts-plugins","text":"Plugin Offline Type ovos-tts-server-plugin no API (self hosted) ovos-tts-plugin-mimic3 yes FOSS ovos-tts-plugin-mimic yes FOSS ovos-tts-plugin-mimic2 no API (free) ovos-tts-plugin-marytts no API (self hosted) neon-tts-plugin-larynx_server no API (self hosted) ovos-tts-plugin-pico yes FOSS neon-tts-plugin-glados yes FOSS neon-tts-plugin-mozilla_local yes FOSS neon-tts-plugin-polly no API (key) ovos-tts-plugin-voicerss no API (key) ovos-tts-plugin-google-TX no API (free) ovos-tts-plugin-responsivevoice no API (free) neon-tts-plugin-mozilla_remote no API (self hosted) neon-tts-plugin-tacotron2 yes FOSS ovos-tts-plugin-espeakNG yes FOSS ovos-tts-plugin-cotovia yes FOSS ovos-tts-plugin-catotron no API (self hosted) ovos-tts-plugin-softcatala no API (self hosted) ovos-tts-plugin-SAM yes Abandonware ovos-tts-plugin-beepspeak yes Fun Advanced Usage","title":"List of TTS plugins"},{"location":"ht_tts_config/","text":"TTS Configuration TTS plugins are responsible for converting text into audio for playback. List of TTS Plugins Advanced TTS Plugin Documentation Several TTS engines have different configuration settings for optimizing their use. Several have different voices to use, or you can specify a TTS server to use. We will cover basic configuration of the default TTS engine ovos-tts-server-plugin . All changes will be made in the User configuration file, eg. ~/.config/mycroft/mycroft.conf . Open the file for editing. nano ~/.config/mycroft/mycroft.conf . If your file is empty, or does not have a \"tts\" section, you need to create it. 
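Formatted for readability, the \"tts\" block you are about to add looks like this (the flattened example that follows shows the same content; note the comma after the closing brace of the \"ovos-tts-server-plugin\" entry):

{
  "tts": {
    "module": "ovos-tts-server-plugin",
    "fallback_module": "ovos-tts-plugin-piper",
    "ovos-tts-server-plugin": {
      "host": "https://pipertts.ziggyai.online",
      "voice": "alan-low"
    },
    "ovos-tts-plugin-piper": {}
  }
}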
Add this to your config { \"tts\": { \"module\": \"ovos-tts-server-plugin\", \"fallback_module\": \"ovos-tts-plugin-piper\", \"ovos-tts-server-plugin\": { \"host\": \"https://pipertts.ziggyai.online\", \"voice\": \"alan-low\" }, \"ovos-tts-plugin-piper\": {} } } Sections explained \"module\" - This is where you specify what TTS plugin to use. - ovos-tts-server-plugin in this example. - This plugin, by default, uses a random selection of public TTS servers provided by the OVOS community. With no \"host\" provided, one of those will be used. - You can still change your voice without changing the \"host\" . The default voice is \"alan-low\", the original Mycroft voice by \"Alan Pope\". Changing your assistant's voice \"fallback_module\" - If by chance your first TTS engine fails, OVOS will try to use this one. It is usually configured to use an on device engine so that you always have some output even if you are disconnected from the internet. \"ovos-tts-server-plugin\" \"ovos-tts-plugin-piper\" - Specify specific plugin settings here. Multiple entries are allowed. If an empty dict is provided, {} , the plugin will use its default values. Refer to the TTS GitHub repository for specifications on each plugin List of TTS plugins Plugin Offline Type ovos-tts-server-plugin no API (self hosted) ovos-tts-plugin-piper yes API (self hosted) ovos-tts-plugin-mimic3 yes FOSS ovos-tts-plugin-mimic yes FOSS ovos-tts-plugin-mimic2 no API (free) ovos-tts-plugin-marytts no API (self hosted) neon-tts-plugin-larynx_server no API (self hosted) ovos-tts-plugin-pico yes FOSS neon-tts-plugin-glados yes FOSS neon-tts-plugin-mozilla_local yes FOSS neon-tts-plugin-polly no API (key) ovos-tts-plugin-voicerss no API (key) ovos-tts-plugin-google-TX no API (free) ovos-tts-plugin-responsivevoice no API (free) neon-tts-plugin-mozilla_remote no API (self hosted) neon-tts-plugin-tacotron2 yes FOSS ovos-tts-plugin-espeakNG yes FOSS ovos-tts-plugin-cotovia yes FOSS ovos-tts-plugin-catotron no API (self hosted) ovos-tts-plugin-softcatala no API (self hosted) ovos-tts-plugin-SAM yes Abandonware ovos-tts-plugin-beepspeak yes Fun","title":"Configuring"},{"location":"ht_tts_config/#tts-configuration","text":"TTS plugins are responsible for converting text into audio for playback. List of TTS Plugins Advanced TTS Plugin Documentation Several TTS engines have different configuration settings for optimizing their use. Several have different voices to use, or you can specify a TTS server to use. We will cover basic configuration of the default TTS engine ovos-tts-server-plugin . All changes will be made in the User configuration file, eg. ~/.config/mycroft/mycroft.conf . Open the file for editing. nano ~/.config/mycroft/mycroft.conf . If your file is empty, or does not have a \"tts\" section, you need to create it. Add this to your config { \"tts\": { \"module\": \"ovos-tts-server-plugin\", \"fallback_module\": \"ovos-tts-plugin-piper\", \"ovos-tts-server-plugin\": { \"host\": \"https://pipertts.ziggyai.online\", \"voice\": \"alan-low\" }, \"ovos-tts-plugin-piper\": {} } }","title":"TTS Configuration"},{"location":"ht_tts_config/#sections-explained","text":"\"module\" - This is where you specify what TTS plugin to use. - ovos-tts-server-plugin in this example. - This plugin, by default, uses a random selection of public TTS servers provided by the OVOS community. With no \"host\" provided, one of those will be used. - You can still change your voice without changing the \"host\" . 
The default voice is \"alan-low\" , or the Mycroft original voice `\"Alan Pope\". Changing your assistant's voice \"fallback_module\" - If by chance your first TTS engine fails, OVOS will try to use this one. It is usually configured to use an on device engine so that you always have some output even if you are disconnected from the internet. \"ovos-tts-server-plugin\" \"ovos-tts-plugin-piper\" - Specify specific plugin settings here. Multiple entries are allowed. If an empty dict is provided, {} , the plugin will use its default values. Refer to the TTS github repository for specifications on each plugin","title":"Sections explained"},{"location":"ht_tts_config/#list-of-tts-plugins","text":"Plugin Offline Type ovos-tts-server-plugin no API (self hosted) ovos-tts-plugin-piper yes API (self hosted) ovos-tts-plugin-mimic3 yes FOSS ovos-tts-plugin-mimic yes FOSS ovos-tts-plugin-mimic2 no API (free) ovos-tts-plugin-marytts no API (self hosted) neon-tts-plugin-larynx_server no API (self hosted) ovos-tts-plugin-pico yes FOSS neon-tts-plugin-glados yes FOSS neon-tts-plugin-mozilla_local yes FOSS neon-tts-plugin-polly no API (key) ovos-tts-plugin-voicerss no API (key) ovos-tts-plugin-google-TX no API (free) ovos-tts-plugin-responsivevoice no API (free) neon-tts-plugin-mozilla_remote no API (self hosted) neon-tts-plugin-tacotron2 yes FOSS ovos-tts-plugin-espeakNG yes FOSS ovos-tts-plugin-cotovia yes FOSS ovos-tts-plugin-catotron no API (self hosted) ovos-tts-plugin-softcatala no API (self hosted) ovos-tts-plugin-SAM yes Abandonware ovos-tts-plugin-beepspeak yes Fun","title":"List of TTS plugins"},{"location":"ht_vad/","text":"OVOS Listener - VAD","title":"VAD"},{"location":"ht_vad/#ovos-listener-vad","text":"","title":"OVOS Listener - VAD"},{"location":"ht_ww/","text":"OVOS Listener - WakeWords / HotWords OVOS uses \"wakewords\" to activate the system. This is what \"hey Google\" or \"Alexa\" is on proprietary devices. By default, OVOS uses the WakeWord \"hey Mycroft\". OVOS \"hotwords\" is the configuration section to specify what the WakeWord do. Multiple \"hotwords\" can be used to do a variety of things from putting OVOS into active listening mode, a WakeWord like \"hey Mycroft\", to issuing a command such as \"stop\" or \"wake up\" As with everything else, this too can be changed, and several plugins are available. Some work better than others. List of OVOS WakeWord Plugins Plugin Type Description ovos-ww-plugin-precise-lite Model The most accurate plugin available as it uses pretrained models and community models are available also ovos-ww-plugin-openWakeWord Model Uses openWakeWord for detection ovos-ww-plugin-vosk Full Word Uses full word detection from a loaded model. ovos-ww-plugin-pocketsphinx Phonomes Probably the least accurate, but can be used on almost any device ovos-ww-plugin-hotkeys Model Use an input from keyboard or button to emulate a wakeword being said. Useful for privacy, but not so much for a smart speaker. ovos-ww-plugin-snowboy Model Uses snowboy wakeword engine ovos-ww-plugin-nyumaya Model WakeWord plugin using Nyumaya Configuration The configuration for wakewords are in the \"listener\" section of mycroft.conf and configuration of hotwords is in the \"hotwords\" section of the same file. This example will use the vosk plugin and change the wake word to \"hey Ziggy\". Add the following to your ~/.config/mycroft/mycroft.conf file. 
{ \"listener\": { \"wake_word\": \"hey_ziggy\" } \"hotwords\": { \"hey_ziggy\": { \"module\": \"ovos-ww-plugin-vosk\", \"listen\": true, \"active\": true, \"sound\": \"snd/start_listening.wav\", \"debug\": false \"rule\": \"fuzzy\", \"lang\": \"en\", \"samples\": [ \"hey ziggy\", \"hay ziggy\" ] } } } Sections explained The most important section is \"wake_word\": \"hey_ziggy\" in the \"listener\" section. This tells OVOS what the default wakeword should be. In the \"hotwords\" section, \"active\": true , is only used if multiple wakewords are being used. By default, what ever wake_word is set in the listener section is automatically set to true . If you want to disable a wakeword, you can set this to false . If enabling a wakeword, be sure to also set \"listen\": true . Multiple hotwords can be configured at the same time, even the same word with different plugins. This allows for more accurate ones to be used before the less accurate, but only if the plugin is installed. Advanced WakeWords/HotWords","title":"WakeWord/HotWords"},{"location":"ht_ww/#ovos-listener-wakewords-hotwords","text":"OVOS uses \"wakewords\" to activate the system. This is what \"hey Google\" or \"Alexa\" is on proprietary devices. By default, OVOS uses the WakeWord \"hey Mycroft\". OVOS \"hotwords\" is the configuration section to specify what the WakeWord do. Multiple \"hotwords\" can be used to do a variety of things from putting OVOS into active listening mode, a WakeWord like \"hey Mycroft\", to issuing a command such as \"stop\" or \"wake up\" As with everything else, this too can be changed, and several plugins are available. Some work better than others.","title":"OVOS Listener - WakeWords / HotWords"},{"location":"ht_ww/#list-of-ovos-wakeword-plugins","text":"Plugin Type Description ovos-ww-plugin-precise-lite Model The most accurate plugin available as it uses pretrained models and community models are available also ovos-ww-plugin-openWakeWord Model Uses openWakeWord for detection ovos-ww-plugin-vosk Full Word Uses full word detection from a loaded model. ovos-ww-plugin-pocketsphinx Phonomes Probably the least accurate, but can be used on almost any device ovos-ww-plugin-hotkeys Model Use an input from keyboard or button to emulate a wakeword being said. Useful for privacy, but not so much for a smart speaker. ovos-ww-plugin-snowboy Model Uses snowboy wakeword engine ovos-ww-plugin-nyumaya Model WakeWord plugin using Nyumaya","title":"List of OVOS WakeWord Plugins"},{"location":"ht_ww/#configuration","text":"The configuration for wakewords are in the \"listener\" section of mycroft.conf and configuration of hotwords is in the \"hotwords\" section of the same file. This example will use the vosk plugin and change the wake word to \"hey Ziggy\". Add the following to your ~/.config/mycroft/mycroft.conf file. { \"listener\": { \"wake_word\": \"hey_ziggy\" } \"hotwords\": { \"hey_ziggy\": { \"module\": \"ovos-ww-plugin-vosk\", \"listen\": true, \"active\": true, \"sound\": \"snd/start_listening.wav\", \"debug\": false \"rule\": \"fuzzy\", \"lang\": \"en\", \"samples\": [ \"hey ziggy\", \"hay ziggy\" ] } } }","title":"Configuration"},{"location":"ht_ww/#sections-explained","text":"The most important section is \"wake_word\": \"hey_ziggy\" in the \"listener\" section. This tells OVOS what the default wakeword should be. In the \"hotwords\" section, \"active\": true , is only used if multiple wakewords are being used. By default, what ever wake_word is set in the listener section is automatically set to true . 
If you want to disable a wakeword, you can set this to false . If enabling a wakeword, be sure to also set \"listen\": true . Multiple hotwords can be configured at the same time, even the same word with different plugins. This allows for more accurate ones to be used before the less accurate, but only if the plugin is installed. Advanced WakeWords/HotWords","title":"Sections explained"},{"location":"images/","text":"OpenVoiceOS vs Neon A.I. vs Mycroft A.I. Ready to go images compared OpenVoiceOS ready-to-use images come in two flavours: the Buildroot version, being the minimal consumer type of image, and the Manjaro version, being the full distribution, easier for developing. OpenVoiceOS (Buildroot) OpenVoiceOS (Manjaro) Neon AI Mark II (Dinkum) Mycroft A.I. (PiCroft) Software - Architecture Core ovos-core ovos-core neon-core Dinkum mycroft-core GUI ovos-shell (mycroft-gui based) ovos-shell (mycroft-gui based) ovos-shell (mycroft-gui based) plasma-nano (mycroft-gui based) N/A Services systemd user session systemd system session systemd system session systemd system session N/A Hardware - Compatibility Raspberry Pi 3/3b/3b/4 4 4 Mark II (only) 3/3b/3b/4 X86_64 planned No WIP No No Virtual Appliance planned No Unknown No No Docker No possibly in future Yes Yes No No Mark-1 Yes WIP No No No No Mark-2 Yes Dev-Kit Retail (WIP) Yes Dev-Kit Retail Yes Dev-Kit Retail Yes Retail ONLY No Hardware - Peripherals ReSpeaker 2-mic 4-mic squared 4-mic linear 6-mic 2-mic 4-mic squared 4-mic linear 6-mic Unknown No Yes manual installation? USB Yes Yes Unknown No Yes manual installation SJ-201 Yes Yes Yes Yes No sandbox image maybe Google AIY v1 Yes manual configuration Yes manual installation Unknown No No manual installation? Google AIY v2 No perhaps in the future Yes manual installation Unknown No No manual installation? Screen - GUI GUI supported Showing a GUI if a screen is attached Yes ovos-shell on eglfs Yes ovos-shell on eglfs Yes ovos-shell on eglfs Yes plasma-nano on X11 No Network Setup - Options Mobile WiFi Setup Easy device \"hotspot\" to connect to preset network from phone or pad. Yes No No Yes No On device WiFi Setup Configure the WiFi connection on the device itself Yes Yes Yes No No On screen keyboard Yes Yes Yes Yes No Reconfigure network Easy way to change the network settings Yes Yes Yes No No Configuration - Option Data privacy Yes Yes Yes Partial Partial Offline mode Yes Yes Yes No No Color theming Yes Yes Yes No No Non-Pairing mode Yes Yes Yes No No API Access w/o pairing Yes Yes Yes No No On-Device configuration Yes Yes Yes No No Online configuration Dashboard wip Dashboard wip WIP Yes Yes Customization Open Build System Yes Yes Yes Partial build tools are not public Yes Package manager No No buildtools available. Perhaps opkg in the future Yes (pacman) Yes Yes *limited because of read-only filesystem Yes Updating Update mechanism(s) pip In the future: Firmware updates. On-device and Over The Air pip package manager Plugin-based update mechanism OS Updates WIP OTA controlled by Mycroft pip package manager Voice Assistant - Functionality STT - On device Yes Kaldi/Vosk-API WhisperCPP (WIP) Whisper TFlite (WIP) Yes Kaldi/Vosk-API Yes Vosk Deepspeech Yes Vosk Coqui No STT - On premises Yes Ovos STT Server (any plugin) Yes Ovos STT Server (any plugin) Yes Ovos STT Server (any plugin) No No STT - Cloud Yes Ovos Server Proxy Google More...? Yes Ovos Server Proxy Google Yes Google Yes Selene Google Cloud Proxy Yes Selene Google (Chromium) Proxy TTS - On device Yes Mimic 1 More...? 
Yes Mimic 1 More...? Yes Mimic 1 Mimic 3 Coqui Yes Mimic 3 Yes Mimic 1 TTS - On premises Yes ? Yes ? Yes Coqui Mozilla Larynx No No TTS - Cloud Yes Google Mimic 2 Mimic 3 More...? Yes Google Mimic 2 Mimic 3 More...? Yes Amazon Polly No No Smart Speaker - Functionality Music player connectivity The use of external applications on other devices to connect to your device. Yes Airplay Spotifyd Bluetooth Snapcast KDE Connect Unknown Unknown Yes MPD Local Files No manual installation? Music player sync Yes OCP MPRIS Yes OCP MPRIS Yes OCP MPRIS No No HomeAssistant integration unknown Yes HomeAssistant PHAL Plugin WIP Mycroft Skill reported working unknown unknown Camera support Yes wip Yes unknown unknown","title":"OpenVoiceOS vs Neon A.I. vs Mycroft A.I."},{"location":"images/#openvoiceos-vs-neon-ai-vs-mycroft-ai","text":"","title":"OpenVoiceOS vs Neon A.I. vs Mycroft A.I."},{"location":"images/#ready-to-go-images-compared","text":"OpenVoiceOS ready-to-use images come in two flavours: the Buildroot version, being the minimal consumer type of image, and the Manjaro version, being the full distribution, easier for developing. OpenVoiceOS (Buildroot) OpenVoiceOS (Manjaro) Neon AI Mark II (Dinkum) Mycroft A.I. (PiCroft) Software - Architecture Core ovos-core ovos-core neon-core Dinkum mycroft-core GUI ovos-shell (mycroft-gui based) ovos-shell (mycroft-gui based) ovos-shell (mycroft-gui based) plasma-nano (mycroft-gui based) N/A Services systemd user session systemd system session systemd system session systemd system session N/A Hardware - Compatibility Raspberry Pi 3/3b/3b/4 4 4 Mark II (only) 3/3b/3b/4 X86_64 planned No WIP No No Virtual Appliance planned No Unknown No No Docker No possibly in future Yes Yes No No Mark-1 Yes WIP No No No No Mark-2 Yes Dev-Kit Retail (WIP) Yes Dev-Kit Retail Yes Dev-Kit Retail Yes Retail ONLY No Hardware - Peripherals ReSpeaker 2-mic 4-mic squared 4-mic linear 6-mic 2-mic 4-mic squared 4-mic linear 6-mic Unknown No Yes manual installation? USB Yes Yes Unknown No Yes manual installation SJ-201 Yes Yes Yes Yes No sandbox image maybe Google AIY v1 Yes manual configuration Yes manual installation Unknown No No manual installation? Google AIY v2 No perhaps in the future Yes manual installation Unknown No No manual installation? Screen - GUI GUI supported Showing a GUI if a screen is attached Yes ovos-shell on eglfs Yes ovos-shell on eglfs Yes ovos-shell on eglfs Yes plasma-nano on X11 No Network Setup - Options Mobile WiFi Setup Easy device \"hotspot\" to connect to preset network from phone or pad. Yes No No Yes No On device WiFi Setup Configure the WiFi connection on the device itself Yes Yes Yes No No On screen keyboard Yes Yes Yes Yes No Reconfigure network Easy way to change the network settings Yes Yes Yes No No Configuration - Option Data privacy Yes Yes Yes Partial Partial Offline mode Yes Yes Yes No No Color theming Yes Yes Yes No No Non-Pairing mode Yes Yes Yes No No API Access w/o pairing Yes Yes Yes No No On-Device configuration Yes Yes Yes No No Online configuration Dashboard wip Dashboard wip WIP Yes Yes Customization Open Build System Yes Yes Yes Partial build tools are not public Yes Package manager No No buildtools available. Perhaps opkg in the future Yes (pacman) Yes Yes *limited because of read-only filesystem Yes Updating Update mechanism(s) pip In the future: Firmware updates. 
On-device and Over The Air pip package manager Plugin-based update mechanism OS Updates WIP OTA controlled by Mycroft pip package manager Voice Assistant - Functionality STT - On device Yes Kaldi/Vosk-API WhisperCPP (WIP) Whisper TFlite (WIP) Yes Kaldi/Vosk-API Yes Vosk Deepspeech Yes Vosk Coqui No STT - On premises Yes Ovos STT Server (any plugin) Yes Ovos STT Server (any plugin) Yes Ovos STT Server (any plugin) No No STT - Cloud Yes Ovos Server Proxy Google More...? Yes Ovos Server Proxy Google Yes Google Yes Selene Google Cloud Proxy Yes Selene Google (Chromium) Proxy TTS - On device Yes Mimic 1 More...? Yes Mimic 1 More...? Yes Mimic 1 Mimic 3 Coqui Yes Mimic 3 Yes Mimic 1 TTS - On premises Yes ? Yes ? Yes Coqui Mozilla Larynx No No TTS - Cloud Yes Google Mimic 2 Mimic 3 More...? Yes Google Mimic 2 Mimic 3 More...? Yes Amazon Polly No No Smart Speaker - Functionality Music player connectivity The use of external applications on other devices to connect to your device. Yes Airplay Spotifyd Bluetooth Snapcast KDE Connect Unknown Unknown Yes MPD Local Files No manual installation? Music player sync Yes OCP MPRIS Yes OCP MPRIS Yes OCP MPRIS No No HomeAssistant integration unknown Yes HomeAssistant PHAL Plugin WIP Mycroft Skill reported working unknown unknown Camera support Yes wip Yes unknown unknown","title":"Ready to go images compared"},{"location":"images_buildroot/","text":"Open Voice Operating System - Buildroot Edition Auto detection and configuration of HAT's The buildroot OpenVoiceOS edition is considered to be a consumer friendly type of device, or, as Mycroft A.I. would call it, a retail version. However, as we do not target a specific hardware platform and would like to support custom made systems, we are implementing a smart way to detect and configure different types of Raspberry Pi HAT's. At boot the system scans the I2C bus for known and supported HAT's and, if found, configures the underlying linux sound system. At the moment this is still very much in development, however the below HAT's are or should soon be supported by this system; - ReSpeaker 2-mic HAT - ReSpeaker 4-mic Square HAT - ReSpeaker 4-mic linear / 6-mic HAT - USB devices such as the PS2 EYE - SJ-201 Dev Kits - SJ-201 Mark2 retail device Snapcast Client & Server TODO - write docs Remote shared folder access (SMB - Windows) Your OpenVoiceOS device is accessible over the network from your Windows computer. This is still a work in progress, but you can open a file explorer and navigate to your OpenVoiceOS device. At the moment the following directories within the user's home directory are shared over the network. - Documents - Music - Pictures These folders are also used by KDE Connect file transfer plugins and for instance the Camera skill (Hey Mycroft, take a selfie) and / or Homescreen Skill (Hey Mycroft, take a screenshot) Remote shared folder access (NFS - Linux) In the near future the above Windows network shares are also made available over NFS for Linux clients. This is still a Work In Progress / To Do item. Development. At this moment development is in very early stages and focused on the Raspberry Pi 3B & 4. As soon as an initial first workable version is created, other hardware might be added. Source code: https://github.com/OpenVoiceOS/ovos-buildroot Build Environment Only use x86_64 based architecture / hardware to build the image. 
The following example Build environment has been tested : Architecture: x86_64 Hardware: Intel Core i5 processor, 8GB RAM, 240GB SSD (you can build on less RAM (2GB) and slower storage, but more RAM, faster storage = quicker image building) OS: Ubuntu 22.04 LTS desktop Installing System Build Dependencies The following system packages are required to build the image: gcc subversion qttools5-dev qttools5-dev-tools python git make g++ curl wget qtdeclarative5-dev The following firewall ports need to be allowed to the internet. In addition to the usual http/https ports (tcp 80, tcp 443) a couple of other ports need to be allowed to the internet : - tcp 9418 (git). - tcp 21 (ftp PASV) and random ports for DATA channel. This can be optional, but it is better to have this allowed along with the corresponding random data channel ports. (knowledge of firewalls required) Getting the code. First, get the code on your system! The simplest method is via git. - cd ~/ - git clone --recurse-submodules https://github.com/OpenVoiceOS/OpenVoiceOS.git - cd OpenVoiceOS Patching Buildroot. (ONLY at the first clean checkout/clone) If this is the very first time you are going to build an image, you need to execute the following command once; - ./scripts/br-patches.sh This will patch the Buildroot packages. Building the image. Building the image(s) can be done by utilizing a proper Makefile; To see the available commands, just run: 'make help' As an example, to build the rpi4 version; - make clean - make rpi4_64-gui-config - make rpi4_64-gui Now grab a cup of coffee, go for a walk, sleep and repeat, as the build process takes a long time, pulling everything from source and cross compiling everything for the device. The qtwebengine package in particular takes a LONG time. (At the moment there is an outstanding issue which prevents the build from running completely to the end. The plasma-workspace package will error out, not finding libGLESv2 properly linked within QT5GUI. When the build stops because of this error, edit the following file; buildroot/output/host/aarch64-buildroot-linux-gnu/sysroot/usr/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake At the bottom of the file find this line; _qt5gui_find_extra_libs(OPENGL \"GLESv2\" \"\" \"\") and replace it with this line; _qt5gui_find_extra_libs(OPENGL \"${CMAKE_SYSROOT}/usr/lib/libGLESv2.so\" \"\" \"${CMAKE_SYSROOT}/usr/include/libdrm\") Then you can continue the build process by re-running the \"make rpi4_64-gui\" command. (DO NOT run \"make clean\" and/or \"make rpi4_64-gui-config\" again, or you will start from scratch again !!!) When everything goes fine, the xz compressed image will be available within the release directory. Booting image from sd card for the first time (setting up Wi-Fi and backend). 1. Ensure all required peripherals (mic, speakers, HDMI, usb mouse etc) are plugged in before powering on your RPI4 for the first time. 2. Skip this step if RPI4 is using an ethernet cable. Once powered on, the screen will present the Wifi setup screen (a Wifi HotSpot is created). Connect to the Wifi HotSpot (ssid OVOS) from another device and follow the on-screen instructions to set up Wifi. 3. Once Wifi is set up, a choice of Mycroft backend and Local backend is presented. Choose the Mycroft backend for now and follow the on-screen instructions; Local backend is not ready to use yet. After the pairing process has completed and skills downloaded, it's time to test and use it. Accessing the CLI. 
SSH to the ip address of the RPI4; default credentials are 'mycroft/mycroft'","title":"Buildroot"},{"location":"images_buildroot/#open-voice-operating-system-buildroot-edition","text":"","title":" Open Voice Operating System - Buildroot Edition"},{"location":"images_buildroot/#auto-detection-and-configuration-of-hats","text":"The buildroot OpenVoiceOS edition is considered to be a consumer friendly type of device, or, as Mycroft A.I. would call it, a retail version. However, as we do not target a specific hardware platform and would like to support custom made systems, we are implementing a smart way to detect and configure different types of Raspberry Pi HAT's. At boot the system scans the I2C bus for known and supported HAT's and, if found, configures the underlying linux sound system. At the moment this is still very much in development, however the below HAT's are or should soon be supported by this system; - ReSpeaker 2-mic HAT - ReSpeaker 4-mic Square HAT - ReSpeaker 4-mic linear / 6-mic HAT - USB devices such as the PS2 EYE - SJ-201 Dev Kits - SJ-201 Mark2 retail device","title":"Auto detection and configuration of HAT's"},{"location":"images_buildroot/#snapcast-client-server","text":"TODO - write docs","title":"Snapcast Client & Server"},{"location":"images_buildroot/#remote-shared-folder-access-smb-windows","text":"Your OpenVoiceOS device is accessible over the network from your Windows computer. This is still a work in progress, but you can open a file explorer and navigate to your OpenVoiceOS device. At the moment the following directories within the user's home directory are shared over the network. - Documents - Music - Pictures These folders are also used by KDE Connect file transfer plugins and for instance the Camera skill (Hey Mycroft, take a selfie) and / or Homescreen Skill (Hey Mycroft, take a screenshot)","title":"Remote shared folder access (SMB - Windows)"},{"location":"images_buildroot/#remote-shared-folder-access-nfs-linux","text":"In the near future the above Windows network shares are also made available over NFS for Linux clients. This is still a Work In Progress / To Do item.","title":"Remote shared folder access (NFS - Linux)"},{"location":"images_buildroot/#development","text":"At this moment development is in very early stages and focused on the Raspberry Pi 3B & 4. As soon as an initial first workable version is created, other hardware might be added. Source code: https://github.com/OpenVoiceOS/ovos-buildroot","title":"Development."},{"location":"images_buildroot/#build-environment","text":"Only use x86_64 based architecture / hardware to build the image. The following example Build environment has been tested : Architecture: x86_64 Hardware: Intel Core i5 processor, 8GB RAM, 240GB SSD (you can build on less RAM (2GB) and slower storage, but more RAM, faster storage = quicker image building) OS: Ubuntu 22.04 LTS desktop","title":"Build Environment"},{"location":"images_buildroot/#installing-system-build-dependencies","text":"The following system packages are required to build the image: gcc subversion qttools5-dev qttools5-dev-tools python git make g++ curl wget qtdeclarative5-dev","title":"Installing System Build Dependencies"},{"location":"images_buildroot/#the-following-firewall-ports-need-to-be-allowed-to-the-internet","text":"In addition to the usual http/https ports (tcp 80, tcp 443) a couple of other ports need to be allowed to the internet : - tcp 9418 (git). - tcp 21 (ftp PASV) and random ports for DATA channel. 
This can be optional, but it is better to have this allowed along with the corresponding random data channel ports. (knowledge of firewalls required)","title":"The following firewall ports need to be allowed to the internet."},{"location":"images_buildroot/#getting-the-code","text":"First, get the code on your system! The simplest method is via git. - cd ~/ - git clone --recurse-submodules https://github.com/OpenVoiceOS/OpenVoiceOS.git - cd OpenVoiceOS","title":"Getting the code."},{"location":"images_buildroot/#patching-buildroot","text":"(ONLY at the first clean checkout/clone) If this is the very first time you are going to build an image, you need to execute the following command once; - ./scripts/br-patches.sh This will patch the Buildroot packages.","title":"Patching Buildroot."},{"location":"images_buildroot/#building-the-image","text":"Building the image(s) can be done by utilizing a proper Makefile; To see the available commands, just run: 'make help' As an example, to build the rpi4 version; - make clean - make rpi4_64-gui-config - make rpi4_64-gui Now grab a cup of coffee, go for a walk, sleep and repeat, as the build process takes a long time, pulling everything from source and cross compiling everything for the device. The qtwebengine package in particular takes a LONG time. (At the moment there is an outstanding issue which prevents the build from running completely to the end. The plasma-workspace package will error out, not finding libGLESv2 properly linked within QT5GUI. When the build stops because of this error, edit the following file; buildroot/output/host/aarch64-buildroot-linux-gnu/sysroot/usr/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake At the bottom of the file find this line; _qt5gui_find_extra_libs(OPENGL \"GLESv2\" \"\" \"\") and replace it with this line; _qt5gui_find_extra_libs(OPENGL \"${CMAKE_SYSROOT}/usr/lib/libGLESv2.so\" \"\" \"${CMAKE_SYSROOT}/usr/include/libdrm\") Then you can continue the build process by re-running the \"make rpi4_64-gui\" command. (DO NOT run \"make clean\" and/or \"make rpi4_64-gui-config\" again, or you will start from scratch again !!!) When everything goes fine, the xz compressed image will be available within the release directory.","title":"Building the image."},{"location":"images_buildroot/#booting-image-from-sd-card-for-the-first-time-setting-up-wi-fi-and-backend","text":"1. Ensure all required peripherals (mic, speakers, HDMI, usb mouse etc) are plugged in before powering on your RPI4 for the first time. 2. Skip this step if RPI4 is using an ethernet cable. Once powered on, the screen will present the Wifi setup screen (a Wifi HotSpot is created). Connect to the Wifi HotSpot (ssid OVOS) from another device and follow the on-screen instructions to set up Wifi. 3. Once Wifi is set up, a choice of Mycroft backend and Local backend is presented. Choose the Mycroft backend for now and follow the on-screen instructions; Local backend is not ready to use yet. 
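(A note on the Qt5Gui workaround described under Building the image above: the one-line edit can also be applied non-interactively. A minimal sketch — the sed substitution and file path simply mirror the instructions above; verify both against your actual checkout before relying on it:)

sed -i 's|_qt5gui_find_extra_libs(OPENGL "GLESv2" "" "")|_qt5gui_find_extra_libs(OPENGL "${CMAKE_SYSROOT}/usr/lib/libGLESv2.so" "" "${CMAKE_SYSROOT}/usr/include/libdrm")|' buildroot/output/host/aarch64-buildroot-linux-gnu/sysroot/usr/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake

Afterwards, continue the build with make rpi4_64-gui as described above.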
After the pairing process has completed and skills downloaded, it's time to test and use it.","title":"Booting image from sd card for the first time (setting up Wi-Fi and backend)."},{"location":"images_buildroot/#accessing-the-cli","text":"SSH to the ip address of the RPI4; default credentials are 'mycroft/mycroft'","title":"Accessing the CLI."},{"location":"images_intro/","text":"OpenVoiceOS Image Options ** Coming soon **","title":"Introduction"},{"location":"images_intro/#openvoiceos-image-options","text":"** Coming soon **","title":"OpenVoiceOS Image Options"},{"location":"install_gui/","text":"Installing OVOS-GUI The GUI is a totally optional component of OVOS, but adds a ton more functionality to the device. Some skills will not work without one. About the GUI The OVOS GUI is an independent component of OVOS which uses QT5/6 to display information on your device's screen. It is touchscreen compliant 1 , and has an on-screen keyboard for entering data. On a Raspberry Pi, the GUI runs in a framebuffer, therefore not needing a full window manager. This saves resources on underpowered devices. mycroft-gui-qt5 is a fork of the original mycroft-gui mycroft-gui-qt6 is in the works, but not all skills support it yet. Installing the GUI The GUI software comes with a nice script which will install the needed packages for you. To get the software we will use git and the dev_setup.sh script that is provided. cd ~ git clone https://github.com/OpenVoiceOS/mycroft-gui-qt5 cd mycroft-gui-qt5 bash dev_setup.sh NOTE The mycroft-gui is NOT a python script, therefore will not run in the venv created for the rest of the software stack. That's it !! That is all it takes to install the GUI for OVOS. Invoke the GUI with the command: ovos-gui-app You can refer to the README in the mycroft-gui-qt5 repository for more information Starting the GUI Footnotes 1 It has been my experience that while the touchscreen will work with OVOS, some have the touch matrix oriented opposite to what the screen displays. With one of these screens, it is still possible to use it, but you will need a full window manager installed instead of the GUI running in a framebuffer.","title":"Install GUI"},{"location":"install_gui/#installing-ovos-gui","text":"The GUI is a totally optional component of OVOS, but adds a ton more functionality to the device. Some skills will not work without one.","title":"Installing OVOS-GUI"},{"location":"install_gui/#about-the-gui","text":"The OVOS GUI is an independent component of OVOS which uses QT5/6 to display information on your device's screen. It is touchscreen compliant 1 , and has an on-screen keyboard for entering data. On a Raspberry Pi, the GUI runs in a framebuffer, therefore not needing a full window manager. This saves resources on underpowered devices. mycroft-gui-qt5 is a fork of the original mycroft-gui mycroft-gui-qt6 is in the works, but not all skills support it yet.","title":"About the GUI"},{"location":"install_gui/#installing-the-gui","text":"The GUI software comes with a nice script which will install the needed packages for you. To get the software we will use git and the dev_setup.sh script that is provided. cd ~ git clone https://github.com/OpenVoiceOS/mycroft-gui-qt5 cd mycroft-gui-qt5 bash dev_setup.sh NOTE The mycroft-gui is NOT a python script, therefore will not run in the venv created for the rest of the software stack.","title":"Installing the GUI"},{"location":"install_gui/#thats-it","text":"That is all it takes to install the GUI for OVOS. 
Invoke the GUI with the command: ovos-gui-app You can refer to the README in the mycroft-gui-qt5 repository for more information Starting the GUI","title":"That's it !!"},{"location":"install_gui/#footnotes","text":"","title":"Footnotes"},{"location":"install_gui/#1","text":"It has been my experience that while the touchscreen will work with OVOS, some have the touch matrix oriented opposite to what the screen displays. With one of these screens, it is still possible to use it, but you will need a full window manager installed instead of the GUI running in a framebuffer.","title":"1"},{"location":"install_intro/","text":"OpenVoiceOS OpenVoiceOS is an open source platform for smart speakers and other voice-centric devices. ovos-core is a backwards-compatible descendant of Mycroft-core , the central component of Mycroft. It contains extensions and features not present upstream. All Mycroft Skills and Plugins should work normally with OVOS-core. ovos-core is fully modular. Furthermore, common components have been repackaged as plugins. That means it isn't just a great assistant on its own, but also a pretty small library! Getting Started There are a couple of ways to install and use the OVOS ecosystem. Prebuilt Images The easiest and fastest way to experience what OVOS has to offer is to use one of the prebuilt images that the OVOS team has provided. NOTE Images are currently only available for a RPi3b/b+/4. More may be on the way. Buildroot-ovos The most complete and advanced image OVOS provides, complete with a default set of skills and a GUI. raspbian-ovos The newest image from the OVOS team. This is a \"headless\" image (NO GUI), but comes with a preinstalled set of skills also. This image will provide you with an experience similar to the original picroft Get started with an image From source as a library Images are not the only way to use OVOS. It can be installed on almost any system as a set of Python libraries. ovos-core is very modular; depending on where you are running ovos-core you may want to run only a subset of the services. This is an advanced setup and requires access to a command shell and can take more effort to get working. Get started with OVOS libraries Docker Docker images are also available and have been tested and working on Linux, Windows, and even Mac. Get started with OVOS Docker","title":"Introduction"},{"location":"install_intro/#openvoiceos","text":"OpenVoiceOS is an open source platform for smart speakers and other voice-centric devices. ovos-core is a backwards-compatible descendant of Mycroft-core , the central component of Mycroft. It contains extensions and features not present upstream. All Mycroft Skills and Plugins should work normally with OVOS-core. ovos-core is fully modular. Furthermore, common components have been repackaged as plugins. That means it isn't just a great assistant on its own, but also a pretty small library!","title":"OpenVoiceOS"},{"location":"install_intro/#getting-started","text":"There are a couple of ways to install and use the OVOS ecosystem.","title":"Getting Started"},{"location":"install_intro/#prebuilt-images","text":"The easiest and fastest way to experience what OVOS has to offer is to use one of the prebuilt images that the OVOS team has provided. NOTE Images are currently only available for a RPi3b/b+/4. More may be on the way. Buildroot-ovos The most complete and advanced image OVOS provides, complete with a default set of skills and a GUI. raspbian-ovos The newest image from the OVOS team. 
This is a \"headless\" image (NO GUI), but comes with a preinstalled set of skills also. This image will provide you with an experience similar to the origional picroft Get started with an image","title":"Prebuilt Images"},{"location":"install_intro/#from-source-as-a-library","text":"Images are not the only way to use OVOS. It can be installed on almost any system as a set of Python libraries. ovos-core is very modular; depending on where you are running ovos-core you may want to run only a subset of the services This is an advanced setup and requires access to a command shell and can take more effort to get working. Get started with OVOS libraries","title":"From source as a library"},{"location":"install_intro/#docker","text":"Docker images are also available and have been tested and working on Linux, Windows, and even Mac. Get started with OVOS Docker","title":"Docker"},{"location":"install_ovos_core/","text":"Installing OVOS The OVOS ecosystem is very modular, depending on where you are running ovos-core you may want to run only a subset of the services. By default ovos-core only installs the minimum components common to all services, for the purposes of this document we will assume you want a full install with a GUI. NOTE The GUI requires separate packages in addition to what is required by ovos-core . The GUI installation is covered in its own section. Host system OVOS requires some system dependencies, how to do this will depend on your distro. Ubuntu/Debian based images. sudo apt install build-essential python3-dev python3-pip swig libssl-dev libfann-dev portaudio19-dev libpulse-dev A few packages are not necessary, but are needed to install from source and may be required for some plugins. To add these packages run this command. sudo apt install git libpulse-dev cmake libncurses-dev pulseaudio-utils pulseaudio NOTE : MycroftAI's dev_setup.sh does not exist in OVOS-core. See the community provided, WIP , manual_user_install for a minimal, almost, replacement. Get started We suggest you do this in a virtualenv. Create and activate the virtual environment. python -m venv .venv . .venv/bin/activate Update pip and install wheel pip install -U pip wheel From PyPi ovos-core To install a full OVOS software stack with enough skills and plugins to have a working system, the OVOS team includes a subset of packages that can be installed automatically with pip. It is recommended to use the latest alpha versions until the 0.1.0 release as it contains all of the latest bug fixes and improvements. latest stable ovos-core 0.0.7 does not include the new extras [mycroft] so we use [all] . pip install ovos-core[all] alpha version pip install --pre ovos-core[mycroft] This should install everything needed for a basic OVOS software stack. There are additional extras options available other than [mycroft] and can be found in the ovos-core setup.py file. Starting OVOS Individual Modules Each module can be installed independently to only include the parts needed or wanted for a specific system. ovos-core pip install --pre ovos-core Starting Modules - core ovos-messagebus pip install --pre ovos-messagebus Starting Modules - messagebus ovos-audio pip install --pre ovos-audio Starting Modules - audio dinkum-listener pip install --pre ovos-dinkum-listener Starting Modules - listener ovos-phal pip install --pre ovos-phal Starting Modules - phal From Source We will use git to clone the repositories to a local directory. While not specifically necessary, we are assuming this to be the users HOME directory. 
ovos-core Install ovos-core from GitHub source files. git clone https://github.com/OpenVoiceOS/ovos-core The ovos-core repository provides extra requirements files. For the complete stack, we will use the mycroft.txt file. pip install ~/ovos-core[mycroft] This should install everything needed to use the basic OVOS software stack. NOTE this also installs LGPL-licensed software. Starting OVOS Install individual modules Some systems may not require a full install of OVOS. Luckily, it can be installed as individual modules. core library git clone https://github.com/OpenVoiceOS/ovos-core pip install ~/ovos-core This is the minimal library needed as the brain of the system. There are no skills, no messagebus, and no plugins installed yet. Starting Core messagebus git clone https://github.com/OpenVoiceOS/ovos-messagebus pip install ~/ovos-messagebus This is the nervous system of OVOS, needed for the modules to talk to each other. Starting the Messagebus listener OVOS has updated its listener to use ovos-dinkum-listener instead of ovos-listener . It is code from mycroft-dinkum adapted for use with the OVOS ecosystem. Previous listeners are still available, but not recommended. git clone https://github.com/OpenVoiceOS/ovos-dinkum-listener pip install ~/ovos-dinkum-listener You now have what is needed for OVOS to use a microphone and its associated services: WakeWords, HotWords, and STT. Starting the listener PHAL The OVOS Plugin-based Hardware Abstraction Layer is what allows the OVOS software to communicate with hardware devices and the operating system, for example to interact with the Mycroft Mark 1 device . The PHAL system consists of two interfaces. ovos-phal is the basic interface that normal plugins would use. ovos-admin-phal is used where superuser privileges are needed. Be extremely careful when installing admin-phal plugins, as they provide full control over the host system. git clone https://github.com/OpenVoiceOS/ovos-PHAL pip install ~/ovos-PHAL This just installs the basic system that allows the plugins to work. Starting PHAL audio This is the service used by OVOS to play all of the audio. It can be a voice response, or a stream such as music or a podcast. It also installs OVOS Common Play, which can be used as a standalone media player and is required for OVOS audio playback. git clone https://github.com/OpenVoiceOS/ovos-audio pip install ~/ovos-audio This will enable the default TTS (Text To Speech) engine for voice feedback from your OVOS device. However, plenty of alternative TTS engines are available. Starting Audio You should now have all of the separate components needed to run a full OVOS software stack. Starting the OVOS software stack","title":"Install as a Library"},{"location":"install_ovos_core/#installing-ovos","text":"The OVOS ecosystem is very modular; depending on where you are running ovos-core you may want to run only a subset of the services. By default ovos-core only installs the minimum components common to all services; for the purposes of this document, we will assume you want a full install with a GUI. NOTE The GUI requires separate packages in addition to what is required by ovos-core . The GUI installation is covered in its own section.","title":"Installing OVOS"},{"location":"install_ovos_core/#host-system","text":"OVOS requires some system dependencies; how to install them will depend on your distro. Ubuntu/Debian based images. 
sudo apt install build-essential python3-dev python3-pip swig libssl-dev libfann-dev portaudio19-dev libpulse-dev A few packages are not necessary, but are needed to install from source and may be required for some plugins. To add these packages run this command. sudo apt install git libpulse-dev cmake libncurses-dev pulseaudio-utils pulseaudio NOTE : MycroftAI's dev_setup.sh does not exist in OVOS-core. See the community-provided, WIP , manual_user_install for a minimal, approximate replacement.","title":"Host system"},{"location":"install_ovos_core/#get-started","text":"We suggest you do this in a virtualenv. Create and activate the virtual environment. python -m venv .venv . .venv/bin/activate Update pip and install wheel pip install -U pip wheel","title":"Get started"},{"location":"install_ovos_core/#from-pypi","text":"","title":"From PyPi"},{"location":"install_ovos_core/#ovos-core","text":"To install a full OVOS software stack with enough skills and plugins to have a working system, the OVOS team includes a subset of packages that can be installed automatically with pip. It is recommended to use the latest alpha versions until the 0.1.0 release, as they contain all of the latest bug fixes and improvements. latest stable ovos-core 0.0.7 does not include the new [mycroft] extras, so we use [all] . pip install ovos-core[all] alpha version pip install --pre ovos-core[mycroft] This should install everything needed for a basic OVOS software stack. There are additional extras options available other than [mycroft] ; they can be found in the ovos-core setup.py file. Starting OVOS","title":"ovos-core"},{"location":"install_ovos_core/#individual-modules","text":"Each module can be installed independently to include only the parts needed or wanted for a specific system. ovos-core pip install --pre ovos-core Starting Modules - core ovos-messagebus pip install --pre ovos-messagebus Starting Modules - messagebus ovos-audio pip install --pre ovos-audio Starting Modules - audio dinkum-listener pip install --pre ovos-dinkum-listener Starting Modules - listener ovos-phal pip install --pre ovos-phal Starting Modules - phal","title":"Individual Modules"},{"location":"install_ovos_core/#from-source","text":"We will use git to clone the repositories to a local directory. While not strictly necessary, we assume this to be the user's HOME directory.","title":"From Source"},{"location":"install_ovos_core/#ovos-core_1","text":"Install ovos-core from GitHub source files. git clone https://github.com/OpenVoiceOS/ovos-core The ovos-core repository provides extra requirements files. For the complete stack, we will use the mycroft.txt file. pip install ~/ovos-core[mycroft] This should install everything needed to use the basic OVOS software stack. NOTE this also installs LGPL-licensed software. Starting OVOS","title":"ovos-core"},{"location":"install_ovos_core/#install-individual-modules","text":"Some systems may not require a full install of OVOS. Luckily, it can be installed as individual modules. core library git clone https://github.com/OpenVoiceOS/ovos-core pip install ~/ovos-core This is the minimal library needed as the brain of the system. There are no skills, no messagebus, and no plugins installed yet. Starting Core messagebus git clone https://github.com/OpenVoiceOS/ovos-messagebus pip install ~/ovos-messagebus This is the nervous system of OVOS, needed for the modules to talk to each other. (A consolidated sketch of all the individual-module installs is shown below.) 
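A rough end-to-end sketch of the individual-module route, assuming the repositories are cloned into the home directory as described in this section (the loop is just shorthand for the per-module pip installs):
# clone each module from the OpenVoiceOS organization
git clone https://github.com/OpenVoiceOS/ovos-core "$HOME/ovos-core"
git clone https://github.com/OpenVoiceOS/ovos-messagebus "$HOME/ovos-messagebus"
git clone https://github.com/OpenVoiceOS/ovos-dinkum-listener "$HOME/ovos-dinkum-listener"
git clone https://github.com/OpenVoiceOS/ovos-PHAL "$HOME/ovos-PHAL"
git clone https://github.com/OpenVoiceOS/ovos-audio "$HOME/ovos-audio"
# install each clone into the active virtualenv
for m in ovos-core ovos-messagebus ovos-dinkum-listener ovos-PHAL ovos-audio; do
    pip install "$HOME/$m"
done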
Starting the Messagebus listener OVOS has updated its listener to use ovos-dinkum-listener instead of ovos-listener . It is code from mycroft-dinkum adapted for use with the OVOS ecosystem. Previous listeners are still available, but not recommended. git clone https://github.com/OpenVoiceOS/ovos-dinkum-listener pip install ~/ovos-dinkum-listener You now have what is needed for OVOS to use a microphone and its associated services: WakeWords, HotWords, and STT. Starting the listener PHAL The OVOS Plugin-based Hardware Abstraction Layer is what allows the OVOS software to communicate with hardware devices and the operating system, for example to interact with the Mycroft Mark 1 device . The PHAL system consists of two interfaces. ovos-phal is the basic interface that normal plugins would use. ovos-admin-phal is used where superuser privileges are needed. Be extremely careful when installing admin-phal plugins, as they provide full control over the host system. git clone https://github.com/OpenVoiceOS/ovos-PHAL pip install ~/ovos-PHAL This just installs the basic system that allows the plugins to work. Starting PHAL audio This is the service used by OVOS to play all of the audio. It can be a voice response, or a stream such as music or a podcast. It also installs OVOS Common Play, which can be used as a standalone media player and is required for OVOS audio playback. git clone https://github.com/OpenVoiceOS/ovos-audio pip install ~/ovos-audio This will enable the default TTS (Text To Speech) engine for voice feedback from your OVOS device. However, plenty of alternative TTS engines are available. Starting Audio You should now have all of the separate components needed to run a full OVOS software stack. Starting the OVOS software stack","title":"Install individual modules"},{"location":"install_ovos_docker/","text":"OVOS Docker WIP Check back soon see ovos-docker/README","title":"Install with Docker"},{"location":"install_ovos_docker/#ovos-docker","text":"WIP Check back soon see ovos-docker/README","title":"OVOS Docker"},{"location":"kdeconnect/","text":"KDE Connect KDE Connect is a multi-platform application developed by KDE, which facilitates wireless communication and data transfer between devices over local networks; it is installed and configured by default on the Buildroot based image. A couple of features of KDE Connect are: Shared clipboard: copy and paste between your phone, computer and/or OpenVoiceOS device. Share files and URLs instantly from one device to another, including your OpenVoiceOS device. Multimedia remote control: Use your phone, tablet or computer as a remote for what is playing on your OpenVoiceOS device. Auto mute your OpenVoiceOS device when your mobile phone rings. Virtual touchpad / keyboard: Use your phone/tablet screen as your OpenVoiceOS device's mouse and keyboard. For the sake of simplicity, the screenshots below were made using the iPhone KDE Connect client; however, as it is not yet fully feature-complete or stable, it is recommended to use the Android or Linux client, especially if you would like to have full MPRIS control of your OpenVoiceOS device. On your mobile device, open the KDE Connect app and it will see the advertised OpenVoiceOS KDE Connect device automatically. Click / Tap on the \"OpenVoiceOS-*\" to start the pairing process. By clicking / tapping the pair button a similar pop-up will appear on the screen of the OpenVoiceOS device. 
Also clicking / tapping the pair button finalises the pairing process, allowing your mobile device to automatically connect with your OpenVoiceOS device and make use of all the extra functionality that KDE Connect brings.","title":"KDE Connect"},{"location":"kdeconnect/#kde-connect","text":"KDE Connect is a multi-platform application developed by KDE, which facilitates wireless communication and data transfer between devices over local networks; it is installed and configured by default on the Buildroot based image. A couple of features of KDE Connect are: Shared clipboard: copy and paste between your phone, computer and/or OpenVoiceOS device. Share files and URLs instantly from one device to another, including your OpenVoiceOS device. Multimedia remote control: Use your phone, tablet or computer as a remote for what is playing on your OpenVoiceOS device. Auto mute your OpenVoiceOS device when your mobile phone rings. Virtual touchpad / keyboard: Use your phone/tablet screen as your OpenVoiceOS device's mouse and keyboard. For the sake of simplicity, the screenshots below were made using the iPhone KDE Connect client; however, as it is not yet fully feature-complete or stable, it is recommended to use the Android or Linux client, especially if you would like to have full MPRIS control of your OpenVoiceOS device. On your mobile device, open the KDE Connect app and it will see the advertised OpenVoiceOS KDE Connect device automatically. Click / Tap on the \"OpenVoiceOS-*\" to start the pairing process. By clicking / tapping the pair button a similar pop-up will appear on the screen of the OpenVoiceOS device. Also clicking / tapping the pair button finalises the pairing process, allowing your mobile device to automatically connect with your OpenVoiceOS device and make use of all the extra functionality that KDE Connect brings.","title":"KDE Connect"},{"location":"license/","text":"License We have a universal donor policy: our code should be able to be used anywhere by anyone, no ifs or conditions attached. OVOS is predominantly Apache2 or BSD licensed. There are only a few exceptions to this, which are all licensed under other compatible open source licenses. Individual plugins or skills may have their own license; for example, mimic3 is AGPL, so we cannot change the license of our plugin. We are committed to maintaining all core components fully free; any code whose license we do not control will live in an optional plugin and be flagged as such. This includes avoiding LGPL code for reasons explained here . Our license policy has the following properties: It gives you, the user of the software, complete and unrestrained access to the software, such that you may inspect, modify, and redistribute your changes Inspection - Anyone may inspect the software for security vulnerabilities Modification - Anyone may modify the software to fix issues or add features Redistribution - Anyone may redistribute the software on their terms It is compatible with GPL licenses - Projects licensed as GPL can be distributed with OVOS It allows for the incorporation of GPL-incompatible free software, such as software that is CDDL licensed The license does not restrict the software that may run on OVOS, however -- and thanks to the plugin architecture, even traditionally tightly-coupled components such as drivers can be distributed separately, so maintainers are free to choose whatever license they like for their projects. 
Notable licensing exceptions The following repositories do not respect our universal donor policy; please ensure their licenses are compatible before you use them Repository License Reason ovos-intent-plugin-padatious Apache2.0 padatious license might not be valid, depends on libfann2 (LGPL) ovos-tts-plugin-mimic3 AGPL depends on mimic3 (AGPL) ovos-tts-plugin-SAM ? reverse engineered abandonware","title":"License"},{"location":"license/#license","text":"We have a universal donor policy: our code should be able to be used anywhere by anyone, no ifs or conditions attached. OVOS is predominantly Apache2 or BSD licensed. There are only a few exceptions to this, which are all licensed under other compatible open source licenses. Individual plugins or skills may have their own license; for example, mimic3 is AGPL, so we cannot change the license of our plugin. We are committed to maintaining all core components fully free; any code whose license we do not control will live in an optional plugin and be flagged as such. This includes avoiding LGPL code for reasons explained here . Our license policy has the following properties: It gives you, the user of the software, complete and unrestrained access to the software, such that you may inspect, modify, and redistribute your changes Inspection - Anyone may inspect the software for security vulnerabilities Modification - Anyone may modify the software to fix issues or add features Redistribution - Anyone may redistribute the software on their terms It is compatible with GPL licenses - Projects licensed as GPL can be distributed with OVOS It allows for the incorporation of GPL-incompatible free software, such as software that is CDDL licensed The license does not restrict the software that may run on OVOS, however -- and thanks to the plugin architecture, even traditionally tightly-coupled components such as drivers can be distributed separately, so maintainers are free to choose whatever license they like for their projects.","title":"License"},{"location":"license/#notable-licensing-exceptions","text":"The following repositories do not respect our universal donor policy; please ensure their licenses are compatible before you use them Repository License Reason ovos-intent-plugin-padatious Apache2.0 padatious license might not be valid, depends on libfann2 (LGPL) ovos-tts-plugin-mimic3 AGPL depends on mimic3 (AGPL) ovos-tts-plugin-SAM ? 
reverse engineered abandonware","title":"Notable licensing exceptions"},{"location":"members/","text":"OVOS Team Members JarbasAI Daniel McKnight j1nx forslund ChanceNCounter 5trongthany builderjer goldyfruit mikejgray emphasize dscripka Members hosting services goldyfruit use with ovos-tts-server-plugin Mimic1 TTS Mimic3 TTS Piper TTS builderjer use with ovos-tts-server-plugin Mimic3 TTS Piper TTS use with ovos-stt-server-plugin Faster Whisper STT use with ovos-tts-plugin-mimic3-server Mimic3 TTS OVOS use with ovos-stt-server-plugin Faster Whisper STT BlueSystems use with ovos-tts-plugin-mimic3-server Mimic3 TTS","title":"OVOS Members"},{"location":"members/#ovos-team-members","text":"JarbasAI Daniel McKnight j1nx forslund ChanceNCounter 5trongthany builderjer goldyfruit mikejgray emphasize dscripka","title":"OVOS Team Members"},{"location":"members/#members-hosting-services","text":"","title":"Members hosting services"},{"location":"members/#goldyfruit","text":"use with ovos-tts-server-plugin Mimic1 TTS Mimic3 TTS Piper TTS","title":"goldyfruit"},{"location":"members/#builderjer","text":"use with ovos-tts-server-plugin Mimic3 TTS Piper TTS use with ovos-stt-server-plugin Faster Whisper STT use with ovos-tts-plugin-mimic3-server Mimic3 TTS","title":"builderjer"},{"location":"members/#ovos","text":"use with ovos-stt-server-plugin Faster Whisper STT","title":"OVOS"},{"location":"members/#bluesystems","text":"use with ovos-tts-plugin-mimic3-server Mimic3 TTS","title":"BlueSystems"},{"location":"mycroft/","text":"ovos-core vs mycroft-core Speech Client Feature Mycroft OVOS Description Wake Word (listen) yes yes Only transcribe speech (STT) after a certain word is spoken Wake Up Word (sleep mode) yes yes When in sleep mode only listen for \"wake up\" (no STT) Hotword (bus event) no yes Emit bus events when a hotword is detected (no STT) Multiple Wake Words no yes Load multiple hotword engines/models simultaneously Fallback STT no yes fallback STT if the main one fails (eg, internet outage) Instant Listen no yes Do not pause between wake word detection and recording start Hybrid Listen no WIP Do not require wake word for follow up questions Continuous Listen no WIP Do not require wake word, always listen using VAD Recording mode no WIP Save audio instead of processing speech Wake Word Plugins yes yes Supports 3rd party integrations for hotword detection STT Plugins yes yes Supports 3rd party integrations for STT VAD plugins no * yes Supports 3rd party integrations for voice activity detection NOTES: HiveMind Voice Satellite uses ovos-core and supports the same features Pyaudio has a bug in python 3.10, you may need to use this fork (ovos-core and mk2 only) VAD is supported in mycroft mark2 branch, but is hardcoded for silero Sleep mode loop has been rewritten in ovos-core and is much more responsive than mycroft Mic handling logic has been ported from mk2 branch and is much more responsive than mycroft dev branch Instant / Hybrid / Continuous listen settings are experimental, good microphone and AEC are highly recommended (such as a mark2) in ovos-core this functionality has been refactored and moved to the new mycroft.listener module Audio Feature Mycroft OVOS Description MPRIS integration no yes Integrate with MPRIS protocol NOTES: OCP can be used with mycroft-core, but not mk2 OCP can be controlled via MPRIS, e.g. KDEConnect OCP can control MPRIS enabled players, e.g. 
spotify Skills Feature Mycroft OVOS Description Skill Plugins no yes skills can be packaged like standard python projects and installed via setup.py (eg. with pip or your package manager) User Resources no yes Users can override resource files, eg. customize dialogs for installed skills Skill permissions no WIP Users can limit converse and fallback functionality per skill and configure the order in which skills are executed Intent Plugins no WIP Supports 3rd party integrations for Intent Matching Hardware Feature Mycroft OVOS Description System Plugins no yes Support for 3rd party hardware (eg. mk2-plugin ) and OS level integrations (eg. wifi-setup ) NOTES: PHAL can be used with mycroft-core Misc Feature Mycroft OVOS Description Offline usage no yes Can be configured to work without internet connectivity MultiLingual no WIP Can be configured to work in multiple languages at the same time HiveMind support WIP WIP Supports HiveMind for a distributed/remote mycroft experience XDG compliance WIP yes All resources respect XDG standards and support multiple users Usage as a lib no yes Packaged as a library, supports derivative voice assistants NOTES: HiveMind is being developed against ovos-core; development under mycroft-core is stalled; see the hivemind wiki for details XDG support includes multiple skill directories, all skill data, all configuration files, and all cached files You can build your own assistant on top of ovos-core; multiple assistants can co-exist on the same machine and use their own configuration files . ovos-core is packaged like a regular python package and can be handled as a requirement by package managers. example projects: neon-core , hivemind-voice-sat Dinkum What is Dinkum Mycroft Mark2 shipped with a new version of mycroft called \"dinkum\"; this is a total overhaul of mycroft-core and is incompatible with it. mycroft-core is now referred to as \"Classic Core\" by MycroftAI. MycroftAI now provides what they call sandbox images. To add to the confusion, those only work on the mark 2, and \"Classic Core\" there means the mark-ii/latest branch of mycroft-core. This is a derivative of the branch that was used in the dev kits ( mark-ii/qa ) and is also backwards incompatible; changes in this branch were not done via PRs and had no review or community input Mark2 useful links: mark 2 docs dinkum source code sandbox images mark2/latest (core for sandbox images) mark2/qa (core for dev kit image) Dinkum vs ovos-core you can find mycroft's guide to porting skills to dinkum here https://mycroft-ai.gitbook.io/mark-ii/differences-to-classic-core/porting-classic-core-skills mark2/qa brought some changes to mycroft-core, not all of them backwards compatible and some of them contentious within the community. 
VAD - VAD has been added to the mark-ii, but it is hardcoded to silero; this feature has been adopted via OPM and is an important part of the ovos-core listening modes introduced in version 0.0.5 self.resources - resource file loading was overhauled; this feature has been improved ( ovos-core/pull/130 + ovos-core/pull/131 + ovos-core/pull/135 + ovos-core/pull/170 ), ported to OVOS, and is also available in the OVOSkill class ( OVOS-workshop/pull/30 ) for usage in classic core audio hal - audio playback was rewritten from scratch and audio plugin support has been removed; OVOS will not adopt this new approach but keep improving the previous one skill states - the converse method introduced skill states; this changed some core assumptions behind the converse method and active skills. OVOS will not adopt skill states; see the community discussion here mycroft-core/pull/2901 + mycroft-core/pull/2906 pure regex intents - pure regex intents have been introduced; we strongly recommend you use padatious instead if this is desired. Regex makes language support really hard; let the intent engines do their jobs adapt fork - a fork of adapt is used in the mark2; it introduces the exactly and excludes methods. excludes will be added upstream in adapt/pull/156 . Any skill using these new methods will be incompatible with most core versions. activities - an activity is just a set of bus messages to indicate something started and ended; it is a reimplementation of an already existing feature. In ovos we use the native events from the self.add_event skill method dinkum contains all the changes above and also brought further changes to the table sessions - in dinkum, session handling is done by skills; it completely ignores the message.context mechanism and the existing session_id. In ovos we believe the session should come in the message and be handled by the clients (eg, a chat user or a hivemind client...); in ovos we are expanding the original session concept ovos-core/pull/160 dbus-hal - a dbus service specific to the mk2 has been introduced; in ovos we have a generic PHAL service and companion plugins to interface with mk2 hardware instead. This component is mark2 specific and should be ignored in the ovos ecosystem. Any skills using these new \"features\" will not work outside the mark2 FAQ Do OVOS skills run in dinkum? No, not even classic core skills run in dinkum. We have no plans to support this Do Dinkum skills run in ovos? No; dinkum is designed in a very incompatible way. The mycroft module is not always mycroft-core and the MycroftSkill class is not always a MycroftSkill, so we have no intention of transparently loading dinkum skills in ovos-core We have a small proof of concept tool to convert a dinkum skill into an ovos/classic core compatible skill, see https://github.com/OpenVoiceOS/undinkumfier Does OCP work in dinkum? No; Audio plugin support has been removed. You can run OCP standalone, but you will be missing the compatibility layers and can't load OCP skills anyway. It could be made to work, but this is not in the roadmap; PRs will be accepted and reviewed Does PHAL work in dinkum? It should! We don't explicitly target or test it with dinkum, but it is a fairly standalone component Does OPM work in dinkum? 
STT , TTS and WW plugins should work. We don't explicitly target or test compatibility; PRs will be accepted and reviewed","title":"Mycroft"},{"location":"mycroft/#ovos-core-vs-mycroft-core","text":"","title":"ovos-core vs mycroft-core"},{"location":"mycroft/#speech-client","text":"Feature Mycroft OVOS Description Wake Word (listen) yes yes Only transcribe speech (STT) after a certain word is spoken Wake Up Word (sleep mode) yes yes When in sleep mode only listen for \"wake up\" (no STT) Hotword (bus event) no yes Emit bus events when a hotword is detected (no STT) Multiple Wake Words no yes Load multiple hotword engines/models simultaneously Fallback STT no yes fallback STT if the main one fails (eg, internet outage) Instant Listen no yes Do not pause between wake word detection and recording start Hybrid Listen no WIP Do not require wake word for follow up questions Continuous Listen no WIP Do not require wake word, always listen using VAD Recording mode no WIP Save audio instead of processing speech Wake Word Plugins yes yes Supports 3rd party integrations for hotword detection STT Plugins yes yes Supports 3rd party integrations for STT VAD plugins no * yes Supports 3rd party integrations for voice activity detection NOTES: HiveMind Voice Satellite uses ovos-core and supports the same features Pyaudio has a bug in python 3.10, you may need to use this fork (ovos-core and mk2 only) VAD is supported in mycroft mark2 branch, but is hardcoded for silero Sleep mode loop has been rewritten in ovos-core and is much more responsive than mycroft Mic handling logic has been ported from mk2 branch and is much more responsive than mycroft dev branch Instant / Hybrid / Continuous listen settings are experimental, good microphone and AEC are highly recommended (such as a mark2) in ovos-core this functionality has been refactored and moved to the new mycroft.listener module","title":"Speech Client"},{"location":"mycroft/#audio","text":"Feature Mycroft OVOS Description MPRIS integration no yes Integrate with MPRIS protocol NOTES: OCP can be used with mycroft-core, but not mk2 OCP can be controlled via MPRIS, e.g. KDEConnect OCP can control MPRIS enabled players, e.g. spotify","title":"Audio"},{"location":"mycroft/#skills","text":"Feature Mycroft OVOS Description Skill Plugins no yes skills can be packaged like standard python projects and installed via setup.py (eg. with pip or your package manager) User Resources no yes Users can override resource files, eg. customize dialogs for installed skills Skill permissions no WIP Users can limit converse and fallback functionality per skill and configure the order in which skills are executed Intent Plugins no WIP Supports 3rd party integrations for Intent Matching","title":"Skills"},{"location":"mycroft/#hardware","text":"Feature Mycroft OVOS Description System Plugins no yes Support for 3rd party hardware (eg. mk2-plugin ) and OS level integrations (eg. 
wifi-setup ) NOTES: PHAL can be used with mycroft-core","title":"Hardware"},{"location":"mycroft/#misc","text":"Feature Mycroft OVOS Description Offline usage no yes Can be configured to work without internet connectivity MultiLingual no WIP Can be configured to work in multiple languages at the same time HiveMind support WIP WIP Supports HiveMind for a distributed/remote mycroft experience XDG compliance WIP yes All resources respect XDG standards and support multiple users Usage as a lib no yes Packaged as a library, supports derivative voice assistants NOTES: HiveMind is being developed against ovos-core; development under mycroft-core is stalled; see the hivemind wiki for details XDG support includes multiple skill directories, all skill data, all configuration files, and all cached files You can build your own assistant on top of ovos-core; multiple assistants can co-exist on the same machine and use their own configuration files . ovos-core is packaged like a regular python package and can be handled as a requirement by package managers. example projects: neon-core , hivemind-voice-sat","title":"Misc"},{"location":"mycroft/#dinkum","text":"","title":"Dinkum"},{"location":"mycroft/#what-is-dinkum","text":"Mycroft Mark2 shipped with a new version of mycroft called \"dinkum\"; this is a total overhaul of mycroft-core and is incompatible with it. mycroft-core is now referred to as \"Classic Core\" by MycroftAI. MycroftAI now provides what they call sandbox images. To add to the confusion, those only work on the mark 2, and \"Classic Core\" there means the mark-ii/latest branch of mycroft-core. This is a derivative of the branch that was used in the dev kits ( mark-ii/qa ) and is also backwards incompatible; changes in this branch were not done via PRs and had no review or community input Mark2 useful links: mark 2 docs dinkum source code sandbox images mark2/latest (core for sandbox images) mark2/qa (core for dev kit image)","title":"What is Dinkum"},{"location":"mycroft/#dinkum-vs-ovos-core","text":"you can find mycroft's guide to porting skills to dinkum here https://mycroft-ai.gitbook.io/mark-ii/differences-to-classic-core/porting-classic-core-skills mark2/qa brought some changes to mycroft-core, not all of them backwards compatible and some of them contentious within the community. VAD - VAD has been added to the mark-ii, but it is hardcoded to silero; this feature has been adopted via OPM and is an important part of the ovos-core listening modes introduced in version 0.0.5 self.resources - resource file loading was overhauled; this feature has been improved ( ovos-core/pull/130 + ovos-core/pull/131 + ovos-core/pull/135 + ovos-core/pull/170 ), ported to OVOS, and is also available in the OVOSkill class ( OVOS-workshop/pull/30 ) for usage in classic core audio hal - audio playback was rewritten from scratch and audio plugin support has been removed; OVOS will not adopt this new approach but keep improving the previous one skill states - the converse method introduced skill states; this changed some core assumptions behind the converse method and active skills. OVOS will not adopt skill states; see the community discussion here mycroft-core/pull/2901 + mycroft-core/pull/2906 pure regex intents - pure regex intents have been introduced; we strongly recommend you use padatious instead if this is desired. Regex makes language support really hard; let the intent engines do their jobs adapt fork - a fork of adapt is used in the mark2; it introduces the exactly and excludes methods. 
excludes will be added upstream in adapt/pull/156 . Any skill using these new methods will be incompatible with most core versions. activities - an activity is just a set of bus messages to indicate something started and ended; it is a reimplementation of an already existing feature. In ovos we use the native events from the self.add_event skill method dinkum contains all the changes above and also brought further changes to the table sessions - in dinkum, session handling is done by skills; it completely ignores the message.context mechanism and the existing session_id. In ovos we believe the session should come in the message and be handled by the clients (eg, a chat user or a hivemind client...); in ovos we are expanding the original session concept ovos-core/pull/160 dbus-hal - a dbus service specific to the mk2 has been introduced; in ovos we have a generic PHAL service and companion plugins to interface with mk2 hardware instead. This component is mark2 specific and should be ignored in the ovos ecosystem. Any skills using these new \"features\" will not work outside the mark2","title":"Dinkum vs ovos-core"},{"location":"mycroft/#faq","text":"","title":"FAQ"},{"location":"mycroft/#do-ovos-skills-run-in-dinkum","text":"No, not even classic core skills run in dinkum. We have no plans to support this","title":"Do OVOS skills run in dinkum?"},{"location":"mycroft/#do-dinkum-skills-run-in-ovos","text":"No; dinkum is designed in a very incompatible way. The mycroft module is not always mycroft-core and the MycroftSkill class is not always a MycroftSkill, so we have no intention of transparently loading dinkum skills in ovos-core We have a small proof of concept tool to convert a dinkum skill into an ovos/classic core compatible skill, see https://github.com/OpenVoiceOS/undinkumfier","title":"Do Dinkum skills run in ovos?"},{"location":"mycroft/#does-ocp-work-in-dinkum","text":"No; Audio plugin support has been removed. You can run OCP standalone, but you will be missing the compatibility layers and can't load OCP skills anyway. It could be made to work, but this is not in the roadmap; PRs will be accepted and reviewed","title":"Does OCP work in dinkum?"},{"location":"mycroft/#does-phal-work-in-dinkum","text":"It should! 
We don't explicitly target or test it with dinkum, but it is a fairly standalone component","title":"Does PHAL work in dinkum?"},{"location":"mycroft/#does-opm-work-in-dinkum","text":"STT , TTS and WW plugins should work. We don't explicitly target or test compatibility; PRs will be accepted and reviewed","title":"Does OPM work in dinkum?"},{"location":"neonos/","text":"NeonOS ** Coming soon **","title":"NeonOS"},{"location":"neonos/#neonos","text":"** Coming soon **","title":"NeonOS"},{"location":"personal_backend/","text":"OVOS Personal Backend A personal mycroft backend alternative to mycroft.home, written in flask. This repo is an alternative backend meant for personal usage; it allows you to run without mycroft servers :warning: there are no user accounts :warning: This is NOT meant to provision third party devices, but rather to run on the mycroft devices directly or on a private network For a full backend experience, the official mycroft backend has been open sourced; read the blog post NOTE: There is no pairing; devices will just activate themselves and work Install from pip pip install ovos-local-backend Mycroft Setup There are 2 main intended ways to run the local backend with mycroft: on the same device as mycroft-core, tricking it to run without mycroft servers, or on a private network, to manage all your devices locally NOTE: you cannot fully run mycroft-core offline; it refuses to launch without an internet connection. You can only replace the calls to use this backend instead of mycroft.home We recommend you use ovos-core instead Update your mycroft config to use this backend, delete identity2.json, and restart mycroft { \"server\": { \"url\": \"http://0.0.0.0:6712\", \"version\": \"v1\", \"update\": true, \"metrics\": true }, \"listener\": { \"wake_word_upload\": { \"url\": \"http://0.0.0.0:6712/precise/upload\" } } } Companion projects ovos-backend-client - reference python library to interact with selene/local backend ovos-backend-manager - graphical interface to manage all things backend ovos-stt-plugin-selene - stt plugin for selene/local backend Usage start backend $ ovos-local-backend -h usage: ovos-local-backend [-h] [--flask-port FLASK_PORT] [--flask-host FLASK_HOST] optional arguments: -h, --help show this help message and exit --flask-port FLASK_PORT Mock backend port number --flask-host FLASK_HOST Mock backend host Docker There is also a docker container you can use docker run -p 8086:6712 -d --restart always --name local_backend ghcr.io/openvoiceos/local-backend:dev a docker-compose.yml could look like this version: '3.6' services: # ... ovosbackend: container_name: ovos_backend image: ghcr.io/openvoiceos/local-backend:dev # or build from local source (relative to docker-compose.yml) # build: ../ovos/ovos-personal-backend/. 
restart: unless-stopped ports: - \"6712:6712\" # default port backend API - \"36535:36535\" # default port backend-manager volumes: # <host path>:<container path>:<selinux flag> - ./ovos/backend/config:/root/.config/json_database:z # shared config directory - ./ovos/backend/data:/root/.local/share/ovos_backend:Z # shared data directory # set `data_path` to `/root/.local/share/ovos_backend` about selinux flags (omit if you don't deal with selinux) How it works Configuration WIP Coming Soon","title":"Personal Backend"},{"location":"personal_backend/#ovos-personal-backend","text":"A personal mycroft backend alternative to mycroft.home, written in flask. This repo is an alternative backend meant for personal usage; it allows you to run without mycroft servers :warning: there are no user accounts :warning: This is NOT meant to provision third party devices, but rather to run on the mycroft devices directly or on a private network For a full backend experience, the official mycroft backend has been open sourced; read the blog post NOTE: There is no pairing; devices will just activate themselves and work","title":"OVOS Personal Backend"},{"location":"personal_backend/#install","text":"from pip pip install ovos-local-backend","title":"Install"},{"location":"personal_backend/#mycroft-setup","text":"There are 2 main intended ways to run the local backend with mycroft: on the same device as mycroft-core, tricking it to run without mycroft servers, or on a private network, to manage all your devices locally NOTE: you cannot fully run mycroft-core offline; it refuses to launch without an internet connection. You can only replace the calls to use this backend instead of mycroft.home We recommend you use ovos-core instead Update your mycroft config to use this backend, delete identity2.json, and restart mycroft { \"server\": { \"url\": \"http://0.0.0.0:6712\", \"version\": \"v1\", \"update\": true, \"metrics\": true }, \"listener\": { \"wake_word_upload\": { \"url\": \"http://0.0.0.0:6712/precise/upload\" } } }","title":"Mycroft Setup"},{"location":"personal_backend/#companion-projects","text":"ovos-backend-client - reference python library to interact with selene/local backend ovos-backend-manager - graphical interface to manage all things backend ovos-stt-plugin-selene - stt plugin for selene/local backend","title":"Companion projects"},{"location":"personal_backend/#usage","text":"start backend $ ovos-local-backend -h usage: ovos-local-backend [-h] [--flask-port FLASK_PORT] [--flask-host FLASK_HOST] optional arguments: -h, --help show this help message and exit --flask-port FLASK_PORT Mock backend port number --flask-host FLASK_HOST Mock backend host","title":"Usage"},{"location":"personal_backend/#docker","text":"There is also a docker container you can use docker run -p 8086:6712 -d --restart always --name local_backend ghcr.io/openvoiceos/local-backend:dev a docker-compose.yml could look like this version: '3.6' services: # ... ovosbackend: container_name: ovos_backend image: ghcr.io/openvoiceos/local-backend:dev # or build from local source (relative to docker-compose.yml) # build: ../ovos/ovos-personal-backend/. 
restart: unless-stopped ports: - \"6712:6712\" # default port backend API - \"36535:36535\" # default port backend-manager volumes: # <host path>:<container path>:<selinux flag> - ./ovos/backend/config:/root/.config/json_database:z # shared config directory - ./ovos/backend/data:/root/.local/share/ovos_backend:Z # shared data directory # set `data_path` to `/root/.local/share/ovos_backend` about selinux flags (omit if you don't deal with selinux)","title":"Docker"},{"location":"personal_backend/#how-it-works","text":"","title":"How it works"},{"location":"personal_backend/#configuration","text":"WIP Coming Soon","title":"Configuration"},{"location":"qs_first_boot/","text":"OVOS Quick Start - First Boot You now have a flashed boot medium with OVOS installed. Now what? Insert your boot medium, and power on your device. NOTE If using a Raspberry Pi 4 with a device connected via USB-A, the top-right USB 3.2 Gen 1 port is recommended. Each image has a different experience when you first boot your device. Buildroot - GUI Raspbian-ovos - Headless Notes on internet While OVOS can run fully offline, with on-device STT and TTS, your device will be lacking many of the things you expect from a smart assistant. On a RPi, this includes the date and time, because the Pi lacks a Real Time Clock and therefore needs to connect to a server to set those on your device. Buildroot - GUI When you first boot the Buildroot image, you will be greeted with an OVOS splash screen as shown below. As this is the first time you have booted your device, it might take a bit longer than normal as the system is preparing its local filesystem and extending it over the full size of the sdcard/USB device. Eventually the progress bar will be filled up, indicating the Operating System has been fully booted, after which the ovos-shell animated loading screen will be shown. Again, since this is your first boot, this might take a bit longer as the ovos-core configuration is populated and skills are being set up. Ethernet There should be no issues connecting automatically if your router accepts DHCP requests. WiFi If you do not have an internet connection, you will be prompted with a screen with options to connect. You can also skip this step to configure it later, or never ask again if you want your device to run fully offline. (Bear in mind you need to configure your device to use local STT and TTS engines, and obviously asking your device things that require internet will not work. This includes the date and time, as a Raspberry Pi does not have a Real Time Clock and therefore does not know this data without being told.) On Device Setup If this option is selected, the next screen will have a list of available WiFi connections. Select your desired network, and enter a password if needed. If everything went correctly, you should be connected to the internet and, after a short period of time while OVOS loads the skills that require internet, you will be presented with the homescreen. Configure WiFi after skipping initial setup If you have skipped the Wi-Fi network setup before, or you just want to reconfigure your network, swipe down the top menu on the homescreen of your OpenVoiceOS device and click the \"Wi-Fi\" icon. This brings you to the same on-device configuration screen. From here you can select another network, or click the configuration icon on the right of the connected network for details or to remove it from the configured networks. Mobile Setup If this option is chosen, you will be prompted to connect your mobile device, or computer, to the hotspot OVOS . 
The rest of this option coincides with the headless image WiFi setup. Continue Mobile WiFi Setup . Raspbian-ovos - Headless If you are trying to run OVOS on a RPi3, this is the image to use. It DOES NOT provide a GUI; therefore, some things that are available on the buildroot image are not available here. Once again, it may take several minutes for this first boot to complete. OVOS resizes the partition on the drive and loads the required plugins. Ethernet There should be no issues connecting automatically if your router accepts DHCP requests. WiFi The Raspbian image will create a HotSpot, which is a temporary access point that allows you to configure your WiFi credentials. On first boot, you should hear a voice prompt instructing you to connect to the OVOS hotspot. \"Open a browser to 'start dot openvoiceos dot com'\" Connect your mobile device or computer to the OVOS HotSpot and open the webpage http://start.openvoiceos.com NOTE This is NOT the official OVOS website, but rather a \"HotSpot\" created by OVOS; it will be removed after the device is connected to WiFi. Choose your WiFi access point from the list on your mobile device. Enter your password if needed and enjoy your OVOS smart assistant. NOTE There is a known bug with balena-wifi connecting to WPA3 security. You must use on-device setup or use raspi-config from a command prompt if your WiFi security is not supported with the mobile setup option. That's it!! You should have a working OVOS device, QUICK !! While this is the fastest and easiest way to get OVOS, it is not the only way. Installing OVOS with libraries Installing OVOS Docker","title":"First Boot"},{"location":"qs_first_boot/#ovos-quick-start-first-boot","text":"You now have a flashed boot medium with OVOS installed. Now what? Insert your boot medium, and power on your device. NOTE If using a Raspberry Pi 4 with a device connected via USB-A, the top-right USB 3.2 Gen 1 port is recommended. Each image has a different experience when you first boot your device. Buildroot - GUI Raspbian-ovos - Headless Notes on internet While OVOS can run fully offline, with on-device STT and TTS, your device will be lacking many of the things you expect from a smart assistant. On a RPi, this includes the date and time, because the Pi lacks a Real Time Clock and therefore needs to connect to a server to set those on your device.","title":"OVOS Quick Start - First Boot"},{"location":"qs_first_boot/#buildroot-gui","text":"When you first boot the Buildroot image, you will be greeted with an OVOS splash screen as shown below. As this is the first time you have booted your device, it might take a bit longer than normal as the system is preparing its local filesystem and extending it over the full size of the sdcard/USB device. Eventually the progress bar will be filled up, indicating the Operating System has been fully booted, after which the ovos-shell animated loading screen will be shown. Again, since this is your first boot, this might take a bit longer as the ovos-core configuration is populated and skills are being set up.","title":"Buildroot - GUI"},{"location":"qs_first_boot/#ethernet","text":"There should be no issues connecting automatically if your router accepts DHCP requests.","title":"Ethernet"},{"location":"qs_first_boot/#wifi","text":"If you do not have an internet connection, you will be prompted with a screen with options to connect. You can also skip this step to configure it later, or never ask again if you want your device to run fully offline. 
(Bear in mind you need to configure your device to use local STT and TTS engines, and obviously asking your device things that require internet will not work. This includes the date and time, as a Raspberry Pi does not have a Real Time Clock and therefore does not know this data without being told.) On Device Setup If this option is selected, the next screen will have a list of available WiFi connections. Select your desired network, and enter a password if needed. If everything went correctly, you should be connected to the internet and, after a short period of time while OVOS loads the skills that require internet, you will be presented with the homescreen. Configure WiFi after skipping initial setup If you have skipped the Wi-Fi network setup before, or you just want to reconfigure your network, swipe down the top menu on the homescreen of your OpenVoiceOS device and click the \"Wi-Fi\" icon. This brings you to the same on-device configuration screen. From here you can select another network, or click the configuration icon on the right of the connected network for details or to remove it from the configured networks. Mobile Setup If this option is chosen, you will be prompted to connect your mobile device, or computer, to the hotspot OVOS . The rest of this option coincides with the headless image WiFi setup. Continue Mobile WiFi Setup .","title":"WiFi"},{"location":"qs_first_boot/#raspbian-ovos-headless","text":"If you are trying to run OVOS on a RPi3, this is the image to use. It DOES NOT provide a GUI; therefore, some things that are available on the buildroot image are not available here. Once again, it may take several minutes for this first boot to complete. OVOS resizes the partition on the drive and loads the required plugins.","title":"Raspbian-ovos - Headless"},{"location":"qs_first_boot/#ethernet_1","text":"There should be no issues connecting automatically if your router accepts DHCP requests.","title":"Ethernet"},{"location":"qs_first_boot/#wifi_1","text":"The Raspbian image will create a HotSpot, which is a temporary access point that allows you to configure your WiFi credentials. On first boot, you should hear a voice prompt instructing you to connect to the OVOS hotspot. \"Open a browser to 'start dot openvoiceos dot com'\" Connect your mobile device or computer to the OVOS HotSpot and open the webpage http://start.openvoiceos.com NOTE This is NOT the official OVOS website, but rather a \"HotSpot\" created by OVOS; it will be removed after the device is connected to WiFi. Choose your WiFi access point from the list on your mobile device. Enter your password if needed and enjoy your OVOS smart assistant. NOTE There is a known bug with balena-wifi connecting to WPA3 security. You must use on-device setup or use raspi-config from a command prompt if your WiFi security is not supported with the mobile setup option. That's it!! You should have a working OVOS device, QUICK !! While this is the fastest and easiest way to get OVOS, it is not the only way. Installing OVOS with libraries Installing OVOS Docker","title":"WiFi"},{"location":"qs_intro/","text":"OVOS Quick Start Guide - Get an image So you just want to give OVOS a try? This quick start will help get an OVOS image installed and running on your Raspberry Pi. NOTE The GUI will not reliably run on a RPi3 and is therefore not recommended for that device. Download an OVOS image OVOS provides a couple of different images specifically for the Raspberry Pi. 
Buildroot Latest Image The most advanced and featureful is the Buildroot image. If you want a GUI, this is currently your only choice. RPi4-64 Raspbian Latest Images OVOS also provides a \"Headless\" image that is similar to the original picroft software from MycroftAI . It runs without a screen and works with a RPi3b/b+. headless images Flashing your image Once you have an image downloaded, it needs to be flashed to a boot device. NOTE If you have a Raspberry Pi 4 we recommend using a good USB3.1 device or, better yet, a USB3 SSD. If you have a Raspberry Pi 3, use a proper SD card. (From fast to slow: SSD - USB3.1 - SD card - USB2) Decompress the image Some image writing methods, dd , may require your file be decompressed. Others, BalenaEtcher for example, can use a compressed image. The Buildroot image is compressed in .xz format and the raspbian image is in .zip format. Windows Use WinZip or 7-Zip to decompress the image. Linux Use unxz (or xz -d ) to decompress .xz compressed images and unzip to decompress .zip images. The resulting file should end in .img and is now ready to flash to a device. Flashing Software Flashing your image to your SD card or USB drive is no different from flashing any other image. For non-technical users, we advise using the flashing utility Raspberry Pi Imager from the Raspberry Pi Foundation. It is available for Windows, Mac OS, and Linux. Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with \" sudo raspi-imager\". For \"Choose OS\", select \"Use custom\" and select the OVOS/PI image file downloaded from the OpenVoiceOS Downloads Site. For \"Choose Storage\", select your removable boot media, probably something like \"Internal SD Card Reader\". Then select \"Write\". Do not click the cog. Do not change the username. The username is ovos and is built in. Do not enter WiFi credentials; you will add them at first boot. Upon completion, you should have a bootable SD card or USB drive. Warning EXTREME CARE needs to be taken while using this method. Be careful with the dd command; you can easily render your computer useless. Check where your SD card or USB drive is located using the lsblk command. It should be visible as sdXX . Write the decompressed image to the disk: sudo dd if=<image file> of=<device> bs=4M status=progress No matter what method you used, upon completion, you should have a bootable SD card or USB drive.","title":"Installing an Image"},{"location":"qs_intro/#ovos-quick-start-guide-get-an-image","text":"So you just want to give OVOS a try? This quick start will help get an OVOS image installed and running on your Raspberry Pi. NOTE The GUI will not reliably run on a RPi3 and is therefore not recommended for that device.","title":"OVOS Quick Start Guide - Get an image"},{"location":"qs_intro/#download-an-ovos-image","text":"OVOS provides a couple of different images specifically for the Raspberry Pi.","title":"Download an OVOS image"},{"location":"qs_intro/#buildroot-latest-image","text":"The most advanced and featureful is the Buildroot image. If you want a GUI, this is currently your only choice. RPi4-64","title":"Buildroot Latest Image"},{"location":"qs_intro/#raspbian-latest-images","text":"OVOS also provides a \"Headless\" image that is similar to the original picroft software from MycroftAI . It runs without a screen and works with a RPi3b/b+. headless images","title":"Raspbian Latest Images"},{"location":"qs_intro/#flashing-your-image","text":"Once you have an image downloaded, it needs to be flashed to a boot device. 
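Pulling the command-line steps together, the flash flow looks roughly like this (a sketch: the image filename is only an example, and the target device name must be double-checked with lsblk before writing, since dd will happily overwrite the wrong disk):
# decompress the downloaded image (.xz for Buildroot; use unzip for the .zip raspbian image)
unxz ovos-buildroot-rpi4.img.xz      # example filename
# identify the target SD card or USB drive, e.g. /dev/sdb -- EXTREME CARE here
lsblk
# write the image and flush caches before removing the media
sudo dd if=ovos-buildroot-rpi4.img of=/dev/sdX bs=4M status=progress
sync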
NOTE If you have a Raspberry Pi 4 we recommend using a good USB3.1 device or, better yet, a USB3 SSD. If you have a Raspberry Pi 3, use a proper SD card. (From fast to slow: SSD - USB3.1 - SD card - USB2)","title":"Flashing your image"},{"location":"qs_intro/#decompress-the-image","text":"Some image writing methods, dd , may require your file be decompressed. Others, BalenaEtcher for example, can use a compressed image. The Buildroot image is compressed in .xz format and the raspbian image is in .zip format. Windows Use WinZip or 7-Zip to decompress the image. Linux Use unxz (or xz -d ) to decompress .xz compressed images and unzip to decompress .zip images. The resulting file should end in .img and is now ready to flash to a device.","title":"Decompress the image"},{"location":"qs_intro/#flashing-software","text":"Flashing your image to your SD card or USB drive is no different from flashing any other image. For non-technical users, we advise using the flashing utility Raspberry Pi Imager from the Raspberry Pi Foundation. It is available for Windows, Mac OS, and Linux. Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with \" sudo raspi-imager\". For \"Choose OS\", select \"Use custom\" and select the OVOS/PI image file downloaded from the OpenVoiceOS Downloads Site. For \"Choose Storage\", select your removable boot media, probably something like \"Internal SD Card Reader\". Then select \"Write\". Do not click the cog. Do not change the username. The username is ovos and is built in. Do not enter WiFi credentials; you will add them at first boot. Upon completion, you should have a bootable SD card or USB drive.","title":"Flashing Software"},{"location":"qs_intro/#warning-extreme-care-needs-to-be-taken-while-using-this-method","text":"Be careful with the dd command; you can easily render your computer useless. Check where your SD card or USB drive is located using the lsblk command. It should be visible as sdXX . Write the decompressed image to the disk: sudo dd if=<image file> of=<device> bs=4M status=progress No matter what method you used, upon completion, you should have a bootable SD card or USB drive.","title":"Warning EXTREME CARE needs to be taken while using this method"},{"location":"qs_next_steps/","text":"OpenVoiceOS Quickstart - Next Steps Woo Woo!! You Have A Running OVOS Device!! Now what? Prebuilt images come with a default set of skills installed, including, but not limited to, the date/time and weather. Give them a shot. Speak these commands and enjoy the spoils: Hey Mycroft, what time is it? Hey Mycroft, what is today's date? Hey Mycroft, what is the weather today? Hey Mycroft, will it rain today? While there are several default skills installed, there are many more available to be used. The link below will show you how to find and install more skills. Installing Skills But wait, there's more!! OVOS is highly configurable, and uses a file in either JSON or YAML format to provide these options. While in most cases OVOS should just work, sometimes you either need to or want to change some options. OVOS Configuration OVOS ships with a default TTS (Text to Speech) engine which speaks in the original Alan-Pope voice that Mycroft used. There are MANY more to choose from. The following link will help you choose and configure a different voice for your assistant. Configuring TTS Your device does not understand your voice when you speak? There are options for different STT (Speech To Text) engines also. Some work better than others, but can provide less privacy. 
Changing STT Your OVOS assistant uses a \"wake word\" which lets it know it is time to start listening to your commands. By default, the phrase to wake OVOS is Hey Mycroft . This, like most things in OVOS, is totally configurable. Follow the link to learn more. Changing the Wake Word PHAL plugins allow OVOS to interact with the underlying hardware and operating system. Several are available, and may be installed and run together. Configuring PHAL OVOS ships with default services available to the public to use. These include public TTS and STT servers, a weather API provided by OpenMeteo , access to Wolfram, and more. Since OVOS is an open and private system, you can also change these to your own preferences. Install your own Services WIP","title":"Next Steps"},{"location":"qs_next_steps/#openvoiceos-quickstart-next-steps","text":"","title":"OpenVoiceOS Quickstart - Next Steps"},{"location":"qs_next_steps/#woo-woo-you-have-a-running-ovos-device-now-what","text":"Prebuilt images come with a default set of skills installed, including, but not limited to, date/time and weather. Give them a shot. Speak these commands and enjoy the spoils: Hey Mycroft, what time is it? Hey Mycroft, what is today's date? Hey Mycroft, what is the weather today? Hey Mycroft, will it rain today? While there are several default skills installed, there are many more available to be used. The link below will show you how to find and install more skills. Installing Skills But wait, there's more!! OVOS is highly configurable, and uses a file in either JSON or YAML format to provide these options. While in most cases OVOS should just work, sometimes you either need to, or want to, change some options. OVOS Configuration OVOS ships with a default TTS (Text to Speech) engine which speaks in the original Alan Pope voice that Mycroft used. There are MANY more to choose from. The following link will help you choose and configure a different voice for your assistant. Configuring TTS Does your device not understand your voice when you speak? There are options for different STT (Speech To Text) engines also. Some work better than others, but may provide less privacy. Changing STT Your OVOS assistant uses a \"wake word\" which lets it know it is time to start listening to your commands. By default, the phrase to wake OVOS is Hey Mycroft . This, like most things in OVOS, is totally configurable. Follow the link to learn more. Changing the Wake Word PHAL plugins allow OVOS to interact with the underlying hardware and operating system. Several are available, and may be installed and run together. Configuring PHAL OVOS ships with default services available to the public to use. These include public TTS and STT servers, a weather API provided by OpenMeteo , access to Wolfram, and more. Since OVOS is an open and private system, you can also change these to your own preferences. Install your own Services WIP","title":"Woo Woo!! You Have A Running OVOS Device!! Now what?"},{"location":"selene_backend/","text":"Selene Backend The Mycroft Selene backend is deprecated but the software has been made available on GitHub. https://github.com/MycroftAI/selene-backend Coming Soon Installing your own Selene backend","title":"Selene Backend"},{"location":"selene_backend/#selene-backend","text":"The Mycroft Selene backend is deprecated but the software has been made available on GitHub.
https://github.com/MycroftAI/selene-backend Coming Soon Installing your own Selene backend","title":"Selene Backend"},{"location":"starting_intro/","text":"Starting OVOS Being a modular system has the advantage that OVOS can be started in several different ways. If you installed an image, your device is already pre-configured to start all of the services automatically. As of July 2023, both the Buildroot image and the Raspbian image use systemd service files to start, restart, and stop each OVOS module. Typical command to restart the OVOS stack systemctl --user restart ovos.service ovos.service is a special systemd service file that instructs the rest of the stack to follow what it is doing. If you stop ovos.service , all of the services will stop. The same applies to start and restart . This makes it handy to restart the complete stack with one command after changes have been made. The rest of this section will describe this method, and others, in detail. Starting as stand alone modules Starting with systemd service files Starting Docker","title":"Introduction"},{"location":"starting_intro/#starting-ovos","text":"Being a modular system has the advantage that OVOS can be started in several different ways. If you installed an image, your device is already pre-configured to start all of the services automatically. As of July 2023, both the Buildroot image and the Raspbian image use systemd service files to start, restart, and stop each OVOS module. Typical command to restart the OVOS stack systemctl --user restart ovos.service ovos.service is a special systemd service file that instructs the rest of the stack to follow what it is doing. If you stop ovos.service , all of the services will stop. The same applies to start and restart . This makes it handy to restart the complete stack with one command after changes have been made. The rest of this section will describe this method, and others, in detail. Starting as stand alone modules Starting with systemd service files Starting Docker","title":"Starting OVOS"},{"location":"starting_modules/","text":"Starting OVOS - modules OVOS in its simplest form is just a Python module, and can be invoked as one. In fact, the systemd service method of starting OVOS uses a glorified version of this. ovos-core ovos-core is the brains of the device. Without it, you would have some cool software that does not work together. It controls the skills service and directs intents to the right skill. Invoking the skills module Open a command shell and type the following ovos-core You will see a bunch of lines from the logs, and at the end, it will say WARNING - Message Bus Client will reconnect in 5.0 seconds. This is because we have not started the messagebus service, and that is the nervous system . You cannot communicate with the other parts without it. ovos-messagebus ovos-messagebus is the nervous system of OVOS. This is what makes everything work together. NOTE The messagebus is an unsecured bus to your system and should NOT be exposed to the outside world. firewall your messagebus WIP Invoking the messagebus With ovos-core running in one terminal shell, open another and type the command ovos-messagebus Once again, a whole bunch of log lines will scroll by, and at the end, it will say INFO - Message bus service started!
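With both modules running, you can also sanity-check the bus from Python; a minimal sketch using the mycroft-bus-client package (the same client the PHAL examples later in these docs import), where the injected utterance is just an example:

from mycroft_bus_client import MessageBusClient, Message

# connect to the local messagebus (ws://0.0.0.0:8181/core by default)
bus = MessageBusClient()
bus.run_in_thread()
# print whatever OVOS decides to say in response
bus.on("speak", lambda message: print(message.data["utterance"]))
# inject an utterance as if it had been transcribed from the microphone
bus.emit(Message("recognizer_loop:utterance",
                 {"utterances": ["what time is it"], "lang": "en-us"}))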
If you look back at the terminal running ovos-core, you will notice new log lines showing that ovos-core has connected to the messagebus.","title":"Starting Modules"},{"location":"starting_modules/#starting-ovos-modules","text":"OVOS in its simplest form is just a Python module, and can be invoked as one. In fact, the systemd service method of starting OVOS uses a glorified version of this.","title":"Starting OVOS - modules"},{"location":"starting_modules/#ovos-core","text":"ovos-core is the brains of the device. Without it, you would have some cool software that does not work together. It controls the skills service and directs intents to the right skill.","title":"ovos-core"},{"location":"starting_modules/#invoking-the-skills-module","text":"Open a command shell and type the following ovos-core You will see a bunch of lines from the logs, and at the end, it will say WARNING - Message Bus Client will reconnect in 5.0 seconds. This is because we have not started the messagebus service, and that is the nervous system . You cannot communicate with the other parts without it.","title":"Invoking the skills module"},{"location":"starting_modules/#ovos-messagebus","text":"ovos-messagebus is the nervous system of OVOS. This is what makes everything work together. NOTE The messagebus is an unsecured bus to your system and should NOT be exposed to the outside world. firewall your messagebus WIP","title":"ovos-messagebus"},{"location":"starting_modules/#invoking-the-messagebus","text":"With ovos-core running in one terminal shell, open another and type the command ovos-messagebus Once again, a whole bunch of log lines will scroll by, and at the end, it will say INFO - Message bus service started! If you look back at the terminal running ovos-core, you will notice new log lines showing that ovos-core has connected to the messagebus.","title":"Invoking the messagebus"},{"location":"starting_systemd/","text":"Starting OVOS - systemd OVOS, being a modular system, has several pieces that can be started individually. The OVOS team suggests doing this in a virtual environment . While not necessary, it keeps OVOS's dependencies from conflicting with other packages on the running system. Starting a Virtual Environment We will assume that you are starting from your home directory. Enter the following commands into a shell terminal. python -m venv .venv . .venv/bin/activate After a couple of seconds, your command prompt will change to show (.venv) at the front. User systemd service files are the recommended way to start each module. Other methods exist, such as using the modules as a Python library, but they are advanced topics and not discussed here. This is the preferred method to start the OVOS modules. If you have not used systemd before, there are many references on the web with more information. It is out of the scope of this document. The following assumes the user ovos is being used. A systemd service file and a systemd hook file are required for this to work. We will create both files for the ovos-messagebus service because it is used by all other modules. The provided systemd hook files need another Python package, sdnotify , to work as written. pip install sdnotify ovos.service This is the main service file that is used to start the stack as a unit. This is not necessary, but helpful if more than one module should start together. Create the service file nano .config/systemd/user/ovos.service This file should contain [Unit] Description=OVOS A.I. Software stack.
[Service] Type=oneshot ExecStart=/bin/true RemainAfterExit=yes [Install] WantedBy=default.target There is no hook file needed for this service. ovos-messagebus.service The messagebus is the main nervous system for OVOS and is needed by all other modules to enable communication between them. Create the service file nano ~/.config/systemd/user/ovos-messagebus.service And make it contain the following [Unit] Description=OVOS Messagebus PartOf=ovos.service After=ovos.service [Service] Type=notify ExecStart=/home/ovos/.local/bin/ovos-systemd-messagebus TimeoutStartSec=1m TimeoutStopSec=1m Restart=on-failure StartLimitInterval=5min StartLimitBurst=4 #StartLimitAction=reboot-force #WatchdogSec=30s [Install] WantedBy=ovos.service Create the hook file nano ~/.local/bin/ovos-systemd-messagebus This file should contain #!/usr/bin/env python ########################################################################## # ovos-systemd_messagebus.py # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ########################################################################## import sdnotify from mycroft.messagebus.service.__main__ import main n = sdnotify.SystemdNotifier() def notify_ready(): n.notify('READY=1') print('Startup of Mycroft Messagebus service complete') def notify_stopping(): n.notify('STOPPING=1') print('Stopping the Mycroft Messagebus service') main(ready_hook=notify_ready, stopping_hook=notify_stopping) Reload the systemd daemon systemctl --user daemon-reload The service can now be started systemctl --user start ovos-messagebus.service To start the stack on boot, enable both of the services systemctl --user enable ovos.service systemctl --user enable ovos-messagebus.service Now on every reboot, the OVOS system should start automatically. NOTE the systemd service and hook files are examples used in the raspbian-ovos repository. Each module that needs to be started should have similar service and hook files. For a complete system, the following need to be running: - ovos-messagebus.service - ovos-skills.service - ovos-audio.service - ovos-dinkum-listener.service - ovos-phal.service The ovos-admin-phal.service needs to run as a system service or with the user root - ovos-admin-phal.service","title":"Starting Systemd"},{"location":"starting_systemd/#starting-ovos-systemd","text":"OVOS, being a modular system, has several pieces that can be started individually. The OVOS team suggests doing this in a virtual environment . While not necessary, it keeps OVOS's dependencies from conflicting with other packages on the running system.","title":"Starting OVOS - systemd"},{"location":"starting_systemd/#starting-a-virtual-environment","text":"We will assume that you are starting from your home directory. Enter the following commands into a shell terminal. python -m venv .venv . .venv/bin/activate After a couple of seconds, your command prompt will change to show (.venv) at the front. User systemd service files are the recommended way to start each module.
Other methods exist, such as using the modules as a Python library, but they are advanced topics and not discussed here. This is the preferred method to start the OVOS modules. If you have not used systemd before, there are many references on the web with more information. It is out of the scope of this document. The following assumes the user ovos is being used. A systemd service file and a systemd hook file are required for this to work. We will create both files for the ovos-messagebus service because it is used by all other modules. The provided systemd hook files need another Python package, sdnotify , to work as written. pip install sdnotify","title":"Starting a Virtual Environment"},{"location":"starting_systemd/#ovosservice","text":"This is the main service file that is used to start the stack as a unit. This is not necessary, but helpful if more than one module should start together. Create the service file nano .config/systemd/user/ovos.service This file should contain [Unit] Description=OVOS A.I. Software stack. [Service] Type=oneshot ExecStart=/bin/true RemainAfterExit=yes [Install] WantedBy=default.target There is no hook file needed for this service.","title":"ovos.service"},{"location":"starting_systemd/#ovos-messagebusservice","text":"The messagebus is the main nervous system for OVOS and is needed by all other modules to enable communication between them. Create the service file nano ~/.config/systemd/user/ovos-messagebus.service And make it contain the following [Unit] Description=OVOS Messagebus PartOf=ovos.service After=ovos.service [Service] Type=notify ExecStart=/home/ovos/.local/bin/ovos-systemd-messagebus TimeoutStartSec=1m TimeoutStopSec=1m Restart=on-failure StartLimitInterval=5min StartLimitBurst=4 #StartLimitAction=reboot-force #WatchdogSec=30s [Install] WantedBy=ovos.service Create the hook file nano ~/.local/bin/ovos-systemd-messagebus This file should contain #!/usr/bin/env python ########################################################################## # ovos-systemd_messagebus.py # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ########################################################################## import sdnotify from mycroft.messagebus.service.__main__ import main n = sdnotify.SystemdNotifier() def notify_ready(): n.notify('READY=1') print('Startup of Mycroft Messagebus service complete') def notify_stopping(): n.notify('STOPPING=1') print('Stopping the Mycroft Messagebus service') main(ready_hook=notify_ready, stopping_hook=notify_stopping) Reload the systemd daemon systemctl --user daemon-reload The service can now be started systemctl --user start ovos-messagebus.service To start the stack on boot, enable both of the services systemctl --user enable ovos.service systemctl --user enable ovos-messagebus.service Now on every reboot, the OVOS system should start automatically. NOTE the systemd service and hook files are examples used in the raspbian-ovos repository. Each module that needs to be started should have similar service and hook files.
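If a unit fails to start, systemd's own tools are the quickest way to see why; standard systemctl and journalctl usage, shown here for the messagebus unit:

# check whether the unit is running and see its most recent log lines
systemctl --user status ovos-messagebus.service
# follow the unit's logs live while you restart it
journalctl --user -u ovos-messagebus.service -f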
For a complete system, the following need to be running: - ovos-messagebus.service - ovos-skills.service - ovos-audio.service - ovos-dinkum-listener.service - ovos-phal.service The ovos-admin-phal.service needs to run as a system service or with the user root - ovos-admin-phal.service","title":"ovos-messagebus.service"},{"location":"troubleshooting_audio/","text":"Troubleshooting operating system audio If audio isn't working for OpenVoiceOS, it's useful to verify that the operating system audio is working. Architecture ALSA ALSA is the kernel-level sound mixer; it manages your sound card directly. ALSA is crap (seriously) and can only handle a few (sometimes just one) channels. We don't generally have to deal with ALSA directly. ALSA can be configured to use PulseAudio as its default device; that way, ALSA applications that are not PulseAudio aware will still use PulseAudio via an indirection layer. Pulse Audio PulseAudio is a software mixer running in user space. When it runs, it takes over ALSA's channels and manages everything: mixing, devices, network devices, etc. PulseAudio always uses ALSA as its backend, and on startup opens all ALSA devices. Since most ALSA devices can't be opened multiple times, this will cause all ALSA applications that try to use an ALSA device directly when PulseAudio is running to fail. If you have a legacy application that for some reason doesn't work, you can use pasuspender to temporarily suspend PulseAudio to run this particular application. Pulse Audio Modules Troubleshooting Commands List hardware cards cat /proc/asound/cards List playback and capture devices visible to ALSA (note the card number) aplay -l arecord -l This will list the cards, which can then be referenced in arecord using -D hw:<card number>,0 : arecord -f dat -r 16000 -D hw:4,0 -c 4 -d 10 test2.wav You can then play the file back to test your speakers aplay -D hw:2,0 test2.wav ** If PulseAudio is installed, ALSA should be configured to use PulseAudio as its default; we don't change anything in ALSA, we configure our default sources and sinks in PulseAudio ** Verify that PulseAudio is installed apt list pulseaudio Verify that ALSA is using PulseAudio as the default $ aplay -L | head -n9 null Discard all samples (playback) or generate zero samples (capture) default Playback/recording through the PulseAudio sound server PulseAudio List sinks (speakers) and sources (microphones) visible to PulseAudio pactl list sinks pactl list sources This will list the sources that can be used to set the default source for PulseAudio below. pacmd set-default-source e.g. pacmd set-default-source alsa_input.usb-OmniVision_Technologies__Inc._USB_Camera-B4.09.24.1-01.multichannel-input Test if OVOS is receiving mic input You can test if OVOS is receiving mic input using the ovos-cli-client Install the ovos-cli-client from GitHub to ensure you have the latest version cd ~ git clone https://github.com/openvoiceos/ovos-cli-client pip install ./ovos-cli-client Run the ovos-cli-client ovos-cli-client In the lower left you can observe the microphone levels; when you talk, the levels should increase. If they don't, OVOS is probably using the wrong microphone. Gather Data Before submitting an issue or asking a question on Element, please gather the following data.
For Microphone issues: arecord -l arecord -L | head -n9 pactl list sources pacmd dump For Speaker issues: aplay -l aplay -L | head -n9 pactl list sinks pacmd dump Additional Resources PulseAudio Troubleshooting Problem/fix ERROR: pulseaudio sink always suspended Try disabling suspend-on-idle in /etc/pulse/default.pa Change this: ### Automatically suspend sinks/sources that become idle for too long load-module module-suspend-on-idle to this: ### Automatically suspend sinks/sources that become idle for too long #load-module module-suspend-on-idle and then restart PulseAudio. There is quite a lot of variation in how people do this, but killall pulseaudio is one option (it gets automatically started again). If you want to be sure, you can restart the system.","title":"Troubleshooting Audio"},{"location":"troubleshooting_audio/#troubleshooting-operating-system-audio","text":"If audio isn't working for OpenVoiceOS, it's useful to verify that the operating system audio is working.","title":"Troubleshooting operating system audio"},{"location":"troubleshooting_audio/#architecture","text":"","title":"Architecture"},{"location":"troubleshooting_audio/#alsa","text":"ALSA is the kernel-level sound mixer; it manages your sound card directly. ALSA is crap (seriously) and can only handle a few (sometimes just one) channels. We don't generally have to deal with ALSA directly. ALSA can be configured to use PulseAudio as its default device; that way, ALSA applications that are not PulseAudio aware will still use PulseAudio via an indirection layer.","title":"ALSA"},{"location":"troubleshooting_audio/#pulse-audio","text":"PulseAudio is a software mixer running in user space. When it runs, it takes over ALSA's channels and manages everything: mixing, devices, network devices, etc. PulseAudio always uses ALSA as its backend, and on startup opens all ALSA devices. Since most ALSA devices can't be opened multiple times, this will cause all ALSA applications that try to use an ALSA device directly when PulseAudio is running to fail. If you have a legacy application that for some reason doesn't work, you can use pasuspender to temporarily suspend PulseAudio to run this particular application. Pulse Audio Modules","title":"Pulse Audio"},{"location":"troubleshooting_audio/#troubleshooting-commands","text":"List hardware cards cat /proc/asound/cards List playback and capture devices visible to ALSA (note the card number) aplay -l arecord -l This will list the cards, which can then be referenced in arecord using -D hw:<card number>,0 : arecord -f dat -r 16000 -D hw:4,0 -c 4 -d 10 test2.wav You can then play the file back to test your speakers aplay -D hw:2,0 test2.wav ** If PulseAudio is installed, ALSA should be configured to use PulseAudio as its default; we don't change anything in ALSA, we configure our default sources and sinks in PulseAudio ** Verify that PulseAudio is installed apt list pulseaudio Verify that ALSA is using PulseAudio as the default $ aplay -L | head -n9 null Discard all samples (playback) or generate zero samples (capture) default Playback/recording through the PulseAudio sound server","title":"Troubleshooting Commands"},{"location":"troubleshooting_audio/#pulseaudio","text":"List sinks (speakers) and sources (microphones) visible to PulseAudio pactl list sinks pactl list sources This will list the sources that can be used to set the default source for PulseAudio below. pacmd set-default-source e.g.
pacmd set-default-source alsa_input.usb-OmniVision_Technologies__Inc._USB_Camera-B4.09.24.1-01.multichannel-input","title":"PulseAudio"},{"location":"troubleshooting_audio/#test-if-ovos-is-receiving-mic-input","text":"You can test if OVOS is receiving mic input using the ovos-cli-client Install the ovos-cli-client from GitHub to ensure you have the latest version cd ~ git clone https://github.com/openvoiceos/ovos-cli-client pip install ./ovos-cli-client Run the ovos-cli-client ovos-cli-client In the lower left you can observe the microphone levels; when you talk, the levels should increase. If they don't, OVOS is probably using the wrong microphone.","title":"Test if OVOS is receiving mic input"},{"location":"troubleshooting_audio/#gather-data","text":"Before submitting an issue or asking a question on Element, please gather the following data. For Microphone issues: arecord -l arecord -L | head -n9 pactl list sources pacmd dump For Speaker issues: aplay -l aplay -L | head -n9 pactl list sinks pacmd dump","title":"Gather Data"},{"location":"troubleshooting_audio/#additional-resources","text":"PulseAudio Troubleshooting","title":"Additional Resources"},{"location":"troubleshooting_audio/#problemfix","text":"","title":"Problem/fix"},{"location":"troubleshooting_audio/#error-pulseaudio-sink-always-suspended","text":"Try disabling suspend-on-idle in /etc/pulse/default.pa Change this: ### Automatically suspend sinks/sources that become idle for too long load-module module-suspend-on-idle to this: ### Automatically suspend sinks/sources that become idle for too long #load-module module-suspend-on-idle and then restart PulseAudio. There is quite a lot of variation in how people do this, but killall pulseaudio is one option (it gets automatically started again). If you want to be sure, you can restart the system.","title":"ERROR: pulseaudio sink always suspended"},{"location":"troubleshooting_backend/","text":"Troubleshooting Backend Configuration Architecture Troubleshooting Commands Problem/Fix when I try to navigate to the backend-manager I get the error: \"This personal backend instance does not have the admin interface exposed\" The personal backend has a config file. https://github.com/OpenVoiceOS/ovos-personal-backend#configuration Set a value for \"admin_key\"","title":"Troubleshooting Backend"},{"location":"troubleshooting_backend/#troubleshooting-backend-configuration","text":"","title":"Troubleshooting Backend Configuration"},{"location":"troubleshooting_backend/#architecture","text":"","title":"Architecture"},{"location":"troubleshooting_backend/#troubleshooting-commands","text":"","title":"Troubleshooting Commands"},{"location":"troubleshooting_backend/#problemfix","text":"","title":"Problem/Fix"},{"location":"troubleshooting_backend/#when-i-try-to-navigate-to-the-backend-manager-i-get-the-error-this-personal-backend-instance-does-not-have-the-admin-interface-exposed","text":"The personal backend has a config file. https://github.com/OpenVoiceOS/ovos-personal-backend#configuration Set a value for \"admin_key\"","title":"when I try to navigate to the backend-manager I get the error: \"This personal backend instance does not have the admin interface exposed\""},{"location":"troubleshooting_installation/","text":"Troubleshooting OpenVoiceOS Installation coming soon Architecture Troubleshooting Commands Gather Data Problem/Fix Install fails to create OVOS wifi hotspot There is a known issue with balena (the wifi access point app): when it detects a WPA3 network of any sort, it fails.
More Information Raspbian OVOS stuck on \"Generating SSH keys...\" We are not sure what is causing this, but if you reboot the Pi (Ctrl-Alt-Del) it will come up fine on the second boot.","title":"Installing an Image"},{"location":"troubleshooting_installation/#troubleshooting-openvoiceos-installation","text":"coming soon","title":"Troubleshooting OpenVoiceOS Installation"},{"location":"troubleshooting_installation/#architecture","text":"","title":"Architecture"},{"location":"troubleshooting_installation/#troubleshooting-commands","text":"","title":"Troubleshooting Commands"},{"location":"troubleshooting_installation/#gather-data","text":"","title":"Gather Data"},{"location":"troubleshooting_installation/#problemfix","text":"","title":"Problem/Fix"},{"location":"troubleshooting_installation/#install-fails-to-create-ovos-wifi-hotspot","text":"There is a known issue with balena (the wifi access point app): when it detects a WPA3 network of any sort, it fails. More Information","title":"Install fails to create OVOS wifi hotspot"},{"location":"troubleshooting_installation/#raspbian-ovos-stuck-on-generating-ssh-keys","text":"We are not sure what is causing this, but if you reboot the Pi (Ctrl-Alt-Del) it will come up fine on the second boot.","title":"Raspbian OVOS stuck on \"Generating SSH keys...\""},{"location":"troubleshooting_intro/","text":"Introduction to Troubleshooting OpenVoiceOS ** Coming soon ** Architecture Troubleshooting Commands Problem/Fix locale.Error: unsupported locale setting This error could come from the ovos-cli-client or other sources. To resolve it, ensure that your locale is set correctly; try running raspi-config to set it if you're on Raspberry Pi OS (Raspbian), or manually update /etc/default/locale . Use locale to verify your current locale, and locale -a to verify the locale you've set is actually available. Source","title":"Introduction to Troubleshooting"},{"location":"troubleshooting_intro/#introduction-to-troubleshooting-openvoiceos","text":"** Coming soon **","title":"Introduction to Troubleshooting OpenVoiceOS"},{"location":"troubleshooting_intro/#architecture","text":"","title":"Architecture"},{"location":"troubleshooting_intro/#troubleshooting-commands","text":"","title":"Troubleshooting Commands"},{"location":"troubleshooting_intro/#problemfix","text":"","title":"Problem/Fix"},{"location":"troubleshooting_intro/#localeerror-unsupported-locale-setting","text":"This error could come from the ovos-cli-client or other sources. To resolve it, ensure that your locale is set correctly; try running raspi-config to set it if you're on Raspberry Pi OS (Raspbian), or manually update /etc/default/locale . Use locale to verify your current locale, and locale -a to verify the locale you've set is actually available.
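On Debian-based systems such as Raspberry Pi OS, regenerating the locale usually looks like the following sketch; en_US.UTF-8 is just an example, substitute the locale you actually want:

# uncomment the desired locale in /etc/locale.gen, then rebuild
sudo sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
sudo locale-gen
# make it the system default
sudo update-locale LANG=en_US.UTF-8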
Source","title":"locale.Error: unsupported locale setting"},{"location":"troubleshooting_network/","text":"Troubleshooting Networking Architecture Troubleshooting Commands Problem/Fix See Also Troubleshooting Installation","title":"Troubleshooting Network"},{"location":"troubleshooting_network/#troubleshooting-networking","text":"","title":"Troubleshooting Networking"},{"location":"troubleshooting_network/#architecture","text":"","title":"Architecture"},{"location":"troubleshooting_network/#troubleshooting-commands","text":"","title":"Troubleshooting Commands"},{"location":"troubleshooting_network/#problemfix","text":"See Also Troubleshooting Installation","title":"Problem/Fix"},{"location":"troubleshooting_ovos_core/","text":"Troubleshooting OVOS Core ** Coming Soon ** Architecture Troubleshooting Commands Problem/Fix","title":"Troubleshooting OVOS Core"},{"location":"troubleshooting_ovos_core/#troubleshooting-ovos-core","text":"** Coming Soon **","title":"Troubleshooting OVOS Core"},{"location":"troubleshooting_ovos_core/#architecture","text":"","title":"Architecture"},{"location":"troubleshooting_ovos_core/#troubleshooting-commands","text":"","title":"Troubleshooting Commands"},{"location":"troubleshooting_ovos_core/#problemfix","text":"","title":"Problem/Fix"},{"location":"troubleshooting_plugins/","text":"Troubleshooting Plugins Architecture Troubleshooting Commands Gather Data Problem/Fix","title":"Troubleshooting Plugins"},{"location":"troubleshooting_plugins/#troubleshooting-plugins","text":"","title":"Troubleshooting Plugins"},{"location":"troubleshooting_plugins/#architecture","text":"","title":"Architecture"},{"location":"troubleshooting_plugins/#troubleshooting-commands","text":"","title":"Troubleshooting Commands"},{"location":"troubleshooting_plugins/#gather-data","text":"","title":"Gather Data"},{"location":"troubleshooting_plugins/#problemfix","text":"","title":"Problem/Fix"},{"location":"troubleshooting_skills/","text":"Troubleshooting Skills Architecture Troubleshooting Commands Gather Data Problem/Fix","title":"Troubleshooting Skills"},{"location":"troubleshooting_skills/#troubleshooting-skills","text":"","title":"Troubleshooting Skills"},{"location":"troubleshooting_skills/#architecture","text":"","title":"Architecture"},{"location":"troubleshooting_skills/#troubleshooting-commands","text":"","title":"Troubleshooting Commands"},{"location":"troubleshooting_skills/#gather-data","text":"","title":"Gather Data"},{"location":"troubleshooting_skills/#problemfix","text":"","title":"Problem/Fix"},{"location":"troubleshooting_stt/","text":"Troubleshooting Speech To Text Architecture Troubleshooting Commands Gather Data Problem/Fix","title":"Troubleshooting STT"},{"location":"troubleshooting_stt/#troubleshooting-speech-to-text","text":"","title":"Troubleshooting Speech To Text"},{"location":"troubleshooting_stt/#architecture","text":"","title":"Architecture"},{"location":"troubleshooting_stt/#troubleshooting-commands","text":"","title":"Troubleshooting Commands"},{"location":"troubleshooting_stt/#gather-data","text":"","title":"Gather Data"},{"location":"troubleshooting_stt/#problemfix","text":"","title":"Problem/Fix"},{"location":"troubleshooting_tts/","text":"Troubleshooting Text To Speech Architecture Troubleshooting Commands Gather Data Problem/Fix","title":"Troubleshooting TTS"},{"location":"troubleshooting_tts/#troubleshooting-text-to-speech","text":"","title":"Troubleshooting Text To 
Speech"},{"location":"troubleshooting_tts/#architecture","text":"","title":"Architecture"},{"location":"troubleshooting_tts/#troubleshooting-commands","text":"","title":"Troubleshooting Commands"},{"location":"troubleshooting_tts/#gather-data","text":"","title":"Gather Data"},{"location":"troubleshooting_tts/#problemfix","text":"","title":"Problem/Fix"},{"location":"troubleshooting_wakeword/","text":"Troubleshooting Wake Word Architecture OpenVoiceOS uses a plugin (or plugins) to recognize the wake word. In your mycroft.conf file you'll specify the plugin used and what the wakeword is. Troubleshooting Commands Gather Data Verify it's the Wake Word and not the microphone To verify that it is the Wake Word and not the microphone causing the issue, we will get OVOS to ask us a question that we can respond to. In the OVOS CLI, type in the utterance \"set timer\". OVOS will then ask how long of a timer you would like. Speaking now should result in your utterance being transcribed. If your response is successfully transcribed, it is most likely the Wake Word engine causing the problem. Check your mycroft.conf file to see what plugin and wake word are configured. Determine which configuration files are being loaded grep mycroft.conf /var/log/syslog Look at your mycroft.conf file and verify how your wake word is configured. Look for the following lines (or similar): \"wake_word\": \"hey_mycroft\" ## this will match a hotword listed later in the config ... \"hotwords\": { \"hey_mycroft\": { ## matches the wake_word \"module\": \"ovos-ww-plugin-precise\", ## what plugin is used \"version\": \"0.3\", \"model\": \"https://github.com/MycroftAI/precise-data/raw/models-dev/hey-mycroft.tar.gz\", \"phonemes\": \"HH EY . M AY K R AO F T\", \"threshold\": 1e-90, \"lang\": \"en-us\", \"listen\": true, \"sound\": \"snd/start_listening.wav\" }, Verify the Wake Word plugin is loading properly grep wake /var/log/syslog Look for lines similar to: voice - ovos_plugin_manager.wakewords:load_module:85 - INFO - Loading \"wake_up\" wake word via ovos-ww-plugin-pocketsphinx voice - ovos_plugin_manager.wakewords:load_module:92 - INFO - Loaded the Wake Word plugin ovos-ww-plugin-pocketsphinx voice - ovos_plugin_manager.wakewords:load_module:85 - INFO - Loading \"hey_mycroft\" wake word via ovos-ww-plugin-precise voice - ovos_plugin_manager.wakewords:load_module:92 - INFO - Loaded the Wake Word plugin ovos-ww-plugin-precise If you see an error about \"failed to load plugin\", make sure the plugin is installed. Use ovos-cli-client to see if the microphone is working and the wakeword is being triggered. ovos-cli-client Look for 05:19:54.975 - voice - mycroft.listener.service:handle_wakeword:97 - INFO - Wakeword Detected: hey_mycroft 05:19:55.555 - voice - mycroft.listener.service:handle_record_begin:71 - INFO - Begin Recording... 05:19:57.052 - voice - mycroft.listener.service:handle_record_end:78 - INFO - End Recording...","title":"Troubleshooting Wakeword"},{"location":"troubleshooting_wakeword/#troubleshooting-wake-word","text":"","title":"Troubleshooting Wake Word"},{"location":"troubleshooting_wakeword/#architecture","text":"OpenVoiceOS uses a plugin (or plugins) to recognize the wake word.
In your mycroft.conf file you'll specify the plugin used and what the wakeword is.","title":"Architecture"},{"location":"troubleshooting_wakeword/#troubleshooting-commands","text":"","title":"Troubleshooting Commands"},{"location":"troubleshooting_wakeword/#gather-data","text":"","title":"Gather Data"},{"location":"troubleshooting_wakeword/#verify-its-the-wake-word-and-not-the-microphone","text":"To verify that it is the Wake Word and not the microphone causing the issue, we will get OVOS to ask us a question that we can respond to. In the OVOS CLI, type in the utterance \"set timer\". OVOS will then ask how long of a timer you would like. Speaking now should result in your utterance being transcribed. If your response is successfully transcribed, it is most likely the Wake Word engine causing the problem.","title":"Verify it's the Wake Word and not the microphone"},{"location":"troubleshooting_wakeword/#check-your-mycroftconf-file-to-see-what-plugin-and-wake-word-is-configured","text":"Determine which configuration files are being loaded grep mycroft.conf /var/log/syslog Look at your mycroft.conf file and verify how your wake word is configured. Look for the following lines (or similar): \"wake_word\": \"hey_mycroft\" ## this will match a hotword listed later in the config ... \"hotwords\": { \"hey_mycroft\": { ## matches the wake_word \"module\": \"ovos-ww-plugin-precise\", ## what plugin is used \"version\": \"0.3\", \"model\": \"https://github.com/MycroftAI/precise-data/raw/models-dev/hey-mycroft.tar.gz\", \"phonemes\": \"HH EY . M AY K R AO F T\", \"threshold\": 1e-90, \"lang\": \"en-us\", \"listen\": true, \"sound\": \"snd/start_listening.wav\" },","title":"Check your mycroft.conf file to see what plugin and wake word are configured."},{"location":"troubleshooting_wakeword/#verify-the-wake-word-plugin-is-loading-properly","text":"grep wake /var/log/syslog Look for lines similar to: voice - ovos_plugin_manager.wakewords:load_module:85 - INFO - Loading \"wake_up\" wake word via ovos-ww-plugin-pocketsphinx voice - ovos_plugin_manager.wakewords:load_module:92 - INFO - Loaded the Wake Word plugin ovos-ww-plugin-pocketsphinx voice - ovos_plugin_manager.wakewords:load_module:85 - INFO - Loading \"hey_mycroft\" wake word via ovos-ww-plugin-precise voice - ovos_plugin_manager.wakewords:load_module:92 - INFO - Loaded the Wake Word plugin ovos-ww-plugin-precise If you see an error about \"failed to load plugin\", make sure the plugin is installed. Use ovos-cli-client to see if the microphone is working and the wakeword is being triggered. ovos-cli-client Look for 05:19:54.975 - voice - mycroft.listener.service:handle_wakeword:97 - INFO - Wakeword Detected: hey_mycroft 05:19:55.555 - voice - mycroft.listener.service:handle_record_begin:71 - INFO - Begin Recording...
05:19:57.052 - voice - mycroft.listener.service:handle_record_end:78 - INFO - End Recording...","title":"Verify the Wake Word plugin is loading properly"},{"location":"troubleshooting_wakeword/#problemfix","text":"","title":"Problem/Fix"},{"location":"unused/OCA/","text":"OCA - OVOS Config Assistant OCA is a user facing interface to configure ovos devices Web UI OCA provides a local Web UI similar to ovos-backend-manager, in here you can configure your device, view metrics, handle OAuth and more CLI A command line interface is planned but not yet available to provide equivalent functionality to the Web UI Python utils from ovos_config_assistant.module_helpers import pprint_core_module_info pprint_core_module_info() \"\"\" ## Mycroft module info can import mycroft : True is ovos-core : True mycroft module location: /home/user/ovos-core/mycroft ## Downstream ovos.conf overrides Module: neon_core can import neon_core : False neon_core module location: None xdg compliance : True base xdg folder : neon mycroft config filename : neon.conf default mycroft.conf path : /home/user/NeonCore/neon_core/configuration/neon.conf Module: hivemind can import hivemind : False hivemind module location: None xdg compliance : True base xdg folder : hivemind mycroft config filename : hivemind.conf default mycroft.conf path : /home/user/PycharmProjects/ovos_workspace/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf ## Downstream module overrides: Module: neon_speech uses config from : neon_core can import neon_speech : False neon_speech module location: None Module: neon_audio uses config from : neon_core can import neon_audio : False neon_audio module location: None Module: neon_enclosure uses config from : neon_core can import neon_enclosure : False neon_enclosure module location: None Module: hivemind_voice_satellite uses config from : hivemind can import hivemind_voice_satellite : True hivemind_voice_satellite module location: /home/user/HiveMind-voice-sat/hivemind_voice_satellite \"\"\" from ovos_config_assistant.config_helpers import pprint_ovos_conf pprint_ovos_conf() \"\"\" ## OVOS Configuration ovos.conf exists : True /home/user/.config/OpenVoiceOS/ovos.conf xdg compliance : True base xdg folder : mycroft mycroft config filename : mycroft.conf default mycroft.conf path : /home/user/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf \"\"\"","title":"OCA - OVOS Config Assistant"},{"location":"unused/OCA/#oca-ovos-config-assistant","text":"OCA is a user facing interface to configure ovos devices","title":"OCA - OVOS Config Assistant"},{"location":"unused/OCA/#web-ui","text":"OCA provides a local Web UI similar to ovos-backend-manager, in here you can configure your device, view metrics, handle OAuth and more","title":"Web UI"},{"location":"unused/OCA/#cli","text":"A command line interface is planned but not yet available to provide equivalent functionality to the Web UI","title":"CLI"},{"location":"unused/OCA/#python-utils","text":"from ovos_config_assistant.module_helpers import pprint_core_module_info pprint_core_module_info() \"\"\" ## Mycroft module info can import mycroft : True is ovos-core : True mycroft module location: /home/user/ovos-core/mycroft ## Downstream ovos.conf overrides Module: neon_core can import neon_core : False neon_core module location: None xdg compliance : True base xdg folder : neon mycroft config filename : neon.conf default mycroft.conf path : /home/user/NeonCore/neon_core/configuration/neon.conf Module: hivemind can 
import hivemind : False hivemind module location: None xdg compliance : True base xdg folder : hivemind mycroft config filename : hivemind.conf default mycroft.conf path : /home/user/PycharmProjects/ovos_workspace/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf ## Downstream module overrides: Module: neon_speech uses config from : neon_core can import neon_speech : False neon_speech module location: None Module: neon_audio uses config from : neon_core can import neon_audio : False neon_audio module location: None Module: neon_enclosure uses config from : neon_core can import neon_enclosure : False neon_enclosure module location: None Module: hivemind_voice_satellite uses config from : hivemind can import hivemind_voice_satellite : True hivemind_voice_satellite module location: /home/user/HiveMind-voice-sat/hivemind_voice_satellite \"\"\" from ovos_config_assistant.config_helpers import pprint_ovos_conf pprint_ovos_conf() \"\"\" ## OVOS Configuration ovos.conf exists : True /home/user/.config/OpenVoiceOS/ovos.conf xdg compliance : True base xdg folder : mycroft mycroft config filename : mycroft.conf default mycroft.conf path : /home/user/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf \"\"\"","title":"Python utils"},{"location":"unused/PHAL/","text":"PHAL PHAL is our Platform/Hardware Abstraction Layer, it completely replaces the concept of hardcoded \"enclosure\" from mycroft-core Any number of plugins providing functionality can be loaded and validated at runtime, plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as mycroft mark2 plugin PHAL plugins can perform actions such as hardware detection before loading, eg, the mark2 plugin will not load if it does not detect the sj201 hat. 
This makes plugins safe to install and bundle by default in our base images Plugins Platform/Hardware specific integrations are loaded by PHAL; these plugins can handle all sorts of system activities Plugin Description ovos-PHAL-plugin-alsa volume control ovos-PHAL-plugin-system reboot / shutdown / factory reset ovos-PHAL-plugin-mk1 mycroft mark1 integration ovos-PHAL-plugin-mk2 mycroft mark2 integration ovos-PHAL-plugin-respeaker-2mic respeaker 2mic hat integration ovos-PHAL-plugin-respeaker-4mic respeaker 4mic hat integration ovos-PHAL-plugin-wifi-setup wifi setup (central plugin) ovos-PHAL-plugin-gui-network-client wifi setup (GUI interface) ovos-PHAL-plugin-balena-wifi wifi setup (hotspot) ovos-PHAL-plugin-network-manager wifi setup (network manager) ovos-PHAL-plugin-brightness-control-rpi brightness control ovos-PHAL-plugin-ipgeo automatic geolocation (IP address) ovos-PHAL-plugin-gpsd automatic geolocation (GPS) ovos-PHAL-plugin-dashboard dashboard control (ovos-shell) ovos-PHAL-plugin-notification-widgets system notifications (ovos-shell) ovos-PHAL-plugin-color-scheme-manager GUI color schemes (ovos-shell) ovos-PHAL-plugin-configuration-provider UI to edit mycroft.conf (ovos-shell) ovos-PHAL-plugin-analog-media-devices video/audio capture devices (OCP) Developers In mycroft-core the equivalent to PHAL plugins would usually be shipped as skills or hardcoded. In OVOS it may sometimes be unclear whether we should develop a skill or a plugin; there isn't a one-size-fits-all answer, and in some circumstances it may make sense to create both a plugin and a companion skill Template PHAL plugins do not follow a strict template; they are usually event listeners that perform certain actions and integrate with other components from mycroft_bus_client import Message from ovos_plugin_manager.phal import PHALPlugin class MyPHALPluginValidator: @staticmethod def validate(config=None): \"\"\" this method is called before loading the plugin. If it returns False the plugin is not loaded. This allows a plugin to run platform checks\"\"\" return True class MyPHALPlugin(PHALPlugin): validator = MyPHALPluginValidator def __init__(self, bus=None, config=None): super().__init__(bus=bus, name=\"ovos-PHAL-plugin-NAME\", config=config) # register events for plugin self.bus.on(\"my.event\", self.handle_event) def handle_event(self, message): # TODO plugin stuff self.bus.emit(Message(\"my.event.response\")) def shutdown(self): # cleanly remove any event listeners and perform shutdown actions self.bus.remove(\"my.event\", self.handle_event) super().shutdown() You can find plugin packaging documentation here Admin PHAL AdminPHAL performs the exact same function as PHAL, but plugins it loads will have root privileges. This service is intended for handling any OS-level interactions requiring escalation of privileges. Be very careful when installing Admin plugins and scrutinize them closely NOTE: Because this service runs as root, plugins it loads are responsible for not writing configuration changes which would result in breaking config file permissions.
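Enabling an admin plugin is therefore an explicit configuration step. A sketch of what that can look like in mycroft.conf; the plugin name here is hypothetical, and the exact section layout is an assumption based on the pattern OVOS PHAL plugins document:

{
  "PHAL": {
    "admin": {
      "ovos-PHAL-plugin-example-admin": {
        "enabled": true
      }
    }
  }
}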
Service To use AdminPHAL, create a launcher /usr/libexec/mycroft-systemd-admin-phal import sdnotify from ovos_PHAL.admin import main n = sdnotify.SystemdNotifier() def notify_ready(): n.notify('READY=1') print('Startup of Admin service complete') def notify_stopping(): n.notify('STOPPING=1') print('Stopping Admin service') main(ready_hook=notify_ready, stopping_hook=notify_stopping) and system service /usr/lib/systemd/user/mycroft-admin-phal.service [Unit] Description=Admin PHAL PartOf=mycroft.service After=mycroft-messagebus.service [Service] Type=notify TimeoutStopSec=30 Restart=always User=root ExecStart=/usr/libexec/mycroft-systemd-admin-phal [Install] WantedBy=mycroft.service Plugins AdminPlugins are just like regular PHAL plugins that run with root privileges A plugin needs to identify itself as an admin plugin via its entry point; PHAL will not load Admin plugins and AdminPHAL will not load regular plugins Admin plugins will only load if their configuration contains \"enabled\": true . All admin plugins need to be explicitly enabled You can find plugin packaging documentation here","title":"PHAL"},{"location":"unused/PHAL/#phal","text":"PHAL is our Platform/Hardware Abstraction Layer; it completely replaces the concept of hardcoded \"enclosure\" from mycroft-core Any number of plugins providing functionality can be loaded and validated at runtime; plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the mycroft mark2 plugin PHAL plugins can perform actions such as hardware detection before loading, eg, the mark2 plugin will not load if it does not detect the sj201 hat. This makes plugins safe to install and bundle by default in our base images","title":"PHAL"},{"location":"unused/PHAL/#plugins","text":"Platform/Hardware specific integrations are loaded by PHAL; these plugins can handle all sorts of system activities Plugin Description ovos-PHAL-plugin-alsa volume control ovos-PHAL-plugin-system reboot / shutdown / factory reset ovos-PHAL-plugin-mk1 mycroft mark1 integration ovos-PHAL-plugin-mk2 mycroft mark2 integration ovos-PHAL-plugin-respeaker-2mic respeaker 2mic hat integration ovos-PHAL-plugin-respeaker-4mic respeaker 4mic hat integration ovos-PHAL-plugin-wifi-setup wifi setup (central plugin) ovos-PHAL-plugin-gui-network-client wifi setup (GUI interface) ovos-PHAL-plugin-balena-wifi wifi setup (hotspot) ovos-PHAL-plugin-network-manager wifi setup (network manager) ovos-PHAL-plugin-brightness-control-rpi brightness control ovos-PHAL-plugin-ipgeo automatic geolocation (IP address) ovos-PHAL-plugin-gpsd automatic geolocation (GPS) ovos-PHAL-plugin-dashboard dashboard control (ovos-shell) ovos-PHAL-plugin-notification-widgets system notifications (ovos-shell) ovos-PHAL-plugin-color-scheme-manager GUI color schemes (ovos-shell) ovos-PHAL-plugin-configuration-provider UI to edit mycroft.conf (ovos-shell) ovos-PHAL-plugin-analog-media-devices video/audio capture devices (OCP)","title":"Plugins"},{"location":"unused/PHAL/#developers","text":"In mycroft-core the equivalent to PHAL plugins would usually be shipped as skills or hardcoded. In OVOS it may sometimes be unclear whether we should develop a skill or a plugin; there isn't a one-size-fits-all answer, and in some circumstances it may make sense to create both a plugin and a companion skill","title":"Developers"},{"location":"unused/PHAL/#template","text":"PHAL plugins do not follow a strict template; they are usually event listeners that perform certain actions and integrate with other
components from mycroft_bus_client import Message from ovos_plugin_manager.phal import PHALPlugin class MyPHALPluginValidator: @staticmethod def validate(config=None): \"\"\" this method is called before loading the plugin. If it returns False the plugin is not loaded. This allows a plugin to run platform checks\"\"\" return True class MyPHALPlugin(PHALPlugin): validator = MyPHALPluginValidator def __init__(self, bus=None, config=None): super().__init__(bus=bus, name=\"ovos-PHAL-plugin-NAME\", config=config) # register events for plugin self.bus.on(\"my.event\", self.handle_event) def handle_event(self, message): # TODO plugin stuff self.bus.emit(Message(\"my.event.response\")) def shutdown(self): # cleanly remove any event listeners and perform shutdown actions self.bus.remove(\"my.event\", self.handle_event) super().shutdown() You can find plugin packaging documentation here","title":"Template"},{"location":"unused/PHAL/#admin-phal","text":"AdminPHAL performs the exact same function as PHAL, but plugins it loads will have root privileges. This service is intended for handling any OS-level interactions requiring escalation of privileges. Be very careful when installing Admin plugins and scrutinize them closely NOTE: Because this service runs as root, plugins it loads are responsible for not writing configuration changes which would result in breaking config file permissions.","title":"Admin PHAL"},{"location":"unused/PHAL/#service","text":"To use AdminPHAL, create a launcher /usr/libexec/mycroft-systemd-admin-phal import sdnotify from ovos_PHAL.admin import main n = sdnotify.SystemdNotifier() def notify_ready(): n.notify('READY=1') print('Startup of Admin service complete') def notify_stopping(): n.notify('STOPPING=1') print('Stopping Admin service') main(ready_hook=notify_ready, stopping_hook=notify_stopping) and system service /usr/lib/systemd/user/mycroft-admin-phal.service [Unit] Description=Admin PHAL PartOf=mycroft.service After=mycroft-messagebus.service [Service] Type=notify TimeoutStopSec=30 Restart=always User=root ExecStart=/usr/libexec/mycroft-systemd-admin-phal [Install] WantedBy=mycroft.service","title":"Service"},{"location":"unused/PHAL/#plugins_1","text":"AdminPlugins are just like regular PHAL plugins that run with root privileges A plugin needs to identify itself as an admin plugin via its entry point; PHAL will not load Admin plugins and AdminPHAL will not load regular plugins Admin plugins will only load if their configuration contains \"enabled\": true . All admin plugins need to be explicitly enabled You can find plugin packaging documentation here","title":"Plugins"},{"location":"unused/airplay/","text":"Airplay By default, your OpenVoiceOS device is advertising itself as an Airplay (v1, currently) device on your network. This can be used from the iOS Airplay selection screen if you play some local files, as shown below; Tap / Click the bottom middle Airplay icon on your music player which opens the Airplay devices menu. It should pick up your OpenVoiceOS device automatically from the network. Select the OpenVoiceOS device to re-route your sound output to your OpenVoiceOS device. The Airplay selection menu is also available within other music clients such as the Spotify app.
And if that client also supports metadata over MPRIS your OpenVoiceOS device will show it on its screen as well.","title":"Airplay"},{"location":"unused/airplay/#airplay","text":"By default, your OpenVoiceOS device is advertising itself as an Airplay (v1, currently) device on your network. This can be used from the iOS Airplay selection screen if you play some local files, as shown below; Tap / Click the bottom middle Airplay icon on your music player which opens the Airplay devices menu. It should pick up your OpenVoiceOS device automatically from the network. Select the OpenVoiceOS device to re-route your sound output to your OpenVoiceOS device. The Airplay selection menu is also available within other music clients such as the Spotify app. And if that client also supports metadata over MPRIS your OpenVoiceOS device will show it on its screen as well.","title":"Airplay"},{"location":"unused/arch_apis/","text":"APIs There are several APIs used by OVOS, including for weather inquiries, Wolfram Alpha, and others OVOS provides default keys that are used out of the box, so no further configuration is needed. You can always provide your own API keys for each service, and even add more APIs for OVOS to use. TODO fix link to api config API Configuration","title":"APIs"},{"location":"unused/arch_apis/#apis","text":"There are several APIs used by OVOS, including for weather inquiries, Wolfram Alpha, and others OVOS provides default keys that are used out of the box, so no further configuration is needed. You can always provide your own API keys for each service, and even add more APIs for OVOS to use. TODO fix link to api config API Configuration","title":"APIs"},{"location":"unused/arch_backend/","text":"OpenVoiceOS Backends A backend is a service that provides your device with additional tools to function; these could range from managing your skill settings to configuring certain aspects of your device. The backend is optional and considered to be advanced usage in OVOS. By default, OVOS does not use a backend, and one is totally unnecessary for basic usage. A local backend becomes useful if you have several devices around the house and would like a central place to configure them. A backend can provide: A nice web UI to configure your device; this allows you to configure once and push it to all your devices. Free API services for weather and Wolfram Alpha Collecting data, e.g. uploading ww and utterance audio samples from all your devices Available backends: Offline: no backend, all configuration local. This is the DEFAULT Personal: Self-hosted minimal backend TODO fix link to personal backend Setting Up a Personal Backend","title":"OpenVoiceOS Backends"},{"location":"unused/arch_backend/#openvoiceos-backends","text":"A backend is a service that provides your device with additional tools to function; these could range from managing your skill settings to configuring certain aspects of your device. The backend is optional and considered to be advanced usage in OVOS. By default, OVOS does not use a backend, and one is totally unnecessary for basic usage. A local backend becomes useful if you have several devices around the house and would like a central place to configure them. A backend can provide: A nice web UI to configure your device; this allows you to configure once and push it to all your devices. Free API services for weather and Wolfram Alpha Collecting data, e.g. uploading ww and utterance audio samples from all your devices Available backends: Offline: no backend, all configuration local.
This is the DEFAULT Personal: Self hosted minimal backend TODO fix link to personal backend Setting Up a Personal Backend","title":"OpenVoiceOS Backends"},{"location":"unused/arch_gui/","text":"GUI Architecture Introduction OVOS devices with displays provide skill developers the opportunity to create skills that can be empowered by both voice and screen interaction. The display interaction technology is based on the QML user interface markup language, which gives you complete freedom to create in-depth innovative interactions without boundaries, or provides you with simple templates within the Mycroft GUI framework that allow minimalistic display of text and images based on your skill development specifics and preferences. Framework Mycroft-GUI is an open source visual and display framework for Mycroft running on top of KDE Plasma Technology and built using Kirigami, a lightweight user interface framework for convergent applications which are empowered by Qt. Introduction to QML The QML user interface markup language is a declarative language built on top of Qt's existing strengths, designed to describe the user interface of a program: both what it looks like, and how it behaves. QML provides modules that consist of a sophisticated set of graphical and behavioral building elements. A collection of resources to familiarize you with QML and the Kirigami Framework: Introduction to QML Introduction to Kirigami GUI Extensions OVOS Core supports a GUI Extension framework which allows the GUI service to incorporate additional behaviour for a specific platform. GUI Extensions currently supported: Smartspeaker Extension This extension is responsible for managing the smartspeaker GUI interface behaviour; it supports homescreens and homescreen management. Enabling the smartspeaker GUI extension: Bigscreen Extension This extension is responsible for managing the plasma bigscreen GUI interface behaviour; it supports window management and window behaviour control on specific window managers like Kwin. Enabling the Bigscreen GUI extension: Mobile Extension This extension is responsible for managing the mobile GUI interface behaviour; it supports homescreens and additionally adds support for global page back navigation. Enabling the Mobile GUI extension: Generic Extension This extension provides a generic GUI interface and does not add any additional behaviour; it optionally supports homescreens if the platform or user manually enables it. This extension is enabled by default when no other extension is specified.","title":"GUI Architecture"},{"location":"unused/arch_gui/#gui-architecture","text":"","title":"GUI Architecture"},{"location":"unused/arch_gui/#introduction","text":"OVOS devices with displays provide skill developers the opportunity to create skills that can be empowered by both voice and screen interaction. 
The display interaction technology is based on the QML user interface markup language, which gives you complete freedom to create in-depth innovative interactions without boundaries, or provides you with simple templates within the Mycroft GUI framework that allow minimalistic display of text and images based on your skill development specifics and preferences.","title":"Introduction"},{"location":"unused/arch_gui/#framework","text":"Mycroft-GUI is an open source visual and display framework for Mycroft running on top of KDE Plasma Technology and built using Kirigami, a lightweight user interface framework for convergent applications which are empowered by Qt.","title":"Framework"},{"location":"unused/arch_gui/#introduction-to-qml","text":"The QML user interface markup language is a declarative language built on top of Qt's existing strengths, designed to describe the user interface of a program: both what it looks like, and how it behaves. QML provides modules that consist of a sophisticated set of graphical and behavioral building elements. A collection of resources to familiarize you with QML and the Kirigami Framework: Introduction to QML Introduction to Kirigami","title":"Introduction to QML"},{"location":"unused/arch_gui/#gui-extensions","text":"OVOS Core supports a GUI Extension framework which allows the GUI service to incorporate additional behaviour for a specific platform. GUI Extensions currently supported:","title":"GUI Extensions"},{"location":"unused/arch_gui/#smartspeaker-extension","text":"This extension is responsible for managing the smartspeaker GUI interface behaviour; it supports homescreens and homescreen management. Enabling the smartspeaker GUI extension:","title":"Smartspeaker Extension"},{"location":"unused/arch_gui/#bigscreen-extension","text":"This extension is responsible for managing the plasma bigscreen GUI interface behaviour; it supports window management and window behaviour control on specific window managers like Kwin. Enabling the Bigscreen GUI extension:","title":"Bigscreen Extension"},{"location":"unused/arch_gui/#mobile-extension","text":"This extension is responsible for managing the mobile GUI interface behaviour; it supports homescreens and additionally adds support for global page back navigation. Enabling the Mobile GUI extension:","title":"Mobile Extension"},{"location":"unused/arch_gui/#generic-extension","text":"This extension provides a generic GUI interface and does not add any additional behaviour; it optionally supports homescreens if the platform or user manually enables it. This extension is enabled by default when no other extension is specified.","title":"Generic Extension"},{"location":"unused/arch_phal/","text":"PHAL ** Editors Note ** Lots of cleanup coming here, just placeholder information for now. PHAL is our Platform/Hardware Abstraction Layer; it completely replaces the concept of a hardcoded \"enclosure\" from mycroft-core. Any number of plugins providing functionality can be loaded and validated at runtime; plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the mycroft mark2 plugin. PHAL plugins can perform actions such as hardware detection before loading, e.g., the mark2 plugin will not load if it does not detect the sj201 hat. 
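As a minimal sketch of such a platform check (following the plugin template documented elsewhere in these docs; the class names and the detect_sj201() helper are hypothetical and stand in for a real I2C probe): from ovos_plugin_manager.phal import PHALPlugin class Mark2Validator: @staticmethod def validate(config=None): # return False to skip loading when the expected hardware is absent return detect_sj201() class Mark2Plugin(PHALPlugin): validator = Mark2Validator 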
This makes plugins safe to install and bundle by default in our base images. Plugins Platform/Hardware specific integrations are loaded by PHAL; these plugins can handle all sorts of system activities Plugin Description ovos-PHAL-plugin-alsa volume control ovos-PHAL-plugin-system reboot / shutdown / factory reset ovos-PHAL-plugin-mk1 mycroft mark1 integration ovos-PHAL-plugin-mk2 mycroft mark2 integration ovos-PHAL-plugin-respeaker-2mic respeaker 2mic hat integration ovos-PHAL-plugin-respeaker-4mic respeaker 4mic hat integration ovos-PHAL-plugin-wifi-setup wifi setup (central plugin) ovos-PHAL-plugin-gui-network-client wifi setup (GUI interface) ovos-PHAL-plugin-balena-wifi wifi setup (hotspot) ovos-PHAL-plugin-network-manager wifi setup (network manager) ovos-PHAL-plugin-brightness-control-rpi brightness control ovos-PHAL-plugin-ipgeo automatic geolocation (IP address) ovos-PHAL-plugin-gpsd automatic geolocation (GPS) ovos-PHAL-plugin-dashboard dashboard control (ovos-shell) ovos-PHAL-plugin-notification-widgets system notifications (ovos-shell) ovos-PHAL-plugin-color-scheme-manager GUI color schemes (ovos-shell) ovos-PHAL-plugin-configuration-provider UI to edit mycroft.conf (ovos-shell) ovos-PHAL-plugin-analog-media-devices video/audio capture devices (OCP) Admin PHAL AdminPHAL performs the exact same function as PHAL, but plugins it loads will have root privileges. This service is intended for handling any OS-level interactions requiring escalation of privileges. Be very careful when installing Admin plugins and scrutinize them closely. NOTE: Because this service runs as root, plugins it loads are responsible for not writing configuration changes which would result in breaking config file permissions. Admin Plugins AdminPlugins are just like regular PHAL plugins that run with root privileges. Admin plugins will only load if their configuration contains \"enabled\": true . All admin plugins need to be explicitly enabled. You can find plugin packaging documentation here","title":"PHAL"},{"location":"unused/arch_phal/#phal","text":"** Editors Note ** Lots of cleanup coming here, just placeholder information for now. PHAL is our Platform/Hardware Abstraction Layer; it completely replaces the concept of a hardcoded \"enclosure\" from mycroft-core. Any number of plugins providing functionality can be loaded and validated at runtime; plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the mycroft mark2 plugin. PHAL plugins can perform actions such as hardware detection before loading, e.g., the mark2 plugin will not load if it does not detect the sj201 hat. 
This makes plugins safe to install and bundle by default in our base images","title":"PHAL"},{"location":"unused/arch_phal/#plugins","text":"Platform/Hardware specific integrations are loaded by PHAL; these plugins can handle all sorts of system activities Plugin Description ovos-PHAL-plugin-alsa volume control ovos-PHAL-plugin-system reboot / shutdown / factory reset ovos-PHAL-plugin-mk1 mycroft mark1 integration ovos-PHAL-plugin-mk2 mycroft mark2 integration ovos-PHAL-plugin-respeaker-2mic respeaker 2mic hat integration ovos-PHAL-plugin-respeaker-4mic respeaker 4mic hat integration ovos-PHAL-plugin-wifi-setup wifi setup (central plugin) ovos-PHAL-plugin-gui-network-client wifi setup (GUI interface) ovos-PHAL-plugin-balena-wifi wifi setup (hotspot) ovos-PHAL-plugin-network-manager wifi setup (network manager) ovos-PHAL-plugin-brightness-control-rpi brightness control ovos-PHAL-plugin-ipgeo automatic geolocation (IP address) ovos-PHAL-plugin-gpsd automatic geolocation (GPS) ovos-PHAL-plugin-dashboard dashboard control (ovos-shell) ovos-PHAL-plugin-notification-widgets system notifications (ovos-shell) ovos-PHAL-plugin-color-scheme-manager GUI color schemes (ovos-shell) ovos-PHAL-plugin-configuration-provider UI to edit mycroft.conf (ovos-shell) ovos-PHAL-plugin-analog-media-devices video/audio capture devices (OCP)","title":"Plugins"},{"location":"unused/arch_phal/#admin-phal","text":"AdminPHAL performs the exact same function as PHAL, but plugins it loads will have root privileges. This service is intended for handling any OS-level interactions requiring escalation of privileges. Be very careful when installing Admin plugins and scrutinize them closely. NOTE: Because this service runs as root, plugins it loads are responsible for not writing configuration changes which would result in breaking config file permissions.","title":"Admin PHAL"},{"location":"unused/arch_phal/#admin-plugins","text":"AdminPlugins are just like regular PHAL plugins that run with root privileges. Admin plugins will only load if their configuration contains \"enabled\": true . All admin plugins need to be explicitly enabled. You can find plugin packaging documentation here","title":"Admin Plugins"},{"location":"unused/arch_plugins/","text":"Plugin Architecture ** Coming Soon **","title":"Plugin Architecture"},{"location":"unused/arch_plugins/#plugin-architecture","text":"** Coming Soon **","title":"Plugin Architecture"},{"location":"unused/arch_services/","text":"OpenVoiceOS Services Editors Note Major revisions coming here, mostly placeholder information Skills Service The skills service is responsible for loading skills and intent parsers All user queries are handled by the skills service; you can think of it as OVOS's brain More Information Speech Service Speech Client The speech client is responsible for loading STT, VAD and Wake Word plugins Speech is transcribed into text and forwarded to the skills service Hotwords OVOS allows you to load any number of hot words in parallel and trigger different actions when they are detected. Each hotword can do one or more of the following: trigger listening, also called a wake_word play a sound emit a bus event take ovos-core out of sleep mode, also called a wakeup_word or standup_word take ovos-core out of recording mode, also called a stop_word To add a new hotword, add its configuration under the \"hotwords\" section. By default, all hotwords are disabled unless you set \"listen\": true . 
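For example, a minimal hotwords entry in mycroft.conf might look like this (the plugin name, phonemes and threshold are illustrative, borrowed from the device preferences example elsewhere in these docs): \"hotwords\": { \"hey_mycroft\": { \"module\": \"ovos-ww-plugin-pocketsphinx\", \"phonemes\": \"HH EY . M AY K R AO F T\", \"threshold\": 1e-90, \"listen\": true } } 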
Under the \"listener\" setting a main wake word and stand up word are defined; those will be automatically enabled unless you set \"active\": false . This is usually not desired unless you are looking to completely disable wake word usage STT Two STT plugins may be loaded at once; if the primary plugin fails for some reason the second plugin will be used. This allows you to have a lower accuracy offline model as a fallback to account for internet outages; this ensures your device never becomes fully unusable Listener You can modify microphone settings and enable additional features under the listener section such as wake word / utterance recording / uploading VAD Voice Activity Detection is used by the speech client to determine when a user has stopped speaking; this indicates the voice command is ready to be executed. Several VAD strategies are supported","title":"OpenVoiceOS Services"},{"location":"unused/arch_services/#openvoiceos-services","text":"Editors Note Major revisions coming here, mostly placeholder information","title":"OpenVoiceOS Services"},{"location":"unused/arch_services/#skills-service","text":"The skills service is responsible for loading skills and intent parsers All user queries are handled by the skills service; you can think of it as OVOS's brain More Information","title":"Skills Service"},{"location":"unused/arch_services/#speech-service","text":"","title":"Speech Service"},{"location":"unused/arch_services/#speech-client","text":"The speech client is responsible for loading STT, VAD and Wake Word plugins Speech is transcribed into text and forwarded to the skills service","title":"Speech Client"},{"location":"unused/arch_services/#hotwords","text":"OVOS allows you to load any number of hot words in parallel and trigger different actions when they are detected. Each hotword can do one or more of the following: trigger listening, also called a wake_word play a sound emit a bus event take ovos-core out of sleep mode, also called a wakeup_word or standup_word take ovos-core out of recording mode, also called a stop_word To add a new hotword, add its configuration under the \"hotwords\" section. By default, all hotwords are disabled unless you set \"listen\": true . Under the \"listener\" setting a main wake word and stand up word are defined; those will be automatically enabled unless you set \"active\": false . This is usually not desired unless you are looking to completely disable wake word usage","title":"Hotwords"},{"location":"unused/arch_services/#stt","text":"Two STT plugins may be loaded at once; if the primary plugin fails for some reason the second plugin will be used. This allows you to have a lower accuracy offline model as a fallback to account for internet outages; this ensures your device never becomes fully unusable","title":"STT"},{"location":"unused/arch_services/#listener","text":"You can modify microphone settings and enable additional features under the listener section such as wake word / utterance recording / uploading","title":"Listener"},{"location":"unused/arch_services/#vad","text":"Voice Activity Detection is used by the speech client to determine when a user has stopped speaking; this indicates the voice command is ready to be executed. Several VAD strategies are supported","title":"VAD"},{"location":"unused/arch_skills/","text":"OVOS Skills What can a Skill do? Skills give OVOS the ability to perform a variety of functions. They can be installed or removed by the user, and can be easily updated to expand functionality. 
To get a good idea of what skills to build, let\u2019s talk about the best use cases for a voice assistant, and what types of things OVOS can do. OVOS can run on a variety of platforms from the Linux Desktop to SBCs like the raspberry pi. Different devices will have slightly different use cases. Devices in the home are generally located in the living room or kitchen and are ideal for listening to the news, playing music, general information, using timers while cooking, checking the weather, and other similar activities that are easily accomplished hands free. Basic functions We cover a lot of the basics with our Default Skills, things like Timers, Alarms, Weather, Time and Date, and more. Information We also call this General Question and Answer, and it covers all of those factual questions someone might think to ask a voice assistant. Questions like \u201cwho was the 32nd President of the United States?\u201d, or \u201chow tall is the Eiffel Tower?\u201d Although the Default Skills cover a great deal of questions, there is room for more. There are many topics that could use a specific skill such as Science, Academics, Movie Info, TV info, and Music info, etc. Media One of the biggest use cases for Smart Speakers is playing media. The reason media playback is so popular is that it makes playing a song so easy; all you have to do is say \u201cHey Mycroft, play the Beatles,\u201d and you can be enjoying music without having to reach for a phone or remote. In addition to listening to music, there are skills that handle videos as well. News Much like listening to music, getting the latest news with a simple voice interaction is extremely convenient. OVOS supports multiple news feeds, and has the ability to support multiple news skills. Smart Home Another popular use case for Voice Assistants is to control Smart Home and IoT products. Within the mycroft ecosystem there are skills for Home Assistant, Wink IoT, Lifx and more, but there are many products that we do not have a skill for yet. The open source community has been enthusiastically expanding OVOS's ability to voice control all kinds of smart home products. Games Voice games are becoming more and more popular, especially those that allow multiple users to play together. Trivia games are some of the most popular types of games to develop for voice assistants. There are several games already available for OVOS. There are native voice adventure games, ports of the popular text adventure games from infocom, a Crystal Ball game, a Number Guessing game and much more! OpenVoiceOS Standard Skills Standard Skills Usage Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills; more about that at a later stage. Date / Time skill You can ask your device what time or date it is just in case you lost your watch. Hey Mycroft, what time is it? Hey Mycroft, what is the date? Setting an alarm Having your OpenVoiceOS device know and show the time is great, but it is even better to be woken up in the morning by your device. Hey Mycroft, set an alarm for 8 AM. Setting of timers Sometimes you are just busy but want to be alerted after a certain time. For that you can use timers. Hey Mycroft, set a timer for 5 minutes. You can always set more timers and even name them, so you know which timer is for what. Hey, Mycroft, set another timer called rice cooking for 7 minutes. Asking the weather You can ask your device what the weather is or would be at any given time or place. 
Hey Mycroft, what is the weather like today? The weather skill actually uses multiple pages indicated by the small dots at the bottom of the screen. File Browser The file browser allows you to browse the filesystem on your device and any connected media; you can view images and play music and videos. KDEConnect integration allows you to share files with your mobile devices GUI Framework Mycroft-GUI is an open source visual and display framework for Mycroft running on top of KDE Plasma Technology and built using Kirigami, a lightweight user interface framework for convergent applications which are empowered by Qt. OVOS uses the standard mycroft-gui framework; you can find the official documentation here","title":"OVOS Skills"},{"location":"unused/arch_skills/#ovos-skills","text":"","title":"OVOS Skills"},{"location":"unused/arch_skills/#what-can-a-skill-do","text":"Skills give OVOS the ability to perform a variety of functions. They can be installed or removed by the user, and can be easily updated to expand functionality. To get a good idea of what skills to build, let\u2019s talk about the best use cases for a voice assistant, and what types of things OVOS can do. OVOS can run on a variety of platforms from the Linux Desktop to SBCs like the raspberry pi. Different devices will have slightly different use cases. Devices in the home are generally located in the living room or kitchen and are ideal for listening to the news, playing music, general information, using timers while cooking, checking the weather, and other similar activities that are easily accomplished hands free.","title":"What can a Skill do?"},{"location":"unused/arch_skills/#basic-functions","text":"We cover a lot of the basics with our Default Skills, things like Timers, Alarms, Weather, Time and Date, and more.","title":"Basic functions"},{"location":"unused/arch_skills/#information","text":"We also call this General Question and Answer, and it covers all of those factual questions someone might think to ask a voice assistant. Questions like \u201cwho was the 32nd President of the United States?\u201d, or \u201chow tall is the Eiffel Tower?\u201d Although the Default Skills cover a great deal of questions, there is room for more. There are many topics that could use a specific skill such as Science, Academics, Movie Info, TV info, and Music info, etc.","title":"Information"},{"location":"unused/arch_skills/#media","text":"One of the biggest use cases for Smart Speakers is playing media. The reason media playback is so popular is that it makes playing a song so easy; all you have to do is say \u201cHey Mycroft, play the Beatles,\u201d and you can be enjoying music without having to reach for a phone or remote. In addition to listening to music, there are skills that handle videos as well.","title":"Media"},{"location":"unused/arch_skills/#news","text":"Much like listening to music, getting the latest news with a simple voice interaction is extremely convenient. OVOS supports multiple news feeds, and has the ability to support multiple news skills.","title":"News"},{"location":"unused/arch_skills/#smart-home","text":"Another popular use case for Voice Assistants is to control Smart Home and IoT products. Within the mycroft ecosystem there are skills for Home Assistant, Wink IoT, Lifx and more, but there are many products that we do not have a skill for yet. 
The open source community has been enthusiastically expanding OVOS's ability to voice control all kinds of smart home products.","title":"Smart Home"},{"location":"unused/arch_skills/#games","text":"Voice games are becoming more and more popular, especially those that allow multiple users to play together. Trivia games are some of the most popular types of games to develop for voice assistants. There are several games already available for OVOS. There are native voice adventure games, ports of the popular text adventure games from infocom, a Crystal Ball game, a Number Guessing game and much more!","title":"Games"},{"location":"unused/arch_skills/#openvoiceos-standard-skills","text":"","title":"OpenVoiceOS Standard Skills"},{"location":"unused/arch_skills/#standard-skills-usage","text":"Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills; more about that at a later stage.","title":"Standard Skills Usage"},{"location":"unused/arch_skills/#date-time-skill","text":"You can ask your device what time or date it is just in case you lost your watch. Hey Mycroft, what time is it? Hey Mycroft, what is the date?","title":"Date / Time skill"},{"location":"unused/arch_skills/#setting-an-alarm","text":"Having your OpenVoiceOS device know and show the time is great, but it is even better to be woken up in the morning by your device. Hey Mycroft, set an alarm for 8 AM.","title":"Setting an alarm"},{"location":"unused/arch_skills/#setting-of-timers","text":"Sometimes you are just busy but want to be alerted after a certain time. For that you can use timers. Hey Mycroft, set a timer for 5 minutes. You can always set more timers and even name them, so you know which timer is for what. Hey, Mycroft, set another timer called rice cooking for 7 minutes.","title":"Setting of timers"},{"location":"unused/arch_skills/#asking-the-weather","text":"You can ask your device what the weather is or would be at any given time or place. Hey Mycroft, what is the weather like today? The weather skill actually uses multiple pages indicated by the small dots at the bottom of the screen.","title":"Asking the weather"},{"location":"unused/arch_skills/#file-browser","text":"The file browser allows you to browse the filesystem on your device and any connected media; you can view images and play music and videos. KDEConnect integration allows you to share files with your mobile devices","title":"File Browser"},{"location":"unused/arch_skills/#gui-framework","text":"Mycroft-GUI is an open source visual and display framework for Mycroft running on top of KDE Plasma Technology and built using Kirigami, a lightweight user interface framework for convergent applications which are empowered by Qt. 
OVOS uses the standard mycroft-gui framework; you can find the official documentation here","title":"GUI Framework"},{"location":"unused/audio_plugins/","text":"Audio Plugins Audio plugins are responsible for handling playback of media, like music and podcasts If mycroft-gui is available these plugins will rarely be used unless ovos is explicitly configured to do so List of Audio plugins Plugin Description ovos-ocp-audio-plugin framework + compatibility layer ovos-audio-plugin-simple sox / aplay / paplay / mpg123 ovos-vlc-plugin vlc audio backend ovos-mplayer-plugin mplayer audio backend","title":"Audio Plugins"},{"location":"unused/audio_plugins/#audio-plugins","text":"Audio plugins are responsible for handling playback of media, like music and podcasts If mycroft-gui is available these plugins will rarely be used unless ovos is explicitly configured to do so","title":"Audio Plugins"},{"location":"unused/audio_plugins/#list-of-audio-plugins","text":"Plugin Description ovos-ocp-audio-plugin framework + compatibility layer ovos-audio-plugin-simple sox / aplay / paplay / mpg123 ovos-vlc-plugin vlc audio backend ovos-mplayer-plugin mplayer audio backend","title":"List of Audio plugins"},{"location":"unused/audio_service/","text":"Audio Service The audio service is responsible for loading TTS and Audio plugins All audio playback is handled by this service Native playback Usually playback is triggered by some originating bus message, e.g. \"recognizer_loop:utterance\" ; this message contains metadata that is used to determine if playback should happen. message.context may contain a source and destination; playback is only triggered if a message destination is a native_source or if missing (considered a broadcast). This separation of native sources allows remote clients such as an android app to interact with OVOS without the actual device where ovos-core is running repeating all TTS and music playback out loud You can learn more about message targeting here By default, only utterances originating from the speech client and ovos cli are considered native. For legacy reasons, the names for the ovos cli and speech client are \"debug_cli\" and \"audio\" respectively TTS Two TTS plugins may be loaded at once; if the primary plugin fails for some reason the second plugin will be used. This allows you to have a lower quality offline voice as a fallback to account for internet outages; this ensures your device can always give you feedback \"tts\": { \"pulse_duck\": false, \"module\": \"ovos-tts-plugin-mimic2\", \"fallback_module\": \"ovos-tts-plugin-mimic\" }, Audio You can enable additional Audio plugins and define the native sources described above under the \"Audio\" section of mycroft.conf ovos-core uses OCP natively for media playback; you can learn more about OCP here OCP will decide when to call the Audio service and what plugin to use; the main use case is for headless setups without a GUI NOTE: mycroft-core has a \"default-backend\" config option; in ovos-core this option has been deprecated and is always OCP. 
\"Audio\": { \"native_sources\": [\"debug_cli\", \"audio\"], \"backends\": { \"OCP\": { \"type\": \"ovos_common_play\", \"active\": true }, \"simple\": { \"type\": \"ovos_audio_simple\", \"active\": true }, \"vlc\": { \"type\": \"ovos_vlc\", \"active\": true } } },","title":"Audio Service"},{"location":"unused/audio_service/#audio-service","text":"The audio service is responsible for loading TTS and Audio plugins All audio playback is handled by this service","title":"Audio Service"},{"location":"unused/audio_service/#native-playback","text":"Usually playback is triggered by some originating bus message, e.g. \"recognizer_loop:utterance\" ; this message contains metadata that is used to determine if playback should happen. message.context may contain a source and destination; playback is only triggered if a message destination is a native_source or if missing (considered a broadcast). This separation of native sources allows remote clients such as an android app to interact with OVOS without the actual device where ovos-core is running repeating all TTS and music playback out loud You can learn more about message targeting here By default, only utterances originating from the speech client and ovos cli are considered native. For legacy reasons, the names for the ovos cli and speech client are \"debug_cli\" and \"audio\" respectively","title":"Native playback"},{"location":"unused/audio_service/#tts","text":"Two TTS plugins may be loaded at once; if the primary plugin fails for some reason the second plugin will be used. This allows you to have a lower quality offline voice as a fallback to account for internet outages; this ensures your device can always give you feedback \"tts\": { \"pulse_duck\": false, \"module\": \"ovos-tts-plugin-mimic2\", \"fallback_module\": \"ovos-tts-plugin-mimic\" },","title":"TTS"},{"location":"unused/audio_service/#audio","text":"You can enable additional Audio plugins and define the native sources described above under the \"Audio\" section of mycroft.conf ovos-core uses OCP natively for media playback; you can learn more about OCP here OCP will decide when to call the Audio service and what plugin to use; the main use case is for headless setups without a GUI NOTE: mycroft-core has a \"default-backend\" config option; in ovos-core this option has been deprecated and is always OCP. \"Audio\": { \"native_sources\": [\"debug_cli\", \"audio\"], \"backends\": { \"OCP\": { \"type\": \"ovos_common_play\", \"active\": true }, \"simple\": { \"type\": \"ovos_audio_simple\", \"active\": true }, \"vlc\": { \"type\": \"ovos_vlc\", \"active\": true } } },","title":"Audio"},{"location":"unused/backend/","text":"Available Backend Services Supported Backends ovos-core supports multiple backends under a single unified interface Personal backend - self hosted Selene - https://api.mycroft.ai OpenVoiceOS API Service - https://api.openvoiceos.com Offline - support for setting your own api keys and query services directly Developers do not need to worry about backend details in their applications and skills Identity Information A unique uuid and pairing information generated by registering with Home is stored in: ~/.config/mycroft/identity/identity2.json <-- DO NOT SHARE THIS WITH OTHERS! 
This file uniquely identifies your device and should be kept safe STT Plugin a companion stt plugin is available to use a backend as a remote STT provider edit your configuration to use ovos-stt-plugin-selene { \"stt\": { \"module\": \"ovos-stt-plugin-selene\" } } source code Offline Backend OVOS by default runs without a backend; in this case you will need to configure API keys manually This can be done with OCA or by editing mycroft.conf edit your configuration to use the offline backend { \"server\": { \"backend_type\": \"offline\" } } Selene The official mycroft home backend is called selene; users need to create an account and pair devices with the mycroft servers. This backend is not considered optional by MycroftAI but is not used by OVOS unless explicitly enabled Selene is AGPL licensed: - backend source code - frontend source code edit your configuration to use the selene backend { \"server\": { \"backend_type\": \"selene\", \"url\": \"https://api.mycroft.ai\", \"version\": \"v1\", \"update\": true, \"metrics\": true, \"sync_skill_settings\": true } } Personal Backend Personal backend is a reverse engineered alternative to selene that predates it. It provides the same functionality for devices and packs some extra options It is not intended to serve different users or thousands of devices; there are no user accounts! This is currently the only way to run a vanilla mycroft-core device offline edit your configuration to use your own personal backend instance { \"server\": { \"backend_type\": \"personal\", \"url\": \"http://0.0.0.0:6712\", \"version\": \"v1\", \"update\": true, \"metrics\": true, \"sync_skill_settings\": true } } source code OVOS API Service OVOS Api Service is not a full backend; it is a set of free proxy services hosted by the OVOS Team for usage in default skills. Device management functionality and user accounts do not exist; offline mode will be used for these APIs edit your configuration to use the OVOS backend { \"server\": { \"backend_type\": \"ovos\", \"url\": \"https://api.openvoiceos.com\" } } source code","title":"Available Backend Services"},{"location":"unused/backend/#available-backend-services","text":"","title":"Available Backend Services"},{"location":"unused/backend/#supported-backends","text":"ovos-core supports multiple backends under a single unified interface Personal backend - self hosted Selene - https://api.mycroft.ai OpenVoiceOS API Service - https://api.openvoiceos.com Offline - support for setting your own api keys and query services directly Developers do not need to worry about backend details in their applications and skills","title":"Supported Backends"},{"location":"unused/backend/#identity-information","text":"A unique uuid and pairing information generated by registering with Home is stored in: ~/.config/mycroft/identity/identity2.json <-- DO NOT SHARE THIS WITH OTHERS! 
This file uniquely identifies your device and should be kept safe","title":"Identity Information"},{"location":"unused/backend/#stt-plugin","text":"a companion stt plugin is available to use a backend as a remote STT provider edit your configuration to use ovos-stt-plugin-selene { \"stt\": { \"module\": \"ovos-stt-plugin-selene\" } } source code","title":"STT Plugin"},{"location":"unused/backend/#offline-backend","text":"OVOS by default runs without a backend; in this case you will need to configure API keys manually This can be done with OCA or by editing mycroft.conf edit your configuration to use the offline backend { \"server\": { \"backend_type\": \"offline\" } }","title":"Offline Backend"},{"location":"unused/backend/#selene","text":"The official mycroft home backend is called selene; users need to create an account and pair devices with the mycroft servers. This backend is not considered optional by MycroftAI but is not used by OVOS unless explicitly enabled Selene is AGPL licensed: - backend source code - frontend source code edit your configuration to use the selene backend { \"server\": { \"backend_type\": \"selene\", \"url\": \"https://api.mycroft.ai\", \"version\": \"v1\", \"update\": true, \"metrics\": true, \"sync_skill_settings\": true } }","title":"Selene"},{"location":"unused/backend/#personal-backend","text":"Personal backend is a reverse engineered alternative to selene that predates it. It provides the same functionality for devices and packs some extra options It is not intended to serve different users or thousands of devices; there are no user accounts! This is currently the only way to run a vanilla mycroft-core device offline edit your configuration to use your own personal backend instance { \"server\": { \"backend_type\": \"personal\", \"url\": \"http://0.0.0.0:6712\", \"version\": \"v1\", \"update\": true, \"metrics\": true, \"sync_skill_settings\": true } } source code","title":"Personal Backend"},{"location":"unused/backend/#ovos-api-service","text":"OVOS Api Service is not a full backend; it is a set of free proxy services hosted by the OVOS Team for usage in default skills. Device management functionality and user accounts do not exist; offline mode will be used for these APIs edit your configuration to use the OVOS backend { \"server\": { \"backend_type\": \"ovos\", \"url\": \"https://api.openvoiceos.com\" } } source code","title":"OVOS API Service"},{"location":"unused/backend_client/","text":"OVOS Backend Client Python client library for interaction with several supported backends under a single unified interface Personal backend - self hosted Selene - https://api.mycroft.ai OpenVoiceOS API Service - https://api.openvoiceos.com Offline - support for setting your own api keys and query services directly Backend Overview API Offline Personal Selene OVOS Admin yes [1] yes no no Device yes [2] yes yes yes [4] Metrics yes [2] yes yes yes [4] Dataset yes [2] yes yes yes [4] OAuth yes [2] yes yes yes [4] Wolfram yes [3] yes yes yes Geolocate yes yes yes yes STT yes [3] yes yes yes Weather yes [3] yes yes yes Email yes [3] yes yes yes [1] will update user level mycroft.conf [2] shared json database with personal backend for UI compat [3] needs additional configuration (e.g. 
credentials) [4] uses offline_backend functionality Geolocation from ovos_backend_client.api import GeolocationApi geo = GeolocationApi() data = geo.get_geolocation(\"Lisbon Portugal\") # {'city': 'Lisboa', # 'country': 'Portugal', # 'latitude': 38.7077507, # 'longitude': -9.1365919, # 'timezone': 'Europe/Lisbon'} Remote Settings To interact with skill settings on selene from ovos_backend_client.settings import RemoteSkillSettings # in ovos-core skill_id is deterministic and safe s = RemoteSkillSettings(\"skill.author\") # in mycroft-core please ensure a valid remote_id # in MycroftSkill class you can use # remote_id = self.settings_meta.skill_gid # s = RemoteSkillSettings(\"skill.author\", remote_id=\"@|whatever_msm_decided\") s.download() s.settings[\"existing_value\"] = True s.settings[\"new_value\"] = \"will NOT show up in UI\" s.upload() # auto generate new settings meta for all new values before uploading s.settings[\"new_value\"] = \"will show up in UI\" s.generate_meta() # now \"new_value\" is in meta s.upload() Selene Cloud By hijacking skill settings we allow storing arbitrary data in selene and using it across devices and skills from ovos_backend_client.cloud import SeleneCloud cloud = SeleneCloud() cloud.add_entry(\"test\", {\"secret\": \"NOT ENCRYPTED MAN\"}) data = cloud.get_entry(\"test\") an encrypted version is also supported if you don't trust selene! from ovos_backend_client.cloud import SecretSeleneCloud k = \"D8fmXEP5VqzVw2HE\" # you need this to read back the data cloud = SecretSeleneCloud(k) cloud.add_entry(\"test\", {\"secret\": \"secret data, selene cant read this\"}) data = cloud.get_entry(\"test\") OAuth Retrieving the tokens in a skill does not depend on the selected backend; the mechanism to register a token is backend specific First you need to authorize the application; this can be done with ovos-config-assistant if running offline or ovos-backend-manager if using personal backend If using selene there is no automated process to add a token; you need to contact support@mycroft.ai from ovos_backend_client.api import OAuthApi, BackendType # api = OAuthApi() # determine oauth backend from mycroft.conf api = OAuthApi(backend_type=BackendType.OFFLINE) # explicitly use ovos-backend-manager oauth token_json = api.get_oauth_token(\"spotify\") OpenWeatherMap Proxy from ovos_backend_client.api import OpenWeatherMapApi owm = OpenWeatherMapApi() data = owm.get_weather() # dict - see api docs from owm onecall api Wolfram Alpha proxy from ovos_backend_client.api import WolframAlphaApi wolf = WolframAlphaApi() answer = wolf.spoken(\"what is the speed of light\") # The speed of light has a value of about 300 million meters per second data = wolf.full_results(\"2+2\") # dict - see api docs from wolfram STT a companion stt plugin is available - ovos-stt-plugin-selene Admin Api (personal backend only!) 
Since the local backend does not provide a web UI, an admin API can be used to manage your devices from ovos_backend_client.api import AdminApi admin = AdminApi(\"secret_admin_key\") uuid = \"...\" # check identity2.json in the device you want to manage # manually pair a device identity_json = admin.pair(uuid) # set device info info = {\"opt_in\": True, \"name\": \"my_device\", \"device_location\": \"kitchen\", \"email\": \"notifications@me.com\", \"isolated_skills\": False, \"lang\": \"en-us\"} admin.set_device_info(uuid, info) # set device preferences prefs = {\"time_format\": \"full\", \"date_format\": \"DMY\", \"system_unit\": \"metric\", \"lang\": \"en-us\", \"wake_word\": \"hey_mycroft\", \"ww_config\": {\"phonemes\": \"HH EY . M AY K R AO F T\", \"module\": \"ovos-ww-plugin-pocketsphinx\", \"threshold\": 1e-90}, \"tts_module\": \"ovos-tts-plugin-mimic\", \"tts_config\": {\"voice\": \"ap\"}} admin.set_device_prefs(uuid, prefs) # set location data loc = { \"city\": { \"code\": \"Lawrence\", \"name\": \"Lawrence\", \"state\": { \"code\": \"KS\", \"name\": \"Kansas\", \"country\": { \"code\": \"US\", \"name\": \"United States\" } } }, \"coordinate\": { \"latitude\": 38.971669, \"longitude\": -95.23525 }, \"timezone\": { \"code\": \"America/Chicago\", \"name\": \"Central Standard Time\", \"dstOffset\": 3600000, \"offset\": -21600000 } } admin.set_device_location(uuid, loc)","title":"OVOS Backend Client"},{"location":"unused/backend_client/#ovos-backend-client","text":"Python client library for interaction with several supported backends under a single unified interface Personal backend - self hosted Selene - https://api.mycroft.ai OpenVoiceOS API Service - https://api.openvoiceos.com Offline - support for setting your own api keys and query services directly","title":"OVOS Backend Client"},{"location":"unused/backend_client/#backend-overview","text":"API Offline Personal Selene OVOS Admin yes [1] yes no no Device yes [2] yes yes yes [4] Metrics yes [2] yes yes yes [4] Dataset yes [2] yes yes yes [4] OAuth yes [2] yes yes yes [4] Wolfram yes [3] yes yes yes Geolocate yes yes yes yes STT yes [3] yes yes yes Weather yes [3] yes yes yes Email yes [3] yes yes yes [1] will update user level mycroft.conf [2] shared json database with personal backend for UI compat [3] needs additional configuration (e.g. 
credentials) [4] uses offline_backend functionality","title":"Backend Overview"},{"location":"unused/backend_client/#geolocation","text":"from ovos_backend_client.api import GeolocationApi geo = GeolocationApi() data = geo.get_geolocation(\"Lisbon Portugal\") # {'city': 'Lisboa', # 'country': 'Portugal', # 'latitude': 38.7077507, # 'longitude': -9.1365919, # 'timezone': 'Europe/Lisbon'}","title":"Geolocation"},{"location":"unused/backend_client/#remote-settings","text":"To interact with skill settings on selene from ovos_backend_client.settings import RemoteSkillSettings # in ovos-core skill_id is deterministic and safe s = RemoteSkillSettings(\"skill.author\") # in mycroft-core please ensure a valid remote_id # in MycroftSkill class you can use # remote_id = self.settings_meta.skill_gid # s = RemoteSkillSettings(\"skill.author\", remote_id=\"@|whatever_msm_decided\") s.download() s.settings[\"existing_value\"] = True s.settings[\"new_value\"] = \"will NOT show up in UI\" s.upload() # auto generate new settings meta for all new values before uploading s.settings[\"new_value\"] = \"will show up in UI\" s.generate_meta() # now \"new_value\" is in meta s.upload()","title":"Remote Settings"},{"location":"unused/backend_client/#selene-cloud","text":"By hijacking skill settings we allow storing arbitrary data in selene and using it across devices and skills from ovos_backend_client.cloud import SeleneCloud cloud = SeleneCloud() cloud.add_entry(\"test\", {\"secret\": \"NOT ENCRYPTED MAN\"}) data = cloud.get_entry(\"test\") an encrypted version is also supported if you don't trust selene! from ovos_backend_client.cloud import SecretSeleneCloud k = \"D8fmXEP5VqzVw2HE\" # you need this to read back the data cloud = SecretSeleneCloud(k) cloud.add_entry(\"test\", {\"secret\": \"secret data, selene cant read this\"}) data = cloud.get_entry(\"test\")","title":"Selene Cloud"},{"location":"unused/backend_client/#oauth","text":"Retrieving the tokens in a skill does not depend on the selected backend; the mechanism to register a token is backend specific First you need to authorize the application; this can be done with ovos-config-assistant if running offline or ovos-backend-manager if using personal backend If using selene there is no automated process to add a token; you need to contact support@mycroft.ai from ovos_backend_client.api import OAuthApi, BackendType # api = OAuthApi() # determine oauth backend from mycroft.conf api = OAuthApi(backend_type=BackendType.OFFLINE) # explicitly use ovos-backend-manager oauth token_json = api.get_oauth_token(\"spotify\")","title":"OAuth"},{"location":"unused/backend_client/#openweathermap-proxy","text":"from ovos_backend_client.api import OpenWeatherMapApi owm = OpenWeatherMapApi() data = owm.get_weather() # dict - see api docs from owm onecall api","title":"OpenWeatherMap Proxy"},{"location":"unused/backend_client/#wolfram-alpha-proxy","text":"from ovos_backend_client.api import WolframAlphaApi wolf = WolframAlphaApi() answer = wolf.spoken(\"what is the speed of light\") # The speed of light has a value of about 300 million meters per second data = wolf.full_results(\"2+2\") # dict - see api docs from wolfram","title":"Wolfram Alpha proxy"},{"location":"unused/backend_client/#stt","text":"a companion stt plugin is available - ovos-stt-plugin-selene","title":"STT"},{"location":"unused/backend_client/#admin-api-personal-backend-only","text":"Since the local backend does not provide a web UI, an admin API can be used to manage your devices from ovos_backend_client.api import 
AdminApi admin = AdminApi(\"secret_admin_key\") uuid = \"...\" # check identity2.json in the device you want to manage # manually pair a device identity_json = admin.pair(uuid) # set device info info = {\"opt_in\": True, \"name\": \"my_device\", \"device_location\": \"kitchen\", \"email\": \"notifications@me.com\", \"isolated_skills\": False, \"lang\": \"en-us\"} admin.set_device_info(uuid, info) # set device preferences prefs = {\"time_format\": \"full\", \"date_format\": \"DMY\", \"system_unit\": \"metric\", \"lang\": \"en-us\", \"wake_word\": \"hey_mycroft\", \"ww_config\": {\"phonemes\": \"HH EY . M AY K R AO F T\", \"module\": \"ovos-ww-plugin-pocketsphinx\", \"threshold\": 1e-90}, \"tts_module\": \"ovos-tts-plugin-mimic\", \"tts_config\": {\"voice\": \"ap\"}} admin.set_device_prefs(uuid, prefs) # set location data loc = { \"city\": { \"code\": \"Lawrence\", \"name\": \"Lawrence\", \"state\": { \"code\": \"KS\", \"name\": \"Kansas\", \"country\": { \"code\": \"US\", \"name\": \"United States\" } } }, \"coordinate\": { \"latitude\": 38.971669, \"longitude\": -95.23525 }, \"timezone\": { \"code\": \"America/Chicago\", \"name\": \"Central Standard Time\", \"dstOffset\": 3600000, \"offset\": -21600000 } } admin.set_device_location(uuid, loc)","title":"Admin Api (personal backend only!)"},{"location":"unused/backend_manager/","text":"","title":"Backend manager"},{"location":"unused/bigscreen/","text":"Introduction: OpenVoiceOS GUI supports various Skills and PHAL plugins that share a voice application interface with Plasma Bigscreen. In order to enable key navigation on Plasma Bigscreen and Media Centers, the user needs to export an environment variable. Exporting the Environment Variable: In order to enable key navigation on Plasma Bigscreen and Media Centers, the user needs to export the environment variable QT_FILE_SELECTORS=mediacenter . This can be done by executing the following command in the terminal: export QT_FILE_SELECTORS=mediacenter This environment variable is enabled by default and added to the Plasma Bigscreen environment. 
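For a persistent setup outside of Plasma Bigscreen images, a one-line sketch (the profile.d path is just the convention suggested in the next sentence): echo 'export QT_FILE_SELECTORS=mediacenter' | sudo tee /etc/profile.d/99-ovos-media-center.sh 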
To create your own media center environment, store the variable in /etc/environment or /etc/profile.d/99-ovos-media-center.sh Exporting the environment variable QT_FILE_SELECTORS=mediacenter is a necessary step to enable key navigation on Plasma Bigscreen and Media Centers for the Open Voice OS project GUI. With this in place, the user can enjoy seamless key navigation while using the Skills and PHAL plugins on their Plasma Bigscreen and Media Centers.","title":"Bigscreen"},{"location":"unused/bigscreen/#introduction","text":"OpenVoiceOS GUI supports various Skills and PHAL plugins that share a voice application interface with Plasma Bigscreen. In order to enable key navigation on Plasma Bigscreen and Media Centers, the user needs to export an environment variable.","title":"Introduction:"},{"location":"unused/bigscreen/#exporting-the-environment-variable","text":"In order to enable key navigation on Plasma Bigscreen and Media Centers, the user needs to export the environment variable QT_FILE_SELECTORS=mediacenter . This can be done by executing the following command in the terminal: export QT_FILE_SELECTORS=mediacenter This environment variable is enabled by default and added to the Plasma Bigscreen environment. To create your own media center environment, store the variable in /etc/environment or /etc/profile.d/99-ovos-media-center.sh Exporting the environment variable QT_FILE_SELECTORS=mediacenter is a necessary step to enable key navigation on Plasma Bigscreen and Media Centers for the Open Voice OS project GUI. With this in place, the user can enjoy seamless key navigation while using the Skills and PHAL plugins on their Plasma Bigscreen and Media Centers.","title":"Exporting the Environment Variable:"},{"location":"unused/btspeaker/","text":"Bluetooth speaker The buildroot edition of OpenVoiceOS by default also acts as a Bluetooth speaker. You can find it from any (mobile) device as discoverable within the Bluetooth pairing menu. You can pair with it and use your OpenVoiceOS device as any other Bluetooth speaker you might own. (NOTE: At the moment, pairing is broken but will be fixed as soon as we get to it)","title":"Bluetooth speaker"},{"location":"unused/btspeaker/#bluetooth-speaker","text":"The buildroot edition of OpenVoiceOS by default also acts as a Bluetooth speaker. You can find it from any (mobile) device as discoverable within the Bluetooth pairing menu. You can pair with it and use your OpenVoiceOS device as any other Bluetooth speaker you might own. (NOTE: At the moment, pairing is broken but will be fixed as soon as we get to it)","title":"Bluetooth speaker"},{"location":"unused/buildroot/","text":" Open Voice Operating System - Buildroot Edition Auto detection and configuration of HATs The buildroot OpenVoiceOS edition is considered to be a consumer friendly type of device, or as Mycroft A.I. would call it, a retail version. However, as we do not target a specific hardware platform and would like to support custom made systems, we are implementing a smart way to detect and configure different types of Raspberry Pi HATs. At boot the system scans the I2C bus for known and supported HATs and, if found, configures the underlying linux sound system. At the moment this is still very much in development; however, the below HATs are or should soon be supported by this system: - ReSpeaker 2-mic HAT - ReSpeaker 4-mic Square HAT - ReSpeaker 4-mic linear / 6-mic HAT - USB devices such as the PS EYE-2 - SJ-201 Dev Kits - SJ-201 Mark2 retail device Snapcast Client & Server TODO - write docs Remote shared folder access (SMB - Windows) Your OpenVoiceOS device is accessible over the network from your Windows computer. This is still a work in progress, but you can open a file explorer and navigate to your OpenVoiceOS device. At the moment the following directories within the user's home directory are shared over the network. - Documents - Music - Pictures These folders are also used by KDE Connect file transfer plugins and for instance the Camera skill (Hey Mycroft, take a selfie) and / or Homescreen Skill (Hey Mycroft, take a screenshot) Remote shared folder access (NFS - Linux) In the near future the above Windows network shares will also be made available over NFS for Linux clients. This is still a Work In Progress / To Do item. Development. At this moment development is in very early stages and focused on the Raspberry Pi 3B & 4. As soon as an initial first workable version is created, other hardware might be added. Source code: https://github.com/OpenVoiceOS/ovos-buildroot Build Environment Only use x86_64 based architecture / hardware to build the image. 
The following example Build environment has been tested: Architecture: x86_64 Hardware: Intel Core i5 processor, 8GB RAM, 240GB SSD (you can build with less RAM (2GB) and slower storage, but more RAM and faster storage = quicker image building) OS: Ubuntu 22.04 LTS desktop Installing System Build Dependencies The following system packages are required to build the image: gcc subversion qttools5-dev qttools5-dev-tools python git make g++ curl wget qtdeclarative5-dev The following firewall ports need to be allowed to the internet. In addition to the usual http/https ports (tcp 80, tcp 443) a couple of other ports need to be allowed to the internet: - tcp 9418 (git). - tcp 21 (ftp PASV) and random ports for DATA channel. This can be optional, but it is better to have this allowed along with the corresponding random data channel ports. (knowledge of firewalls required) Getting the code. First, get the code on your system! The simplest method is via git. - cd ~/ - git clone --recurse-submodules https://github.com/OpenVoiceOS/OpenVoiceOS.git - cd OpenVoiceOS Patching Buildroot. (ONLY at the first clean checkout/clone) If this is the very first time you are going to build an image, you need to execute the following command once: - ./scripts/br-patches.sh This will patch the Buildroot packages. Building the image. Building the image(s) can be done by utilizing a proper Makefile; To see the available commands, just run: 'make help' As an example, to build the rpi4 version: - make clean - make rpi4_64-gui-config - make rpi4_64-gui Now grab a cup of coffee, go for a walk, sleep and repeat, as the build process takes a long time pulling everything from source and cross compiling everything for the device. Especially the qtwebengine package takes a LONG time. (At the moment there is an outstanding issue which prevents the build from running completely to the end. The plasma-workspace package will error out, not finding libGLESv2 properly linked within Qt5Gui. When the build stops because of this error, edit the following file: buildroot/output/host/aarch64-buildroot-linux-gnu/sysroot/usr/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake At the bottom of the file, replace this line: _qt5gui_find_extra_libs(OPENGL \"GLESv2\" \"\" \"\") with this line: _qt5gui_find_extra_libs(OPENGL \"${CMAKE_SYSROOT}/usr/lib/libGLESv2.so\" \"\" \"${CMAKE_SYSROOT}/usr/include/libdrm\") Then you can continue the build process by re-running the \"make rpi4_64-gui\" command. (DO NOT run \"make clean\" and/or \"make rpi4_64-gui-config\" again, or you will start from scratch again !!!) When everything goes fine, the xz compressed image will be available within the release directory. Booting image from sd card for the first time (setting up Wi-Fi and backend). 1. Ensure all required peripherals (mic, speakers, HDMI, USB mouse, etc.) are plugged in before powering on your RPI4 for the first time. 2. Skip this step if RPI4 is using an ethernet cable. Once powered on, the screen will present the Wifi setup screen (a Wifi HotSpot is created). Connect to the Wifi HotSpot (ssid OVOS) from another device and follow the on-screen instructions to set up Wifi. 3. Once Wifi is set up, a choice of Mycroft backend and Local backend is presented. Choose the Mycroft backend for now and follow the on-screen instructions; the Local backend is not ready to use yet. After the pairing process has completed and skills downloaded it's time to test / use it. Accessing the CLI. 
SSH to the IP address of the RPI4; default credentials are 'mycroft/mycroft'","title":" Open Voice Operating System - Buildroot Edition"},{"location":"unused/buildroot/#open-voice-operating-system-buildroot-edition","text":"","title":" Open Voice Operating System - Buildroot Edition"},{"location":"unused/buildroot/#auto-detection-and-configuration-of-hats","text":"The buildroot OpenVoiceOS edition is considered to be a consumer friendly type of device, or as Mycroft A.I. would call it, a retail version. However, as we do not target a specific hardware platform and would like to support custom made systems, we are implementing a smart way to detect and configure different types of Raspberry Pi HATs. At boot the system scans the I2C bus for known and supported HATs and, if found, configures the underlying linux sound system. At the moment this is still very much in development; however, the below HATs are or should soon be supported by this system: - ReSpeaker 2-mic HAT - ReSpeaker 4-mic Square HAT - ReSpeaker 4-mic linear / 6-mic HAT - USB devices such as the PS EYE-2 - SJ-201 Dev Kits - SJ-201 Mark2 retail device","title":"Auto detection and configuration of HATs"},{"location":"unused/buildroot/#snapcast-client-server","text":"TODO - write docs","title":"Snapcast Client & Server"},{"location":"unused/buildroot/#remote-shared-folder-access-smb-windows","text":"Your OpenVoiceOS device is accessible over the network from your Windows computer. This is still a work in progress, but you can open a file explorer and navigate to your OpenVoiceOS device. At the moment the following directories within the user's home directory are shared over the network. - Documents - Music - Pictures These folders are also used by KDE Connect file transfer plugins and for instance the Camera skill (Hey Mycroft, take a selfie) and / or Homescreen Skill (Hey Mycroft, take a screenshot)","title":"Remote shared folder access (SMB - Windows)"},{"location":"unused/buildroot/#remote-shared-folder-access-nfs-linux","text":"In the near future the above Windows network shares will also be made available over NFS for Linux clients. This is still a Work In Progress / To Do item.","title":"Remote shared folder access (NFS - Linux)"},{"location":"unused/buildroot/#development","text":"At this moment development is in very early stages and focused on the Raspberry Pi 3B & 4. As soon as an initial first workable version is created, other hardware might be added. Source code: https://github.com/OpenVoiceOS/ovos-buildroot","title":"Development."},{"location":"unused/buildroot/#build-environment","text":"Only use x86_64 based architecture / hardware to build the image. The following example Build environment has been tested: Architecture: x86_64 Hardware: Intel Core i5 processor, 8GB RAM, 240GB SSD (you can build with less RAM (2GB) and slower storage, but more RAM and faster storage = quicker image building) OS: Ubuntu 22.04 LTS desktop","title":"Build Environment"},{"location":"unused/buildroot/#installing-system-build-dependencies","text":"The following system packages are required to build the image: gcc subversion qttools5-dev qttools5-dev-tools python git make g++ curl wget qtdeclarative5-dev","title":"Installing System Build Dependencies"},{"location":"unused/buildroot/#the-following-firewall-ports-need-to-be-allowed-to-the-internet","text":"In addition to the usual http/https ports (tcp 80, tcp 443) a couple of other ports need to be allowed to the internet: - tcp 9418 (git). - tcp 21 (ftp PASV) and random ports for DATA channel. 
This can be optional, but it is better to have this allowed along with the corresponding random data channel ports (knowledge of firewalls required).","title":"The following firewall ports need to be allowed to the internet."},{"location":"unused/buildroot/#getting-the-code","text":"First, get the code on your system! The simplest method is via git. - cd ~/ - git clone --recurse-submodules https://github.com/OpenVoiceOS/OpenVoiceOS.git - cd OpenVoiceOS","title":"Getting the code."},{"location":"unused/buildroot/#patching-buildroot","text":"(ONLY at the first clean checkout/clone) If this is the very first time you are going to build an image, you need to execute the following command once; - ./scripts/br-patches.sh This will patch the Buildroot packages.","title":"Patching Buildroot."},{"location":"unused/buildroot/#building-the-image","text":"Building the image(s) is done by utilizing the provided Makefile; To see the available commands, just run: 'make help' As an example, to build the rpi4 version; - make clean - make rpi4_64-gui-config - make rpi4_64-gui Now grab a cup of coffee, go for a walk, sleep and repeat, as the build process takes a long time pulling everything from source and cross-compiling everything for the device. The qtwebengine package in particular takes a LONG time. (At the moment there is an outstanding issue which prevents the build from running completely to the end. The plasma-workspace package will error out, not finding libGLESv2 properly linked within QT5GUI. When the build stops because of this error, edit the following file; buildroot/output/host/aarch64-buildroot-linux-gnu/sysroot/usr/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake At the bottom of the file, find this line; _qt5gui_find_extra_libs(OPENGL \"GLESv2\" \"\" \"\") And replace it with this line; _qt5gui_find_extra_libs(OPENGL \"${CMAKE_SYSROOT}/usr/lib/libGLESv2.so\" \"\" \"${CMAKE_SYSROOT}/usr/include/libdrm\") Then you can continue the build process by re-running the \"make rpi4_64-gui\" command. (DO NOT run \"make clean\" and/or \"make rpi4_64-gui-config\" again, or you will start from scratch!!!) When everything goes fine, the xz-compressed image will be available within the release directory.","title":"Building the image."},{"location":"unused/buildroot/#booting-image-from-sd-card-for-the-first-time-setting-up-wi-fi-and-backend","text":"1. Ensure all required peripherals (mic, speakers, HDMI, USB mouse, etc.) are plugged in before powering on your RPI4 for the first time. 2. Skip this step if the RPI4 is using an ethernet cable. Once powered on, the screen will present the Wi-Fi setup screen (a Wi-Fi hotspot is created). Connect to the hotspot (SSID OVOS) from another device and follow the on-screen instructions to set up Wi-Fi. 3. Once Wi-Fi is set up, a choice of Mycroft backend and Local backend is presented. Choose the Mycroft backend for now and follow the on-screen instructions; the Local backend is not ready to use yet. 
After the pairing process has completed and the skills have downloaded, it's time to test and use it.","title":"Booting image from sd card for the first time (setting up Wi-Fi and backend)."},{"location":"unused/buildroot/#accessing-the-cli","text":"SSH to the IP address of the RPI4; default credentials are 'mycroft/mycroft'","title":"Accessing the CLI."},{"location":"unused/bus_service/","text":"Bus Service The bus service provides a websocket where all internal events travel You can think of the bus service as OVOS's nervous system The mycroft-bus is considered an internal and private websocket; external clients should not connect directly to it. Exposing the bus Please do not expose the messagebus to the outside world! Anything connected to the bus can fully control OVOS, and OVOS usually has full control over the whole system! You can read more about the security issues over at Nhoya/MycroftAI-RCE If you need to connect to the bus from the outside world please check the companion project HiveMind Lots of guides for mycroft tell you to expose the websocket; please (re)read the info and links above and think ten times before blindly following steps Message A Message consists of a json payload; it contains a type , some data and a context . The context is considered to be metadata and might be changed at any time in transit; the context can contain anything depending on where the message came from, and often is completely empty. You can think of the message context as a sort of session data for an individual interaction; in general, messages down the chain keep the context from the original message, and most listeners (e.g. skills) will only care about type and data . Targeting Theory ovos-core uses the message context to add metadata about the messages themselves: where they come from and what they are intended for. the Message object provides the following methods: message.forward method, keeps previous context. message continues going to the same destination message.reply method swaps \"source\" with \"destination\" message goes back to source The context destination parameter in the original message can be set to a list with any number of intended targets: bus.emit(Message('recognizer_loop:utterance', data, context={'destination': ['audio', 'kde'], 'source': \"remote_service\"})) Sources ovos-core injects the context when it emits an utterance; this can be either typed in the ovos-cli-client or spoken via the STT service STT will identify itself as audio ovos-cli-client will identify itself as debug_cli mycroft.conf contains a native_sources section you can configure to change how the audio service processes external requests Destinations Output capable services are the cli and the TTS The command line is a debug tool; it will ignore the destination TTS checks the message context if it's the intended target for the message and will only speak in the following conditions: Explicitly targeted i.e. the destination is \"audio\" destination is set to None destination is missing completely The idea is that, for example, when the Android app is used to access OpenVoiceOS, the device at home shouldn't start to speak. 
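To make the targeting rules above concrete, here is a minimal Python sketch of building and answering a targeted message. It assumes the ovos_bus_client package provides the Message class with the forward and reply behaviour described above; treat the exact import path as an assumption, not a guarantee:

    from ovos_bus_client.message import Message

    # an utterance as a remote client might emit it, targeted at two destinations
    msg = Message(
        "recognizer_loop:utterance",
        data={"utterances": ["what time is it"]},
        context={"source": "remote_service", "destination": ["audio", "kde"]},
    )

    # .reply() swaps source and destination, so the answer travels back to the sender
    answer = msg.reply("speak", data={"utterance": "it is noon"})
    print(answer.context["destination"])  # "remote_service"

    # .forward() keeps the previous context, so the message keeps going
    # towards the same destination
    followup = msg.forward("intent.service.intent.get")
    print(followup.context["destination"])  # ["audio", "kde"]

The bus.emit() call shown earlier would send such a message over the websocket; here we only construct Message objects to show how the context moves.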
TTS will be executed when \"audio\" or \"debug_cli\" are the destination A missing destination, or a destination set to None , is interpreted as a multicast and should trigger all output capable processes (be it the audio service, a web-interface, the KDE plasmoid or maybe the Android app) Internal routing intent service will .reply to the original utterance message all skill/intent service messages are .forward (from previous intent service .reply ) skills sending their own messages might not respect this warning converse / get_response support is limited, the context may be lost warning in the context of multiple users, skills might keep a shared state between clients, e.g. a client may enable parrot mode for everyone warning scheduled events support is limited, the context is lost warning Configuration The messagebus has a dedicated section in mycroft.conf \"websocket\": { \"host\": \"0.0.0.0\", \"port\": 8181, \"route\": \"/core\", \"shared_connection\": true } Security in mycroft-core all skills share a bus connection; this allows malicious skills to manipulate it and affect other skills you can see a demonstration of this problem with BusBrickerSkill \"shared_connection\": false ensures each skill gets its own websocket connection and avoids this problem Additionally, it is recommended you change \"host\": \"127.0.0.1\" ; this will ensure no outside-world connections are allowed","title":"Bus Service"},{"location":"unused/bus_service/#bus-service","text":"The bus service provides a websocket where all internal events travel You can think of the bus service as OVOS's nervous system The mycroft-bus is considered an internal and private websocket; external clients should not connect directly to it.","title":"Bus Service"},{"location":"unused/bus_service/#exposing-the-bus","text":"Please do not expose the messagebus to the outside world! Anything connected to the bus can fully control OVOS, and OVOS usually has full control over the whole system! You can read more about the security issues over at Nhoya/MycroftAI-RCE If you need to connect to the bus from the outside world please check the companion project HiveMind Lots of guides for mycroft tell you to expose the websocket; please (re)read the info and links above and think ten times before blindly following steps","title":"Exposing the bus"},{"location":"unused/bus_service/#message","text":"A Message consists of a json payload; it contains a type , some data and a context . The context is considered to be metadata and might be changed at any time in transit; the context can contain anything depending on where the message came from, and often is completely empty. You can think of the message context as a sort of session data for an individual interaction; in general, messages down the chain keep the context from the original message, and most listeners (e.g. skills) will only care about type and data .","title":"Message"},{"location":"unused/bus_service/#targeting-theory","text":"ovos-core uses the message context to add metadata about the messages themselves: where they come from and what they are intended for. the Message object provides the following methods: message.forward method, keeps previous context. 
message continues going to the same destination message.reply method swaps \"source\" with \"destination\" message goes back to source The context destination parameter in the original message can be set to a list with any number of intended targets: bus.emit(Message('recognizer_loop:utterance', data, context={'destination': ['audio', 'kde'], 'source': \"remote_service\"}))","title":"Targeting Theory"},{"location":"unused/bus_service/#sources","text":"ovos-core injects the context when it emits an utterance; this can be either typed in the ovos-cli-client or spoken via the STT service STT will identify itself as audio ovos-cli-client will identify itself as debug_cli mycroft.conf contains a native_sources section you can configure to change how the audio service processes external requests","title":"Sources"},{"location":"unused/bus_service/#destinations","text":"Output capable services are the cli and the TTS The command line is a debug tool; it will ignore the destination TTS checks the message context if it's the intended target for the message and will only speak in the following conditions: Explicitly targeted i.e. the destination is \"audio\" destination is set to None destination is missing completely The idea is that, for example, when the Android app is used to access OpenVoiceOS, the device at home shouldn't start to speak. TTS will be executed when \"audio\" or \"debug_cli\" are the destination A missing destination, or a destination set to None , is interpreted as a multicast and should trigger all output capable processes (be it the audio service, a web-interface, the KDE plasmoid or maybe the Android app)","title":"Destinations"},{"location":"unused/bus_service/#internal-routing","text":"intent service will .reply to the original utterance message all skill/intent service messages are .forward (from previous intent service .reply ) skills sending their own messages might not respect this warning converse / get_response support is limited, the context may be lost warning in the context of multiple users, skills might keep a shared state between clients, e.g. 
a client may enable parrot mode for everyone warning scheduled events support is limited, the context is lost warning","title":"Internal routing"},{"location":"unused/bus_service/#configuration","text":"The messagebus has a dedicated section in mycroft.conf \"websocket\": { \"host\": \"0.0.0.0\", \"port\": 8181, \"route\": \"/core\", \"shared_connection\": true }","title":"Configuration"},{"location":"unused/bus_service/#security","text":"in mycroft-core all skills share a bus connection; this allows malicious skills to manipulate it and affect other skills you can see a demonstration of this problem with BusBrickerSkill \"shared_connection\": false ensures each skill gets its own websocket connection and avoids this problem Additionally, it is recommended you change \"host\": \"127.0.0.1\" ; this will ensure no outside-world connections are allowed","title":"Security"},{"location":"unused/config_backend/","text":"OpenVoiceOS Backend Configuration Backend Manager a simple UI for ovos-personal-backend , a utility to manage all your devices If you are running ovos-core without a backend, OCA provides a similar local interface Install pip install ovos-backend-manager or from source pip install git+https://github.com/OpenVoiceOS/ovos-backend-manager Usage It needs to run on the same machine as the personal backend, as it directly interacts with the databases and configuration files ovos-backend-manager will be available in the command line after installing Available Backend Services Supported Backends ovos-core supports multiple backends under a single unified interface Personal backend - self hosted Selene - https://api.mycroft.ai OpenVoiceOS API Service - https://api.openvoiceos.com Offline - support for setting your own api keys and querying services directly Developers do not need to worry about backend details in their applications and skills Identity Information A unique uuid and pairing information generated by registering with Home is stored in: ~/.config/mycroft/identity/identity2.json <-- DO NOT SHARE THIS WITH OTHERS! This file uniquely identifies your device and should be kept safe STT Plugin a companion stt plugin is available to use a backend as a remote STT provider edit your configuration to use ovos-stt-plugin-selene { \"stt\": { \"module\": \"ovos-stt-plugin-selene\" } } source code Offline Backend OVOS by default runs without a backend; in this case you will need to configure api keys manually This can be done with OCA or by editing mycroft.conf edit your configuration to use the offline backend { \"server\": { \"backend_type\": \"offline\" } } Selene The official Mycroft Home backend is called Selene; users need to create an account and pair devices with the Mycroft servers. This backend is not considered optional by MycroftAI but is not used by OVOS unless explicitly enabled Selene is AGPL licensed: - backend source code - frontend source code edit your configuration to use the selene backend { \"server\": { \"backend_type\": \"selene\", \"url\": \"https://api.mycroft.ai\", \"version\": \"v1\", \"update\": true, \"metrics\": true, \"sync_skill_settings\": true } } Personal Backend Personal backend is a reverse-engineered alternative to Selene that predates it It provides the same functionality for devices and packs some extra options It is not intended to serve different users or thousands of devices; there are no user accounts! 
This is currently the only way to run a vanilla mycroft-core device offline edit your configuration to use your own personal backend instance { \"server\": { \"backend_type\": \"personal\", \"url\": \"http://0.0.0.0:6712\", \"version\": \"v1\", \"update\": true, \"metrics\": true, \"sync_skill_settings\": true } } source code OVOS API Service The OVOS Api Service is not a full backend; it is a set of free proxy services hosted by the OVOS Team for usage in default skills. Device management functionality and user accounts do not exist; offline mode will be used for these APIs edit your configuration to use the OVOS backend { \"server\": { \"backend_type\": \"ovos\", \"url\": \"https://api.openvoiceos.com\" } } source code","title":"OpenVoiceOS Backend Configuration"},{"location":"unused/config_backend/#openvoiceos-backend-configuration","text":"","title":"OpenVoiceOS Backend Configuration"},{"location":"unused/config_backend/#backend-manager","text":"a simple UI for ovos-personal-backend , a utility to manage all your devices If you are running ovos-core without a backend, OCA provides a similar local interface","title":"Backend Manager"},{"location":"unused/config_backend/#install","text":"pip install ovos-backend-manager or from source pip install git+https://github.com/OpenVoiceOS/ovos-backend-manager","title":"Install"},{"location":"unused/config_backend/#usage","text":"It needs to run on the same machine as the personal backend, as it directly interacts with the databases and configuration files ovos-backend-manager will be available in the command line after installing","title":"Usage"},{"location":"unused/config_backend/#available-backend-services","text":"","title":"Available Backend Services"},{"location":"unused/config_backend/#supported-backends","text":"ovos-core supports multiple backends under a single unified interface Personal backend - self hosted Selene - https://api.mycroft.ai OpenVoiceOS API Service - https://api.openvoiceos.com Offline - support for setting your own api keys and querying services directly Developers do not need to worry about backend details in their applications and skills","title":"Supported Backends"},{"location":"unused/config_backend/#identity-information","text":"A unique uuid and pairing information generated by registering with Home is stored in: ~/.config/mycroft/identity/identity2.json <-- DO NOT SHARE THIS WITH OTHERS! This file uniquely identifies your device and should be kept safe","title":"Identity Information"},{"location":"unused/config_backend/#stt-plugin","text":"a companion stt plugin is available to use a backend as a remote STT provider edit your configuration to use ovos-stt-plugin-selene { \"stt\": { \"module\": \"ovos-stt-plugin-selene\" } } source code","title":"STT Plugin"},{"location":"unused/config_backend/#offline-backend","text":"OVOS by default runs without a backend; in this case you will need to configure api keys manually This can be done with OCA or by editing mycroft.conf edit your configuration to use the offline backend { \"server\": { \"backend_type\": \"offline\" } }","title":"Offline Backend"},{"location":"unused/config_backend/#selene","text":"The official Mycroft Home backend is called Selene; users need to create an account and pair devices with the Mycroft servers. 
This backend is not considered optional by MycroftAI but is not used by OVOS unless explicitly enabled Selene is AGPL licensed: - backend source code - frontend source code edit your configuration to use the selene backend { \"server\": { \"backend_type\": \"selene\", \"url\": \"https://api.mycroft.ai\", \"version\": \"v1\", \"update\": true, \"metrics\": true, \"sync_skill_settings\": true } }","title":"Selene"},{"location":"unused/config_backend/#personal-backend","text":"Personal backend is a reverse-engineered alternative to Selene that predates it It provides the same functionality for devices and packs some extra options It is not intended to serve different users or thousands of devices; there are no user accounts! This is currently the only way to run a vanilla mycroft-core device offline edit your configuration to use your own personal backend instance { \"server\": { \"backend_type\": \"personal\", \"url\": \"http://0.0.0.0:6712\", \"version\": \"v1\", \"update\": true, \"metrics\": true, \"sync_skill_settings\": true } } source code","title":"Personal Backend"},{"location":"unused/config_backend/#ovos-api-service","text":"The OVOS Api Service is not a full backend; it is a set of free proxy services hosted by the OVOS Team for usage in default skills. Device management functionality and user accounts do not exist; offline mode will be used for these APIs edit your configuration to use the OVOS backend { \"server\": { \"backend_type\": \"ovos\", \"url\": \"https://api.openvoiceos.com\" } } source code","title":"OVOS API Service"},{"location":"unused/config_gui/","text":"OpenVoiceOS GUI Configuration Work in Progress","title":"OpenVoiceOS GUI Configuration"},{"location":"unused/config_gui/#openvoiceos-gui-configuration","text":"Work in Progress","title":"OpenVoiceOS GUI Configuration"},{"location":"unused/config_homescreen/","text":"OpenVoiceOS Home Screen The home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know, like the current date, time and weather. The home screen contains various features and integrations which you can learn more about in the following sections. Features Night Mode Feature The Night Mode feature lets you quickly switch your home screen into a dark standby clock, reducing the amount of light emitted by your device. This is especially useful if you are using your device in a dark room or at night. You can enable the night mode feature by tapping on the left edge pill button on the home screen. Quick Actions Dashboard The Quick Actions Dashboard provides you with a card-based interface to quickly access and add your most used actions. The Quick Actions dashboard comes with a variety of pre-defined actions like the ability to quickly add a new alarm, start a new timer or add a new note. You can also add your own custom actions to the dashboard by tapping on the plus button in the top right corner of the dashboard. The Quick Actions dashboard is accessible by tapping on the right edge pill button on the home screen. Application Launcher OpenVoiceOS comes with support for dedicated voice applications. Voice Applications can be dedicated skills or PHAL plugins, providing their own dedicated user interface. The application launcher will show you a list of all available voice applications. 
You can access the application launcher by tapping on the center pill button on the bottom of the home screen. Wallpapers The home screen supports custom wallpapers and comes with a bunch of wallpapers to choose from. You can easily change your custom wallpaper by swiping from right to left on the home screen. Widgets Notifications Widget The notifications widget provides you with a quick overview of all your notifications. The notifications bell icon will be displayed in the top left corner of the home screen. You can access the notifications overview by tapping on the bell icon when it is displayed. Timer Widget The timer widget is displayed in the top left corner after the notifications bell icon. It will show up when you have an active timer running. Clicking on the timer widget will open the timers overview. Alarm Widget The alarm widget is displayed in the top left corner after the timer widget. It will show up when you have an active alarm set. Clicking on the alarm widget will open the alarms overview. Media Player Widget The media player widget is displayed at the bottom of the home screen. It replaces the examples widget when a media player is active. The media player widget will show you the currently playing media and provide you with a quick way to pause, resume or skip the current media. You can also quickly access the media player by tapping the quick display media player button on the right side of the media player widget. Configuration Settings The homescreen has several customizations available. This is a sample settings.json file with all of the options explained { \"__mycroft_skill_firstrun\": false, \"weather_skill\": \"skill-weather.openvoiceos\", \"datetime_skill\": \"skill-date-time.mycroftai\", \"examples_skill\": \"ovos-skills-info.openvoiceos\", \"wallpaper\": \"default.jpg\", \"persistent_menu_hint\": false, \"examples_enabled\": true, \"randomize_examples\": true, \"examples_prefix\": true } __mycroft_skill_firstrun: This is automatically set on first load of the skill weather_skill: DEPRECATED and has no effect - PR pending datetime_skill: Allows you to use a custom skill to display the date and time. Defaults to skill-ovos-date-time.openvoiceos examples_skill: Allows use of a custom skill for the displayed examples. Defaults to ovos_skills_manager.utils.get_skills_example() function wallpaper: Allows a custom wallpaper to be displayed. Use a complete URL without any tilde ~ persistent_menu_hint: When true, displays a hint of the pull-down menu at the top of the page examples_enabled: When false, the examples at the bottom of the screen will be hidden randomize_examples: When false, the rotation of the examples will follow the way they are loaded examples_prefix: When false, the prefix 'Ask Me' will NOT be displayed with the examples","title":"OpenVoiceOS Home Screen"},{"location":"unused/config_homescreen/#openvoiceos-home-screen","text":"The home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know, like the current date, time and weather. 
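Returning to the settings.json options explained above: a pared-down override touching only the keys you want to change might look like the following (the wallpaper URL is a placeholder, and whether a partial file is honored depends on how the skill merges its settings):

    {
      "wallpaper": "https://example.com/my-wallpaper.jpg",
      "persistent_menu_hint": true,
      "examples_enabled": false
    }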
The home screen contains various features and integrations which you can learn more about in the following sections.","title":"OpenVoiceOS Home Screen"},{"location":"unused/config_homescreen/#features","text":"","title":"Features"},{"location":"unused/config_homescreen/#night-mode-feature","text":"The Night Mode feature lets you quickly switch your home screen into a dark standby clock, reducing the amount of light emitted by your device. This is especially useful if you are using your device in a dark room or at night. You can enable the night mode feature by tapping on the left edge pill button on the home screen.","title":"Night Mode Feature"},{"location":"unused/config_homescreen/#quick-actions-dashboard","text":"The Quick Actions Dashboard provides you with a card-based interface to quickly access and add your most used actions. The Quick Actions dashboard comes with a variety of pre-defined actions like the ability to quickly add a new alarm, start a new timer or add a new note. You can also add your own custom actions to the dashboard by tapping on the plus button in the top right corner of the dashboard. The Quick Actions dashboard is accessible by tapping on the right edge pill button on the home screen.","title":"Quick Actions Dashboard"},{"location":"unused/config_homescreen/#application-launcher","text":"OpenVoiceOS comes with support for dedicated voice applications. Voice Applications can be dedicated skills or PHAL plugins, providing their own dedicated user interface. The application launcher will show you a list of all available voice applications. You can access the application launcher by tapping on the center pill button on the bottom of the home screen.","title":"Application Launcher"},{"location":"unused/config_homescreen/#wallpapers","text":"The home screen supports custom wallpapers and comes with a bunch of wallpapers to choose from. You can easily change your custom wallpaper by swiping from right to left on the home screen.","title":"Wallpapers"},{"location":"unused/config_homescreen/#widgets","text":"","title":"Widgets"},{"location":"unused/config_homescreen/#notifications-widget","text":"The notifications widget provides you with a quick overview of all your notifications. The notifications bell icon will be displayed in the top left corner of the home screen. You can access the notifications overview by tapping on the bell icon when it is displayed.","title":"Notifications Widget"},{"location":"unused/config_homescreen/#timer-widget","text":"The timer widget is displayed in the top left corner after the notifications bell icon. It will show up when you have an active timer running. Clicking on the timer widget will open the timers overview.","title":"Timer Widget"},{"location":"unused/config_homescreen/#alarm-widget","text":"The alarm widget is displayed in the top left corner after the timer widget. It will show up when you have an active alarm set. Clicking on the alarm widget will open the alarms overview.","title":"Alarm Widget"},{"location":"unused/config_homescreen/#media-player-widget","text":"The media player widget is displayed at the bottom of the home screen. It replaces the examples widget when a media player is active. The media player widget will show you the currently playing media and provide you with a quick way to pause, resume or skip the current media. 
You can also quickly access the media player by tapping the quick display media player button on the right side of the media player widget.","title":"Media Player Widget"},{"location":"unused/config_homescreen/#configuration","text":"","title":"Configuration"},{"location":"unused/config_homescreen/#settings","text":"The homescreen has several customizations available. This is a sample settings.json file with all of the options explained { \"__mycroft_skill_firstrun\": false, \"weather_skill\": \"skill-weather.openvoiceos\", \"datetime_skill\": \"skill-date-time.mycroftai\", \"examples_skill\": \"ovos-skills-info.openvoiceos\", \"wallpaper\": \"default.jpg\", \"persistent_menu_hint\": false, \"examples_enabled\": true, \"randomize_examples\": true, \"examples_prefix\": true } __mycroft_skill_firstrun: This is automatically set on first load of the skill weather_skill: DEPRECATED and has no effect - PR pending datetime_skill: Allows you to use a custom skill to display the date and time. Defaults to skill-ovos-date-time.openvoiceos examples_skill: Allows use of a custom skill for the displayed examples. Defaults to ovos_skills_manager.utils.get_skills_example() function wallpaper: Allows a custom wallpaper to be displayed. Use a complete URL without any tilde ~ persistent_menu_hint: When true, displays a hint of the pull-down menu at the top of the page examples_enabled: When false, the examples at the bottom of the screen will be hidden randomize_examples: When false, the rotation of the examples will follow the way they are loaded examples_prefix: When false, the prefix 'Ask Me' will NOT be displayed with the examples","title":"Settings"},{"location":"unused/config_oca/","text":"OCA - OVOS Config Assistant OCA is a user-facing interface to configure ovos devices Web UI OCA provides a local Web UI similar to ovos-backend-manager; here you can configure your device, view metrics, handle OAuth and more CLI A command line interface is available using ovos-config More Information Python utils from ovos_config_assistant.module_helpers import pprint_core_module_info pprint_core_module_info() \"\"\" ## Mycroft module info can import mycroft : True is ovos-core : True mycroft module location: /home/user/ovos-core/mycroft ## Downstream ovos.conf overrides Module: neon_core can import neon_core : False neon_core module location: None xdg compliance : True base xdg folder : neon mycroft config filename : neon.conf default mycroft.conf path : /home/user/NeonCore/neon_core/configuration/neon.conf Module: hivemind can import hivemind : False hivemind module location: None xdg compliance : True base xdg folder : hivemind mycroft config filename : hivemind.conf default mycroft.conf path : /home/user/PycharmProjects/ovos_workspace/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf ## Downstream module overrides: Module: neon_speech uses config from : neon_core can import neon_speech : False neon_speech module location: None Module: neon_audio uses config from : neon_core can import neon_audio : False neon_audio module location: None Module: neon_enclosure uses config from : neon_core can import neon_enclosure : False neon_enclosure module location: None Module: hivemind_voice_satellite uses config from : hivemind can import hivemind_voice_satellite : True hivemind_voice_satellite module location: /home/user/HiveMind-voice-sat/hivemind_voice_satellite \"\"\" from ovos_config_assistant.config_helpers import pprint_ovos_conf pprint_ovos_conf() \"\"\" ## OVOS Configuration ovos.conf 
exists : True /home/user/.config/OpenVoiceOS/ovos.conf xdg compliance : True base xdg folder : mycroft mycroft config filename : mycroft.conf default mycroft.conf path : /home/user/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf \"\"\"","title":"OCA - OVOS Config Assistant"},{"location":"unused/config_oca/#oca-ovos-config-assistant","text":"OCA is a user-facing interface to configure ovos devices","title":"OCA - OVOS Config Assistant"},{"location":"unused/config_oca/#web-ui","text":"OCA provides a local Web UI similar to ovos-backend-manager; here you can configure your device, view metrics, handle OAuth and more","title":"Web UI"},{"location":"unused/config_oca/#cli","text":"A command line interface is available using ovos-config More Information","title":"CLI"},{"location":"unused/config_oca/#python-utils","text":"from ovos_config_assistant.module_helpers import pprint_core_module_info pprint_core_module_info() \"\"\" ## Mycroft module info can import mycroft : True is ovos-core : True mycroft module location: /home/user/ovos-core/mycroft ## Downstream ovos.conf overrides Module: neon_core can import neon_core : False neon_core module location: None xdg compliance : True base xdg folder : neon mycroft config filename : neon.conf default mycroft.conf path : /home/user/NeonCore/neon_core/configuration/neon.conf Module: hivemind can import hivemind : False hivemind module location: None xdg compliance : True base xdg folder : hivemind mycroft config filename : hivemind.conf default mycroft.conf path : /home/user/PycharmProjects/ovos_workspace/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf ## Downstream module overrides: Module: neon_speech uses config from : neon_core can import neon_speech : False neon_speech module location: None Module: neon_audio uses config from : neon_core can import neon_audio : False neon_audio module location: None Module: neon_enclosure uses config from : neon_core can import neon_enclosure : False neon_enclosure module location: None Module: hivemind_voice_satellite uses config from : hivemind can import hivemind_voice_satellite : True hivemind_voice_satellite module location: /home/user/HiveMind-voice-sat/hivemind_voice_satellite \"\"\" from ovos_config_assistant.config_helpers import pprint_ovos_conf pprint_ovos_conf() \"\"\" ## OVOS Configuration ovos.conf exists : True /home/user/.config/OpenVoiceOS/ovos.conf xdg compliance : True base xdg folder : mycroft mycroft config filename : mycroft.conf default mycroft.conf path : /home/user/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf \"\"\"","title":"Python utils"},{"location":"unused/config_ovos_audio/","text":"Audio Service The audio service is responsible for loading TTS and Audio plugins All audio playback is handled by this service Native playback Usually playback is triggered by some originating bus message, e.g. \"recognizer_loop:utterance\" ; this message contains metadata that is used to determine if playback should happen. message.context may contain a source and destination; playback is only triggered if a message destination is a native_source or if missing (considered a broadcast). 
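That playback decision can be sketched in a few lines of Python. This is a simplified illustration of the documented rules (native sources, missing destination treated as a broadcast), not the actual ovos-audio implementation:

    NATIVE_SOURCES = ["debug_cli", "audio"]  # default native sources per this document

    def should_play(context: dict) -> bool:
        """Return True if this device should handle playback for a message."""
        destination = context.get("destination")
        if destination is None:
            return True  # a missing destination is treated as a broadcast
        if isinstance(destination, str):
            destination = [destination]
        # playback only happens when some destination is a native source
        return any(dest in NATIVE_SOURCES for dest in destination)

    print(should_play({}))                                  # True (broadcast)
    print(should_play({"destination": ["audio", "kde"]}))   # True
    print(should_play({"destination": "remote_service"}))   # False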
This separation of native sources allows remote clients, such as an Android app, to interact with OVOS without the actual device where ovos-core is running repeating all TTS and music playback out loud You can learn more about message targeting here By default, only utterances originating from the speech client and ovos cli are considered native. For legacy reasons, the names for the ovos cli and speech client are \"debug_cli\" and \"audio\" respectively TTS Two TTS plugins may be loaded at once; if the primary plugin fails for some reason, the second plugin will be used. This allows you to have a lower quality offline voice as a fallback to account for internet outages, ensuring your device can always give you feedback \"tts\": { \"pulse_duck\": false, \"module\": \"ovos-tts-plugin-mimic2\", \"fallback_module\": \"ovos-tts-plugin-mimic\" }, Audio You can enable additional Audio plugins and define the native sources described above under the \"Audio\" section of mycroft.conf ovos-core uses OCP natively for media playback; you can learn more about OCP here OCP will decide when to call the Audio service and what plugin to use; the main use case is for headless setups without a GUI NOTE: mycroft-core has a \"default-backend\" config option; in ovos-core this option has been deprecated and is always OCP. \"Audio\": { \"native_sources\": [\"debug_cli\", \"audio\"], \"backends\": { \"OCP\": { \"type\": \"ovos_common_play\", \"active\": true }, \"simple\": { \"type\": \"ovos_audio_simple\", \"active\": true }, \"vlc\": { \"type\": \"ovos_vlc\", \"active\": true } } }, Configuration Reading Configuration Configuring Configuration protected_keys disable_user_config disable_remote_config Meta Configuration ovos.conf Reading Configuration ovos_config.config.Configuration is a singleton that loads a single config object. The configuration files loaded are determined by ovos.conf as described below and can be in either json or yaml format. if Configuration() is called the following configs would be loaded in this order: {core-path} /configuration/mycroft.conf os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/mycroft/mycroft.conf os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH /neon/web_cache.json ~/.mycroft/mycroft.conf (Deprecated) XDG_CONFIG_DIRS + /mycroft/mycroft.conf /etc/xdg/mycroft/mycroft.conf XDG_CONFIG_HOME (default ~/.config) + /mycroft/mycroft.conf When the configuration loader starts, it looks in these locations in this order, and loads ALL configurations. Keys that exist in multiple configuration files will be overridden by the last file to contain the value. This process results in a minimal amount being written for a specific device and user, without modifying default distribution files. Configuring Configuration There are a couple of special configuration keys that change the way the configuration stack loads. Default config refers to the config specified at default_config_path in ovos.conf (#1 {core-path}/configuration/mycroft.conf in the stack above). System config refers to the config at /etc/{base_folder}/{config_filename} (#2 /etc/mycroft/mycroft.conf in the stack above). protected_keys A \"protected_keys\" configuration section may be added to a Default or System Config file (default /etc/mycroft/mycroft.conf ). This configuration section specifies other configuration keys that may not be specified in remote or user configurations. Keys may specify nested parameters with . to exclude specific keys within nested dictionaries. 
An example config could be: { \"protected_keys\": { \"remote\": [ \"gui_websocket.host\", \"websocket.host\" ], \"user\": [ \"gui_websocket.host\" ] } } This example specifies that config['gui_websocket']['host'] may be specified in user configuration, but not remote. config['websocket']['host'] may not be specified in user or remote config, so it will only consider default and system configurations. disable_user_config If this config parameter is set to True in Default or System configuration, no user configurations will be loaded (no XDG configuration paths). disable_remote_config If this config parameter is set to True in Default or System configuration, the remote configuration ( web_cache.json ) will not be loaded. Meta Configuration while mycroft.conf configures the voice assistant, ovos.conf configures the library what this means is that ovos.conf decides what files are loaded by the Configuration class described above, as an end user or skill developer you should never have to worry about this all XDG paths across OpenVoiceOS packages build their paths taking ovos.conf into consideration this feature is what allows downstream voice assistants such as neon-core to change their config files to neon.yaml Using the above example, if Configuration() is called from neon-core , the following configs would be loaded in this order: {core-path} /configuration/neon.yaml os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/neon/neon.yaml os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH /neon/web_cache.json ~/.neon/neon.yaml (Deprecated) XDG_CONFIG_DIRS + /neon/neon.yaml /etc/xdg/neon/neon.yaml XDG_CONFIG_HOME (default ~/.config) + /neon/neon.yaml ovos.conf The ovos_config package determines which config files to load based on ovos.conf . get_ovos_config will return default values that load mycroft.conf unless otherwise configured. ovos.conf files are loaded in the following order, with later files taking priority over earlier ones in the list: /etc/OpenVoiceOS/ovos.conf /etc/mycroft/ovos.conf (Deprecated) XDG_CONFIG_DIRS + /OpenVoiceOS/ovos.conf /etc/xdg/OpenVoiceOS/ovos.conf XDG_CONFIG_HOME (default ~/.config) + /OpenVoiceOS/ovos.conf A simple ovos_config should have a structure like: { \"base_folder\": \"mycroft\", \"config_filename\": \"mycroft.conf\", \"default_config_path\": \"/configuration/mycroft.conf\", \"module_overrides\": {}, \"submodule_mappings\": {} } Note : default_config_path should always be an absolute path. This is generally detected automatically, but any manual override must specify an absolute path to a json or yaml config file. Non-Mycroft modules may specify alternate config paths. 
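In code, reading this resolved metadata is a single call. A minimal sketch follows; the import path ovos_config.meta is an assumption based on the ovos_config package named above:

    from ovos_config.meta import get_ovos_config

    ovos_conf = get_ovos_config()  # merged view of all ovos.conf files found
    print(ovos_conf["base_folder"], ovos_conf["config_filename"])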
A call to get_ovos_config from neon_core or neon_messagebus will return a configuration like: { \"base_folder\": \"neon\", \"config_filename\": \"neon.yaml\", \"default_config_path\": \"/etc/example/config/neon.yaml\", \"module_overrides\": { \"neon_core\": { \"base_folder\": \"neon\", \"config_filename\": \"neon.yaml\", \"default_config_path\": \"/etc/example/config/neon.yaml\" } }, \"submodule_mappings\": { \"neon_messagebus\": \"neon_core\", \"neon_speech\": \"neon_core\", \"neon_audio\": \"neon_core\", \"neon_gui\": \"neon_core\" } } If get_ovos_config was called from mycroft with the same configuration file as the last example, the returned configuration would be: { \"base_folder\": \"mycroft\", \"config_filename\": \"mycroft.conf\", \"default_config_path\": \"/configuration/mycroft.conf\", \"module_overrides\": { \"neon_core\": { \"base_folder\": \"neon\", \"config_filename\": \"neon.yaml\", \"default_config_path\": \"/etc/example/config/neon.yaml\" } }, \"submodule_mappings\": { \"neon_messagebus\": \"neon_core\", \"neon_speech\": \"neon_core\", \"neon_audio\": \"neon_core\", \"neon_gui\": \"neon_core\" } }","title":"Config ovos audio"},{"location":"unused/config_ovos_audio/#audio-service","text":"The audio service is responsible for loading TTS and Audio plugins All audio playback is handled by this service","title":"Audio Service"},{"location":"unused/config_ovos_audio/#native-playback","text":"Usually playback is triggered by some originating bus message, e.g. \"recognizer_loop:utterance\" ; this message contains metadata that is used to determine if playback should happen. message.context may contain a source and destination; playback is only triggered if a message destination is a native_source or if missing (considered a broadcast). This separation of native sources allows remote clients, such as an Android app, to interact with OVOS without the actual device where ovos-core is running repeating all TTS and music playback out loud You can learn more about message targeting here By default, only utterances originating from the speech client and ovos cli are considered native. For legacy reasons, the names for the ovos cli and speech client are \"debug_cli\" and \"audio\" respectively","title":"Native playback"},{"location":"unused/config_ovos_audio/#tts","text":"Two TTS plugins may be loaded at once; if the primary plugin fails for some reason, the second plugin will be used. This allows you to have a lower quality offline voice as a fallback to account for internet outages, ensuring your device can always give you feedback \"tts\": { \"pulse_duck\": false, \"module\": \"ovos-tts-plugin-mimic2\", \"fallback_module\": \"ovos-tts-plugin-mimic\" },","title":"TTS"},{"location":"unused/config_ovos_audio/#audio","text":"You can enable additional Audio plugins and define the native sources described above under the \"Audio\" section of mycroft.conf ovos-core uses OCP natively for media playback; you can learn more about OCP here OCP will decide when to call the Audio service and what plugin to use; the main use case is for headless setups without a GUI NOTE: mycroft-core has a \"default-backend\" config option; in ovos-core this option has been deprecated and is always OCP. 
\"Audio\": { \"native_sources\": [\"debug_cli\", \"audio\"], \"backends\": { \"OCP\": { \"type\": \"ovos_common_play\", \"active\": true }, \"simple\": { \"type\": \"ovos_audio_simple\", \"active\": true }, \"vlc\": { \"type\": \"ovos_vlc\", \"active\": true } } },","title":"Audio"},{"location":"unused/config_ovos_audio/#configuration","text":"Reading Configuration Configuring Configuration protected_keys disable_user_config disable_remote_config Meta Configuration ovos.conf","title":"Configuration"},{"location":"unused/config_ovos_audio/#reading-configuration","text":"ovos_config.config.Configuration is a singleton that loads a single config object. The configuration files loaded are determined by ovos.conf as described below and can be in either json or yaml format. if Configuration() is called the following configs would be loaded in this order: {core-path} /configuration/mycroft.conf os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/mycroft/mycroft.conf os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH /neon/web_cache.json ~/.mycroft/mycroft.conf (Deprecated) XDG_CONFIG_DIRS + /mycroft/mycroft.conf /etc/xdg/mycroft/mycroft.conf XDG_CONFIG_HOME (default ~/.config) + /mycroft/mycroft.conf When the configuration loader starts, it looks in these locations in this order, and loads ALL configurations. Keys that exist in multiple configuration files will be overridden by the last file to contain the value. This process results in a minimal amount being written for a specific device and user, without modifying default distribution files.","title":"Reading Configuration"},{"location":"unused/config_ovos_audio/#configuring-configuration","text":"There are a couple of special configuration keys that change the way the configuration stack loads. Default config refers to the config specified at default_config_path in ovos.conf (#1 {core-path}/configuration/mycroft.conf in the stack above). System config refers to the config at /etc/{base_folder}/{config_filename} (#2 /etc/mycroft/mycroft.conf in the stack above).","title":"Configuring Configuration"},{"location":"unused/config_ovos_audio/#protected_keys","text":"A \"protected_keys\" configuration section may be added to a Default or System Config file (default /etc/mycroft/mycroft.conf ). This configuration section specifies other configuration keys that may not be specified in remote or user configurations. Keys may specify nested parameters with . to exclude specific keys within nested dictionaries. An example config could be: { \"protected_keys\": { \"remote\": [ \"gui_websocket.host\", \"websocket.host\" ], \"user\": [ \"gui_websocket.host\" ] } } This example specifies that config['gui_websocket']['host'] may be specified in user configuration, but not remote. 
config['websocket']['host'] may not be specified in user or remote config, so it will only consider default and system configurations.","title":"protected_keys"},{"location":"unused/config_ovos_audio/#disable_user_config","text":"If this config parameter is set to True in Default or System configuration, no user configurations will be loaded (no XDG configuration paths).","title":"disable_user_config"},{"location":"unused/config_ovos_audio/#disable_remote_config","text":"If this config parameter is set to True in Default or System configuration, the remote configuration ( web_cache.json ) will not be loaded.","title":"disable_remote_config"},{"location":"unused/config_ovos_audio/#meta-configuration","text":"while mycroft.conf configures the voice assistant, ovos.conf configures the library what this means is that ovos.conf decides what files are loaded by the Configuration class described above, as an end user or skill developer you should never have to worry about this all XDG paths across OpenVoiceOS packages build their paths taking ovos.conf into consideration this feature is what allows downstream voice assistants such as neon-core to change their config files to neon.yaml Using the above example, if Configuration() is called from neon-core , the following configs would be loaded in this order: {core-path} /configuration/neon.yaml os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/neon/neon.yaml os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH /neon/web_cache.json ~/.neon/neon.yaml (Deprecated) XDG_CONFIG_DIRS + /neon/neon.yaml /etc/xdg/neon/neon.yaml XDG_CONFIG_HOME (default ~/.config) + /neon/neon.yaml","title":"Meta Configuration"},{"location":"unused/config_ovos_audio/#ovosconf","text":"The ovos_config package determines which config files to load based on ovos.conf . get_ovos_config will return default values that load mycroft.conf unless otherwise configured. ovos.conf files are loaded in the following order, with later files taking priority over earlier ones in the list: /etc/OpenVoiceOS/ovos.conf /etc/mycroft/ovos.conf (Deprecated) XDG_CONFIG_DIRS + /OpenVoiceOS/ovos.conf /etc/xdg/OpenVoiceOS/ovos.conf XDG_CONFIG_HOME (default ~/.config) + /OpenVoiceOS/ovos.conf A simple ovos_config should have a structure like: { \"base_folder\": \"mycroft\", \"config_filename\": \"mycroft.conf\", \"default_config_path\": \"/configuration/mycroft.conf\", \"module_overrides\": {}, \"submodule_mappings\": {} } Note : default_config_path should always be an absolute path. This is generally detected automatically, but any manual override must specify an absolute path to a json or yaml config file. Non-Mycroft modules may specify alternate config paths. 
A call to get_ovos_config from neon_core or neon_messagebus will return a configuration like: { \"base_folder\": \"neon\", \"config_filename\": \"neon.yaml\", \"default_config_path\": \"/etc/example/config/neon.yaml\", \"module_overrides\": { \"neon_core\": { \"base_folder\": \"neon\", \"config_filename\": \"neon.yaml\", \"default_config_path\": \"/etc/example/config/neon.yaml\" } }, \"submodule_mappings\": { \"neon_messagebus\": \"neon_core\", \"neon_speech\": \"neon_core\", \"neon_audio\": \"neon_core\", \"neon_gui\": \"neon_core\" } } If get_ovos_config was called from mycroft with the same configuration file as the last example, the returned configuration would be: { \"base_folder\": \"mycroft\", \"config_filename\": \"mycroft.conf\", \"default_config_path\": \"/configuration/mycroft.conf\", \"module_overrides\": { \"neon_core\": { \"base_folder\": \"neon\", \"config_filename\": \"neon.yaml\", \"default_config_path\": \"/etc/example/config/neon.yaml\" } }, \"submodule_mappings\": { \"neon_messagebus\": \"neon_core\", \"neon_speech\": \"neon_core\", \"neon_audio\": \"neon_core\", \"neon_gui\": \"neon_core\" } }","title":"ovos.conf"},{"location":"unused/config_ovos_core/","text":"Configuration ovos-core reads from several config files and is able to combine them into one main configuration to be used by all of the OVOS modules. The default configuration is located at /site-packages/ovos-config/mycroft/mycroft.conf In this file, you can see all of the available configuration values and an explanation of their use. The images will include a file at /etc/mycroft/mycroft.conf and values set there will override the system values. DO NOT EDIT THESE FILES These files are default values, and will be overwritten on an update. Next OVOS checks for a file in ~/.config/mycroft/web_cache.json . This file contains values retrieved from a remote server and will overwrite the previous two values. This one should also NOT be edited; it will be overwritten as well. The user configuration file is located in ~/.config/mycroft/mycroft.conf . This is the file that you should use to change default values to custom ones. When this document refers to Add this to config , this is the file that should be modified. This file needs to be a valid json or yaml file. OVOS knows how to handle both. Reading Configuration Configuring Configuration protected_keys disable_user_config disable_remote_config Meta Configuration ovos.conf Reading Configuration ovos_config.config.Configuration is a singleton that loads a single config object. The configuration files loaded are determined by ovos.conf as described below and can be in either json or yaml format. if Configuration() is called the following configs would be loaded in this order: {core-path} /configuration/mycroft.conf os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/mycroft/mycroft.conf os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH /neon/web_cache.json ~/.mycroft/mycroft.conf (Deprecated) XDG_CONFIG_DIRS + /mycroft/mycroft.conf /etc/xdg/mycroft/mycroft.conf XDG_CONFIG_HOME (default ~/.config) + /mycroft/mycroft.conf When the configuration loader starts, it looks in these locations in this order, and loads ALL configurations. Keys that exist in multiple configuration files will be overridden by the last file to contain the value. This process results in a minimal amount being written for a specific device and user, without modifying default distribution files. 
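A quick way to inspect the merged result is from Python, using the singleton named above (a minimal sketch; dict-style access mirrors the examples in this document):

    from ovos_config.config import Configuration

    config = Configuration()  # loads and merges the documented config stack

    # read a nested value, e.g. which TTS plugin is active
    print(config.get("tts", {}).get("module"))

    # the messagebus section discussed elsewhere in this document
    print(config.get("websocket"))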
Configuring Configuration There are a couple of special configuration keys that change the way the configuration stack loads. Default config refers to the config specified at default_config_path in ovos.conf (#1 {core-path}/configuration/mycroft.conf in the stack above). System config refers to the config at /etc/{base_folder}/{config_filename} (#2 /etc/mycroft/mycroft.conf in the stack above). protected_keys A \"protected_keys\" configuration section may be added to a Default or System Config file (default /etc/mycroft/mycroft.conf ). This configuration section specifies other configuration keys that may not be specified in remote or user configurations. Keys may specify nested parameters with . to exclude specific keys within nested dictionaries. An example config could be: { \"protected_keys\": { \"remote\": [ \"gui_websocket.host\", \"websocket.host\" ], \"user\": [ \"gui_websocket.host\" ] } } This example specifies that config['gui_websocket']['host'] may be specified in user configuration, but not remote. config['websocket']['host'] may not be specified in user or remote config, so it will only consider default and system configurations. disable_user_config If this config parameter is set to True in Default or System configuration, no user configurations will be loaded (no XDG configuration paths). disable_remote_config If this config parameter is set to True in Default or System configuration, the remote configuration ( web_cache.json ) will not be loaded. Meta Configuration while mycroft.conf configures the voice assistant, ovos.conf configures the library what this means is that ovos.conf decides what files are loaded by the Configuration class described above, as an end user or skill developer you should never have to worry about this all XDG paths across OpenVoiceOS packages build their paths taking ovos.conf into consideration this feature is what allows downstream voice assistants such as neon-core to change their config files to neon.yaml Using the above example, if Configuration() is called from neon-core , the following configs would be loaded in this order: {core-path} /configuration/neon.yaml os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/neon/neon.yaml os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH /neon/web_cache.json ~/.neon/neon.yaml (Deprecated) XDG_CONFIG_DIRS + /neon/neon.yaml /etc/xdg/neon/neon.yaml XDG_CONFIG_HOME (default ~/.config) + /neon/neon.yaml ovos.conf The ovos_config package determines which config files to load based on ovos.conf . get_ovos_config will return default values that load mycroft.conf unless otherwise configured. ovos.conf files are loaded in the following order, with later files taking priority over earlier ones in the list: /etc/OpenVoiceOS/ovos.conf /etc/mycroft/ovos.conf (Deprecated) XDG_CONFIG_DIRS + /OpenVoiceOS/ovos.conf /etc/xdg/OpenVoiceOS/ovos.conf XDG_CONFIG_HOME (default ~/.config) + /OpenVoiceOS/ovos.conf A simple ovos_config should have a structure like: { \"base_folder\": \"mycroft\", \"config_filename\": \"mycroft.conf\", \"default_config_path\": \"/configuration/mycroft.conf\", \"module_overrides\": {}, \"submodule_mappings\": {} } Note : default_config_path should always be an absolute path. This is generally detected automatically, but any manual override must specify an absolute path to a json or yaml config file. Non-Mycroft modules may specify alternate config paths. 
A call to get_ovos_config from neon_core or neon_messagebus will return a configuration like: { \"base_folder\": \"neon\", \"config_filename\": \"neon.yaml\", \"default_config_path\": \"/etc/example/config/neon.yaml\", \"module_overrides\": { \"neon_core\": { \"base_folder\": \"neon\", \"config_filename\": \"neon.yaml\", \"default_config_path\": \"/etc/example/config/neon.yaml\" } }, \"submodule_mappings\": { \"neon_messagebus\": \"neon_core\", \"neon_speech\": \"neon_core\", \"neon_audio\": \"neon_core\", \"neon_gui\": \"neon_core\" } } If get_ovos_config was called from mycroft with the same configuration file as the last example, the returned configuration would be: { \"base_folder\": \"mycroft\", \"config_filename\": \"mycroft.conf\", \"default_config_path\": \"/configuration/mycroft.conf\", \"module_overrides\": { \"neon_core\": { \"base_folder\": \"neon\", \"config_filename\": \"neon.yaml\", \"default_config_path\": \"/etc/example/config/neon.yaml\" } }, \"submodule_mappings\": { \"neon_messagebus\": \"neon_core\", \"neon_speech\": \"neon_core\", \"neon_audio\": \"neon_core\", \"neon_gui\": \"neon_core\" } } Configure Skills","title":"Configuration"},{"location":"unused/config_ovos_core/#configuration","text":"ovos-core reads from several config files and is able to combine them into one main configuration to be used by all of the OVOS modules. The default configuration is located at /site-packages/ovos-config/mycroft/mycroft.conf In this file, you can see all of the available configuration values and an explanation of their use. The images will include a file at /etc/mycroft/mycroft.conf and values set there will override the system values. DO NOT EDIT THESE FILES These files are default values, and will be overwritten on an update. Next OVOS checks for a file in ~/.config/mycroft/web_cache.json . This file contains values retrieved from a remote server and will overwrite the previous two values. This one should also NOT be edited; it will be overwritten as well. The user configuration file is located in ~/.config/mycroft/mycroft.conf . This is the file that you should use to change default values to custom ones. When this document refers to Add this to config , this is the file that should be modified. This file needs to be a valid json or yaml file. OVOS knows how to handle both. Reading Configuration Configuring Configuration protected_keys disable_user_config disable_remote_config Meta Configuration ovos.conf","title":"Configuration"},{"location":"unused/config_ovos_core/#reading-configuration","text":"ovos_config.config.Configuration is a singleton that loads a single config object. The configuration files loaded are determined by ovos.conf as described below and can be in either json or yaml format. if Configuration() is called the following configs would be loaded in this order: {core-path} /configuration/mycroft.conf os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/mycroft/mycroft.conf os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH /neon/web_cache.json ~/.mycroft/mycroft.conf (Deprecated) XDG_CONFIG_DIRS + /mycroft/mycroft.conf /etc/xdg/mycroft/mycroft.conf XDG_CONFIG_HOME (default ~/.config) + /mycroft/mycroft.conf When the configuration loader starts, it looks in these locations in this order, and loads ALL configurations. Keys that exist in multiple configuration files will be overridden by the last file to contain the value. 
This process results in a minimal amount of configuration being written for a specific device and user, without modifying the default distribution files.
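As a quick usage sketch, the merged result can be read like a normal dictionary; this assumes the ovos_config package is installed, and "lang" is just one standard key used for illustration:

```python
# A minimal sketch: read the merged configuration stack.
# Assumes the ovos_config package is installed; "lang" is a standard key.
from ovos_config.config import Configuration

config = Configuration()   # singleton, loads the full stack described above
print(config.get("lang"))  # value from the last file in the stack to set it
```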
OpenVoiceOS Security

Securing SSH

Most of our guides have you create a user called ovos with a password of ovos. While this makes installation easy, it's VERY insecure. As soon as possible, you should secure SSH with a key and disable password authentication.

When connecting from a Linux or MacOS client:

Create a keyfile (you can change ovos to whatever you want):

```
ssh-keygen -t ed25519 -f ~/.ssh/ovos
```

Copy it to the host (use the same filename as above, and specify the user and hostname you are using):

```
ssh-copy-id -i ~/.ssh/ovos ovos@mycroft
```

On your desktop, edit ~/.ssh/config and add the following lines (replace rp2 with the hostname you use):

```
Host rp2
    User ovos
    IdentityFile ~/.ssh/ovos
```

On your ovos system, edit /etc/ssh/sshd_config and add or uncomment the following line:

```
PasswordAuthentication no
```

Restart sshd or reboot:

```
sudo systemctl restart sshd
```

Message Bus Security

Anything connected to the bus can fully control OVOS, and OVOS usually has full control over the whole system! You can read more about the security issues over at Nhoya/MycroftAI-RCE. In mycroft-core all skills share a bus connection, which allows malicious skills to manipulate it and affect other skills; you can see a demonstration of this problem with BusBrickerSkill. Setting "shared_connection": false ensures each skill gets its own websocket connection and avoids this problem. Additionally, it is recommended you change the bus host to "127.0.0.1"; this ensures no outside-world connections are allowed.
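As a sketch, a hardened user mycroft.conf could contain something like the following; this assumes both keys live in the websocket section of your build, per the recommendations above:

```json
{
  "websocket": {
    "host": "127.0.0.1",
    "shared_connection": false
  }
}
```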
Skill Settings

Each skill will have its own config file, usually located under ~/.local/share/mycroft/skills/. Skill settings provide the ability for users to configure a Skill using the command line or a web-based interface. This is often used to:

- Change default behaviors, such as the sound used for a user's alarms.
- Authenticate with external services, such as Spotify.
- Enter longer data as text rather than by voice, such as the IP address of the user's Home Assistant server.

Skill settings are completely optional. Refer to each skill repository for valid configuration values.
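Inside a skill, settings are exposed as the dict-like self.settings attribute. A minimal sketch follows; the "alarm_sound" key and its default are hypothetical, used only for illustration:

```python
# A minimal sketch: read a skill setting with a fallback default.
# "alarm_sound" is a hypothetical key used only for illustration.
from ovos_workshop.skills import OVOSSkill


class MySkill(OVOSSkill):
    def initialize(self):
        # self.settings behaves like a dict backed by the skill's settings file
        sound = self.settings.get("alarm_sound", "default.wav")
        self.log.info(f"Alarm sound set to {sound}")
```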
Wake Word Plugins

WakeWord plugins classify audio and report whether a certain word or sound is present. These plugins usually correspond to the name of the voice assistant ("hey mycroft"), but can also be used for other purposes. Unlike the original Mycroft assistant, OVOS supports multiple wakewords in any combination of engines.

List of Wake Word plugins

| Plugin | Type |
|--------|------|
| ovos-ww-plugin-pocketsphinx | phonemes |
| ovos-ww-plugin-vosk | text samples |
| ovos-ww-plugin-snowboy | model |
| ovos-ww-plugin-precise | model |
| ovos-ww-plugin-precise-lite | model |
| ovos-ww-plugin-nyumaya | model |
| ovos-ww-plugin-nyumaya-legacy | model |
| neon_ww_plugin_efficientwordnet | model |
| mycroft-porcupine-plugin | model |
| ovos-ww-plugin-hotkeys | keyboard |

Overview of Most Common Plugins

The default wake words for OVOS generally use one of two different plugins: Precise-Lite (referred to here as Precise) or Vosk. Precise is typically the more accurate of the two because it is trained on recordings and uses an ML model. Vosk translates sounds to phonemes and will generally be more sensitive and prone to error.

Vosk

The primary benefit of Vosk wakewords is that they require no training or downloaded models. You can simply configure the wakeword and it will work. The downside is that you will usually get many false wakes, especially with short and common phonemes. Something like "Hey Neon" will trigger almost every time the "ee" sound is pronounced in English, while "Hey Ziggy" is much less likely to trigger because the phonemes are less common. Note that Vosk wakewords consume a large amount of memory. Configuring multiple Vosk wakewords on a device with limited memory, like the Mycroft Mark 2, can cause performance issues.

To create a Vosk wakeword on your OVOS device, open the user configuration (defaults to ~/.config/mycroft/mycroft.conf) in your text editor of choice and add the following lines. This will enable wakewords for both "Hey Neon" and "Hey Ziggy":

```json
"hotwords": {
    "hey_neon": {
        "module": "ovos-ww-plugin-vosk",
        "active": true,
        "listen": true,
        "sound": "snd/start_listening.wav",
        "debug": false,
        "rule": "fuzzy",
        "lang": "en",
        "samples": ["hey neon"]
    },
    "hey_ziggy": {
        "module": "ovos-ww-plugin-vosk",
        "listen": true,
        "active": true,
        "sound": "snd/start_listening.wav",
        "debug": false,
        "rule": "fuzzy",
        "lang": "en",
        "samples": ["hey ziggy", "hay ziggy"]
    }
}
```

If you already have a hotwords section in your user configuration, the first and last lines are not necessary. The most important setting is "active": true, which tells the assistant to use the wakeword. If you want to disable a wakeword, set this to false. If enabling a wakeword, be sure to also set "listen": true.
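For example, a user configuration like this sketch would switch off a wakeword; it assumes your image names the default wakeword hey_mycroft, so check your own config for the actual key:

```json
"hotwords": {
    "hey_mycroft": {
        "active": false,
        "listen": false
    }
}
```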
Another important combination is "debug": true, which will print the phonemes to the logs when the wakeword is triggered. This can be useful for debugging issues. It can also tell you what combinations the speech-to-text engine is picking up when you try to activate it, so you can add them to the samples array.

Those are two common default wakewords. You can also create a completely custom wakeword as follows:

```json
"hotwords": {
    "k9": {
        "module": "ovos-ww-plugin-vosk",
        "active": true,
        "listen": true,
        "sound": "snd/start_listening.wav",
        "debug": true,
        "rule": "fuzzy",
        "lang": "en",
        "samples": ["k9", "k 9", "kay nine", "k nine", "kay nein", "k nein"]
    }
}
```

OVOS community members have used Vosk for very creative wakewords. Please feel free to share your custom wakewords in the OVOS Matrix chat!

Precise-Lite (Precise)

NOTE: The original Precise engine is not actively maintained and is not recommended for new installations. Precise-Lite is a fork of Precise that is actively maintained. Please use that instead.

Precise-Lite wakewords require a pre-trained .tflite model to operate. OVOS maintains several pre-trained models of commonly requested wakewords. To use them, try this configuration:

```json
"hotwords": {
    "computer": {
        "module": "ovos-ww-plugin-precise-lite",
        "model": "https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/computer.tflite",
        "active": true,
        "listen": true,
        "sound": "snd/start_listening.wav",
        "expected_duration": 3,
        "trigger_level": 3,
        "sensitivity": 0.5
    }
}
```

Your OVOS device will automatically download the model if it isn't already on the device. OVOS maintains the following models:

- android: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/android.tflite
- computer: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/computer.tflite
- hey_chatterbox: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_chatterbox.tflite
- hey_firefox: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_firefox.tflite
- hey_k9: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_k9.tflite
- hey_kit: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_kit.tflite
- hey_moxie: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_moxie.tflite
- hey_mycroft: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_mycroft.tflite
- hey_scout: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_scout.tflite
- marvin: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/marvin.tflite
- o_sauro: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/o_sauro.tflite
- sheila: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/sheila.tflite

To use them, replace the model entry with the link to the model you want to use. Then replace the key with the name of the model, e.g. instead of computer use android or marvin or whichever model you chose.
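For instance, swapping in the marvin model from the list above would look like this sketch (all other values kept from the example configuration):

```json
"hotwords": {
    "marvin": {
        "module": "ovos-ww-plugin-precise-lite",
        "model": "https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/marvin.tflite",
        "active": true,
        "listen": true,
        "sound": "snd/start_listening.wav",
        "expected_duration": 3,
        "trigger_level": 3,
        "sensitivity": 0.5
    }
}
```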
Community Precise Wakeword Models

In addition to the Precise wakeword models that OVOS maintains, the community has created many more models, and additional model requests are welcome! If you have a model you would like to see created, please open an issue with the name of the wakeword. The OVOS team will generate some synthetic samples and add it to the list of models to be created. These synthetic models perform fairly well out of the box, but always work better with community-contributed recordings. Please see the README on the repo above for instructions on how to contribute recordings, and consider contributing to as many as you can!
Conversational Context

NOTE: Conversational context is currently only available with the Adapt Intent Parser, and is not yet available for Padatious.

User: How tall is John Cleese?
Mycroft: John Cleese is 196 centimeters
User: Where's he from?
Mycroft: He's from England

Context is added manually by the Skill creator using either the self.set_context() method or the @adds_context() decorator. Consider the following intent handlers:

```python
@intent_handler(IntentBuilder().require('PythonPerson').require('Length'))
def handle_length(self, message):
    python = message.data.get('PythonPerson')
    self.speak('{} is {} cm tall'.format(python, length_dict[python]))

@intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
def handle_from(self, message):
    python = message.data.get('PythonPerson')
    self.speak('{} is from {}'.format(python, from_dict[python]))
```

To interact with the above handlers the user would need to say:

User: How tall is John Cleese?
Mycroft: John Cleese is 196 centimeters
User: Where is John Cleese from?
Mycroft: He's from England

To get a more natural response, the functions can be changed to let OVOS know which PythonPerson we're talking about by using the self.set_context() method to give context:

```python
@intent_handler(IntentBuilder().require('PythonPerson').require('Length'))
def handle_length(self, message):
    # PythonPerson can be any of the Monty Python members
    python = message.data.get('PythonPerson')
    self.speak('{} is {} cm tall'.format(python, length_dict[python]))
    self.set_context('PythonPerson', python)

@intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
def handle_from(self, message):
    # PythonPerson can be any of the Monty Python members
    python = message.data.get('PythonPerson')
    self.speak('He is from {}'.format(from_dict[python]))
    self.set_context('PythonPerson', python)
```

When either of the methods is called, the PythonPerson keyword is added to OVOS's context. This means that if there is a match with Length but PythonPerson is missing, OVOS will assume the last mention of that keyword. The interaction can now become the one described at the top of the page:

User: How tall is John Cleese?
(OVOS detects the Length keyword and the PythonPerson keyword)
OVOS: 196 centimeters
(John Cleese is added to the current context)
User: Where's he from?
(OVOS detects the WhereFrom keyword but no PythonPerson keyword. The Context Manager is activated and returns the latest entry of PythonPerson, which is John Cleese)
OVOS: He's from England

The context isn't limited to the keywords provided by the current Skill. For example:

```python
@intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
def handle_from(self, message):
    # PythonPerson can be any of the Monty Python members
    python = message.data.get('PythonPerson')
    self.speak('He is from {}'.format(from_dict[python]))
    self.set_context('PythonPerson', python)
    self.set_context('Location', from_dict[python])
```

This enables conversations with other Skills as well:

User: Where is John Cleese from?
Mycroft: He's from England
User: What's the weather like over there?
Mycroft: Raining and 14 degrees...

Using context to enable Intents

To make sure certain Intents can't be triggered unless some previous stage in a conversation has occurred, Context can be used to create "bubbles" of available intent handlers.
User: Hey Mycroft, bring me some Tea
Mycroft: Of course, would you like Milk with that?
User: No
Mycroft: How about some Honey?
User: All right then
Mycroft: Here you go, here's your Tea with Honey

```python
from mycroft.skills.context import adds_context, removes_context

class TeaSkill(MycroftSkill):
    @intent_handler(IntentBuilder('TeaIntent').require("TeaKeyword"))
    @adds_context('MilkContext')
    def handle_tea_intent(self, message):
        self.milk = False
        self.speak('Of course, would you like Milk with that?',
                   expect_response=True)

    @intent_handler(IntentBuilder('NoMilkIntent').require("NoKeyword").
                    require('MilkContext').build())
    @removes_context('MilkContext')
    @adds_context('HoneyContext')
    def handle_no_milk_intent(self, message):
        self.speak('all right, any Honey?', expect_response=True)

    @intent_handler(IntentBuilder('YesMilkIntent').require("YesKeyword").
                    require('MilkContext').build())
    @removes_context('MilkContext')
    @adds_context('HoneyContext')
    def handle_yes_milk_intent(self, message):
        self.milk = True
        self.speak('What about Honey?', expect_response=True)

    @intent_handler(IntentBuilder('NoHoneyIntent').require("NoKeyword").
                    require('HoneyContext').build())
    @removes_context('HoneyContext')
    def handle_no_honey_intent(self, message):
        if self.milk:
            self.speak('Heres your Tea with a dash of Milk')
        else:
            self.speak('Heres your Tea, straight up')

    @intent_handler(IntentBuilder('YesHoneyIntent').require("YesKeyword").
                    require('HoneyContext').build())
    @removes_context('HoneyContext')
    def handle_yes_honey_intent(self, message):
        if self.milk:
            self.speak('Heres your Tea with Milk and Honey')
        else:
            self.speak('Heres your Tea with Honey')
```

When starting up, only the TeaIntent will be available. Once that has been triggered and MilkContext is added, the YesMilkIntent and NoMilkIntent become available since the MilkContext is set. When a yes or no is received, the MilkContext is removed and can't be accessed. In its place the HoneyContext is added, making the YesHoneyIntent and NoHoneyIntent available. You can find an example Tea Skill using conversational context on Github. As you can see, Conversational Context lends itself well to implementing a dialog tree or conversation tree.
Contributing to OpenVoiceOS

If this is your first PR, or you're not sure where to get started, say hi in OpenVoiceOS Chat and a team member will be happy to mentor you. Join the Discussions for questions and answers.

Links

- Latest Release
- OpenVoiceOS Chat
- Mycroft Chat
- Mycroft Forum

Converse

Each Skill may define a converse() method. This method will be called anytime the Skill has been recently active and a new utterance is processed. The converse method expects a single argument, a standard Mycroft Message object; this is the same object an intent handler receives. Converse methods must return a Boolean value: True if the utterance was handled, otherwise False.

Basic usage

Let's use a version of the Ice Cream Skill we've been building up and add a converse method to catch any brief statements of thanks that might directly follow an order.

```python
from mycroft import MycroftSkill, intent_handler

class IceCreamSkill(MycroftSkill):
    def __init__(self):
        MycroftSkill.__init__(self)
        self.flavors = ['vanilla', 'chocolate', 'mint']

    @intent_handler('request.icecream.intent')
    def handle_request_icecream(self):
        self.speak_dialog('welcome')
        selection = self.ask_selection(self.flavors, 'what.flavor')
        self.speak_dialog('coming-right-up', {'flavor': selection})

    def converse(self, message):
        if self.voc_match(message.data['utterances'][0], 'Thankyou'):
            self.speak_dialog("you-are-welcome")
            return True

def create_skill():
    return IceCreamSkill()
```

In this example:

- A User might request an ice cream, which is handled by handle_request_icecream().
- The Skill would be added to the system Active Skill list for up to 5 minutes.
- Any utterance received by OVOS would trigger this Skill's converse method whilst it is considered active.
- If the User followed up with a pleasantry such as "Hey Mycroft, thanks", the converse method would match this vocab against the Thankyou.voc file in the Skill and speak the contents of the you-are-welcome.dialog file. The method would return True and the utterance would be consumed, meaning the intent parsing service would never be triggered.
- Any utterance that did not match would be silently ignored and allowed to continue on to other converse methods and finally to the intent parsing service.

Active Skill List

A Skill is considered active if it has been called in the last 5 minutes. Skills are called in order of when they were last active. For example, if a user spoke the following commands:

Hey Mycroft, set a timer for 10 minutes
Hey Mycroft, what's the weather

Then the utterance "what's the weather" would first be sent to the Timer Skill's converse() method, then to the intent service for normal handling where the Weather Skill would be called. As the Weather Skill was called, it has now been added to the front of the Active Skills List. Hence, the next utterance received will be directed to:

1. WeatherSkill.converse()
2. TimerSkill.converse()
3. Normal intent parsing service

Making a Skill Active

There are occasions where a Skill has not been triggered by the User, but it should still be considered "Active". In the case of our Ice Cream Skill, we might have a function that executes when the customer's order is ready. At this point, we also want to be responsive to the customer's thanks, so we call self.make_active() to manually add our Skill to the front of the Active Skills List, as in the sketch below.
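A minimal sketch of that pattern; handle_order_ready and the "order-ready" dialog are hypothetical names used only for illustration:

```python
# A minimal sketch: manually mark the skill active so converse() gets called.
# handle_order_ready and the "order-ready" dialog are hypothetical examples.
from mycroft import MycroftSkill


class IceCreamSkill(MycroftSkill):
    def handle_order_ready(self):
        self.speak_dialog("order-ready")
        # put this skill at the front of the Active Skills List so a
        # follow-up "thanks" reaches our converse() method
        self.make_active()

    def converse(self, message):
        if self.voc_match(message.data['utterances'][0], 'Thankyou'):
            self.speak_dialog("you-are-welcome")
            return True
```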
ovos-core

OpenVoiceOS is an open source platform for smart speakers and other voice-centric devices. OVOS-core is a backwards-compatible descendant of Mycroft-core, the central component of Mycroft. It contains extensions and features not present upstream. All Mycroft Skills and Plugins should work normally with OVOS-core. OVOS-core is fully modular. Furthermore, common components have been repackaged as plugins. That means it isn't just a great assistant on its own, but also a pretty small library!

Getting Started

ovos-core is very modular; depending on where you are running it, you may want to run only a subset of the services. By default ovos-core only installs the minimum components common to all services. For the purposes of this document we will assume you want a full install; if you want to fine-tune the components, replace [all] in the commands below with the subset of desired extras, e.g. [skills,bus].

Installing ovos-core

ovos-core can be installed from pypi or from source. If the install fails you may need to install some system dependencies; how to do this will depend on your distro:

```
sudo apt install build-essential python3-dev swig libssl-dev libfann-dev portaudio19-dev libpulse-dev
```

Note: MycroftAI's dev_setup.sh does not exist in OVOS-core.

from source

We suggest you do this in a virtualenv:

```
pip install git+https://github.com/OpenVoiceOS/ovos-core[all]
```

from pypi

```
pip install ovos-core[all]
```

Running ovos-core

Developer launcher script

start-mycroft.sh is available to perform common tasks. Assuming you installed ovos-core in your home directory, run:

```
cd ~/ovos-core
./start-mycroft.sh debug
```

The "debug" command will start the background services (microphone listener, skill, messagebus, and audio subsystems) as well as bringing up a text-based Command Line Interface (CLI) you can use to interact with Mycroft and see the contents of the various logs. Alternatively you can run ./start-mycroft.sh all to begin the services without the command line interface. Later you can bring up the CLI using ./start-mycroft.sh cli.
The background services can be stopped as a group with:

```
./stop-mycroft.sh
```

Automatically on boot

We recommend you create system services to manage ovos instead of depending on the launcher script above. A good explanation can be found here: https://github.com/j1nx/mycroft-systemd. A reference implementation can be found in ovos-buildroot.

Developer FAQ

- How do I know what is currently happening in the GUI?
- How do I stop an intent mid execution?
- How do I send files over the bus?
- How do I use OAuth in a skill?
- How do I manage remote skill settings?
- How do I share data between devices?
- How do I use Geolocation backend services?
- How do I use Weather backend services?
- How do I use WolframAlpha backend services?
How do I know what is currently happening in the GUI?

```python
from ovos_utils.gui import GUITracker
from ovos_workshop.skills import OVOSSkill
from mycroft import intent_handler


class MyGUIEventTracker(GUITracker):
    # GUI event handlers - a skill can/should subclass this
    def on_idle(self, namespace):
        print("IDLE", namespace)
        timestamp = self.idle_ts

    def on_active(self, namespace):
        # NOTE: page has not been loaded yet,
        # event will fire right after this one
        print("ACTIVE", namespace)
        # check namespace values, they should all be set before this event
        values = self.gui_values[namespace]

    def on_new_page(self, page, namespace, index):
        print("NEW PAGE", namespace, index, namespace)
        # check all loaded pages
        for n in self.gui_pages:  # list of named tuples
            nspace = n.name  # namespace / skill_id
            pages = n.pages  # ordered list of page uris

    def on_gui_value(self, namespace, key, value):
        # WARNING this will pollute logs quite a lot, and you will get
        # duplicates, better to check values on a different event,
        # demonstrated in on_active
        print("VALUE", namespace, key, value)


class MySkill(OVOSSkill):
    def initialize(self):
        self.tracker = MyGUIEventTracker(bus=self.bus)

    @intent_handler("gui.status.intent")
    def handle_status_intent(self, message):
        print("device has screen:", self.tracker.can_display())
        print("mycroft-gui installed:", self.tracker.is_gui_installed())
        print("gui connected:", self.tracker.is_gui_connected())
        # TODO - speak or something

    @intent_handler("list.idle.screens.intent")
    def handle_idle_screens_intent(self, message):
        # check registered idle screens
        print("Registered idle screens:")
        for name in self.tracker.idle_screens:
            skill_id = self.tracker.idle_screens[name]
            print(" - ", name, ":", skill_id)
        # TODO - speak or something
```

How do I stop an intent mid execution?

Sometimes you want to abort a running intent immediately, and the stop method may not be enough in some circumstances. We provide a killable_intent decorator in ovos_workshop that can be used to abort a running intent immediately. A common use case is GUI interfaces where the same action may be done by voice or by clicking buttons; in this case you may need to abort a running get_response loop.

```python
from ovos_workshop.skills import OVOSSkill
from ovos_workshop.decorators import killable_intent
from mycroft import intent_handler
from time import sleep


class Test(OVOSSkill):
    """
    send "mycroft.skills.abort_question" and confirm only get_response is aborted
    send "mycroft.skills.abort_execution" and confirm the full intent is aborted, except intent3
    send "my.own.abort.msg" and confirm intent3 is aborted
    say "stop" and confirm all intents are aborted
    """
    def __init__(self):
        super(Test, self).__init__("KillableSkill")
        self.my_special_var = "default"

    def handle_intent_aborted(self):
        self.speak("I am dead")
        # handle any cleanup the skill might need, since intent was killed
        # at an arbitrary place of code execution some variables etc. might
        # end up in unexpected states
        self.my_special_var = "default"

    @killable_intent(callback=handle_intent_aborted)
    @intent_handler("test.intent")
    def handle_test_abort_intent(self, message):
        self.my_special_var = "changed"
        while True:
            sleep(1)
            self.speak("still here")

    @intent_handler("test2.intent")
    @killable_intent(callback=handle_intent_aborted)
    def handle_test_get_response_intent(self, message):
        self.my_special_var = "CHANGED"
        ans = self.get_response("question", num_retries=99999)
        self.log.debug("get_response returned: " + str(ans))
        if ans is None:
            self.speak("question aborted")

    @killable_intent(msg="my.own.abort.msg", callback=handle_intent_aborted)
    @intent_handler("test3.intent")
    def handle_test_msg_intent(self, message):
        if self.my_special_var != "default":
            self.speak("someone forgot to cleanup")
        while True:
            sleep(1)
            self.speak("you can't abort me")
```

How do I send files over the bus?

Sometimes you may want to send files or binary data over the messagebus; ovos_utils provides some tools to make this easy.

Sending a file:

```python
from ovos_utils.messagebus import send_binary_file_message, decode_binary_message
from ovos_workshop.skills import OVOSSkill


class MySkill(OVOSSkill):
    def initialize(self):
        self.add_event("mycroft.binary.file", self.receive_file)

    def receive_file(self, message):
        print("Receiving file")
        path = message.data["path"]  # file path, extract filename if needed
        binary_data = decode_binary_message(message)
        # TODO process data somehow

    def send_file(self, my_file_path):
        send_binary_file_message(my_file_path)
```

Sending binary data directly:

```python
from ovos_utils.messagebus import send_binary_data_message, decode_binary_message
from ovos_workshop.skills import OVOSSkill


class MySkill(OVOSSkill):
    def initialize(self):
        self.add_event("mycroft.binary.data", self.receive_binary)

    def send_data(self, binary_data):
        send_binary_data_message(binary_data)

    def receive_binary(self, message):
        print("Receiving binary data")
        binary_data = decode_binary_message(message)
        # TODO process data somehow
```

How do I manage remote skill settings?

To interact with skill settings via DeviceApi:

```python
from ovos_backend_client.settings import RemoteSkillSettings

# in ovos-core skill_id is deterministic and safe
s = RemoteSkillSettings("skill.author")
# in mycroft-core please ensure a valid remote_id
# in MycroftSkill class you can use
# remote_id = self.settings_meta.skill_gid
# s = RemoteSkillSettings("skill.author", remote_id="@|whatever_msm_decided")
s.download()

s.settings["existing_value"] = True
s.settings["new_value"] = "will NOT show up in UI"
s.upload()

# auto generate new settings meta for all new values before uploading
s.settings["new_value"] = "will show up in UI"
s.generate_meta()  # now "new_value" is in meta
s.upload()
```

How do I share data between devices?

By hijacking skill settings we allow storing arbitrary data via DeviceApi and using it across devices and skills.

```python
from ovos_backend_client.cloud import SeleneCloud

cloud = SeleneCloud()
cloud.add_entry("test", {"secret": "NOT ENCRYPTED MAN"})
data = cloud.get_entry("test")
```

An encrypted version is also supported if you don't trust the backend!

```python
from ovos_backend_client.cloud import SecretSeleneCloud

k = "D8fmXEP5VqzVw2HE"  # you need this to read back the data
cloud = SecretSeleneCloud(k)
cloud.add_entry("test", {"secret": "secret data, selene cant read this"})
data = cloud.get_entry("test")
```

How do I use Geolocation backend services?
```python
from ovos_backend_client.api import GeolocationApi

geo = GeolocationApi()
data = geo.get_geolocation("Lisbon Portugal")
```
any cleanup the skill might need, since intent was killed # at an arbitrary place of code execution some variables etc. might # end up in unexpected states self.my_special_var = \"default\" @killable_intent(callback=handle_intent_aborted) @intent_handler(\"test.intent\") def handle_test_abort_intent(self, message): self.my_special_var = \"changed\" while True: sleep(1) self.speak(\"still here\") @intent_handler(\"test2.intent\") @killable_intent(callback=handle_intent_aborted) def handle_test_get_response_intent(self, message): self.my_special_var = \"CHANGED\" ans = self.get_response(\"question\", num_retries=99999) self.log.debug(\"get_response returned: \" + str(ans)) if ans is None: self.speak(\"question aborted\") @killable_intent(msg=\"my.own.abort.msg\", callback=handle_intent_aborted) @intent_handler(\"test3.intent\") def handle_test_msg_intent(self, message): if self.my_special_var != \"default\": self.speak(\"someone forgot to cleanup\") while True: sleep(1) self.speak(\"you can't abort me\")","title":"How do I stop an intent mid execution?"},{"location":"unused/dev/#how-do-i-send-files-over-the-bus","text":"Sometimes you may want to send files or binary data over the messagebus, ovos_utils provides some tools to make this easy Sending a file from ovos_utils.messagebus import send_binary_file_message, decode_binary_message from ovos_workshop.skills import OVOSSkill class MySkill(OVOSSkill): def initialize(self): self.add_event(\"mycroft.binary.file\", self.receive_file) def receive_file(self, message): print(\"Receiving file\") path = message.data[\"path\"] # file path, extract filename if needed binary_data = decode_binary_message(message) # TODO process data somehow def send_file(self, my_file_path): send_binary_file_message(my_file_path) Sending binary data directly from ovos_utils.messagebus import send_binary_data_message, decode_binary_message from ovos_workshop.skills import OVOSSkill class MySkill(OVOSSkill): def initialize(self): self.add_event(\"mycroft.binary.data\", self.receive_binary) def send_data(self, binary_data): send_binary_data_message(binary_data) def receive_binary(self, message): print(\"Receiving binary data\") binary_data = decode_binary_message(message) # TODO process data somehow","title":"How do I send files over the bus?"},{"location":"unused/dev/#how-do-i-manage-remote-skill-settings","text":"To interact with skill settings via DeviceApi from ovos_backend_client.settings import RemoteSkillSettings # in ovos-core skill_id is deterministic and safe s = RemoteSkillSettings(\"skill.author\") # in mycroft-core please ensure a valid remote_id # in MycroftSkill class you can use # remote_id = self.settings_meta.skill_gid # s = RemoteSkillSettings(\"skill.author\", remote_id=\"@|whatever_msm_decided\") s.download() s.settings[\"existing_value\"] = True s.settings[\"new_value\"] = \"will NOT show up in UI\" s.upload() # auto generate new settings meta for all new values before uploading s.settings[\"new_value\"] = \"will show up in UI\" s.generate_meta() # now \"new_value\" is in meta s.upload()","title":"How do I manage remote skill settings?"},{"location":"unused/dev/#how-do-i-share-data-between-devices","text":"by hijacking skill settings we allow storing arbitrary data via DeviceApi and use it across devices and skills from ovos_backend_client.cloud import SeleneCloud cloud = SeleneCloud() cloud.add_entry(\"test\", {\"secret\": \"NOT ENCRYPTED MAN\"}) data = cloud.get_entry(\"test\") an encrypted version is also supported if you don't trust the backend! 
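# Editor's note (assumption, not from the original docs): SecretSeleneCloud encrypts entries on the client before upload, so the backend only ever stores ciphertext; without the key used below the data cannot be read back.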
from ovos_backend_client.cloud import SecretSeleneCloud k = \"D8fmXEP5VqzVw2HE\" # you need this to read back the data cloud = SecretSeleneCloud(k) cloud.add_entry(\"test\", {\"secret\": \"secret data, selene cant read this\"}) data = cloud.get_entry(\"test\")","title":"How do I share data between devices?"},{"location":"unused/dev/#how-do-i-use-geolocation-backend-services","text":"from ovos_backend_client.api import GeolocationApi geo = GeolocationApi() data = geo.get_geolocation(\"Lisbon Portugal\")","title":"How do I use Geolocation backend services?"},{"location":"unused/dev_ocp_skill/","text":"Developing OCP Skills OVOS Common Play (OCP) is a full-fledged media player, compatible with the MPRIS standard . Developing a skill for OCP is similar to writing any other OVOS-compatible skill except basic intents and playing media are handled for the developer. This documentation is a quick start guide for developers hoping to write an OCP skill. General Steps Create a skill class extending the OCP base class In the __init__ method indicate the media types you want to handle self.voc_match(phrase, \"skill_name\") to handle specific requests for your skill self.remove_voc(phrase, \"skill_name\") to remove matched phrases from the search request Implement the ocp_search decorator, as many as you want (they run in parallel) The decorated method can return a list or be an iterator of result_dict (track or playlist) The search function can be entirely inline or call another Python library, like pandorinha or plexapi self.extend_timeout() to not let OCP call for a Generic search too soon Place one in each search function so it's extended every time the skill is called Implement a confidence score formula Values are between 0 and 100 High confidence scores cancel other OCP skill searches ocp_featured_media , return a playlist for the OCP menu if selected from GUI Create a requirements.txt file with third-party package requirements Create a skills.json file for skill metadata The general interface that OCP expects to receive looks something like the following: class OVOSAudioTrack(TypedDict): uri: str # URL/URI of media, OCP will handle formatting and file handling title: str media_type: ovos_plugin_common_play.MediaType playback: ovos_plugin_common_play.PlaybackType match_confidence: int # 0-100 album: str | None # Parsed even for movies and TV shows artist: str | None # Parsed even for movies and TV shows length: int | str | None # in milliseconds, if present image: str | None bg_image: str | None skill_icon: str | None # Optional filename for skill icon skill_id: str | None # Optional ID of skill to distinguish where results came from OCP Skill Template from os.path import join, dirname from ovos_plugin_common_play.ocp import MediaType, PlaybackType from ovos_utils.parse import fuzzy_match from ovos_workshop.skills.common_play import OVOSCommonPlaybackSkill, \\ ocp_search class MySkill(OVOSCommonPlaybackSkill): def __init__(...): super(....) 
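# Editor's note (assumption, not from the original docs): supported_media below tells OCP which query types to route to this skill's search handlers; skill_icon is assumed to be used only for display next to results in the GUI.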
self.supported_media = [MediaType.GENERIC, MediaType.MUSIC] # <- these are the only media_types that will be sent to your skill self.skill_icon = join(dirname(__file__), \"ui\", \"pandora.jpeg\") # score @staticmethod def calc_score(phrase, match, base_score=0, exact=False): # implement your own logic here, assign a value from 0 - 100 per result if exact: # this requires that the result is related if phrase.lower() in match[\"title\"].lower(): match[\"match_confidence\"] = max(match[\"match_confidence\"], 80) elif phrase.lower() in match[\"artist\"].lower(): match[\"match_confidence\"] = max(match[\"match_confidence\"], 85) elif phrase.lower() == match[\"station\"].lower(): match[\"match_confidence\"] = max(match[\"match_confidence\"], 70) else: return 0 title_score = 100 * fuzzy_match(phrase.lower(), match[\"title\"].lower()) artist_score = 100 * fuzzy_match(phrase.lower(), match[\"artist\"].lower()) score = base_score # start from the media_type bonus/penalty if artist_score > 85: score += artist_score * 0.85 + title_score * 0.15 elif artist_score > 70: score += artist_score * 0.6 + title_score * 0.4 elif artist_score > 50: score += title_score * 0.5 + artist_score * 0.5 else: score += title_score * 0.8 + artist_score * 0.2 score = min((100, score)) return score @ocp_search() def search_my_skill(self, phrase, media_type=MediaType.GENERIC): # match the request media_type base_score = 0 if media_type == MediaType.MUSIC: base_score += 10 else: base_score -= 15 # some penalty for proof of concept explicit_request = False if self.voc_match(phrase, \"mySkillNameVoc\"): # explicitly requested our skill base_score += 50 phrase = self.remove_voc(phrase, \"mySkillNameVoc\") # clean up search str explicit_request = True self.extend_timeout(1) # we know our skill is slow, ask OCP for more time for r in self.search_my_results(phrase): yield { \"match_confidence\": self.calc_score(phrase, r, base_score, exact=not explicit_request), \"media_type\": MediaType.MUSIC, \"length\": r[\"duration\"] * 1000, # seconds to milliseconds \"uri\": r[\"uri\"], \"playback\": PlaybackType.AUDIO, \"image\": r[\"image\"], \"bg_image\": r[\"bg_image\"], \"skill_icon\": self.skill_icon, \"title\": r[\"title\"], \"artist\": r[\"artist\"], \"album\": r[\"album\"], \"skill_id\": self.skill_id } skill.json template { \"title\": \"Plex OCP Skill\", \"url\": \"https://github.com/d-mcknight/skill-plex\", \"summary\": \"[OCP](https://github.com/OpenVoiceOS/ovos-ocp-audio-plugin) skill to play media from [Plex](https://plex.tv).\", \"short_description\": \"[OCP](https://github.com/OpenVoiceOS/ovos-ocp-audio-plugin) skill to play media from [Plex](https://plex.tv).\", \"description\": \"\", \"examples\": [ \"Play Charles Mingus\", \"Play Jamie Cullum on Plex\", \"Play the movie Ghostbusters\", \"Play the movie Ghostbusters on Plex\", \"Play Star Trek the Next Generation on Plex\", \"Play the tv show Star Trek the Next Generation on Plex\" ], \"desktopFile\": false, \"warning\": \"\", \"systemDeps\": false, \"requirements\": { \"python\": [\"plexapi~=4.13\", \"ovos-workshop~=0.0.11\"], \"system\": {}, \"skill\": [] }, \"incompatible_skills\": [], \"platforms\": [\"i386\", \"x86_64\", \"ia64\", \"arm64\", \"arm\"], \"branch\": \"master\", \"license\": \"BSD-3-Clause\", \"icon\": \"https://freemusicarchive.org/legacy/fma-smaller.jpg\", \"category\": \"Music\", \"categories\": [\"Music\", \"Daily\"], \"tags\": [\"music\", \"NeonAI\", \"NeonGecko Original\", \"OCP\", \"Common Play\"], \"credits\": [\"NeonGeckoCom\", \"NeonDaniel\"], \"skillname\": \"skill-plex\", \"authorname\": \"d-mcknight\", 
\"foldername\": null } Installing an OCP Skill OCP Skills are installed like any other OVOS skill . The preferred pattern is to release a pip package for your OCP skill and install it directly, but skills may also be installed directly from any pip-supported source such as git+https://github.com/OpenVoiceOS/skill-ovos-youtube-music . Once a skill has been installed a restart of the mycroft-skills , ovos-skills , or neon-skills service will be required. Need Help? Say hi in OpenVoiceOS Chat and a team member would be happy to assist you.","title":"Developing OCP Skills"},{"location":"unused/dev_ocp_skill/#developing-ocp-skills","text":"OVOS Common Play (OCP) is a full-fledged media player, compatible with the MPRIS standard . Developing a skill for OCP is similar to writing any other OVOS-compatible skill except basic intents and playing media are handled for the developer. This documentation is a quick start guide for developers hoping to write an OCP skill.","title":"Developing OCP Skills"},{"location":"unused/dev_ocp_skill/#general-steps","text":"Create a skill class extending the OCP base class In the __init__ method indicate the media types you want to handle self.voc_match(phrase, \"skill_name\") to handle specific requests for your skill self.remove_voc(phrase, \"skill_name\") to remove matched phrases from the search request Implement the ocp_search decorator, as many as you want (they run in parallel) The decorated method can return a list or be an iterator of result_dict (track or playlist) The search function can be entirely inline or call another Python library, like pandorinha or plexapi self.extend_timeout() to not let OCP call for a Generic search too soon Place one in each search function so it's extended every time the skill is called Implement a confidence score formula Values are between 0 and 100 High confidence scores cancel other OCP skill searches ocp_featured_media , return a playlist for the OCP menu if selected from GUI Create a requirements.txt file with third-party package requirements Create a skills.json file for skill metadata The general interface that OCP expects to receive looks something like the following: class OVOSAudioTrack(TypedDict): uri: str # URL/URI of media, OCP will handle formatting and file handling title: str media_type: ovos_plugin_common_play.MediaType playback: ovos_plugin_common_play.PlaybackType match_confidence: int # 0-100 album: str | None # Parsed even for movies and TV shows artist: str | None # Parsed even for movies and TV shows length: int | str | None # in milliseconds, if present image: str | None bg_image: str | None skill_icon: str | None # Optional filename for skill icon skill_id: str | None # Optional ID of skill to distinguish where results came from","title":"General Steps"},{"location":"unused/dev_ocp_skill/#ocp-skill-template","text":"from os.path import join, dirname from ovos_plugin_common_play.ocp import MediaType, PlaybackType from ovos_utils.parse import fuzzy_match from ovos_workshop.skills.common_play import OVOSCommonPlaybackSkill, \\ ocp_search class MySkill(OVOSCommonPlaybackSkill): def __init__(...): super(....) 
self.supported_media = [MediaType.GENERIC, MediaType.MUSIC] # <- these are the only media_types that will be sent to your skill self.skill_icon = join(dirname(__file__), \"ui\", \"pandora.jpeg\") # score @staticmethod def calc_score(phrase, match, base_score=0, exact=False): # implement your own logic here, assign a value from 0 - 100 per result if exact: # this requires that the result is related if phrase.lower() in match[\"title\"].lower(): match[\"match_confidence\"] = max(match[\"match_confidence\"], 80) elif phrase.lower() in match[\"artist\"].lower(): match[\"match_confidence\"] = max(match[\"match_confidence\"], 85) elif phrase.lower() == match[\"station\"].lower(): match[\"match_confidence\"] = max(match[\"match_confidence\"], 70) else: return 0 title_score = 100 * fuzzy_match(phrase.lower(), match[\"title\"].lower()) artist_score = 100 * fuzzy_match(phrase.lower(), match[\"artist\"].lower()) score = base_score # start from the media_type bonus/penalty if artist_score > 85: score += artist_score * 0.85 + title_score * 0.15 elif artist_score > 70: score += artist_score * 0.6 + title_score * 0.4 elif artist_score > 50: score += title_score * 0.5 + artist_score * 0.5 else: score += title_score * 0.8 + artist_score * 0.2 score = min((100, score)) return score @ocp_search() def search_my_skill(self, phrase, media_type=MediaType.GENERIC): # match the request media_type base_score = 0 if media_type == MediaType.MUSIC: base_score += 10 else: base_score -= 15 # some penalty for proof of concept explicit_request = False if self.voc_match(phrase, \"mySkillNameVoc\"): # explicitly requested our skill base_score += 50 phrase = self.remove_voc(phrase, \"mySkillNameVoc\") # clean up search str explicit_request = True self.extend_timeout(1) # we know our skill is slow, ask OCP for more time for r in self.search_my_results(phrase): yield { \"match_confidence\": self.calc_score(phrase, r, base_score, exact=not explicit_request), \"media_type\": MediaType.MUSIC, \"length\": r[\"duration\"] * 1000, # seconds to milliseconds \"uri\": r[\"uri\"], \"playback\": PlaybackType.AUDIO, \"image\": r[\"image\"], \"bg_image\": r[\"bg_image\"], \"skill_icon\": self.skill_icon, \"title\": r[\"title\"], \"artist\": r[\"artist\"], \"album\": r[\"album\"], \"skill_id\": self.skill_id }","title":"OCP Skill Template"},{"location":"unused/dev_ocp_skill/#skilljson-template","text":"{ \"title\": \"Plex OCP Skill\", \"url\": \"https://github.com/d-mcknight/skill-plex\", \"summary\": \"[OCP](https://github.com/OpenVoiceOS/ovos-ocp-audio-plugin) skill to play media from [Plex](https://plex.tv).\", \"short_description\": \"[OCP](https://github.com/OpenVoiceOS/ovos-ocp-audio-plugin) skill to play media from [Plex](https://plex.tv).\", \"description\": \"\", \"examples\": [ \"Play Charles Mingus\", \"Play Jamie Cullum on Plex\", \"Play the movie Ghostbusters\", \"Play the movie Ghostbusters on Plex\", \"Play Star Trek the Next Generation on Plex\", \"Play the tv show Star Trek the Next Generation on Plex\" ], \"desktopFile\": false, \"warning\": \"\", \"systemDeps\": false, \"requirements\": { \"python\": [\"plexapi~=4.13\", \"ovos-workshop~=0.0.11\"], \"system\": {}, \"skill\": [] }, \"incompatible_skills\": [], \"platforms\": [\"i386\", \"x86_64\", \"ia64\", \"arm64\", \"arm\"], \"branch\": \"master\", \"license\": \"BSD-3-Clause\", \"icon\": \"https://freemusicarchive.org/legacy/fma-smaller.jpg\", \"category\": \"Music\", \"categories\": [\"Music\", \"Daily\"], \"tags\": [\"music\", \"NeonAI\", \"NeonGecko Original\", \"OCP\", \"Common Play\"], \"credits\": [\"NeonGeckoCom\", 
\"NeonDaniel\"], \"skillname\": \"skill-plex\", \"authorname\": \"d-mcknight\", \"foldername\": null }","title":"skill.json template"},{"location":"unused/dev_ocp_skill/#installing-an-ocp-skill","text":"OCP Skills are installed like any other OVOS skill . The preferred pattern is to release a pip package for your OCP skill and install it directly, but skills may also be installed directly from any pip-supported source such as git+https://github.com/OpenVoiceOS/skill-ovos-youtube-music . Once a skill has been installed a restart of the mycroft-skills , ovos-skills , or neon-skills service will be required.","title":"Installing an OCP Skill"},{"location":"unused/dev_ocp_skill/#need-help","text":"Say hi in OpenVoiceOS Chat and a team member would be happy to assist you.","title":"Need Help?"},{"location":"unused/dinkum/","text":"Dinkum What is Dinkum Mycroft Mark2 shipped with a new version of mycroft called \"dinkum\", this is a total overhaul of mycroft-core and incompatible mycroft-core is now referred to as \"Classic Core\" by MycroftAI MycroftAI now provides what they call sandbox images, to add to the confusion those only work in the mark 2 and \"Classic Core\" means the mark-ii/latest branch of mycroft-core, this is a derivative version of the branch that was used in the dev kits ( mark-ii/qa ) and is also backwards incompatible, changes in this branch were not done via PRs and had no review or community input Mark2 useful links: mark 2 docs dinkum source code sandbox images mark2/latest (core for sandbox images) mark2/qa (core for dev kit image) Dinkum vs ovos-core you can find mycroft's guide to porting skills to dinkum here https://mycroft-ai.gitbook.io/mark-ii/differences-to-classic-core/porting-classic-core-skills mark2/qa brought some changes to mycroft-core, not all of them backwards compatible and some that were contentious within the community. VAD - VAD has been added to the mark-ii, but it is hardcoded to silero, this feature has been adopted via OPM, it is an important part of ovos-core listening modes introduced in version 0.0.5 self.resources - resource file loading was overhauled, this feature has been improved ( ovos-core/pull/130 + ovos-core/pull/131 + ovos-core/pull/135 + ovos-core/pull/170 ) and ported to OVOS and is also available in OVOSkill class ( OVOS-workshop/pull/30 ) for usage in classic core audio hal - audio playback was rewritten from scratch, audio plugin support has been removed, OVOS will not adopt this new approach but keep improving the previous one skill states - converse method introduced skill states, this changed some core assumptions behind converse method and active skills, OVOS will not adopt skill states, see community discussion here mycroft-core/pull/2901 + mycroft-core/pull/2906 pure regex intents - pure regex intents have been introduced, we strongly recommend you use padatious instead if this is desired, regex makes language support really hard, let the intent engines do their jobs adapt fork - a fork of adapt is used in the mark2, it introduces the exactly and excludes methods. excludes will be added upstream in adapt/pull/156 . 
Any skill using these new methods will be incompatible with most core versions activities - an activity is just a set of bus messages to indicate something started and ended, it is a reimplementation of an already existing feature, in ovos we use the native events from the self.add_event skill method dinkum contains all changes above and also brought further changes to the table sessions - in dinkum session handling is done by skills, it completely ignores the message.context mechanism and existing session_id, in ovos we believe session should come in the message and handled by the clients (eg, a chat user or a hivemind client....), in ovos we are expanding the original session concept ovos-core/pull/160 dbus-hal - a dbus service specific to the mk2 has been introduced, in ovos we have a generic PHAL service and companion plugins to interface with mk2 hardware instead, this component is mark2 specific and should be ignored in the ovos ecosystem Any skills using these new \"features\" will not work outside the mark2 FAQ Do OVOS skills run in dinkum? No, not even classic core skills run in dinkum. We have no plans to support this Do Dinkum skills run in ovos? No, dinkum is designed in a very incompatible way, the mycroft module is not always mycroft-core and the MycroftSkill class is not always a MycroftSkill, we have no intention of transparently loading dinkum skills in ovos-core We have a small proof of concept tool to convert a dinkum skill into an ovos/classic core compatible skill, see https://github.com/OpenVoiceOS/undinkumfier Does OCP work in dinkum? No, Audio plugin support has been removed, you can run OCP standalone but will be missing the compatibility layers and can't load OCP skills anyway It could be made to work but this is not in the roadmap, PRs will be accepted and reviewed Does PHAL work in dinkum? It should! We don't explicitly target or test it with dinkum, but it is a fairly standalone component Does OPM work in dinkum? STT , TTS and WW plugins should work, We don't explicitly target or test compatibility, PRs will be accepted and reviewed","title":"Dinkum"},{"location":"unused/dinkum/#dinkum","text":"","title":"Dinkum"},{"location":"unused/dinkum/#what-is-dinkum","text":"Mycroft Mark2 shipped with a new version of mycroft called \"dinkum\", this is a total overhaul of mycroft-core and incompatible mycroft-core is now referred to as \"Classic Core\" by MycroftAI MycroftAI now provides what they call sandbox images, to add to the confusion those only work in the mark 2 and \"Classic Core\" means the mark-ii/latest branch of mycroft-core, this is a derivative version of the branch that was used in the dev kits ( mark-ii/qa ) and is also backwards incompatible, changes in this branch were not done via PRs and had no review or community input Mark2 useful links: mark 2 docs dinkum source code sandbox images mark2/latest (core for sandbox images) mark2/qa (core for dev kit image)","title":"What is Dinkum"},{"location":"unused/dinkum/#dinkum-vs-ovos-core","text":"you can find mycroft's guide to porting skills to dinkum here https://mycroft-ai.gitbook.io/mark-ii/differences-to-classic-core/porting-classic-core-skills mark2/qa brought some changes to mycroft-core, not all of them backwards compatible and some that were contentious within the community. 
VAD - VAD has been added to the mark-ii, but it is hardcoded to silero, this feature has been adopted via OPM, it is an important part of ovos-core listening modes introduced in version 0.0.5 self.resources - resource file loading was overhauled, this feature has been improved ( ovos-core/pull/130 + ovos-core/pull/131 + ovos-core/pull/135 + ovos-core/pull/170 ) and ported to OVOS and is also available in OVOSkill class ( OVOS-workshop/pull/30 ) for usage in classic core audio hal - audio playback was rewritten from scratch, audio plugin support has been removed, OVOS will not adopt this new approach but keep improving the previous one skill states - converse method introduced skill states, this changed some core assumptions behind converse method and active skills, OVOS will not adopt skill states, see community discussion here mycroft-core/pull/2901 + mycroft-core/pull/2906 pure regex intents - pure regex intents have been introduced, we strongly recommend you use padatious instead if this is desired, regex makes language support really hard, let the intent engines do their jobs adapt fork - a fork of adapt is used in the mark2, it introduces the exactly and excludes methods. excludes will be added upstream in adapt/pull/156 . Any skill using these new methods will be incompatible with most core versions activities - an activity is just a set of bus messages to indicate something started and ended, it is a reimplementation of an already existing feature, in ovos we use the native events from the self.add_event skill method dinkum contains all changes above and also brought further changes to the table sessions - in dinkum session handling is done by skills, it completely ignores the message.context mechanism and existing session_id, in ovos we believe session should come in the message and handled by the clients (eg, a chat user or a hivemind client....), in ovos we are expanding the original session concept ovos-core/pull/160 dbus-hal - a dbus service specific to the mk2 has been introduced, in ovos we have a generic PHAL service and companion plugins to interface with mk2 hardware instead, this component is mark2 specific and should be ignored in the ovos ecosystem Any skills using these new \"features\" will not work outside the mark2","title":"Dinkum vs ovos-core"},{"location":"unused/dinkum/#faq","text":"","title":"FAQ"},{"location":"unused/dinkum/#do-ovos-skills-run-in-dinkum","text":"No, not even classic core skills run in dinkum. We have no plans to support this","title":"Do OVOS skills run in dinkum?"},{"location":"unused/dinkum/#do-dinkum-skills-run-in-ovos","text":"No, dinkum is designed in a very incompatible way, the mycroft module is not always mycroft-core and the MycroftSkill class is not always a MycroftSkill, we have no intention of transparently loading dinkum skills in ovos-core We have a small proof of concept tool to convert a dinkum skill into an ovos/classic core compatible skill, see https://github.com/OpenVoiceOS/undinkumfier","title":"Do Dinkum skills run in ovos?"},{"location":"unused/dinkum/#does-ocp-work-in-dinkum","text":"No, Audio plugin support has been removed, you can run OCP standalone but will be missing the compatibility layers and can't load OCP skills anyway It could be made to work but this is not in the roadmap, PRs will be accepted and reviewed","title":"Does OCP work in dinkum?"},{"location":"unused/dinkum/#does-phal-work-in-dinkum","text":"It should! 
We don't explicitly target or test it with dinkum, but it is a fairly standalone component","title":"Does PHAL work in dinkum?"},{"location":"unused/dinkum/#does-opm-work-in-dinkum","text":"STT , TTS and WW plugins should work, We don't explicitly target or test compatibility, PRs will be accepted and reviewed","title":"Does OPM work in dinkum?"},{"location":"unused/first_boot/","text":"Booting your OpenVoiceOS device. Depending on which image you downloaded, you will first see the boot splash which indicates the Operating System is booting. For the buildroot edition, the below boot splash will be shown. If this is the very first time you boot your device, booting might take a bit longer than normal as the system is preparing its local filesystem and extending it over the full size of the sdcard/USB device. Eventually the progress bar will be filled up indicating the Operating System has been fully booted, after which the ovos-shell animated loading screen will be shown. Again, if this is the first time you boot your device this might take a bit longer as the ovos-core configuration is populated and skills are being setup.","title":"Booting your OpenVoiceOS device."},{"location":"unused/first_boot/#booting-your-openvoiceos-device","text":"Depending on which image you downloaded, you will first see the boot splash which indicates the Operating System is booting. For the buildroot edition, the below boot splash will be shown. If this is the very first time you boot your device, booting might take a bit longer than normal as the system is preparing its local filesystem and extending it over the full size of the sdcard/USB device. Eventually the progress bar will be filled up indicating the Operating System has been fully booted, after which the ovos-shell animated loading screen will be shown. 
Again, if this is the first time you boot your device this might take a bit longer as the ovos-core configuration is populated and skills are being setup.","title":"Booting your OpenVoiceOS device."},{"location":"unused/g2p_plugins/","text":"Grapheme to Phoneme Plugins Grapheme to Phoneme is the process of converting text into a set of \"sound units\" called phonemes These plugins are used to auto generate mouth movements / visemes in the TTS stage, they can also be used to help configuring wake words or to facilitate training of TTS systems These plugins can provide phonemes either in ARPA or IPA alphabets, an automatic conversion will happen behind the scenes when needed Mouth movements are generated via a mapping of ARPA to VISEMES, Visemes are predefined mouth positions, timing per phonemes will default to 0.4 seconds if the plugin does not report a duration Mapping based on Jeffers phoneme to viseme map, seen in table 1 , partially based on the \"12 mouth shapes visuals seen here List of G2P plugins Plugin Type neon-g2p-cmudict-plugin ARPA neon-g2p-phoneme-guesser-plugin ARPA neon-g2p-mimic-plugin ARPA neon-g2p-mimic2-plugin ARPA neon-g2p-espeak-plugin IPA neon-g2p-gruut-plugin IPA Standalone Usage All G2P plugins can be used as follows utterance = \"hello world\" word = \"hello\" lang=\"en-us\" plug = G2pPlugin() # convert a word into a list of phonemes phones = plug.get_ipa(word, lang) assert phones == ['h', '\u028c', 'l', 'o\u028a'] phones = plug.get_arpa(word, lang) assert phones == ['HH', 'AH', 'L', 'OW'] # convert a utterance into a list of phonemes phones = plug.utterance2arpa(utterance, lang) assert phones == ['HH', 'AH', 'L', 'OW', '.', 'W', 'ER', 'L', 'D'] phones = plug.utterance2ipa(utterance, lang) assert phones == ['h', '\u028c', 'l', 'o\u028a', '.', 'w', '\u025d', 'l', 'd'] # convert a utterance into a list of viseme, duration pairs visemes = plug.utterance2visemes(utterance, lang) assert visemes == [('0', 0.0775), ('0', 0.155), ('3', 0.2325), ('2', 0.31), ('2', 0.434), ('2', 0.558), ('3', 0.682), ('3', 0.806)] Plugin Template from ovos_plugin_manager.templates.g2p import Grapheme2PhonemePlugin from ovos_utils.lang.visimes import VISIMES # base plugin class class MyARPAG2PPlugin(Grapheme2PhonemePlugin): def __init__(self, config=None): self.config = config or {} def get_arpa(self, word, lang, ignore_oov=False): phones = [] # TODO implement return phones def get_durations(self, utterance, lang=\"en\", default_dur=0.4): words = utterance.split() phones = [self.get_arpa(w, lang) for w in utterance.split()] dur = default_dur # TODO this is plugin specific return [(pho, dur) for pho in phones] def utterance2visemes(self, utterance, lang=\"en\", default_dur=0.4): phonemes = self.get_durations(utterance, lang, default_dur) return [(VISIMES.get(pho[0].lower(), '4'), float(pho[1])) for pho in phonemes] If your plugin uses IPA instead of ARPA simply replace get_arpa with get_ipa from ovos_plugin_manager.templates.g2p import Grapheme2PhonemePlugin from ovos_utils.lang.visimes import VISIMES # base plugin class class MyIPAG2PPlugin(Grapheme2PhonemePlugin): def __init__(self, config=None): self.config = config or {} def get_ipa(self, word, lang, ignore_oov=False): phones = [] # TODO implement return phones def get_durations(self, utterance, lang=\"en\", default_dur=0.4): # auto converted to arpa if ipa is implemented phones = [self.get_arpa(w, lang) for w in utterance.split()] dur = default_dur # TODO this is plugin specific return [(pho, dur) for pho in phones] def 
utterance2visemes(self, utterance, lang=\"en\", default_dur=0.4): phonemes = self.get_durations(utterance, lang, default_dur) return [(VISIMES.get(pho[0].lower(), '4'), float(pho[1])) for pho in phonemes]","title":"Grapheme to Phoneme Plugins"},{"location":"unused/g2p_plugins/#grapheme-to-phoneme-plugins","text":"Grapheme to Phoneme is the process of converting text into a set of \"sound units\" called phonemes These plugins are used to auto generate mouth movements / visemes in the TTS stage, they can also be used to help configuring wake words or to facilitate training of TTS systems These plugins can provide phonemes either in ARPA or IPA alphabets, an automatic conversion will happen behind the scenes when needed Mouth movements are generated via a mapping of ARPA to VISEMES, Visemes are predefined mouth positions, timing per phonemes will default to 0.4 seconds if the plugin does not report a duration Mapping based on Jeffers phoneme to viseme map, seen in table 1 , partially based on the \"12 mouth shapes visuals seen here","title":"Grapheme to Phoneme Plugins"},{"location":"unused/g2p_plugins/#list-of-g2p-plugins","text":"Plugin Type neon-g2p-cmudict-plugin ARPA neon-g2p-phoneme-guesser-plugin ARPA neon-g2p-mimic-plugin ARPA neon-g2p-mimic2-plugin ARPA neon-g2p-espeak-plugin IPA neon-g2p-gruut-plugin IPA","title":"List of G2P plugins"},{"location":"unused/g2p_plugins/#standalone-usage","text":"All G2P plugins can be used as follows utterance = \"hello world\" word = \"hello\" lang=\"en-us\" plug = G2pPlugin() # convert a word into a list of phonemes phones = plug.get_ipa(word, lang) assert phones == ['h', '\u028c', 'l', 'o\u028a'] phones = plug.get_arpa(word, lang) assert phones == ['HH', 'AH', 'L', 'OW'] # convert a utterance into a list of phonemes phones = plug.utterance2arpa(utterance, lang) assert phones == ['HH', 'AH', 'L', 'OW', '.', 'W', 'ER', 'L', 'D'] phones = plug.utterance2ipa(utterance, lang) assert phones == ['h', '\u028c', 'l', 'o\u028a', '.', 'w', '\u025d', 'l', 'd'] # convert a utterance into a list of viseme, duration pairs visemes = plug.utterance2visemes(utterance, lang) assert visemes == [('0', 0.0775), ('0', 0.155), ('3', 0.2325), ('2', 0.31), ('2', 0.434), ('2', 0.558), ('3', 0.682), ('3', 0.806)]","title":"Standalone Usage"},{"location":"unused/g2p_plugins/#plugin-template","text":"from ovos_plugin_manager.templates.g2p import Grapheme2PhonemePlugin from ovos_utils.lang.visimes import VISIMES # base plugin class class MyARPAG2PPlugin(Grapheme2PhonemePlugin): def __init__(self, config=None): self.config = config or {} def get_arpa(self, word, lang, ignore_oov=False): phones = [] # TODO implement return phones def get_durations(self, utterance, lang=\"en\", default_dur=0.4): words = utterance.split() phones = [self.get_arpa(w, lang) for w in utterance.split()] dur = default_dur # TODO this is plugin specific return [(pho, dur) for pho in phones] def utterance2visemes(self, utterance, lang=\"en\", default_dur=0.4): phonemes = self.get_durations(utterance, lang, default_dur) return [(VISIMES.get(pho[0].lower(), '4'), float(pho[1])) for pho in phonemes] If your plugin uses IPA instead of ARPA simply replace get_arpa with get_ipa from ovos_plugin_manager.templates.g2p import Grapheme2PhonemePlugin from ovos_utils.lang.visimes import VISIMES # base plugin class class MyIPAG2PPlugin(Grapheme2PhonemePlugin): def __init__(self, config=None): self.config = config or {} def get_ipa(self, word, lang, ignore_oov=False): phones = [] # TODO implement return phones def 
get_durations(self, utterance, lang=\"en\", default_dur=0.4): # auto converted to arpa if ipa is implemented phones = [self.get_arpa(w, lang) for w in utterance.split()] dur = default_dur # TODO this is plugin specific return [(pho, dur) for pho in phones] def utterance2visemes(self, utterance, lang=\"en\", default_dur=0.4): phonemes = self.get_durations(utterance, lang, default_dur) return [(VISIMES.get(pho[0].lower(), '4'), float(pho[1])) for pho in phonemes]","title":"Plugin Template"},{"location":"unused/gs_first_boot/","text":"Booting your OpenVoiceOS device. Each image has its own first boot process. Buildroot When you first boot the Buildroot image, you will be greeted with an OVOS splash screen as shown below. As this is the first time you have booted your device, it might take a bit longer than normal as the system is preparing its local filesystem and extending it over the full size of the sdcard/USB device. Eventually the progress bar will be filled up indicating the Operating System has been fully booted, after which the ovos-shell animated loading screen will be shown. Again, since this is your first boot, this might take a bit longer as the ovos-core configuration is populated and skills are being setup. Raspbian The Raspbian image is headless, and therefore you will not see these images. You can still monitor the boot process by attaching a screen and following it from the command line. Setting up your Wi-Fi network connection The buildroot image supports setting up the network via two options: on a mobile device, or on the OpenVoiceOS device itself. You can also skip this step to configure it later or never ask it again if you want your device to run fully offline. (Bear in mind you need to configure your device to use local STT and TTS engines, and obviously asking your device things that require internet will not work. This includes the date and time, as a Raspberry Pi does not have a Real Time Clock and therefore does not know this data without being told.) On Mobile Setup This is the default option for the headless images. Choosing this option will temporarily create an open network (hotspot) called \"OVOS\" to which you can connect from your mobile device. The Raspbian image will give a voice prompt to connect to the hotspot and direct you to a webpage that will allow you to connect your device to WiFi. On your mobile device, go into Settings -> Wi-Fi Settings and the \"OVOS\" open network will appear in its list. Connect your device to the OVOS network and a webpage will open to configure the Wi-Fi network on your OpenVoiceOS device. (NOTE: On newer mobile operating systems the captive portal capture has changed and the website will not automatically be opened. If this is the case, you can open a browser manually and go to http://start.OpenVoiceOS.com) The following webpage will be shown. Select your Wi-Fi network from the list, insert your password and press the \"Connect\" button. If everything went fine, you will soon see the green \"connected\" screen (Buildroot only) on your OpenVoiceOS device. The Raspbian image does NOT have a completion prompt. On Device Setup Not available on headless images. Choosing this option allows you to set up your Wi-Fi network straight on your OpenVoiceOS device. If selected, a screen with the available networks will be shown on your OpenVoiceOS device. Select your network from the list and tap / click on it to allow you to insert your password. If you have a touch screen, an on-screen keyboard will appear when you tap the password field. 
If not, use a keyboard. When you have inserted your password, click / tap the connect button; after a short connecting animation, if all went fine, you will see the green \"connected\" screen on your OpenVoiceOS device. (Re)configure your network from the drop-down menu If you have skipped the Wi-Fi network setup before, or you just want to reconfigure your network, swipe down the top menu on the homescreen of your OpenVoiceOS device and click the \"Wi-Fi\" icon. This brings you to the same on-device configuration screen. From here you can select another network, or click the configuration icon on the right of the connected network for details or to remove it from the configured networks. Selecting Your Backend What is a backend? No backend - No calling home This is the suggested method and is the default with the headless images. Only the Buildroot image will have these options; no further action is required for the headless images. Select A Text To Speech (TTS) Engine: A text-to-speech (TTS) system converts normal language text into speech; select an engine from the list. Select A Speech To Text (STT) Engine: A speech-to-text (STT) system converts human speech to text; select an engine from the list. Personal backend - Host your own Personal backend is a reverse-engineered alternative to Selene and requires the backend to be hosted locally. Install and configure your personal backend; information is available at: https://github.com/OpenVoiceOS/ovos-personal-backend The on-device GUI will display a setup page to enter the host address of your hosted backend. Pairing with your personal backend happens automatically once you hit the confirm button with the correct host address.","title":"Booting your OpenVoiceOS device."},{"location":"unused/gs_first_boot/#booting-your-openvoiceos-device","text":"Each image has its own first boot process.","title":"Booting your OpenVoiceOS device."},{"location":"unused/gs_first_boot/#buildroot","text":"When you first boot the Buildroot image, you will be greeted with an OVOS splash screen as shown below. As this is the first time you have booted your device, it might take a bit longer than normal as the system is preparing its local filesystem and extending it over the full size of the sdcard/USB device. Eventually the progress bar will be filled up indicating the Operating System has been fully booted, after which the ovos-shell animated loading screen will be shown. Again, since this is your first boot, this might take a bit longer as the ovos-core configuration is populated and skills are being setup.","title":"Buildroot"},{"location":"unused/gs_first_boot/#raspbian","text":"The Raspbian image is headless, and therefore you will not see these images. You can still monitor the boot process by attaching a screen and following it from the command line.","title":"Raspbian"},{"location":"unused/gs_first_boot/#setting-up-your-wi-fi-network-connection","text":"The buildroot image supports setting up the network via two options: on a mobile device, or on the OpenVoiceOS device itself. You can also skip this step to configure it later or never ask it again if you want your device to run fully offline. (Bear in mind you need to configure your device to use local STT and TTS engines, and obviously asking your device things that require internet will not work. 
This includes the date and time, as a Raspberry Pi does not have a Real Time Clock and therefore does not know this data without being told.)","title":"Setting up your Wi-Fi network connection"},{"location":"unused/gs_first_boot/#on-mobile-setup","text":"This is the default option for the headless images. Choosing this option will temporarily create an open network (hotspot) called \"OVOS\" to which you can connect from your mobile device. The Raspbian image will give a voice prompt to connect to the hotspot and direct you to a webpage that will allow you to connect your device to WiFi. On your mobile device, go into Settings -> Wi-Fi Settings and the \"OVOS\" open network will appear in its list. Connect your device to the OVOS network and a webpage will open to configure the Wi-Fi network on your OpenVoiceOS device. (NOTE: On newer mobile operating systems the captive portal capture has changed and the website will not automatically be opened. If this is the case, you can open a browser manually and go to http://start.OpenVoiceOS.com) The following webpage will be shown. Select your Wi-Fi network from the list, insert your password and press the \"Connect\" button. If everything went fine, you will soon see the green \"connected\" screen (Buildroot only) on your OpenVoiceOS device. The Raspbian image does NOT have a completion prompt.","title":"On Mobile Setup"},{"location":"unused/gs_first_boot/#on-device-setup","text":"Not available on headless images. Choosing this option allows you to set up your Wi-Fi network straight on your OpenVoiceOS device. If selected, a screen with the available networks will be shown on your OpenVoiceOS device. Select your network from the list and tap / click on it to allow you to insert your password. If you have a touch screen, an on-screen keyboard will appear when you tap the password field. If not, use a keyboard. When you have inserted your password, click / tap the connect button; after a short connecting animation, if all went fine, you will see the green \"connected\" screen on your OpenVoiceOS device.","title":"On Device Setup"},{"location":"unused/gs_first_boot/#reconfigure-your-network-from-the-drop-down-menu","text":"If you have skipped the Wi-Fi network setup before, or you just want to reconfigure your network, swipe down the top menu on the homescreen of your OpenVoiceOS device and click the \"Wi-Fi\" icon. This brings you to the same on-device configuration screen. 
From here you can select another network, or click the configuration icon on the right of the connected network for details or to remove it from the configured networks.","title":"(Re)configure your network from the drop-down menu"},{"location":"unused/gs_first_boot/#selecting-your-backend","text":"","title":"Selecting Your Backend"},{"location":"unused/gs_first_boot/#what-is-a-backend","text":"","title":"What is a backend?"},{"location":"unused/gs_first_boot/#no-backend-no-calling-home","text":"This is the suggested method and is the default with the headless images. Only the Buildroot image will have these options; no further action is required for the headless images. Select A Text To Speech (TTS) Engine: A text-to-speech (TTS) system converts normal language text into speech; select an engine from the list. Select A Speech To Text (STT) Engine: A speech-to-text (STT) system converts human speech to text; select an engine from the list.","title":"No backend - No calling home"},{"location":"unused/gs_first_boot/#personal-backend-host-your-own","text":"Personal backend is a reverse-engineered alternative to Selene and requires the backend to be hosted locally. Install and configure your personal backend; information is available at: https://github.com/OpenVoiceOS/ovos-personal-backend The on-device GUI will display a setup page to enter the host address of your hosted backend. Pairing with your personal backend happens automatically once you hit the confirm button with the correct host address.","title":"Personal backend - Host your own"},{"location":"unused/gs_installation_choices/","text":"Welcome To OpenVoiceOS Where to get OVOS? OVOS is in its early stages; we publish our Raspberry Pi images for download, but expect new bugs and new fixes on every release. We are not yet stable! These images are development images in alpha stage; bugs and incomplete features are guaranteed. Installation choices You can install OVOS either as an image, container, or manually. There are currently two image choices for OVOS: Buildroot and Raspbian. You can also build images from scratch for both options. (Details in the works) Docker containers are available for Windows, Mac, and Linux. In most cases images provide for the easiest install if your hardware is supported. 
Check the Quick Start Guide for help getting running quickly. Building your own image can provide for a complete custom build in a package, but is more of an involved process. If you're familiar with Docker, then that option can provide a quick install. Guides on building images are located in our technical docs","title":"Installation choices"},{"location":"unused/gs_installing_image/","text":"Welcome To OpenVoice OS Getting Started User Guide (Images) Which Image should I choose? OVOS provides a couple of different images specifically for the Raspberry Pi. The most advanced and featureful is the Buildroot image. If you want a GUI, this is currently your best choice. Buildroot Latest Image RPi4-64 Raspbian Latest Images headless images Default users BuildRoot: Username: mycroft | password: mycroft Raspbian: Username: ovos | password: ovos Flashing your image Flashing your image to your sdcard or USB drive is no different from flashing any other image. For non-technical users, we advise using the flashing utility from the Raspberry Pi Foundation which you can find here . Under \"CHOOSE OS\" select custom at the very bottom of the list and browse to the downloaded image file. It is not required to unzip / unpack the image, as the Raspberry Pi imager software can do that for you on the fly. Under \"CHOOSE STORAGE\" select your sdcard or USB device. Specific instructions for each image can be found on their respective GitHub pages. If you have a Raspberry Pi 4 we recommend using a good USB3.1 device. If you have a Raspberry Pi 3, use a proper sdcard. (From fast to slow: USB3.1 - sdcard - USB2)","title":"Welcome To OpenVoice OS Getting Started User Guide (Images)"},{"location":"unused/gs_installing_image/#welcome-to-openvoice-os-getting-started-user-guide-images","text":"","title":"Welcome To OpenVoice OS Getting Started User Guide (Images)"},{"location":"unused/gs_installing_image/#which-image-should-i-choose","text":"OVOS provides a couple of different images specifically for the Raspberry Pi. The most advanced and featureful is the Buildroot image. If you want a GUI, this is currently your best choice.","title":"Which Image should I choose?"},{"location":"unused/gs_installing_image/#buildroot-latest-image","text":"RPi4-64","title":"Buildroot Latest Image"},{"location":"unused/gs_installing_image/#raspbian-latest-images","text":"headless images","title":"Raspbian Latest Images"},{"location":"unused/gs_installing_image/#default-users","text":"BuildRoot: Username: mycroft | password: mycroft Raspbian: Username: ovos | password: ovos","title":"Default users"},{"location":"unused/gs_installing_image/#flashing-your-image","text":"Flashing your image to your sdcard or USB drive is no different from flashing any other image. For non-technical users, we advise using the flashing utility from the Raspberry Pi Foundation which you can find here . Under \"CHOOSE OS\" select custom at the very bottom of the list and browse to the downloaded image file. It is not required to unzip / unpack the image, as the Raspberry Pi imager software can do that for you on the fly. Under \"CHOOSE STORAGE\" select your sdcard or USB device. Specific instructions for each image can be found on their respective GitHub pages. If you have a Raspberry Pi 4 we recommend using a good USB3.1 device. If you have a Raspberry Pi 3, use a proper sdcard. (From fast to slow: USB3.1 - sdcard - USB2)","title":"Flashing your image"},{"location":"unused/gs_next_steps/","text":"OpenVoiceOS Quickstart - Next Steps Woo Woo!! 
You Have A Running OVOS Device!! Now what? Prebuilt images come with a default set of skills installed, including, but not limited to, date/time and weather. Give them a shot. Speak these commands and enjoy the spoils: Hey Mycroft, what time is it? Hey Mycroft, what is today's date? Hey Mycroft, what is the weather today? Hey Mycroft, will it rain today? While there are several default skills installed, there are many more available to be used. The link below will show you how to find and install more skills. Installing Skills But wait, there's more!! OVOS ships with a default TTS (Text to Speech) engine which speaks in the original Alan-Pope voice that Mycroft used. There are MANY more to choose from. The following link will help you choose and configure a different voice for your assistant. Configuring the TTS engine WIP Your OVOS assistant uses a \"wake word\" which lets it know it is time to start listening to your commands. By default, the phrase to wake OVOS is Hey Mycroft . This, like most things in OVOS, is totally configurable. Follow the link to learn more. Changing the Wake Word WIP OVOS ships with default services available for the public to use. These include public TTS and STT servers, a weather API provided by link to weather provider , access to Wolfram, and more. As part of being an open and private system, you can also change these to your own preferences. Install your own Services WIP","title":"OpenVoiceOS Quickstart - Next Steps"},{"location":"unused/gs_next_steps/#openvoiceos-quickstart-next-steps","text":"","title":"OpenVoiceOS Quickstart - Next Steps"},{"location":"unused/gs_next_steps/#woo-woo-you-have-a-running-ovos-device-now-what","text":"Prebuilt images come with a default set of skills installed, including, but not limited to, date/time and weather. Give them a shot. Speak these commands and enjoy the spoils: Hey Mycroft, what time is it? Hey Mycroft, what is today's date? Hey Mycroft, what is the weather today? Hey Mycroft, will it rain today? While there are several default skills installed, there are many more available to be used. The link below will show you how to find and install more skills. Installing Skills But wait, there's more!! OVOS ships with a default TTS (Text to Speech) engine which speaks in the original Alan-Pope voice that Mycroft used. There are MANY more to choose from. The following link will help you choose and configure a different voice for your assistant. Configuring the TTS engine WIP Your OVOS assistant uses a \"wake word\" which lets it know it is time to start listening to your commands. By default, the phrase to wake OVOS is Hey Mycroft . This, like most things in OVOS, is totally configurable. Follow the link to learn more. Changing the Wake Word WIP OVOS ships with default services available for the public to use. These include public TTS and STT servers, a weather API provided by link to weather provider , access to Wolfram, and more. As part of being an open and private system, you can also change these to your own preferences. Install your own Services WIP","title":"Woo Woo!! You Have A Running OVOS Device!! Now what?"},{"location":"unused/gui_extensions/","text":"GUI Extensions OVOS Core supports a GUI Extension framework which allows the GUI service to incorporate additional behaviour for a specific platform. GUI Extensions currently supported: Smartspeaker Extension This extension is responsible for managing the smartspeaker GUI interface behaviour, it supports homescreens and homescreen management. 
Enabling the smartspeaker GUI extension: \"gui\": { \"extension\": \"smartspeaker\", \"idle_display_skill\": \"skill-ovos-homescreen.openvoiceos\" } Bigscreen Extension This extension is responsible for managing the plasma bigscreen GUI interface behaviour, it supports window management and window behaviour control on specific window managers like Kwin. Enabling the Bigscreen GUI extension: \"gui\": { \"extension\": \"bigscreen\" } Mobile Extension This extension is responsible for managing the mobile GUI interface behaviour, it supports homescreens and additionally adds support for global page back navigation. Enabling the Mobile GUI extension: \"gui\": { \"extension\": \"mobile\", \"idle_display_skill\": \"skill-android-homescreen.openvoiceos\", } Generic Extension This extension provides a generic GUI interface and does not add any additional behaviour, it optionally supports homescreens if the platform or user manually enables it. This extension is enabled by default when no other extension is specified. \"gui\": { \"idle_display_skill\": \"skill-ovos-homescreen.openvoiceos\", \"extension\": \"generic\", \"generic\": { \"homescreen_supported\": false } }","title":"GUI Extensions"},{"location":"unused/gui_extensions/#gui-extensions","text":"OVOS Core supports a GUI Extension framework which allows the GUI service to incorporate additional behaviour for a specific platform. GUI Extensions currently supported:","title":"GUI Extensions"},{"location":"unused/gui_extensions/#smartspeaker-extension","text":"This extension is responsible for managing the smartspeaker GUI interface behaviour, it supports homescreens and homescreen management. Enabling the smartspeaker GUI extension: \"gui\": { \"extension\": \"smartspeaker\", \"idle_display_skill\": \"skill-ovos-homescreen.openvoiceos\" }","title":"Smartspeaker Extension"},{"location":"unused/gui_extensions/#bigscreen-extension","text":"This extension is responsible for managing the plasma bigscreen GUI interface behaviour, it supports window management and window behaviour control on specific window managers like Kwin. Enabling the Bigscreen GUI extension: \"gui\": { \"extension\": \"bigscreen\" }","title":"Bigscreen Extension"},{"location":"unused/gui_extensions/#mobile-extension","text":"This extension is responsible for managing the mobile GUI interface behaviour, it supports homescreens and additionally adds support for global page back navigation. Enabling the Mobile GUI extension: \"gui\": { \"extension\": \"mobile\", \"idle_display_skill\": \"skill-android-homescreen.openvoiceos\", }","title":"Mobile Extension"},{"location":"unused/gui_extensions/#generic-extension","text":"This extension provides a generic GUI interface and does not add any additional behaviour, it optionally supports homescreens if the platform or user manually enables it. This extension is enabled by default when no other extension is specified. \"gui\": { \"idle_display_skill\": \"skill-ovos-homescreen.openvoiceos\", \"extension\": \"generic\", \"generic\": { \"homescreen_supported\": false } }","title":"Generic Extension"},{"location":"unused/gui_intro/","text":"Introduction OVOS devices with displays provide skill developers the opportunity to create skills that can be empowered by both voice and screen interaction. 
The display interaction technology is based on the QML user interface markup language, which gives you complete freedom to create in-depth, innovative interactions without boundaries, or provides you with simple templates within the Mycroft GUI framework that allow a minimalistic display of text and images based on your skill development specifics and preferences. Framework Mycroft-GUI is an open source visual and display framework for Mycroft, running on top of KDE Plasma Technology and built using Kirigami, a lightweight user interface framework for convergent applications powered by Qt. Introduction to QML The QML user interface markup language is a declarative language, built on top of Qt's existing strengths, designed to describe the user interface of a program: both what it looks like and how it behaves. QML provides modules that consist of a sophisticated set of graphical and behavioral building elements. A collection of resources to familiarize you with QML and the Kirigami Framework. Introduction to QML Introduction to Kirigami","title":"Introduction"},{"location":"unused/gui_intro/#introduction","text":"OVOS devices with displays provide skill developers the opportunity to create skills that can be empowered by both voice and screen interaction. The display interaction technology is based on the QML user interface markup language, which gives you complete freedom to create in-depth, innovative interactions without boundaries, or provides you with simple templates within the Mycroft GUI framework that allow a minimalistic display of text and images based on your skill development specifics and preferences.","title":"Introduction"},{"location":"unused/gui_intro/#framework","text":"Mycroft-GUI is an open source visual and display framework for Mycroft, running on top of KDE Plasma Technology and built using Kirigami, a lightweight user interface framework for convergent applications powered by Qt.","title":"Framework"},{"location":"unused/gui_intro/#introduction-to-qml","text":"The QML user interface markup language is a declarative language, built on top of Qt's existing strengths, designed to describe the user interface of a program: both what it looks like and how it behaves. QML provides modules that consist of a sophisticated set of graphical and behavioral building elements. A collection of resources to familiarize you with QML and the Kirigami Framework.
Introduction to QML Introduction to Kirigami","title":"Introduction to QML"},{"location":"unused/gui_protocol/","text":"Protocol The gui service in ovos-core will expose a websocket to the GUI clients following the protocol outlined here . The transport protocol works between the gui service and the gui clients; mycroft does not directly use the protocol but instead communicates with the gui service via the standard mycroft bus. OVOS images are powered by ovos-shell , the client-side implementation of the gui protocol. The GUI library which implements the protocol lives in the mycroft-gui repository.","title":"Gui protocol"},{"location":"unused/gui_protocol/#protocol","text":"The gui service in ovos-core will expose a websocket to the GUI clients following the protocol outlined here . The transport protocol works between the gui service and the gui clients; mycroft does not directly use the protocol but instead communicates with the gui service via the standard mycroft bus. OVOS images are powered by ovos-shell , the client-side implementation of the gui protocol. The GUI library which implements the protocol lives in the mycroft-gui repository.","title":"Protocol"},{"location":"unused/gui_service/","text":"GUI Service OVOS uses the standard mycroft-gui framework; you can find the official documentation here . The GUI service provides a websocket for gui clients to connect to; it is responsible for implementing the gui protocol under ovos-core. You can find in-depth documentation in the dedicated GUI section of these docs . Configuration The gui service has a few sections in mycroft.conf \"gui\": { \"idle_display_skill\": \"skill-ovos-homescreen.openvoiceos\", \"extension\": \"generic\", \"generic\": { \"homescreen_supported\": false } }, \"gui_websocket\": { \"host\": \"0.0.0.0\", \"base_port\": 18181, \"route\": \"/gui\", \"ssl\": false },","title":"GUI Service"},{"location":"unused/gui_service/#gui-service","text":"OVOS uses the standard mycroft-gui framework; you can find the official documentation here . The GUI service provides a websocket for gui clients to connect to; it is responsible for implementing the gui protocol under ovos-core. You can find in-depth documentation in the dedicated GUI section of these docs .","title":"GUI Service"},{"location":"unused/gui_service/#configuration","text":"The gui service has a few sections in mycroft.conf \"gui\": { \"idle_display_skill\": \"skill-ovos-homescreen.openvoiceos\", \"extension\": \"generic\", \"generic\": { \"homescreen_supported\": false } }, \"gui_websocket\": { \"host\": \"0.0.0.0\", \"base_port\": 18181, \"route\": \"/gui\", \"ssl\": false },","title":"Configuration"},{"location":"unused/guidelines/","text":"Voice User Interface Design Guidelines Through these guidelines you will learn how to use principles of Voice User Interface Design to build more effective skills. These tools will help define and validate the features of the skill before diving deep into development. This guide will cover some methods to use that can help plan, prototype and test your skill during the early design stages. The full original guide can be found over at the mycroft documentation . Interactions Intents Let's start with an example. A user in Melbourne, Australia might want to know about the weather.
To ask for this information, they might say: \"Hey Mycroft, what's today's weather like?\" \"Hey Mycroft, what's the weather like in Melbourne?\" \"Hey Mycroft, weather\" Even though these are three different expressions, for most of us they probably have roughly the same meaning. In each case we would assume the user expects OVOS to respond with today's weather for their current location. It is up to us as Skill creators to teach OVOS the variety of ways that a user might express the same intent. This is a key part of the design process. It is the key difference between a Skill that kind of works if you know what to say, and a Skill that feels intuitive and natural to talk to. This is handled by an intent parser whose job it is to learn from your Skill what intents it can handle, and extract from the user's speech any key information that might be useful for your Skill. In this case it might include the specified date and location. Statements and Prompts You can think of Prompts as questions and Statements as providing information to the user that does not need a follow-up response. For example, a weather forecast like this would be considered a statement: Today\u2019s forecast is sunny with a high of 60 and a low of 45. Statements For a lot of skills the conversation might end with a simple statement from OVOS, and no further action is necessary. Try to imagine what the user is trying to accomplish; if a simple statement gets the job done, there is no reason to keep the conversation rolling, and in fact a follow-up might annoy the user with unnecessary interaction. Prompts It may be tempting to always give users specific instructions like traditional automated phone systems (Interactive Voice Response). Many phone systems are notorious for being too verbose and difficult to follow. With OVOS we\u2019re trying to break that mold and make the interaction natural. If you follow the phone system method you may be giving the user the exact phrasing to say, but you\u2019re also taking up valuable time and training them to think the system is very narrow in capability. In the event that the user does give a response that your skill cannot handle, create follow-up prompts that steer the user back on track. Remember, there are no errors in a cooperative conversation. Avoid Speaker Mycroft How many people are playing? For example, you can say 2 players. Better Speaker Mycroft How many players? User My Brother and Myself Mycroft I\u2019m sorry, what was the number of players? User Two In the first example the user is told explicitly what they can say, but the prompt is unnatural; we don\u2019t typically suggest responses to a conversation partner in real life. These long-winded prompts can become tiresome if they are used repeatedly throughout the skill. Remember the phone system example: typically, poorly designed automated phone systems inundate the user with many options and additional instructions at every step of the interaction. In the second example we see a better prompt. Although the user gives a response that is easy for a human to understand, it is more difficult to design a skill to understand. Instead, the skill designer can apply a re-prompt strategy that steers the user back on track and doesn't require them to start the interaction over. In this case the re-prompt changes the wording slightly to make it clear that the user needs to say a number. The next time the user interacts with this Skill, they will likely say a number in their first interaction.
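To make the re-prompt strategy concrete, here is a minimal sketch of how a skill might implement it, assuming the get_response helper from ovos-workshop (the intent and dialog file names here are illustrative, not from any shipped skill):

```python
from ovos_workshop.decorators import intent_handler
from ovos_workshop.skills import OVOSSkill


class GameSkill(OVOSSkill):
    @intent_handler("start.game.intent")  # illustrative intent file
    def handle_start_game(self, message):
        # Open-ended prompt first ("How many players?"). If the reply fails
        # validation, on_fail re-prompts with clearer wording ("What was the
        # number of players?") instead of abandoning the interaction.
        players = self.get_response(
            "how.many.players",
            validator=lambda utt: any(ch.isdigit() for ch in utt),
            on_fail="what.was.the.number.of.players",
            num_retries=2,
        )
        if players:
            self.speak_dialog("starting.game", {"players": players})
```

A real skill would use a proper number parser on the reply rather than the naive digit check above (spoken numbers often arrive as words); the point is that the follow-up prompt keeps the conversation going rather than dead-ending.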
Determining whether to respond to the user with a statement or a prompt can be a bit tricky. When the user is somewhat vague it is tempting to assume the user\u2019s intent in order to speed along the interaction. Sometimes this is the right approach and sometimes it is not. If you are very confident in the user\u2019s input then it is alright to assume, and respond with a statement. For example, in the Wikipedia Skill the Wikipedia API is used to confidently select the best response. Wikipedia Skill Speaker User Tell me about Abraham Lincoln Mycroft Abraham Lincoln was an American statesman and lawyer who served as the 16th president of the United States from 1861 to 1865. Lincoln led the nation through its greatest moral, constitutional, and political crisis in the American Civil War . In contrast, let\u2019s look at an example of where a follow-up prompt is a better approach. You\u2019re working on a skill that gives important information about birds, such as wingspan, laden and unladen airspeed, etc. Avoid Speaker User what is the airspeed velocity of an unladen swallow? Mycroft The unladen airspeed of the European Swallow is 20.1 miles per hour. Better Speaker User What is the airspeed velocity of an unladen Swallow? Mycroft What do you mean, African or European Swallow? User European Swallow. Mycroft The unladen airspeed of the European Swallow is 20.1 miles per hour. In the first example Mycroft assumes the user is referring to the European Swallow, however there is nothing to give confidence to that assumption. In this case it\u2019s dangerous to assume the User meant the European Swallow since it is just as likely they meant the African Swallow. When there is no confidence in one answer versus another, OVOS should follow up with a prompt as demonstrated in the second example. Another approach to solving this problem is to offer the most likely answer and inform the user that there are other answers available with a statement. Confirmations Confirmation approaches can also be defined by Statements or Prompts , but when we talk about them in the context of confirmations we call them Implicit and Explicit. Implicit Confirmation This type of confirmation is also a statement. The idea is to parrot the information back to the user to confirm that it was correct, but not require additional input from the user. The implicit confirmation can be used in a majority of situations. For example, Mycroft\u2019s Timer Skill uses implicit confirmation when creating new timers. Timer Skill Speaker User Hey Mycroft set a timer for 5 minutes Mycroft Alright, I\u2019m starting a 5 minute timer Avoid Speaker User Hey Mycroft, add bananas to my shopping list Mycroft done Better Speaker User Hey Mycroft, add bananas to my shopping list Mycroft Adding bananas Explicit Confirmation This type of confirmation requires an input from the user to verify everything is correct. Only use explicit confirmation when needed as it can be tedious for the user to always confirm their input. This strategy is best when there are multiple steps, and it would be difficult to fix a misstep without starting over, or when deleting or modifying an existing multistep input. Avoid Speaker User Hey Mycroft, set an alarm for 9:00 am tomorrow Mycroft Ok, I\u2019ve set an alarm for 9:00 am tomorrow, is that correct?
User Yes Mycroft 9:00 am alarm set Better Speaker User Hey Mycroft, set an alarm for 9:00 am tomorrow Mycroft I've set an alarm for tomorrow at 9:00 am, 16 hours from now The first example requires the user to confirm the alarm. Since the interaction is quite short it isn\u2019t necessary to use explicit confirmation. Avoid Speaker User Hey Mycroft, set a calendar event for a meeting with Chris Stevens Mycroft When is the event? User Tomorrow at 11:00 am Mycroft should I send an invite to Chris Stevens? User Yes Mycroft Alright, I\u2019ve created a meeting for Tomorrow at 11:00 am with Chris Stevens User Cancel my 11:00 am meeting tomorrow Mycroft Done Better Speaker User Hey Mycroft, set a calendar event for a meeting with Chris Stevens Mycroft When is the event? User Tomorrow at 11:00 am Mycroft should I send an invite to Chris Stevens? User Yes Mycroft Alright, I\u2019ve created a meeting for Tomorrow at 11:00 am with Chris Stevens User Cancel my 11:00 am meeting tomorrow Mycroft Are you sure you want to delete your meeting tomorrow at 11:00 am? User Yes Both examples use explicit and implicit confirmation correctly until the end of the interaction when the user wants to cancel the event. Unlike setting a simple timer or an alarm, creating this calendar event requires multiple steps and would require reasonable effort to recreate. There may also be significant consequences if the request was misinterpreted and the meeting was cancelled unintentionally. Conversations Any time the user needs to input a lot of information or the user needs to sort through a variety of options, a conversation will be needed. Users may be used to systems that require them to separate input into different chunks. For example, many personal navigation devices (GPS units) require users to input the street number, street name, city, and state separately, like this: Speaker Navigation Device What is the street number User 123 Navigation Device What is the street name User Main Street Navigation Device What is the city User Springfield Navigation Device What is the state User Oregon This was a limitation of the technology; neither the STT engine nor the intent parser was capable of handling an entire address. With OVOS you do not need to separate the address into multiple inputs. It\u2019s also easy to think like a traditional GUI and translate what are typically separate fields into prompts for the user, but that will only slow down the interaction. The best solution is to allow the user to input information both ways, all at once, or in pieces. For example, the Alarm skill allows the user to set a complicated alarm all in one go, or in steps. In Steps: Speaker User Hey Mycroft, set a recurring alarm Mycroft For what day? User weekdays Mycroft For what time? User 7 am Mycroft I've set a recurring Alarm for 7 am on weekdays All at once: Speaker User Hey Mycroft, set a recurring alarm for weekdays at 7 am Mycroft I've set a recurring Alarm for 7 am on weekdays Context Conversational context is a given when speaking to another person, but not always when interacting with a computer based system. Many skills are designed with only a quick back and forth interaction in mind, and can operate well without conversational context, but conversational context can be a powerful addition to a skill and can make the interaction much more lifelike. Here are some examples of how to use context in conversation.
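First, a code-level sketch before the dialogue examples: skills feed conversational context to the intent parser explicitly. This assumes the adapt IntentBuilder and the set_context helper from the skill API; the vocab and dialog names are illustrative only:

```python
from adapt.intent import IntentBuilder
from ovos_workshop.decorators import intent_handler
from ovos_workshop.skills import OVOSSkill


class WhoIsSkill(OVOSSkill):
    @intent_handler(IntentBuilder("WhoIsIntent").require("WhoIs").require("Person"))
    def handle_who_is(self, message):
        person = message.data["Person"]
        self.speak_dialog("person.summary", {"person": person})
        # Remember who we were talking about, so a follow-up such as
        # "how tall is he?" can resolve the pronoun to the same person.
        self.set_context("Person", person)

    @intent_handler(IntentBuilder("HowTallIntent").require("HowTall").require("Person"))
    def handle_how_tall(self, message):
        # "Person" may come from the new utterance or from context.
        self.speak_dialog("person.height", {"person": message.data["Person"]})
```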
Speaker User Who is Shaquille O'Neal Mycroft Shaquille O'Neal is an American former professional basketball player who is a sports analyst on the television program Inside the NBA on TNT. He is considered one of the greatest players in National Basketball Association history User How tall is he? Mycroft seven foot 1 inches tall User when did he retire Mycroft 2011 Error Handling Inevitably, the user will say something that your skill can\u2019t handle. It\u2019s best not to think of these as errors on the part of the user; remember, there aren\u2019t really errors in conversations. Conversations are naturally cooperative, with both parties seeking to continue to a successful outcome. Keeping that in mind, you should do your best to avoid Error Handling that is a dead end. Avoid Speaker User Timer Mycroft I'm sorry I can't help you with that Better Speaker User Timer Mycroft A timer for how long? User 5 minutes In the first example OVOS does not give the user any options to finish the job they set out to do. Try to avoid situations where the user has to start the conversation over.","title":"Voice User Interface Design Guidelines"},{"location":"unused/guidelines/#voice-user-interface-design-guidelines","text":"Through these guidelines you will learn how to use principles of Voice User Interface Design to build more effective skills. These tools will help define and validate the features of the skill before diving deep into development. This guide will cover some methods to use that can help plan, prototype and test your skill during the early design stages. The full original guide can be found over at the mycroft documentation .","title":"Voice User Interface Design Guidelines"},{"location":"unused/guidelines/#interactions","text":"","title":"Interactions"},{"location":"unused/guidelines/#intents","text":"Let's start with an example. A user in Melbourne, Australia might want to know about the weather. To ask for this information, they might say: \"Hey Mycroft, what's today's weather like?\" \"Hey Mycroft, what's the weather like in Melbourne?\" \"Hey Mycroft, weather\" Even though these are three different expressions, for most of us they probably have roughly the same meaning. In each case we would assume the user expects OVOS to respond with today's weather for their current location. It is up to us as Skill creators to teach OVOS the variety of ways that a user might express the same intent. This is a key part of the design process. It is the key difference between a Skill that kind of works if you know what to say, and a Skill that feels intuitive and natural to talk to. This is handled by an intent parser whose job it is to learn from your Skill what intents it can handle, and extract from the user's speech any key information that might be useful for your Skill. In this case it might include the specified date and location.","title":"Intents"},{"location":"unused/guidelines/#statements-and-prompts","text":"You can think of Prompts as questions and Statements as providing information to the user that does not need a follow-up response. For example, a weather forecast like this would be considered a statement: Today\u2019s forecast is sunny with a high of 60 and a low of 45.","title":"Statements and Prompts"},{"location":"unused/guidelines/#statements","text":"For a lot of skills the conversation might end with a simple statement from OVOS, and no further action is necessary.
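In skill code, a statement is typically just a single speak_dialog call that ends the interaction; a sketch follows (the intent and dialog names and the data values are illustrative):

```python
from ovos_workshop.decorators import intent_handler
from ovos_workshop.skills import OVOSSkill


class WeatherStatementSkill(OVOSSkill):
    @intent_handler("current.weather.intent")  # illustrative intent file
    def handle_current_weather(self, message):
        # A statement: speak once and stop; no follow-up prompt is issued.
        self.speak_dialog("current.weather", {"high": 60, "low": 45})
```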
Try to imagine what the user is trying to accomplish; if a simple statement gets the job done, there is no reason to keep the conversation rolling, and in fact a follow-up might annoy the user with unnecessary interaction.","title":"Statements"},{"location":"unused/guidelines/#prompts","text":"It may be tempting to always give users specific instructions like traditional automated phone systems (Interactive Voice Response). Many phone systems are notorious for being too verbose and difficult to follow. With OVOS we\u2019re trying to break that mold and make the interaction natural. If you follow the phone system method you may be giving the user the exact phrasing to say, but you\u2019re also taking up valuable time and training them to think the system is very narrow in capability. In the event that the user does give a response that your skill cannot handle, create follow-up prompts that steer the user back on track. Remember, there are no errors in a cooperative conversation. Avoid Speaker Mycroft How many people are playing? For example, you can say 2 players. Better Speaker Mycroft How many players? User My Brother and Myself Mycroft I\u2019m sorry, what was the number of players? User Two In the first example the user is told explicitly what they can say, but the prompt is unnatural; we don\u2019t typically suggest responses to a conversation partner in real life. These long-winded prompts can become tiresome if they are used repeatedly throughout the skill. Remember the phone system example: typically, poorly designed automated phone systems inundate the user with many options and additional instructions at every step of the interaction. In the second example we see a better prompt. Although the user gives a response that is easy for a human to understand, it is more difficult to design a skill to understand. Instead, the skill designer can apply a re-prompt strategy that steers the user back on track and doesn't require them to start the interaction over. In this case the re-prompt changes the wording slightly to make it clear that the user needs to say a number. The next time the user interacts with this Skill, they will likely say a number in their first interaction. Determining whether to respond to the user with a statement or a prompt can be a bit tricky. When the user is somewhat vague it is tempting to assume the user\u2019s intent in order to speed along the interaction. Sometimes this is the right approach and sometimes it is not. If you are very confident in the user\u2019s input then it is alright to assume, and respond with a statement. For example, in the Wikipedia Skill the Wikipedia API is used to confidently select the best response. Wikipedia Skill Speaker User Tell me about Abraham Lincoln Mycroft Abraham Lincoln was an American statesman and lawyer who served as the 16th president of the United States from 1861 to 1865. Lincoln led the nation through its greatest moral, constitutional, and political crisis in the American Civil War . In contrast, let\u2019s look at an example of where a follow-up prompt is a better approach. You\u2019re working on a skill that gives important information about birds, such as wingspan, laden and unladen airspeed, etc. Avoid Speaker User what is the airspeed velocity of an unladen swallow? Mycroft The unladen airspeed of the European Swallow is 20.1 miles per hour. Better Speaker User What is the airspeed velocity of an unladen Swallow? Mycroft What do you mean, African or European Swallow? User European Swallow.
Mycroft The unladen airspeed of the European Swallow is 20.1 miles per hour. In the first example Mycroft assumes the user is referring to the European Swallow, however there is nothing to give confidence to that assumption. In this case it\u2019s dangerous to assume the User meant the European Swallow since it is just as likely they meant the African Swallow. When there is no confidence in one answer versus another, OVOS should follow up with a prompt as demonstrated in the second example. Another approach to solving this problem is to offer the most likely answer and inform the user that there are other answers available with a statement.","title":"Prompts"},{"location":"unused/guidelines/#confirmations","text":"Confirmation approaches can also be defined by Statements or Prompts , but when we talk about them in the context of confirmations we call them Implicit and Explicit.","title":"Confirmations"},{"location":"unused/guidelines/#implicit-confirmation","text":"This type of confirmation is also a statement. The idea is to parrot the information back to the user to confirm that it was correct, but not require additional input from the user. The implicit confirmation can be used in a majority of situations. For example, Mycroft\u2019s Timer Skill uses implicit confirmation when creating new timers. Timer Skill Speaker User Hey Mycroft set a timer for 5 minutes Mycroft Alright, I\u2019m starting a 5 minute timer Avoid Speaker User Hey Mycroft, add bananas to my shopping list Mycroft done Better Speaker User Hey Mycroft, add bananas to my shopping list Mycroft Adding bananas","title":"Implicit Confirmation"},{"location":"unused/guidelines/#explicit-confirmation","text":"This type of confirmation requires an input from the user to verify everything is correct. Only use explicit confirmation when needed as it can be tedious for the user to always confirm their input. This strategy is best when there are multiple steps, and it would be difficult to fix a misstep without starting over, or when deleting or modifying an existing multistep input. Avoid Speaker User Hey Mycroft, set an alarm for 9:00 am tomorrow Mycroft Ok, I\u2019ve set an alarm for 9:00 am tomorrow, is that correct? User Yes Mycroft 9:00 am alarm set Better Speaker User Hey Mycroft, set an alarm for 9:00 am tomorrow Mycroft I've set an alarm for tomorrow at 9:00 am, 16 hours from now The first example requires the user to confirm the alarm. Since the interaction is quite short it isn\u2019t necessary to use explicit confirmation. Avoid Speaker User Hey Mycroft, set a calendar event for a meeting with Chris Stevens Mycroft When is the event? User Tomorrow at 11:00 am Mycroft should I send an invite to Chris Stevens? User Yes Mycroft Alright, I\u2019ve created a meeting for Tomorrow at 11:00 am with Chris Stevens User Cancel my 11:00 am meeting tomorrow Mycroft Done Better Speaker User Hey Mycroft, set a calendar event for a meeting with Chris Stevens Mycroft When is the event? User Tomorrow at 11:00 am Mycroft should I send an invite to Chris Stevens? User Yes Mycroft Alright, I\u2019ve created a meeting for Tomorrow at 11:00 am with Chris Stevens User Cancel my 11:00 am meeting tomorrow Mycroft Are you sure you want to delete your meeting tomorrow at 11:00 am? User Yes Both examples use explicit and implicit confirmation correctly until the end of the interaction when the user wants to cancel the event.
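In code, an explicit confirmation maps naturally onto the ask_yesno helper from the skill API. A sketch, assuming ovos-workshop; the intent and dialog names and the delete_meeting helper are hypothetical:

```python
from ovos_workshop.decorators import intent_handler
from ovos_workshop.skills import OVOSSkill


class CalendarSkill(OVOSSkill):
    @intent_handler("cancel.meeting.intent")  # illustrative intent file
    def handle_cancel_meeting(self, message):
        # Explicit confirmation: require a yes/no answer before a
        # destructive, hard-to-recreate action is carried out.
        answer = self.ask_yesno("confirm.delete.meeting", {"time": "11:00 am"})
        if answer == "yes":
            self.delete_meeting()  # hypothetical helper doing the real work
            self.speak_dialog("meeting.deleted")
        else:
            self.speak_dialog("meeting.kept")
```

ask_yesno returns "yes" or "no" for recognized answers (and the raw utterance or None otherwise), which is why only the "yes" branch deletes the event.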
Unlike setting a simple timer or an alarm, creating this calendar event requires multiple steps and would require reasonable effort to recreate. There may also be significant consequences if the request was misinterpreted and the meeting was cancelled unintentionally.","title":"Explicit Confirmation"},{"location":"unused/guidelines/#conversations","text":"Any time the user needs to input a lot of information or the user needs to sort through a variety of options, a conversation will be needed. Users may be used to systems that require them to separate input into different chunks. For example, many personal navigation devices (GPS units) require users to input the street number, street name, city, and state separately, like this: Speaker Navigation Device What is the street number User 123 Navigation Device What is the street name User Main Street Navigation Device What is the city User Springfield Navigation Device What is the state User Oregon This was a limitation of the technology; neither the STT engine nor the intent parser was capable of handling an entire address. With OVOS you do not need to separate the address into multiple inputs. It\u2019s also easy to think like a traditional GUI and translate what are typically separate fields into prompts for the user, but that will only slow down the interaction. The best solution is to allow the user to input information both ways, all at once, or in pieces. For example, the Alarm skill allows the user to set a complicated alarm all in one go, or in steps. In Steps: Speaker User Hey Mycroft, set a recurring alarm Mycroft For what day? User weekdays Mycroft For what time? User 7 am Mycroft I've set a recurring Alarm for 7 am on weekdays All at once: Speaker User Hey Mycroft, set a recurring alarm for weekdays at 7 am Mycroft I've set a recurring Alarm for 7 am on weekdays","title":"Conversations"},{"location":"unused/guidelines/#context","text":"Conversational context is a given when speaking to another person, but not always when interacting with a computer based system. Many skills are designed with only a quick back and forth interaction in mind, and can operate well without conversational context, but conversational context can be a powerful addition to a skill and can make the interaction much more lifelike. Here are some examples of how to use context in conversation. Speaker User Who is Shaquille O'Neal Mycroft Shaquille O'Neal is an American former professional basketball player who is a sports analyst on the television program Inside the NBA on TNT. He is considered one of the greatest players in National Basketball Association history User How tall is he? Mycroft seven foot 1 inches tall User when did he retire Mycroft 2011","title":"Context"},{"location":"unused/guidelines/#error-handling","text":"Inevitably, the user will say something that your skill can\u2019t handle. It\u2019s best not to think of these as errors on the part of the user; remember, there aren\u2019t really errors in conversations. Conversations are naturally cooperative, with both parties seeking to continue to a successful outcome. Keeping that in mind, you should do your best to avoid Error Handling that is a dead end. Avoid Speaker User Timer Mycroft I'm sorry I can't help you with that Better Speaker User Timer Mycroft A timer for how long? User 5 minutes In the first example OVOS does not give the user any options to finish the job they set out to do.
Try to avoid situations where the user has to start the conversation over.","title":"Error Handling"},{"location":"unused/howto_secure_ssh/","text":"How to Secure SSH Most of our guides have you create a user called ovos with a password of ovos; while this makes install easy, it's VERY insecure. As soon as possible, you should secure ssh using a key and disable password authentication. When connecting from a Linux or macOS client Create a keyfile (you can change ovos to whatever you want) ssh-keygen -t ed25519 -f ~/.ssh/ovos Copy to host (use the same filename as above, specify the user and hostname you are using) ssh-copy-id -i ~/.ssh/ovos ovos@mycroft On your desktop, edit ~/.ssh/config and add the following lines Host rp2 user ovos IdentityFile ~/.ssh/ovos On your ovos system, edit /etc/ssh/sshd_config and add or uncomment the following line: PasswordAuthentication no restart sshd or reboot sudo systemctl restart sshd","title":"How to Secure SSH"},{"location":"unused/howto_secure_ssh/#how-to-secure-ssh","text":"Most of our guides have you create a user called ovos with a password of ovos; while this makes install easy, it's VERY insecure. As soon as possible, you should secure ssh using a key and disable password authentication.","title":"How to Secure SSH"},{"location":"unused/howto_secure_ssh/#when-connecting-from-a-linux-or-macos-client","text":"Create a keyfile (you can change ovos to whatever you want) ssh-keygen -t ed25519 -f ~/.ssh/ovos Copy to host (use the same filename as above, specify the user and hostname you are using) ssh-copy-id -i ~/.ssh/ovos ovos@mycroft On your desktop, edit ~/.ssh/config and add the following lines Host rp2 user ovos IdentityFile ~/.ssh/ovos On your ovos system, edit /etc/ssh/sshd_config and add or uncomment the following line: PasswordAuthentication no restart sshd or reboot sudo systemctl restart sshd","title":"When connecting from a Linux or macOS client"},{"location":"unused/images_manjaro/","text":"ovos-image-arch-recipe NOTE This repo has been archived and will no longer be updated. Make a Manjaro-based OpenVoiceOS image. Source code: https://github.com/OpenVoiceOS/ovos-image-arch-recipe/ Building Docker Automated Image Building The included Dockerfile can be used to build a default image in a Docker environment. The following dependencies must be installed on the build system before running the container: chroot qemu-user-static First, create the Docker container: docker build . -t ovos-image-builder Then, run the container to create an OVOS image. Set CORE_REF to the branch of ovos-core that you want to build and RECIPE_REF to the branch of ovos-image-recipe you want to use. Set MAKE_THREADS to the number of threads to use for make processes. docker run \\ -v /home/${USER}/output:/output:rw \\ -v /run/systemd/resolve:/run/systemd/resolve \\ -e CORE_REF=${CORE_REF:-dev} \\ -e RECIPE_REF=${RECIPE_REF:-master} \\ -e MAKE_THREADS=${MAKE_THREADS:-4} \\ --privileged \\ --network=host \\ --name=ovos-image-builder \\ ovos-image-builder The entire build process will generally take several hours; it takes 1-2 hours on a build server with 2x Xeon Gold 5118 CPUs (48T Total). Interactive Image Building The scripts in the automation directory are available to help automate building a default image. For building an image interactively: bash automation/prepare.sh bash /tmp/run_scripts.sh The below documentation describes how to manually build an image using the individual scripts in this repository.
Steps Getting Started The scripts and overlay files in this repository are designed to be applied to a base image as the root user. It is recommended to apply these scripts to a clean base image. Instructions are available at opensource.com . Note : The GUI shell is not installable under some base images. For each step except boot_overlay , the directory corresponding to the step should be copied to the mounted image and the script run from a terminal chroot -ed to the image. If running scripts from a booted image, they should be run as root . Preparation From the host system where this repository is cloned, running prepare.sh will copy boot overlay files, mount the image, mount DNS resolver config from the host system, copy all other image overlay files to /tmp , and chroot into the image. From here, you can run any/all of the following scripts to prepare the image before cleaning up. core_configuration Configures user accounts and base functionality for RPi. ovos user is created with proper permissions here. At this stage, a booted image should resize its file system to fill the drive it is flashed to. Local login and ssh connections should use ovos / ovos to authenticate and be prompted to change password on login. network_manager Adds Balena wifi-connect to enable a portal for connecting the Pi device to a Wi-Fi network. A booted image will now be ready to connect to a network via SSID OVOS . sj201 For SJ201 board support, the included script will build/install drivers, add required overlays, install required system packages, and add a systemd service to flash the SJ201 chip on boot. This will modify pulseaudio and potentially overwrite any previous settings. Note: Running this script grants GPIO permissions to the gpio group. Any user that interfaces with the SJ201 board should be a member of the gpio group. Group permissions are not modified by this script. Audio devices should now show up with pactl list . Audio devices can be tested in the image by recording a short audio clip and playing it back. parecord test.wav paplay test.wav embedded_shell Installs ovos-shell and mycroft-gui-app. Adds and enables ovos-gui.service to start the shell on system boot. The image should now boot to the GUI shell. ovos_core Installs ovos-core and dependencies. Configures services for core modules. At this stage, the image is complete and when booted should start OVOS. dashboard Installs the OVOS Dashboard and service to start the dashboard from the GUI. From the GUI Settings -> Developer Settings menu, Enable Dashboard will now start the dashboard for remote access to device diagnostics. camera Installs libcamera and other dependencies for using a CSI camera. The default camera skill can be used to take a photo; libcamera-apps are also installed for testing via CLI. splash_screen Enables a custom splash screen and disables on-device TTY at boot. On boot, a static image should be shown until the GUI Shell starts. Clean Up cleanup.sh removes any temporary files from the mounted image before unmounting it.
After running cleanup.sh , the image is ready to burn to a drive and boot.","title":"ovos-image-arch-recipe"},{"location":"unused/images_manjaro/#ovos-image-arch-recipe","text":"NOTE This repo has been archived and will no longer be updated. Make a Manjaro-based OpenVoiceOS image. Source code: https://github.com/OpenVoiceOS/ovos-image-arch-recipe/","title":"ovos-image-arch-recipe"},{"location":"unused/images_manjaro/#building","text":"","title":"Building"},{"location":"unused/images_manjaro/#docker-automated-image-building","text":"The included Dockerfile can be used to build a default image in a Docker environment. The following dependencies must be installed on the build system before running the container: chroot qemu-user-static First, create the Docker container: docker build . -t ovos-image-builder Then, run the container to create an OVOS image. Set CORE_REF to the branch of ovos-core that you want to build and RECIPE_REF to the branch of ovos-image-recipe you want to use. Set MAKE_THREADS to the number of threads to use for make processes. docker run \\ -v /home/${USER}/output:/output:rw \\ -v /run/systemd/resolve:/run/systemd/resolve \\ -e CORE_REF=${CORE_REF:-dev} \\ -e RECIPE_REF=${RECIPE_REF:-master} \\ -e MAKE_THREADS=${MAKE_THREADS:-4} \\ --privileged \\ --network=host \\ --name=ovos-image-builder \\ ovos-image-builder The entire build process will generally take several hours; it takes 1-2 hours on a build server with 2x Xeon Gold 5118 CPUs (48T Total).","title":"Docker Automated Image Building"},{"location":"unused/images_manjaro/#interactive-image-building","text":"The scripts in the automation directory are available to help automate building a default image. For building an image interactively: bash automation/prepare.sh bash /tmp/run_scripts.sh The below documentation describes how to manually build an image using the individual scripts in this repository.","title":"Interactive Image Building"},{"location":"unused/images_manjaro/#steps","text":"","title":"Steps"},{"location":"unused/images_manjaro/#getting-started","text":"The scripts and overlay files in this repository are designed to be applied to a base image as the root user. It is recommended to apply these scripts to a clean base image. Instructions are available at opensource.com . Note : The GUI shell is not installable under some base images. For each step except boot_overlay , the directory corresponding to the step should be copied to the mounted image and the script run from a terminal chroot -ed to the image. If running scripts from a booted image, they should be run as root .","title":"Getting Started"},{"location":"unused/images_manjaro/#preparation","text":"From the host system where this repository is cloned, running prepare.sh will copy boot overlay files, mount the image, mount DNS resolver config from the host system, copy all other image overlay files to /tmp , and chroot into the image. From here, you can run any/all of the following scripts to prepare the image before cleaning up.","title":"Preparation"},{"location":"unused/images_manjaro/#core_configuration","text":"Configures user accounts and base functionality for RPi. ovos user is created with proper permissions here. At this stage, a booted image should resize its file system to fill the drive it is flashed to.
Local login and ssh connections should use ovos / ovos to authenticate and be prompted to change password on login.","title":"core_configuration"},{"location":"unused/images_manjaro/#network_manager","text":"Adds Balena wifi-connect to enable a portal for connecting the Pi device to a Wi-Fi network. A booted image will now be ready to connect to a network via SSID OVOS .","title":"network_manager"},{"location":"unused/images_manjaro/#sj201","text":"For SJ201 board support, the included script will build/install drivers, add required overlays, install required system packages, and add a systemd service to flash the SJ201 chip on boot. This will modify pulseaudio and potentially overwrite any previous settings. Note: Running this script grants GPIO permissions to the gpio group. Any user that interfaces with the SJ201 board should be a member of the gpio group. Group permissions are not modified by this script. Audio devices should now show up with pactl list . Audio devices can be tested in the image by recording a short audio clip and playing it back. parecord test.wav paplay test.wav","title":"sj201"},{"location":"unused/images_manjaro/#embedded_shell","text":"Installs ovos-shell and mycroft-gui-app. Adds and enables ovos-gui.service to start the shell on system boot. The image should now boot to the GUI shell.","title":"embedded_shell"},{"location":"unused/images_manjaro/#ovos_core","text":"Installs ovos-core and dependencies. Configures services for core modules. At this stage, the image is complete and when booted should start OVOS.","title":"ovos_core"},{"location":"unused/images_manjaro/#dashboard","text":"Installs the OVOS Dashboard and service to start the dashboard from the GUI. From the GUI Settings -> Developer Settings menu, Enable Dashboard will now start the dashboard for remote access to device diagnostics.","title":"dashboard"},{"location":"unused/images_manjaro/#camera","text":"Installs libcamera and other dependencies for using a CSI camera. The default camera skill can be used to take a photo; libcamera-apps are also installed for testing via CLI.","title":"camera"},{"location":"unused/images_manjaro/#splash_screen","text":"Enables a custom splash screen and disables on-device TTY at boot. On boot, a static image should be shown until the GUI Shell starts.","title":"splash_screen"},{"location":"unused/images_manjaro/#clean-up","text":"cleanup.sh removes any temporary files from the mounted image before unmounting it. After running cleanup.sh , the image is ready to burn to a drive and boot.","title":"Clean Up"},{"location":"unused/install_image/","text":"OpenVoiceOS Architecture Work in Progress","title":"OpenVoiceOS Architecture"},{"location":"unused/install_image/#openvoiceos-architetcture","text":"Work in Progress","title":"OpenVoiceOS Architecture"},{"location":"unused/install_images/","text":"Installing OVOS using a prebuilt image Prebuilt images OVOS provides a couple of prebuilt images to use with a Raspberry Pi Buildroot This is the most advanced image that OVOS provides and is intended to be used as a complete system with a GUI. This image has auto-detection of a large amount of hardware, including but not limited to respeaker microphones, and the sj201 sound board that is used by the Mark2. Buildroot images are available for download here . Decompress this file and continue to the next section Burning the image to an SD card or USB drive Raspbian-ovos This is a new image created as a headless device.
This is supposed to be optimized to run on less powerful devices such as an RPi3, which does not have the power to run a GUI. This image is still in heavy development; expect some bugs while using it. Raspbian-ovos images are available for download here . Unzip this file and continue to the next section Burning the image to an SD card or USB drive Burning an image There are a few ways to burn your image to a drive to use in a Raspberry Pi. Both methods described below will work for either OVOS image that you would like to try. Raspberry Pi Imager This method can be used with a Linux or Windows host machine. The people at Raspberry Pi provide a great little program made for burning an image to a device. You can get it here . The team at OVOS has tried to make the setup as simple as possible, and if you follow the steps below, you should have no problems on your first boot. Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with sudo raspi-imager . For \"Choose OS\", select \"Use custom\" and select the OVOS image file downloaded from the step above. For \"Choose Storage\", select your removable boot media, probably something like \"Internal SD Card Reader\". Then select \"Write\". Do not click the cog. Do not change the username. Username is ovos and is built in. Do not enter WiFi credentials, you will add them at first boot. Linux dd command Be careful with the dd command, you can easily render your computer useless if the command is entered wrong Find the location of your boot medium with the lsblk command. It should show up as something like sdb . If you are unsure what drive it is, remove it, and run the command again. The one that is missing is the drive you want to use. Write the image to the disk with the dd command. WARNING: Please make sure you are writing to the correct location, as this is the step that can screw everything up sudo dd if=<image file> of=<device> bs=4M status=progress This step will take several minutes to complete. When the command is done, and the prompt appears again, finish the process with the command sudo sync With either method used, you should now have a bootable disk to use with your Raspberry Pi","title":"Installing OVOS using a prebuilt image"},{"location":"unused/install_images/#installing-ovos-using-a-prebuilt-image","text":"","title":"Installing OVOS using a prebuilt image"},{"location":"unused/install_images/#prebuilt-images","text":"OVOS provides a couple of prebuilt images to use with a Raspberry Pi","title":"Prebuilt images"},{"location":"unused/install_images/#buildroot","text":"This is the most advanced image that OVOS provides and is intended to be used as a complete system with a GUI. This image has auto-detection of a large amount of hardware, including but not limited to respeaker microphones, and the sj201 sound board that is used by the Mark2. Buildroot images are available for download here . Decompress this file and continue to the next section Burning the image to an SD card or USB drive","title":"Buildroot"},{"location":"unused/install_images/#raspbian-ovos","text":"This is a new image created as a headless device. This is supposed to be optimized to run on less powerful devices such as an RPi3, which does not have the power to run a GUI. This image is still in heavy development; expect some bugs while using it. Raspbian-ovos images are available for download here .
Unzip this file and continue to the next section Burning the image to an SD card or USB drive","title":"Raspbian-ovos"},{"location":"unused/install_images/#burning-an-image","text":"There are a few ways to burn your image to a drive to use in a Raspberry Pi. Both methods described below will work for either OVOS image that you would like to try.","title":"Burning an image"},{"location":"unused/install_images/#raspberry-pi-imager","text":"This method can be used with a Linux or Windows host machine. The people at Raspberry Pi provide a great little program made for burning an image to a device. You can get it here . The team at OVOS has tried to make the setup as simple as possible, and if you follow the steps below, you should have no problems on your first boot. Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with sudo raspi-imager . For \"Choose OS\", select \"Use custom\" and select the OVOS image file downloaded from the step above. For \"Choose Storage\", select your removable boot media, probably something like \"Internal SD Card Reader\". Then select \"Write\". Do not click the cog. Do not change the username. Username is ovos and is built in. Do not enter WiFi credentials, you will add them at first boot.","title":"Raspberry Pi Imager"},{"location":"unused/install_images/#linux-dd-command","text":"Be careful with the dd command, you can easily render your computer useless if the command is entered wrong Find the location of your boot medium with the lsblk command. It should show up as something like sdb . If you are unsure what drive it is, remove it, and run the command again. The one that is missing is the drive you want to use. Write the image to the disk with the dd command. WARNING: Please make sure you are writing to the correct location, as this is the step that can screw everything up sudo dd if=<image file> of=<device> bs=4M status=progress This step will take several minutes to complete. When the command is done, and the prompt appears again, finish the process with the command sudo sync With either method used, you should now have a bootable disk to use with your Raspberry Pi","title":"Linux dd command"},{"location":"unused/install_ovos_images/","text":"Installing OVOS using a prebuilt image Prebuilt images OVOS provides a couple of prebuilt images to use with a Raspberry Pi Buildroot This is the most advanced image that OVOS provides and is intended to be used as a complete system with a GUI. This image has auto-detection of a large amount of hardware, including but not limited to respeaker microphones, and the sj201 sound board that is used by the Mark2. Buildroot images are available for download here . Decompress this file and continue to the next section Burning the image to an SD card or USB drive Raspbian-ovos This is a new image created as a headless device. This is supposed to be optimized to run on less powerful devices such as an RPi3, which does not have the power to run a GUI. This image is still in heavy development; expect some bugs while using it. Raspbian-ovos images are available for download here . Unzip this file and continue to the next section Burning the image to an SD card or USB drive Burning an image There are a few ways to burn your image to a drive to use in a Raspberry Pi. Both methods described below will work for either OVOS image that you would like to try. Raspberry Pi Imager This method can be used with a Linux or Windows host machine. The people at Raspberry Pi provide a great little program made for burning an image to a device.
You can get it here . The team at OVOS has tried to make the setup as simple as possible, and if you follow the steps below, you should have no problems on your first boot. Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with sudo raspi-imager . For \"Choose OS\", select \"Use custom\" and select the OVOS image file downloaded from the step above. For \"Choose Storage\", select your removable boot media, probably something like \"Internal SD Card Reader\". Then select \"Write\". Do not click the cog. Do not change the username. Username is ovos and is built in. Do not enter WiFi credentials, you will add them at first boot. Linux dd command Be careful with the dd command, you can easily render your computer useless if the command is entered wrong Find the location of your boot medium with the lsblk command. It should show up as something like sdb . If you are unsure what drive it is, remove it, and run the command again. The one that is missing is the drive you want to use. Write the image to the disk with the dd command. WARNING: Please make sure you are writing to the correct location, as this is the step that can screw everything up sudo dd if=<image file> of=<device> bs=4M status=progress This step will take several minutes to complete. When the command is done, and the prompt appears again, finish the process with the command sudo sync With either method used, you should now have a bootable disk to use with your Raspberry Pi","title":"Installing OVOS using a prebuilt image"},{"location":"unused/install_ovos_images/#installing-ovos-using-a-prebuilt-image","text":"","title":"Installing OVOS using a prebuilt image"},{"location":"unused/install_ovos_images/#prebuilt-images","text":"OVOS provides a couple of prebuilt images to use with a Raspberry Pi","title":"Prebuilt images"},{"location":"unused/install_ovos_images/#buildroot","text":"This is the most advanced image that OVOS provides and is intended to be used as a complete system with a GUI. This image has auto-detection of a large amount of hardware, including but not limited to respeaker microphones, and the sj201 sound board that is used by the Mark2. Buildroot images are available for download here . Decompress this file and continue to the next section Burning the image to an SD card or USB drive","title":"Buildroot"},{"location":"unused/install_ovos_images/#raspbian-ovos","text":"This is a new image created as a headless device. This is supposed to be optimized to run on less powerful devices such as an RPi3, which does not have the power to run a GUI. This image is still in heavy development; expect some bugs while using it. Raspbian-ovos images are available for download here . Unzip this file and continue to the next section Burning the image to an SD card or USB drive","title":"Raspbian-ovos"},{"location":"unused/install_ovos_images/#burning-an-image","text":"There are a few ways to burn your image to a drive to use in a Raspberry Pi. Both methods described below will work for either OVOS image that you would like to try.","title":"Burning an image"},{"location":"unused/install_ovos_images/#raspberry-pi-imager","text":"This method can be used with a Linux or Windows host machine. The people at Raspberry Pi provide a great little program made for burning an image to a device. You can get it here . The team at OVOS has tried to make the setup as simple as possible, and if you follow the steps below, you should have no problems on your first boot. Start up the Raspberry Pi Imager.
On Linux, start Raspberry Pi Imager with sudo raspi-imager . For \"Choose OS\", select \"Use custom\" and select the OVOS image file downloaded from the step above. For \"Choose Storage\", select your removable boot media, probably something like \"Internal SD Card Reader\". Then select \"Write\". Do not click the cog. Do not change the username. Username is ovos and is built in. Do not enter WiFi credentials, you will add them at first boot.","title":"Raspberry Pi Imager"},{"location":"unused/install_ovos_images/#linux-dd-command","text":"Be careful with the dd command, you can easily render your computer useless if the command is entered wrong Find the location of your boot medium with the lsblk command. It should show up as something like sdb . If you are unsure what drive it is, remove it, and run the command again. The one that is missing is the drive you want to use. Write the image to the disk with the dd command. WARNING: Please make sure you are writing to the correct location, as this is the step that can screw everything up sudo dd if=<image file> of=<device> bs=4M status=progress This step will take several minutes to complete. When the command is done, and the prompt appears again, finish the process with the command sudo sync With either method used, you should now have a bootable disk to use with your Raspberry Pi","title":"Linux dd command"},{"location":"unused/install_phal_plugins/","text":"","title":"Install phal plugins"},{"location":"unused/install_raspbian/","text":"raspbian-ovos (headless) Purpose of this guide This guide describes two ways to create a headless OVOS system suitable for running on a Raspberry Pi 3 or 4. You can either download and burn a prebuilt image to an installation medium like an SD card, or you can use your own installation of the Raspberry PI OS and run an OVOS install script. The RPi3 does not have the processing power to reliably run ovos-shell , the GUI system for OVOS, but has plenty to run the rest of the stack. By the end of the guide, you should have a running OVOS stack (messagebus, phal, skills, listener, and audio), along with a \"lite\" version of Raspberry Pi OS, which means you also have a package manager (apt) available to you. OVOS source files used by this guide can be found at raspbian-ovos . Any issues or pull requests should be made in this repository. Raspberry Pi Imager is available here . There have been issues when using Raspberry Pi Imager to burn pre-built images. From Linux we have had success starting the imager with the command sudo raspi-imager . Use a pre-built OVOS/PI Environment image saved to an SD card Download a pre-built OVOS/PI image from our raspbian-ovos download site . Here are two methods to install your OVOS/PI image file onto your SD card. Method 1: Write the pre-built OVOS/PI Environment image using Raspberry Pi Imager Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with \"sudo raspi-imager\". For \"Choose OS\", select \"Use custom\" and select the OVOS/PI image file downloaded from the OpenVoiceOS Downloads Site. For \"Choose Storage\", select your removable boot media, probably something like \"Internal SD Card Reader\". Then select \"Write\". Do not click the cog. Do not change the username. Username is ovos and is built in. Do not enter WiFi credentials, you will add them at first boot. Upon completion, you should have a bootable SD card or USB drive.
Method 2: Write the OVOS/PI Environment image using the Linux dd command Be careful with the dd command, you can easily render your computer useless If your downloaded image is zipped, unzip the image to the directory of your choice unzip Check where your SD card or USB drive is located using the lsblk command. It should be visible as something like sdb . Write the unzipped image to the disk sudo dd if= of= bs=4M status=progress Upon completion, you should have a bootable SD card or USB drive. First Boot and Login to the pre-built OVOS/PI Environment image Insert the SD card, hook up your audio, and turn on your OVOS Pi. This image comes with a predefined user, ovos , with password ovos . It is recommended that you change your password on first login. sudo passwd ovos Enter your new password twice. On first boot, you will be voice-prompted to connect to SSID OVOS and go to the website start.openvoiceos.com . This is not the official OVOS website but a local hotspot that the image has created on your Raspberry Pi. Then from a computer that supports wireless networking, connect to the SSID 'OVOS' and go to the website 'start.openvoiceos.com'. There you can enter the credentials of your WiFi network. If your sound isn't working, no worries, you can keep scanning your computer's list of nearby SSIDs until you see OVOS, and then connect to the network without hearing the verbal prompt. On an RPi3B, this image takes several minutes to boot before you hear the audio prompt, and several more minutes to finish booting. If you don't think it's working, please wait up to 3 minutes each time before concluding something went wrong. You can also follow progress in the OVOS log files found in ~/.local/state/mycroft/*.log. If for some reason this method does not work, sudo raspi-config and nmtui are also available. The section below is for advanced usage and is NOT currently recommended Use an OVOS/PI environment created by cloning a git repository into your own Raspberry Pi OS environment Step 1: Two ways to create a Raspberry Pi OS Lite (64 bit) boot SD card There are lots of guides, but this one is the official guide Our experience with Linux is to invoke the raspi-imager with sudo raspi-imager . Insert your boot media into your PC. Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with \"sudo raspi-imager\". For \"Choose OS\", select \"Raspberry Pi OS Other\" and then \"Raspberry Pi OS Lite (64 bit)\". For \"Choose Storage\", select your removable boot media, probably something like \"Internal SD Card Reader\" or \"USB Flash Disk\". From here you must choose one of two methods: Method 1: Tried, tested, and reliable, Pi needs a monitor to complete the install Here we use the Raspberry Pi Imager to write your media without selecting any Advanced Options . Click \"Write\". After the write completes, insert your SD card in your Pi and boot your newly created medium. Create user ovos with a password of your choosing. The system will reboot and ask you to log in. Log in as ovos. Run the command sudo raspi-config to enter the Pi setup utility. We will be running the OVOS environment as a regular user, and we want OVOS to start at boot time, so we want to autologin. Enter the System Options page. Enter the Boot / Autologin page. - Use the second option in the menu, Console Autologin . - This enables OVOS to start up automatically at boot time. Now we will enable a few interface options. This will allow us to access our device from an SSH shell and prep the Pi for other devices that may be used. 
Some microphone hats require SPI or I2C (Respeaker, AIY-Voicebonnet, etc). Go back to the main menu and enter the Interface Options page. - Enable SSH, SPI, and I2C. - After SSH is enabled, the rest of the guide can be done from a remote computer. Go back to the main menu and enter the Localisation Options page. - Configure Locale, Timezone, WLAN Country. You will need an internet connection to complete the rest of the guide. Optional: Set up WiFi Return to the main menu and enter System Options again. Enter the Wireless LAN section and follow the prompts. Further down from Wireless LAN is Hostname . Choose a name for your OVOS device and enter it there. Exit out of the raspi-config tool. Next find your IP address. The command is the same whether you used the WiFi setup or have a LAN connected. Enter the command ip addr . In the output, if things were configured correctly, there will be one or more lines that are relevant. Find the device that you used to connect; WiFi will start with something like wlan and a LAN connection should begin with eth or enp or something similar. In the device section, there is an inet entry. The number located there is your local IP address. It should be in the format 192.168.x.xxx or something similar. Write this down, or remember it. You will be using it to log in with an SSH shell . Now the device setup is done. Exit raspi-config and reboot. sudo reboot now Method 2: Use Raspberry Pi Imager advanced options to install without a monitor Here we use the Raspberry Pi Imager to write your media and let the Imager handle your network and SSH setup. If your network cannot locate computers by their hostnames, this method will not easily work for you. In other words, if you cannot ping a network connection with a host name, and you need to use an IP address to ping other network computers, use Method 1 described above. If you are comfortable looking up the OVOS computer's IP address using your router or other network software, Method 2 will still work. Instead of selecting \"Write\", click on the cog in the lower right of the Imager panel to open Raspberry Pi Imager advanced options. In this new panel, check the boxes for: \"Set hostname\" - the hostname for your OVOS device goes here. \"Enable SSH\" - this is how you will log into your headless Raspberry Pi. Use password authentication. \"Enter a username and password\". For this installation we are using ovos as the username. \"Configure wireless LAN\" (if you're using a wireless connection). Enter your SSID and wireless password. Click \"Save\", then click \"Write\". Once writing is complete, move the SD card to your OVOS device and boot up. After logging in as ovos, run the command sudo raspi-config to enter the Pi setup utility. We will be running the OVOS environment as a regular user, and we want OVOS to start at boot time, so we want to autologin. Enter the System Options page. Enter the Boot / Autologin page. Use the second option in the menu, Console Autologin . This enables OVOS to start up automatically at boot time. Now we will enable a few interface options. This will allow us to access our device from an SSH shell and prep the Pi for other devices that may be used. Some microphone hats require SPI or I2C (Respeaker, AIY-Voicebonnet, etc). Go back to the main menu and enter the Interface Options page. Enable SPI and I2C. Go back to the main menu and enter the Localisation Options page. Configure Locale, Timezone, WLAN Country. Now the device setup is done. Exit raspi-config and reboot. 
sudo reboot now Step 2: ssh to your OVOS device From this point on, you should be able to access your device from any SSH terminal. For a guide on how to do this, see the raspberrypi documentation remote-access From a Linux machine, open a terminal and enter the command ssh ovos@ or ssh ovos@ . There will be a warning making sure you want to connect to this device. Enter yes, and when asked, enter the password for ovos that you made earlier in the setup. As a final configuration step, make sure your system is up to date. sudo apt -y update && sudo apt -y upgrade We should be done with the basic setup now. You should have a running RaspberryPiOS device with the user ovos Step 3: Install ovos-core There are some recommendations to use a venv for OVOS. This guide DOES NOT do that. The OVOS headless stack is about all an RPi3 can handle. It is assumed that this is a dedicated OVOS device, therefore no venv is required. We will be cloning code from a git repository, so before starting we need to install git. sudo apt install git We will also be installing everything to the user environment instead of system wide. As ovos is the only user, this should be fine. Although not strictly necessary, we assume that we're starting in the ovos home directory. cd ~ Clone the repository git clone https://github.com/OpenVoiceOS/raspbian-ovos.git cd raspbian-ovos Run the install script and follow the prompts. It's fine to say yes \"Y\" to everything. ./manual_user_install.sh You should now have a running OVOS device! Check your installation with systemctl --user status ovos-* The full OVOS stack can take a few minutes to load (especially on a Pi 3), but the processes should all eventually say active (running) , except for ovos.service which should say active (exited) . You can also track progress by watching the files in ~/.local/state/mycroft/*.log. Once things slow down you can try saying \"Hey Mycroft\". In a few seconds (the first time is slow) you should hear a 'ding' from the system. Then say \"What day is it\". After a delay you should hear information about today's date. Final thoughts The audio can often take some tuning, which in general is not covered here. Pulseaudio should be running; check with systemctl --user status pulseaudio . Each piece of hardware is different to set up. I am sure there is a guide somewhere for your hardware. One thing to mention: this is a full Raspbian install, so installing drivers should work also. Once the OVOS processes are running, if you don't hear a 'ding' after two or three times saying \"Hey Mycroft\", start up alsamixer and make sure your microphone is recognized and the volume is turned up. At least one USB microphone (mine) defaults to \"Auto Gain Control\", which needs to be turned off and replaced by turning up the microphone volume. You may also need to turn up the speaker volume. This installation of ovos-core only ships with a few default skills. Check this page for more information on skills. Please enter suggestions and support requests on our raspbian-ovos github page. Thank you!","title":"raspbian-ovos (headless)"},{"location":"unused/install_raspbian/#raspbian-ovos-headless","text":"","title":"raspbian-ovos (headless)"},{"location":"unused/install_raspbian/#purpose-of-this-guide","text":"This guide describes two ways to create a headless OVOS system suitable for running on a Raspberry Pi 3 or 4. 
You can either download and burn a prebuilt image to an installation medium like an SD card, or you can use your own installation of the Raspberry Pi OS and run an OVOS install script. The RPi3 does not have the processing power to reliably run ovos-shell , the GUI system for OVOS, but has plenty to run the rest of the stack. By the end of the guide, you should have a running OVOS stack (messagebus, phal, skills, listener, and audio), along with a \"lite\" version of RaspberryPiOS, which means you also have a package manager (apt) available to you. OVOS source files used by this guide can be found at raspbian-ovos . Any issues or pull requests should be made in this repository. Raspberry Pi Imager is available here . There have been issues when using Raspberry Pi Imager to burn pre-built images. From Linux we have had success starting the imager with the command sudo raspi-imager .","title":"Purpose of this guide"},{"location":"unused/install_raspbian/#use-a-pre-built-ovospi-environment-image-saved-to-an-sd-card","text":"Download a pre-built OVOS/PI image from our raspbian-ovos download site . Here are two methods to install your OVOS/PI image file onto your SD card.","title":"Use a pre-built OVOS/PI Environment image saved to an SD card"},{"location":"unused/install_raspbian/#method-1-write-the-pre-built-ovospi-environment-image-using-raspberry-pi-imager","text":"Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with \"sudo raspi-imager\". For \"Choose OS\", select \"Use custom\" and select the OVOS/PI image file downloaded from the OpenVoiceOS Downloads Site. For \"Choose Storage\", select your removable boot media, probably something like \"Internal SD Card Reader\". Then select \"Write\". Do not click the cog. Do not change the username. Username is ovos and is built in. Do not enter WiFi credentials, you will add them at first boot. Upon completion, you should have a bootable SD card or USB drive.","title":"Method 1: Write the pre-built OVOS/PI Environment image using Raspberry Pi Imager"},{"location":"unused/install_raspbian/#method-2-write-the-ovospi-environment-image-using-the-linux-dd-command","text":"Be careful with the dd command, you can easily render your computer useless If your downloaded image is zipped, unzip the image to the directory of your choice unzip Check where your SD card or USB drive is located using the lsblk command. It should be visible as something like sdb . Write the unzipped image to the disk sudo dd if= of= bs=4M status=progress Upon completion, you should have a bootable SD card or USB drive.","title":"Method 2: Write the OVOS/PI Environment image using the Linux dd command"},{"location":"unused/install_raspbian/#first-boot-and-login-to-the-pre-built-ovovpi-environment-image","text":"Insert the SD card, hook up your audio, and turn on your OVOS Pi. This image comes with a predefined user, ovos , with password ovos . It is recommended that you change your password on first login. sudo passwd ovos Enter your new password twice. On first boot, you will be voice-prompted to connect to SSID OVOS and go to the website start.openvoiceos.com . This is not the official OVOS website but a local hotspot that the image has created on your Raspberry Pi. Then from a computer that supports wireless networking, connect to the SSID 'OVOS' and go to the website 'start.openvoiceos.com'. There you can enter the credentials of your WiFi network. 
If your sound isn't working, no worries, you can keep scanning your computer's list of nearby SSIDs until you see OVOS, and then connect to the network without hearing the verbal prompt. On an RPi3B, this image takes several minutes to boot before you hear the audio prompt, and several more minutes to finish booting. If you don't think it's working, please wait up to 3 minutes each time before concluding something went wrong. You can also follow progress in the OVOS log files found in ~/.local/state/mycroft/*.log. If for some reason this method does not work, sudo raspi-config and nmtui are also available.","title":"First Boot and Login to the pre-built OVOS/PI Environment image"},{"location":"unused/install_raspbian/#the-section-below-is-for-advanced-usage-and-is-not-currently-recommended","text":"","title":"The section below is for advanced usage and is NOT currently recommended"},{"location":"unused/install_raspbian/#use-an-ovospi-environment-created-by-cloning-a-git-repository-into-your-own-raspberry-pi-os-environment","text":"","title":"Use an OVOS/PI environment created by cloning a git repository into your own Raspberry Pi OS environment"},{"location":"unused/install_raspbian/#step-1-two-ways-to-create-a-raspberry-pi-os-lite-64-bit-boot-sd-card","text":"There are lots of guides, but this one is the official guide Our experience with Linux is to invoke the raspi-imager with sudo raspi-imager . Insert your boot media into your PC. Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with \"sudo raspi-imager\". For \"Choose OS\", select \"Raspberry Pi OS Other\" and then \"Raspberry Pi OS Lite (64 bit)\". For \"Choose Storage\", select your removable boot media, probably something like \"Internal SD Card Reader\" or \"USB Flash Disk\". From here you must choose one of two methods:","title":"Step 1: Two ways to create a Raspberry Pi OS Lite (64 bit) boot SD card"},{"location":"unused/install_raspbian/#method-1-tried-tested-and-reliable-pi-needs-a-monitor-to-complete-the-install","text":"Here we use the Raspberry Pi Imager to write your media without selecting any Advanced Options . Click \"Write\". After the write completes, insert your SD card in your Pi and boot your newly created medium. Create user ovos with a password of your choosing. The system will reboot and ask you to log in. Log in as ovos. Run the command sudo raspi-config to enter the Pi setup utility. We will be running the OVOS environment as a regular user, and we want OVOS to start at boot time, so we want to autologin. Enter the System Options page. Enter the Boot / Autologin page. - Use the second option in the menu, Console Autologin . - This enables OVOS to start up automatically at boot time. Now we will enable a few interface options. This will allow us to access our device from an SSH shell and prep the Pi for other devices that may be used. Some microphone hats require SPI or I2C (Respeaker, AIY-Voicebonnet, etc). Go back to the main menu and enter the Interface Options page. - Enable SSH, SPI, and I2C. - After SSH is enabled, the rest of the guide can be done from a remote computer. Go back to the main menu and enter the Localisation Options page. - Configure Locale, Timezone, WLAN Country. You will need an internet connection to complete the rest of the guide. Optional: Set up WiFi Return to the main menu and enter System Options again. Enter the Wireless LAN section and follow the prompts. Further down from Wireless LAN is Hostname . Choose a name for your OVOS device and enter it there. 
Exit out of the raspi-config tool. Next find your IP address. The command is the same whether you used the WiFi setup or have a LAN connected. Enter the command ip addr . In the output, if things were configured correctly, there will be one or more lines that are relevant. Find the device that you used to connect; WiFi will start with something like wlan and a LAN connection should begin with eth or enp or something similar. In the device section, there is an inet entry. The number located there is your local IP address. It should be in the format 192.168.x.xxx or something similar. Write this down, or remember it. You will be using it to log in with an SSH shell . Now the device setup is done. Exit raspi-config and reboot. sudo reboot now","title":"Method 1: Tried, tested, and reliable, Pi needs a monitor to complete the install"},{"location":"unused/install_raspbian/#method-2-use-raspberry-pi-imager-advanced-options-to-install-without-a-monitor","text":"Here we use the Raspberry Pi Imager to write your media and let the Imager handle your network and SSH setup. If your network cannot locate computers by their hostnames, this method will not easily work for you. In other words, if you cannot ping a network connection with a host name, and you need to use an IP address to ping other network computers, use Method 1 described above. If you are comfortable looking up the OVOS computer's IP address using your router or other network software, Method 2 will still work. Instead of selecting \"Write\", click on the cog in the lower right of the Imager panel to open Raspberry Pi Imager advanced options. In this new panel, check the boxes for: \"Set hostname\" - the hostname for your OVOS device goes here. \"Enable SSH\" - this is how you will log into your headless Raspberry Pi. Use password authentication. \"Enter a username and password\". For this installation we are using ovos as the username. \"Configure wireless LAN\" (if you're using a wireless connection). Enter your SSID and wireless password. Click \"Save\", then click \"Write\". Once writing is complete, move the SD card to your OVOS device and boot up. After logging in as ovos, run the command sudo raspi-config to enter the Pi setup utility. We will be running the OVOS environment as a regular user, and we want OVOS to start at boot time, so we want to autologin. Enter the System Options page. Enter the Boot / Autologin page. Use the second option in the menu, Console Autologin . This enables OVOS to start up automatically at boot time. Now we will enable a few interface options. This will allow us to access our device from an SSH shell and prep the Pi for other devices that may be used. Some microphone hats require SPI or I2C (Respeaker, AIY-Voicebonnet, etc). Go back to the main menu and enter the Interface Options page. Enable SPI and I2C. Go back to the main menu and enter the Localisation Options page. Configure Locale, Timezone, WLAN Country. Now the device setup is done. Exit raspi-config and reboot. sudo reboot now","title":"Method 2: Use Raspberry Pi Imager advanced options to install without a monitor"},{"location":"unused/install_raspbian/#step-2-ssh-to-your-ovos-device","text":"From this point on, you should be able to access your device from any SSH terminal. For a guide on how to do this, see the raspberrypi documentation remote-access From a Linux machine, open a terminal and enter the command ssh ovos@ or ssh ovos@ . There will be a warning making sure you want to connect to this device. 
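As a quick hedged sketch of this connection step (the interface name wlan0 and the address 192.168.1.123 are hypothetical; substitute the IP or hostname you noted earlier):

ip addr show wlan0        # confirm the inet address of your network device (use eth0 for wired)
ssh ovos@192.168.1.123    # hypothetical address; ssh ovos@<hostname> also works if your network resolves hostnames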
Enter yes, and when asked, enter the password for ovos that you made earlier in the setup. As a final configuration step, make sure your system is up to date. sudo apt -y update && sudo apt -y upgrade We should be done with the basic setup now. You should have a running RaspberryPiOS device with the user ovos","title":"Step 2: ssh to your OVOS device"},{"location":"unused/install_raspbian/#step-3-install-ovos-core","text":"There are some recommendations to use a venv for OVOS. This guide DOES NOT do that. The OVOS headless stack is about all an RPi3 can handle. It is assumed that this is a dedicated OVOS device, therefore no venv is required. We will be cloning code from a git repository, so before starting we need to install git. sudo apt install git We will also be installing everything to the user environment instead of system wide. As ovos is the only user, this should be fine. Although not strictly necessary, we assume that we're starting in the ovos home directory. cd ~ Clone the repository git clone https://github.com/OpenVoiceOS/raspbian-ovos.git cd raspbian-ovos Run the install script and follow the prompts. It's fine to say yes \"Y\" to everything. ./manual_user_install.sh You should now have a running OVOS device! Check your installation with systemctl --user status ovos-* The full OVOS stack can take a few minutes to load (especially on a Pi 3), but the processes should all eventually say active (running) , except for ovos.service which should say active (exited) . You can also track progress by watching the files in ~/.local/state/mycroft/*.log. Once things slow down you can try saying \"Hey Mycroft\". In a few seconds (the first time is slow) you should hear a 'ding' from the system. Then say \"What day is it\". After a delay you should hear information about today's date.","title":"Step 3: Install ovos-core"},{"location":"unused/install_raspbian/#final-thoughts","text":"The audio can often take some tuning, which in general is not covered here. Pulseaudio should be running; check with systemctl --user status pulseaudio . Each piece of hardware is different to set up. I am sure there is a guide somewhere for your hardware. One thing to mention: this is a full Raspbian install, so installing drivers should work also. Once the OVOS processes are running, if you don't hear a 'ding' after two or three times saying \"Hey Mycroft\", start up alsamixer and make sure your microphone is recognized and the volume is turned up. At least one USB microphone (mine) defaults to \"Auto Gain Control\", which needs to be turned off and replaced by turning up the microphone volume. You may also need to turn up the speaker volume. This installation of ovos-core only ships with a few default skills. Check this page for more information on skills. Please enter suggestions and support requests on our raspbian-ovos github page. Thank you!","title":"Final thoughts"},{"location":"unused/install_skills/","text":"Installing New Skills There are a few ways to install skills in OVOS. The preferred way is with pip and a setup.py file. Finding Skills Most skills are found throughout GitHub. The official skills can be found with a simple search in the OVOS GitHub page . There are a few other places they can be found. Neon AI has several skills, and a search through GitHub will surely find more. Installing a found skill pip install The preferred method is with pip . If a skill has a setup.py file, it can be installed this way. The syntax is pip install git+ . ex. 
pip install git+https://github.com/OpenVoiceOS/skill-ovos-date-time.git should install the ovos-date-time skill They can be installed locally also. Clone the repository git clone https://github.com/OpenVoiceOS/skill-ovos-date-time pip install ./skill-ovos-date-time After installing skills this way, the OVOS skills service needs to be restarted systemctl --user restart ovos-skills git install This is NOT the preferred method and is here for backward compatibility with the original mycroft-core skills. Skills can also be directly cloned to the skill directory, usually located at ~/.local/share/mycroft/skills/ enter the skill directory cd ~/.local/share/mycroft/skills and clone the found skill here with git git clone ex. git clone https://github.com/OpenVoiceOS/skill-ovos-date-time.git will install the ovos-date-time skill. A restart of the ovos-skills service is not required when installing this way. Information on Skills Configuring Skills Deprecated The OVOS skills manager is in need of some love, and when official skill stores are created, this will be updated to use the new methods. Until then, this method is NOT recommended, and NOT supported. The following is included just as reference. OVOS skills manager Install skills from any appstore! The mycroft-skills-manager alternative that is not vendor-locked , which means you must use it responsibly! Do not install random skills; different appstores have different policies! Keep in mind any skill you install can modify mycroft-core at runtime , and very likely has root access if you are running on a Raspberry Pi Supported stores OVOS - this one is really a proof of concept for now, stay tuned! Mycroft Marketplace - the official Mycroft skills store, all skills are reviewed by humans! Pling - the official Plasma Bigscreen skills store, skills are accepted by default and only removed if flagged as malicious Andlo's skill list - not a real appstore, this is a web-scraped, automatically generated list of 900+ skills from all over GitHub; there is no review at all, so it may include malicious skills OpenVoiceOS Skill Manager pip install ovos-skills-manager Enable a skill store osm enable --appstore [ovos|mycroft|pling|andlo|all] Search for a skill and install it osm install --search See more osm commands osm --help osm install --help More Information","title":"Installing New Skills"},{"location":"unused/install_skills/#installing-new-skills","text":"There are a few ways to install skills in OVOS. The preferred way is with pip and a setup.py file.","title":"Installing New Skills"},{"location":"unused/install_skills/#finding-skills","text":"Most skills are found throughout GitHub. The official skills can be found with a simple search in the OVOS GitHub page . There are a few other places they can be found. Neon AI has several skills, and a search through GitHub will surely find more.","title":"Finding Skills"},{"location":"unused/install_skills/#installing-a-found-skill","text":"","title":"Installing a found skill"},{"location":"unused/install_skills/#pip-install","text":"The preferred method is with pip . If a skill has a setup.py file, it can be installed this way. The syntax is pip install git+ . ex. pip install git+https://github.com/OpenVoiceOS/skill-ovos-date-time.git should install the ovos-date-time skill They can be installed locally also. 
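Collected into one hedged sketch, the two pip variants and the service restart described in this section look like the following; the date-time skill URL is the example used in the text.

# variant 1: install straight from the git URL
pip install git+https://github.com/OpenVoiceOS/skill-ovos-date-time.git
# variant 2: clone locally, then install from the working copy
git clone https://github.com/OpenVoiceOS/skill-ovos-date-time
pip install ./skill-ovos-date-time
# with either variant, restart the skills service so the new skill is loaded
systemctl --user restart ovos-skills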
Clone the repository git clone https://github.com/OpenVoiceOS/skill-ovos-date-time pip install ./skill-ovos-date-time After installing skills this way, the OVOS skills service needs to be restarted systemctl --user restart ovos-skills","title":"pip install"},{"location":"unused/install_skills/#git-install","text":"This is NOT the preferred method and is here for backward compatibility with the original mycroft-core skills. Skills can also be directly cloned to the skill directory, usually located at ~/.local/share/mycroft/skills/ enter the skill directory cd ~/.local/share/mycroft/skills and clone the found skill here with git git clone ex. git clone https://github.com/OpenVoiceOS/skill-ovos-date-time.git will install the ovos-date-time skill. A restart of the ovos-skills service is not required when installing this way. Information on Skills Configuring Skills","title":"git install"},{"location":"unused/install_skills/#depreciated","text":"The OVOS skills manager is in need of some love, and when official skill stores are created, this will be updated to use the new methods. Until then, this method is NOT recommended, and NOT supported. The following is included just as reference.","title":"Deprecated"},{"location":"unused/install_skills/#ovos-skills-manager","text":"Install skills from any appstore! The mycroft-skills-manager alternative that is not vendor-locked , which means you must use it responsibly! Do not install random skills; different appstores have different policies! Keep in mind any skill you install can modify mycroft-core at runtime , and very likely has root access if you are running on a Raspberry Pi","title":"OVOS skills manager"},{"location":"unused/install_skills/#supported-stores","text":"OVOS - this one is really a proof of concept for now, stay tuned! Mycroft Marketplace - the official Mycroft skills store, all skills are reviewed by humans! Pling - the official Plasma Bigscreen skills store, skills are accepted by default and only removed if flagged as malicious Andlo's skill list - not a real appstore, this is a web-scraped, automatically generated list of 900+ skills from all over GitHub; there is no review at all, so it may include malicious skills","title":"Supported stores"},{"location":"unused/install_skills/#openvoiceos-skill-manager","text":"pip install ovos-skills-manager Enable a skill store osm enable --appstore [ovos|mycroft|pling|andlo|all] Search for a skill and install it osm install --search See more osm commands osm --help osm install --help More Information","title":"OpenVoiceOS Skill Manager"},{"location":"unused/intents/","text":"Intents A user can accomplish the same task by expressing their intent in multiple ways. The role of the intent parser is to extract from the user's speech key data elements that specify their intent in more detail. This data can then be passed to other services, such as Skills, to help the user accomplish their intended task. Example : Julie wants to know about today's weather in her current location, which is Melbourne, Australia. \"hey mycroft, what's today's weather like?\" \"hey mycroft, what's the weather like in Melbourne?\" \"hey mycroft, weather\" Even though these are three different expressions, for most of us they probably have roughly the same meaning. In each case we would assume the user expects OVOS to respond with today's weather for their current location. The role of an intent parser is to determine what this intent is. 
In the example above, we might extract data elements like: weather - we know that Julie wants to know about the weather, but she has not been specific about the type of weather, such as wind , precipitation , snowfall or the risk of fire danger from bushfires. Melbourne, Australia rarely experiences snowfall, but falls under bushfire risk every summer. location - Julie has stipulated her location as Melbourne, but she does not state that she means Melbourne, Australia. How do we distinguish this from Melbourne, Florida, United States? date - Julie has been specific about the timeframe she wants weather data for - today. But how do we know what today means in Julie's timezone? Melbourne, Australia is between 14-18 hours ahead of the United States. We don't want to give Julie yesterday's weather, particularly as Melbourne is renowned for having changeable weather. OVOS has two separate intent parsing engines, each with its own strengths. Either of these can be used in most situations, however they will process the utterance in different ways. Example based intents are trained on whole phrases. These intents are generally more accurate, however they require you to include sample phrases that cover the breadth of ways that a user may ask about something. Keyword / Rule based intents look for specific required keywords. They are more flexible, but since these are essentially rule based, this can result in a lot of false matches. A badly designed intent may totally throw the intent parser off guard. The main advantage of keyword based intents is the integration with conversational context ; they facilitate continuous dialogs. OVOS is moving towards a plugin system for intent engines; currently only the default MycroftAI intent parsers are supported. Padatious is a lightweight neural network that is trained on whole phrases. You can find the official documentation here Adapt is a keyword based parser. You can find the official documentation here","title":"Intents"},{"location":"unused/intents/#intents","text":"A user can accomplish the same task by expressing their intent in multiple ways. The role of the intent parser is to extract from the user's speech key data elements that specify their intent in more detail. This data can then be passed to other services, such as Skills, to help the user accomplish their intended task. Example : Julie wants to know about today's weather in her current location, which is Melbourne, Australia. \"hey mycroft, what's today's weather like?\" \"hey mycroft, what's the weather like in Melbourne?\" \"hey mycroft, weather\" Even though these are three different expressions, for most of us they probably have roughly the same meaning. In each case we would assume the user expects OVOS to respond with today's weather for their current location. The role of an intent parser is to determine what this intent is. In the example above, we might extract data elements like: weather - we know that Julie wants to know about the weather, but she has not been specific about the type of weather, such as wind , precipitation , snowfall or the risk of fire danger from bushfires. Melbourne, Australia rarely experiences snowfall, but falls under bushfire risk every summer. location - Julie has stipulated her location as Melbourne, but she does not state that she means Melbourne, Australia. How do we distinguish this from Melbourne, Florida, United States? date - Julie has been specific about the timeframe she wants weather data for - today. 
But how do we know what today means in Julie's timezone? Melbourne, Australia is between 14-18 hours ahead of the United States. We don't want to give Julie yesterday's weather, particularly as Melbourne is renowned for having changeable weather. OVOS has two separate intent parsing engines, each with its own strengths. Either of these can be used in most situations, however they will process the utterance in different ways. Example based intents are trained on whole phrases. These intents are generally more accurate, however they require you to include sample phrases that cover the breadth of ways that a user may ask about something. Keyword / Rule based intents look for specific required keywords. They are more flexible, but since these are essentially rule based, this can result in a lot of false matches. A badly designed intent may totally throw the intent parser off guard. The main advantage of keyword based intents is the integration with conversational context ; they facilitate continuous dialogs. OVOS is moving towards a plugin system for intent engines; currently only the default MycroftAI intent parsers are supported. Padatious is a lightweight neural network that is trained on whole phrases. You can find the official documentation here Adapt is a keyword based parser. You can find the official documentation here","title":"Intents"},{"location":"unused/lang_plugins/","text":"Language Detection/Translation Plugins These plugins can be used to detect the language of text and to translate it. They are not used internally by ovos-core but are integrated with external tools. neon-core also makes heavy use of OPM language plugins. List of Language plugins Plugin Detect Tx Offline Type neon-lang-plugin-cld2 yes no yes FOSS neon-lang-plugin-cld3 yes no yes FOSS neon-lang-plugin-langdetect yes no yes FOSS neon-lang-plugin-fastlang yes no yes FOSS neon-lang-plugin-lingua_podre yes no yes FOSS neon-lang-plugin-libretranslate yes yes no API (self hosted) neon-lang-plugin-apertium no yes no API (self hosted) neon-lang-plugin-amazon_translate yes yes no API (key) neon-lang-plugin-google_translate yes yes no API (key) Open Linguistika Open Linguistika is a tool to allow Mycroft Skill developers working on GUIs to easily translate their GUIs to other languages. The UI currently used by Mycroft\u2019s GUI, whose QML files you can find under the UI directory of skills, is based on Qt. Mycroft GUI uses Qt\u2019s translation mechanism to translate GUIs to other languages. Getting your skill's GUI translated and ready for other languages involves several manual steps, from running Qt tools like lupdate against each QML UI file for each translatable language, to running Qt\u2019s lrelease tool for specific language targets to compile a language the Qt environment can understand. To make your developer experience smarter and easier, the OpenVoiceOS team is introducing an all-in-one toolkit for GUI language translations. The Open Linguistika toolkit allows developers to use auto-translation from various supported translator providers, and additionally support more languages, with the possibility of manual translations, without having to go through the different Qt tools and command chain required to manually support a skill GUI for a different language. 
As a GUI skill developer, the only know-how you need is to add the translation calls to your skill QML files. Developers can get more information about how to add them here: Internationalization and Localization with Qt Quick | Qt 6.3. The \u201cTLDR\u201d version is that for every hard-coded string in your QML UI skill file you need to decorate your strings with the qsTr() decorator and your model list elements with the QT_TR_NOOP() decorator. Open Linguistika, when installed on your distribution of choice, currently supports 6 European languages and 2 auto-translation providers. The tool provides extensibility through its JSON configuration interface to add more language support: using a simple JSON language-addition mechanism, you can extend the tool to support additional languages for your skill's UI. You can read more about adding additional languages on the tool\u2019s GitHub repository. How-To-Use Demo:","title":"Language Detection/Translation Plugins"},{"location":"unused/lang_plugins/#language-detectiontranslation-plugins","text":"These plugins can be used to detect the language of text and to translate it. They are not used internally by ovos-core but are integrated with external tools. neon-core also makes heavy use of OPM language plugins.","title":"Language Detection/Translation Plugins"},{"location":"unused/lang_plugins/#list-of-language-plugins","text":"Plugin Detect Tx Offline Type neon-lang-plugin-cld2 yes no yes FOSS neon-lang-plugin-cld3 yes no yes FOSS neon-lang-plugin-langdetect yes no yes FOSS neon-lang-plugin-fastlang yes no yes FOSS neon-lang-plugin-lingua_podre yes no yes FOSS neon-lang-plugin-libretranslate yes yes no API (self hosted) neon-lang-plugin-apertium no yes no API (self hosted) neon-lang-plugin-amazon_translate yes yes no API (key) neon-lang-plugin-google_translate yes yes no API (key)","title":"List of Language plugins"},{"location":"unused/lang_plugins/#open-linguistika","text":"Open Linguistika is a tool to allow Mycroft Skill developers working on GUIs to easily translate their GUIs to other languages. 
The \u201cTLDR\u201d version is that for every hard-coded string in your QML UI skill file you need to decorate your strings with the qsTr() decorator and your model list elements with the QT_TR_NOOP() decorator. Open Linguistika, when installed on your distribution of choice, currently supports 6 European languages and 2 auto-translation providers. The tool provides extensibility through its JSON configuration interface to add more language support: using a simple JSON language-addition mechanism, you can extend the tool to support additional languages for your skill's UI. You can read more about adding additional languages on the tool\u2019s GitHub repository. How-To-Use Demo:","title":"Open Linguistika"},{"location":"unused/linguistika/","text":"Open Linguistika Open Linguistika is a tool to allow Mycroft Skill developers working on GUIs to easily translate their GUIs to other languages. The UI currently used by Mycroft\u2019s GUI, whose QML files you can find under the UI directory of skills, is based on Qt. Mycroft GUI uses Qt\u2019s translation mechanism to translate GUIs to other languages. 
Getting your skill's GUI translated and ready for other languages involves several manual steps, from running Qt tools like lupdate against each QML UI file for each translatable language, to running Qt\u2019s lrelease tool for specific language targets to compile a language the Qt environment can understand. To make your developer experience smarter and easier, the OpenVoiceOS team is introducing an all-in-one toolkit for GUI language translations. The Open Linguistika toolkit allows developers to use auto-translation from various supported translator providers, and additionally support more languages, with the possibility of manual translations, without having to go through the different Qt tools and command chain required to manually support a skill GUI for a different language. As a GUI skill developer, the only know-how you need is to add the translation calls to your skill QML files. Developers can get more information about how to add them here: Internationalization and Localization with Qt Quick | Qt 6.3. The \u201cTLDR\u201d version is that for every hard-coded string in your QML UI skill file you need to decorate your strings with the qsTr() decorator and your model list elements with the QT_TR_NOOP() decorator. Open Linguistika, when installed on your distribution of choice, currently supports 6 European languages and 2 auto-translation providers. The tool provides extensibility through its JSON configuration interface to add more language support: using a simple JSON language-addition mechanism, you can extend the tool to support additional languages for your skill's UI. You can read more about adding additional languages on the tool\u2019s GitHub repository. How-To-Use Demo:","title":"Open Linguistika"},{"location":"unused/manjaro/","text":"ovos-image-arch-recipe Make a Manjaro-based OpenVoiceOS image source code: https://github.com/OpenVoiceOS/ovos-image-arch-recipe/ Building Docker Automated Image Building The included Dockerfile can be used to build a default image in a Docker environment. The following dependencies must be installed on the build system before running the container: chroot qemu-user-static First, create the Docker container: docker build . -t ovos-image-builder Then, run the container to create an OVOS image. Set CORE_REF to the branch of ovos-core that you want to build and RECIPE_REF to the branch of ovos-image-recipe you want to use. Set MAKE_THREADS to the number of threads to use for make processes. docker run \\ -v /home/${USER}/output:/output:rw \\ -v /run/systemd/resolve:/run/systemd/resolve \\ -e CORE_REF=${CORE_REF:-dev} \\ -e RECIPE_REF=${RECIPE_REF:-master} \\ -e MAKE_THREADS=${MAKE_THREADS:-4} \\ --privileged \\ --network=host \\ --name=ovos-image-builder \\ ovos-image-builder The entire build process will generally take several hours; it takes 1-2 hours on a build server with 2x Xeon Gold 5118 CPUs (48T total). Interactive Image Building The scripts in the automation directory are available to help automate building a default image. For building an image interactively: bash automation/prepare.sh bash /tmp/run_scripts.sh The documentation below describes how to manually build an image using the individual scripts in this repository. Steps Getting Started The scripts and overlay files in this repository are designed to be applied to a base image as the root user. It is recommended to apply these scripts to a clean base image. Instructions are available at opensource.com . 
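For convenience, here are the Docker commands from the section above collected into one copy-pastable sketch; the output directory, branch values, and thread count are simply the defaults shown in the text.

docker build . -t ovos-image-builder   # build the builder container from the included Dockerfile
docker run \
  -v /home/${USER}/output:/output:rw \
  -v /run/systemd/resolve:/run/systemd/resolve \
  -e CORE_REF=dev \
  -e RECIPE_REF=master \
  -e MAKE_THREADS=4 \
  --privileged \
  --network=host \
  --name=ovos-image-builder \
  ovos-image-builder                   # the finished image is written to the mounted /output directory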
Note : The GUI shell is not installable under some base images. For each step except boot_overlay , the directory corresponding to the step should be copied to the mounted image and the script run from a terminal chroot -ed to the image. If running scripts from a booted image, they should be run as root . Preparation From the host system where this repository is cloned, running prepare.sh will copy boot overlay files, mount the image, mount DNS resolver config from the host system, copy all other image overlay files to /tmp , and chroot into the image. From here, you can run any/all of the following scripts to prepare the image before cleaning up core_configuration Configures user accounts and base functionality for RPi. The ovos user is created with proper permissions here. At this stage, a booted image should resize its file system to fill the drive it is flashed to. Local login and ssh connections should use ovos / ovos to authenticate and be prompted to change password on login. network_manager Adds Balena wifi-connect to enable a portal for connecting the Pi device to a Wi-Fi network. A booted image will now be ready to connect to a network via SSID OVOS . sj201 For SJ201 board support, the included script will build/install drivers, add required overlays, install required system packages, and add a systemd service to flash the SJ201 chip on boot. This will modify pulseaudio and potentially overwrite any previous settings. Note: Running this script grants GPIO permissions to the gpio group. Any user that interfaces with the SJ201 board should be a member of the gpio group. Group permissions are not modified by this script. Audio devices should now show up with pactl list . Audio devices can be tested in the image by recording a short audio clip and playing it back. parecord test.wav paplay test.wav embedded_shell Installs ovos-shell and mycroft-gui-app. Adds and enables ovos-gui.service to start the shell on system boot. The image should now boot to the GUI shell. ovos_core Installs ovos-core and dependencies. Configures services for core modules. At this stage, the image is complete and when booted should start OVOS. dashboard Installs the OVOS Dashboard and a service to start the dashboard from the GUI. From the GUI Settings -> Developer Settings menu, Enable Dashboard will now start the dashboard for remote access to device diagnostics. camera Installs libcamera and other dependencies for using a CSI camera. The default camera skill can be used to take a photo; libcamera-apps are also installed for testing via CLI. splash_screen Enables a custom splash screen and disables on-device TTY at boot. On boot, a static image should be shown until the GUI Shell starts. Clean Up cleanup.sh removes any temporary files from the mounted image before unmounting it. After running cleanup.sh , the image is ready to burn to a drive and boot.","title":"ovos-image-arch-recipe"},{"location":"unused/manjaro/#ovos-image-arch-recipe","text":"Make a Manjaro-based OpenVoiceOS image source code: https://github.com/OpenVoiceOS/ovos-image-arch-recipe/","title":"ovos-image-arch-recipe"},{"location":"unused/manjaro/#building","text":"","title":"Building"},{"location":"unused/manjaro/#docker-automated-image-building","text":"The included Dockerfile can be used to build a default image in a Docker environment. The following dependencies must be installed on the build system before running the container: chroot qemu-user-static First, create the Docker container: docker build . 
-t ovos-image-builder Then, run the container to create an OVOS image. Set CORE_REF to the branch of ovos-core that you want to build and RECIPE_REF to the branch of ovos-image-recipe you want to use. Set MAKE_THREADS to the number of threads to use for make processes. docker run \\ -v /home/${USER}/output:/output:rw \\ -v /run/systemd/resolve:/run/systemd/resolve \\ -e CORE_REF=${CORE_REF:-dev} \\ -e RECIPE_REF=${RECIPE_REF:-master} \\ -e MAKE_THREADS=${MAKE_THREADS:-4} \\ --privileged \\ --network=host \\ --name=ovos-image-builder \\ ovos-image-builder The entire build process will generally take several hours; it takes 1-2 hours on a build server with 2x Xeon Gold 5118 CPUs (48T total).","title":"Docker Automated Image Building"},{"location":"unused/manjaro/#interactive-image-building","text":"The scripts in the automation directory are available to help automate building a default image. For building an image interactively: bash automation/prepare.sh bash /tmp/run_scripts.sh The documentation below describes how to manually build an image using the individual scripts in this repository.","title":"Interactive Image Building"},{"location":"unused/manjaro/#steps","text":"","title":"Steps"},{"location":"unused/manjaro/#getting-started","text":"The scripts and overlay files in this repository are designed to be applied to a base image as the root user. It is recommended to apply these scripts to a clean base image. Instructions are available at opensource.com . Note : The GUI shell is not installable under some base images. For each step except boot_overlay , the directory corresponding to the step should be copied to the mounted image and the script run from a terminal chroot -ed to the image. If running scripts from a booted image, they should be run as root .","title":"Getting Started"},{"location":"unused/manjaro/#preparation","text":"From the host system where this repository is cloned, running prepare.sh will copy boot overlay files, mount the image, mount DNS resolver config from the host system, copy all other image overlay files to /tmp , and chroot into the image. From here, you can run any/all of the following scripts to prepare the image before cleaning up","title":"Preparation"},{"location":"unused/manjaro/#core_configuration","text":"Configures user accounts and base functionality for RPi. The ovos user is created with proper permissions here. At this stage, a booted image should resize its file system to fill the drive it is flashed to. Local login and ssh connections should use ovos / ovos to authenticate and be prompted to change password on login.","title":"core_configuration"},{"location":"unused/manjaro/#network_manager","text":"Adds Balena wifi-connect to enable a portal for connecting the Pi device to a Wi-Fi network. A booted image will now be ready to connect to a network via SSID OVOS .","title":"network_manager"},{"location":"unused/manjaro/#sj201","text":"For SJ201 board support, the included script will build/install drivers, add required overlays, install required system packages, and add a systemd service to flash the SJ201 chip on boot. This will modify pulseaudio and potentially overwrite any previous settings. Note: Running this script grants GPIO permissions to the gpio group. Any user that interfaces with the SJ201 board should be a member of the gpio group. Group permissions are not modified by this script. Audio devices should now show up with pactl list . Audio devices can be tested in the image by recording a short audio clip and playing it back. 
parecord test.wav paplay test.wav","title":"sj201"},{"location":"unused/manjaro/#embedded_shell","text":"Installs ovos-shell and mycroft-gui-app. Adds and enables ovos-gui.service to start the shell on system boot. The image should now boot to the GUI shell.","title":"embedded_shell"},{"location":"unused/manjaro/#ovos_core","text":"Installs ovos-core and dependencies. Configures services for core modules. At this stage, the image is complete and when booted should start OVOS.","title":"ovos_core"},{"location":"unused/manjaro/#dashboard","text":"Installs the OVOS Dashboard and a service to start the dashboard from the GUI. From the GUI Settings -> Developer Settings menu, Enable Dashboard will now start the dashboard for remote access to device diagnostics.","title":"dashboard"},{"location":"unused/manjaro/#camera","text":"Installs libcamera and other dependencies for using a CSI camera. The default camera skill can be used to take a photo; libcamera-apps are also installed for testing via CLI.","title":"camera"},{"location":"unused/manjaro/#splash_screen","text":"Enables a custom splash screen and disables on-device TTY at boot. On boot, a static image should be shown until the GUI Shell starts.","title":"splash_screen"},{"location":"unused/manjaro/#clean-up","text":"cleanup.sh removes any temporary files from the mounted image before unmounting it. After running cleanup.sh , the image is ready to burn to a drive and boot.","title":"Clean Up"},{"location":"unused/playing_music/","text":"Playing music For playing music (and video, as discussed in the next chapter), OpenVoiceOS uses OCP (OpenVoiceOS Common Play), which is basically a full-fledged multimedia player in its own right, designed around open standards like MPRIS and with the vision of being fully integrated within the OpenVoiceOS software stack. Skills designed for OCP provide search results for OCP (think of them as media providers/catalogs/scrapers); OCP will play the best search result for you. OpenVoiceOS comes with a few OCP skills pre-installed, however more can be installed just like any other OVOS skill. You can find more OCP skills in the awesome-ocp-skills list YouTube Music A voice assistant with smart speaker functionality should be able to play music straight out of the box. For that reason the Buildroot edition of OpenVoiceOS comes with the YouTube Music OCP Skill pre-installed. Just asking it to play something will start playback from YouTube, assuming the requested song is present on YouTube, of course. Hey Mycroft, play disturbed sound of silence This should just start playing, utilizing OCP as shown below. More information about the full functionality of OCP can be found in its own chapter. Play the news Nothing is more relaxing, after waking up and cancelling the alarm set on your OpenVoiceOS device, than listening to your favorite news station while drinking some coffee (no, OpenVoiceOS cannot make you that coffee yet). Hey Mycroft, play the BBC news Some more features that come out of the box The whole OCP framework has some benefits and features that are not skill-specific, such as \"Playlists\" and a view of the search results. You can access those by swiping to the right when something is playing. 
{"location":"unused/playing_music/#playing-music","text":"For playing music (and video, as discussed in the next chapter), OpenVoiceOS uses OCP (OpenVoiceOS Common Play), which is basically a full-fledged multimedia player in its own right, designed around open standards like MPRIS and with the vision of being fully integrated within the OpenVoiceOS software stack. Skills designed for OCP provide search results for OCP (think of them as media providers/catalogs/scrapers), and OCP will play the best search result for you. OpenVoiceOS comes with a few OCP skills pre-installed; however, more can be installed just like any other OVOS skill. You can find more OCP skills in the awesome-ocp-skills list","title":"Playing music"},{"location":"unused/playing_music/#youtube-music","text":"A voice assistant with smart speaker functionality should be able to play music straight out of the box. For that reason the buildroot edition of OpenVoiceOS comes with the Youtube Music OCP Skill pre-installed. Just asking it to play something will start playback from Youtube, assuming the requested song is present on Youtube of course. Hey Mycroft, play disturbed sound of silence This should just start playing, utilizing OCP as shown below. More information about the full functionality of OCP can be found in its own chapter.","title":"Youtube Music"},{"location":"unused/playing_music/#play-the-news","text":"Nothing is more relaxing, after waking up and cancelling the alarm set on your OpenVoiceOS device, than listening to your favorite news station while drinking some coffee (no, OpenVoiceOS cannot make you that coffee yet). Hey Mycroft, play the BBC news","title":"Play the news"},{"location":"unused/playing_music/#some-more-features-that-come-out-of-the-box","text":"The whole OCP framework has some benefits and features that are not skill specific, such as \"Playlists\" and a view of the search results. You can access those by swiping to the right when something is playing.","title":"Some more features that come out of the box"},{"location":"unused/playing_music/#homescreen-widget","text":"The homescreen skill that comes pre-installed with OpenVoiceOS also comes with a widget for the OCP framework.","title":"Homescreen widget"},{"location":"unused/playing_video/","text":"Playing video Although the screen used on your OpenVoiceOS device might be small, the whole OCP media playing framework does support video playback. You can find video OCP skills in the same awesome-ocp-skills list. The fourth column, \"playback type\", shows which type of player is used for that specific skill. If you use a skill that utilizes the \"video player\", the view below will be shown on your OpenVoiceOS device's screen during playback.","title":"Playing video"},{"location":"unused/playing_video/#playing-video","text":"Although the screen used on your OpenVoiceOS device might be small, the whole OCP media playing framework does support video playback. You can find video OCP skills in the same awesome-ocp-skills list. The fourth column, \"playback type\", shows which type of player is used for that specific skill. If you use a skill that utilizes the \"video player\", the view below will be shown on your OpenVoiceOS device's screen during playback.","title":"Playing video"},{"location":"unused/plugin_intro/","text":"","title":"Plugin intro"},{"location":"unused/prompts/","text":"Prompts Here we look at how to implement the most common types of prompts.
For more information on conversation design, see the Voice User Interface Design Guidelines . Get Response Any Skill can request a response from the user - making a statement or asking a question before the microphone is activated to record the User's response. The base implementation of this is the get_response() method . To see it in action, let's create a simple Skill that asks the User what their favorite flavor of ice cream is. from mycroft import MycroftSkill, intent_handler class IceCreamSkill(MycroftSkill): @intent_handler('set.favorite.intent') def handle_set_favorite(self): favorite_flavor = self.get_response('what.is.your.favorite.flavor') self.speak_dialog('confirm.favorite.flavor', {'flavor': favorite_flavor}) def create_skill(): return IceCreamSkill() In this Skill we have used get_response() and passed it the name of our dialog file 'what.is.your.favorite.flavor.dialog' . This is the simplest form of this method. It will speak dialog from the given file, then activate the microphone for 3-10 seconds allowing the User to respond. The transcript of their response will then be assigned to our variable favorite_flavor . To confirm that we have heard the User correctly, we then speak a confirmation dialog, passing the value of favorite_flavor to be spoken as part of that dialog. Optional Arguments The get_response() method also takes the following optional arguments: data (dict) - used to populate the dialog file, just like speak_dialog() validator (function) - returns a boolean to define whether the response meets some criteria for success on_fail (function) - returns a string that will be spoken if the validator returns False num_retries (int) - number of times the system should repeat the question to get a successful result Yes / No Questions ask_yesno() checks if the response contains \"yes\" or \"no\" like phrases. The vocab for this check is sourced from the Skill's yes.voc and no.voc files (if they exist), as well as mycroft-core's defaults (contained within mycroft-core/res/text/en-us/yes.voc ). A longer phrase containing the required vocab is considered successful, e.g. both \"yes\" and \"yeah that would be great thanks\" would be considered a successful \"yes\". If \"yes\" or \"no\" responses are detected, then the method will return the string \"yes\" or \"no\". If the response does not contain \"yes\" or \"no\" vocabulary, then the entire utterance will be returned. If no speech was detected, indicating the User did not respond, then the method will return None . Let's add a new intent to our IceCreamSkill to see how this works. from mycroft import MycroftSkill, intent_handler class IceCreamSkill(MycroftSkill): @intent_handler('do.you.like.intent') def handle_do_you_like(self): likes_ice_cream = self.ask_yesno('do.you.like.ice.cream') if likes_ice_cream == 'yes': self.speak_dialog('does.like') elif likes_ice_cream == 'no': self.speak_dialog('does.not.like') else: self.speak_dialog('could.not.understand') def create_skill(): return IceCreamSkill() In this example we have asked the User if they like ice cream. We then speak different dialog depending on whether they respond yes or no. We also speak some error dialog if neither yes nor no is returned. Providing a list of options ask_selection() provides a list of options to the User for them to select from. The User can respond with either the name of one of these options or select with a numbered ordinal, e.g. \"the third\". This method automatically manages fuzzy matching of the User's response against the list of options provided.
Let's jump back into our IceCreamSkill to give the User a list of options to choose from. from mycroft import MycroftSkill, intent_handler class IceCreamSkill(MycroftSkill): def __init__(self): MycroftSkill.__init__(self) self.flavors = ['vanilla', 'chocolate', 'mint'] @intent_handler('request.icecream.intent') def handle_request_icecream(self): self.speak_dialog('welcome') selection = self.ask_selection(self.flavors, 'what.flavor') self.speak_dialog('coming.right.up', {'flavor': selection}) def create_skill(): return IceCreamSkill() In this example we first speak some welcome.dialog . The list of flavors is then spoken, followed by the what.flavor.dialog . Finally, we confirm the User's selection by speaking coming.right.up.dialog Optional arguments There are two optional arguments for this method. min_conf (float) defines the minimum confidence level for fuzzy matching the User's response against the list of options. numeric (bool) if set to True will speak the options as a numbered list, e.g. \"One, vanilla. Two, chocolate. Or three, mint\" Returning responses to the intent parser So far we have looked at ways to prompt the User and return their response directly to our Skill. It is also possible to speak some dialog and activate the listener, directing the response back to the standard intent parsing engine. We may do this to let the user trigger another Skill, or because we want to make use of our own intents to handle the response. To do this, we use the expect_response parameter of the speak_dialog() method. from mycroft import MycroftSkill, intent_handler class IceCreamSkill(MycroftSkill): def __init__(self): MycroftSkill.__init__(self) self.flavors = ['vanilla', 'chocolate', 'mint'] @intent_handler('request.icecream.intent') def handle_request_icecream(self): self.speak_dialog('welcome') selection = self.ask_selection(self.flavors, 'what.flavor') self.speak_dialog('coming.right.up', {'flavor': selection}) self.speak_dialog('now.what', expect_response=True) def create_skill(): return IceCreamSkill() Here we have added a new dialog after confirming the User's selection. We may use it to tell the User other things they can do with their OVOS device while they enjoy their delicious ice cream.","title":"Prompts"},
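The optional arguments to get_response() described above combine naturally. Here is a short sketch using a validator, an on_fail handler, and a retry count; the dialog file names and the list of known flavors are hypothetical placeholders.

```python
# Sketch: get_response() with validator, on_fail and num_retries.
# Dialog file names and the flavor list are illustrative placeholders.
from mycroft import MycroftSkill, intent_handler

KNOWN_FLAVORS = ['vanilla', 'chocolate', 'mint']


class IceCreamSkill(MycroftSkill):
    @intent_handler('set.favorite.intent')
    def handle_set_favorite(self):
        favorite = self.get_response(
            'what.is.your.favorite.flavor',
            # validator: accept only flavors we actually stock
            validator=lambda utterance: any(f in utterance for f in KNOWN_FLAVORS),
            # on_fail: return the string to speak when validation fails
            on_fail=lambda utterance: "Sorry, we don't have that flavor",
            # re-ask up to two more times before giving up
            num_retries=2)
        if favorite is not None:
            self.speak_dialog('confirm.favorite.flavor', {'flavor': favorite})


def create_skill():
    return IceCreamSkill()
```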
{"location":"unused/prompts/#prompts","text":"Here we look at how to implement the most common types of prompts. For more information on conversation design, see the Voice User Interface Design Guidelines .","title":"Prompts"},{"location":"unused/prompts/#get-response","text":"Any Skill can request a response from the user - making a statement or asking a question before the microphone is activated to record the User's response. The base implementation of this is the get_response() method . To see it in action, let's create a simple Skill that asks the User what their favorite flavor of ice cream is. from mycroft import MycroftSkill, intent_handler class IceCreamSkill(MycroftSkill): @intent_handler('set.favorite.intent') def handle_set_favorite(self): favorite_flavor = self.get_response('what.is.your.favorite.flavor') self.speak_dialog('confirm.favorite.flavor', {'flavor': favorite_flavor}) def create_skill(): return IceCreamSkill() In this Skill we have used get_response() and passed it the name of our dialog file 'what.is.your.favorite.flavor.dialog' . This is the simplest form of this method. It will speak dialog from the given file, then activate the microphone for 3-10 seconds allowing the User to respond. The transcript of their response will then be assigned to our variable favorite_flavor . To confirm that we have heard the User correctly, we then speak a confirmation dialog, passing the value of favorite_flavor to be spoken as part of that dialog.","title":"Get Response"},{"location":"unused/prompts/#optional-arguments","text":"The get_response() method also takes the following optional arguments: data (dict) - used to populate the dialog file, just like speak_dialog() validator (function) - returns a boolean to define whether the response meets some criteria for success on_fail (function) - returns a string that will be spoken if the validator returns False num_retries (int) - number of times the system should repeat the question to get a successful result","title":"Optional Arguments"},{"location":"unused/prompts/#yes-no-questions","text":"ask_yesno() checks if the response contains \"yes\" or \"no\" like phrases. The vocab for this check is sourced from the Skill's yes.voc and no.voc files (if they exist), as well as mycroft-core's defaults (contained within mycroft-core/res/text/en-us/yes.voc ). A longer phrase containing the required vocab is considered successful, e.g. both \"yes\" and \"yeah that would be great thanks\" would be considered a successful \"yes\". If \"yes\" or \"no\" responses are detected, then the method will return the string \"yes\" or \"no\". If the response does not contain \"yes\" or \"no\" vocabulary, then the entire utterance will be returned. If no speech was detected, indicating the User did not respond, then the method will return None . Let's add a new intent to our IceCreamSkill to see how this works. from mycroft import MycroftSkill, intent_handler class IceCreamSkill(MycroftSkill): @intent_handler('do.you.like.intent') def handle_do_you_like(self): likes_ice_cream = self.ask_yesno('do.you.like.ice.cream') if likes_ice_cream == 'yes': self.speak_dialog('does.like') elif likes_ice_cream == 'no': self.speak_dialog('does.not.like') else: self.speak_dialog('could.not.understand') def create_skill(): return IceCreamSkill() In this example we have asked the User if they like ice cream. We then speak different dialog depending on whether they respond yes or no. We also speak some error dialog if neither yes nor no is returned.","title":"Yes / No Questions"},{"location":"unused/prompts/#providing-a-list-of-options","text":"ask_selection() provides a list of options to the User for them to select from. The User can respond with either the name of one of these options or select with a numbered ordinal, e.g. \"the third\". This method automatically manages fuzzy matching of the User's response against the list of options provided. Let's jump back into our IceCreamSkill to give the User a list of options to choose from. from mycroft import MycroftSkill, intent_handler class IceCreamSkill(MycroftSkill): def __init__(self): MycroftSkill.__init__(self) self.flavors = ['vanilla', 'chocolate', 'mint'] @intent_handler('request.icecream.intent') def handle_request_icecream(self): self.speak_dialog('welcome') selection = self.ask_selection(self.flavors, 'what.flavor') self.speak_dialog('coming.right.up', {'flavor': selection}) def create_skill(): return IceCreamSkill() In this example we first speak some welcome.dialog . The list of flavors is then spoken, followed by the what.flavor.dialog .
Finally, we confirm the User's selection by speaking coming.right.up.dialog","title":"Providing a list of options"},{"location":"unused/prompts/#optional-arguments_1","text":"There are two optional arguments for this method. min_conf (float) defines the minimum confidence level for fuzzy matching the User's response against the list of options. numeric (bool) if set to True will speak the options as a numbered list, e.g. \"One, vanilla. Two, chocolate. Or three, mint\"","title":"Optional arguments"},{"location":"unused/prompts/#returning-responses-to-the-intent-parser","text":"So far we have looked at ways to prompt the User and return their response directly to our Skill. It is also possible to speak some dialog and activate the listener, directing the response back to the standard intent parsing engine. We may do this to let the user trigger another Skill, or because we want to make use of our own intents to handle the response. To do this, we use the expect_response parameter of the speak_dialog() method. from mycroft import MycroftSkill, intent_handler class IceCreamSkill(MycroftSkill): def __init__(self): MycroftSkill.__init__(self) self.flavors = ['vanilla', 'chocolate', 'mint'] @intent_handler('request.icecream.intent') def handle_request_icecream(self): self.speak_dialog('welcome') selection = self.ask_selection(self.flavors, 'what.flavor') self.speak_dialog('coming.right.up', {'flavor': selection}) self.speak_dialog('now.what', expect_response=True) def create_skill(): return IceCreamSkill() Here we have added a new dialog after confirming the User's selection. We may use it to tell the User other things they can do with their OVOS device while they enjoy their delicious ice cream.","title":"Returning responses to the intent parser"},
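Rounding out the prompt types, ask_selection()'s two optional arguments can be combined in the same way. A minimal sketch, reusing the hypothetical flavor list from the examples above:

```python
# Sketch: ask_selection() with min_conf and numeric.
from mycroft import MycroftSkill, intent_handler


class IceCreamSkill(MycroftSkill):
    def __init__(self):
        MycroftSkill.__init__(self)
        self.flavors = ['vanilla', 'chocolate', 'mint']

    @intent_handler('request.icecream.intent')
    def handle_request_icecream(self):
        selection = self.ask_selection(
            self.flavors, 'what.flavor',
            min_conf=0.7,   # require a closer fuzzy match than the default
            numeric=True)   # "One, vanilla. Two, chocolate. Or three, mint"
        self.speak_dialog('coming.right.up', {'flavor': selection})


def create_skill():
    return IceCreamSkill()
```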
{"location":"unused/qml_guidelines/","text":"QML Design Guidelines An introduction to QML and additional documentation are available here The Mycroft-GUI framework provides you with some base delegates you should use when designing your QML GUI. The base delegates provide you with a basic presentation layer for your skill, with some property assignments that can help you set up background images and background dim, to give you the control you need for rendering an experience. Before we dive deeper into the Design Guidelines, let's look at some concepts that a GUI developer should learn about: Units & Theming Units: Mycroft.Units.GridUnit is the fundamental unit of space that should be used for all sizing inside the QML UI, expressed in pixels. Each GridUnit is predefined as 16 pixels. // Usage in QML Components example width: Mycroft.Units.gridUnit * 2 // 32px Wide height: Mycroft.Units.gridUnit // 16px Tall Theming: OVOS Shell uses a custom Kirigami Platform Theme plugin to provide global theming to all our skills and user interfaces, which also allows our GUIs to be fully compatible with the system themes on platforms that are not running the OVOS Shell. The Kirigami Theme and Color Scheme guide is extensive and can be found here OVOS GUIs developed to follow the color scheme depend on only a subset of the available colors, mainly: Kirigami.Theme.backgroundColor = Primary Color (Background Color: This will always be a dark palette or light palette depending on the dark or light chosen color scheme) Kirigami.Theme.highlightColor = Secondary Color (Accent Color: This will always be a standout palette that defines the theme's dominant color and can be used for buttons, cards, borders, highlighted text etc.) Kirigami.Theme.textColor = Text Color (This will always be an opposite palette to the selected primary color) QML Delegate Design Best Practice Let's look at this image and QML example below; this is a representation of the Mycroft Delegate: When designing your first QML file, it is important to note the red triangles in the above image. These triangles represent the margin from the screen edge within which the GUI needs to be designed; these margins ensure your GUI content does not overlap with features like edge lighting and menus on platforms that support them, like OVOS-Shell. The content items and components all utilize the selected color scheme, where black is the primary background color, red is our accent color and white is our contrasting text color. Let's look at this in QML: import ... import Mycroft 1.0 as Mycroft Mycroft.Delegate { skillBackgroundSource: sessionData.exampleImage leftPadding: 0 rightPadding: 0 topPadding: 0 bottomPadding: 0 Rectangle { anchors.fill: parent // Setting margins that need to be left for the screen edges anchors.margins: Mycroft.Units.gridUnit * 2 // Setting a background dim using our primary theme / background color on top of our skillBackgroundSource image for better readability and contrast color: Qt.rgba(Kirigami.Theme.backgroundColor.r, Kirigami.Theme.backgroundColor.g, Kirigami.Theme.backgroundColor.b, 0.3) Kirigami.Heading { level: 2 text: \"An Example Pie Chart\" anchors.top: parent.top anchors.left: parent.left anchors.right: parent.right height: Mycroft.Units.gridUnit * 3 // Setting the text color to always follow the color scheme for this item displayed on the screen color: Kirigami.Theme.textColor } PieChart { anchors.centerIn: parent pieColorMinor: Kirigami.Theme.backgroundColor // As in the image above, the minor area of the pie chart uses our primary color pieColorMid: Kirigami.Theme.highlightColor // As in the image above, the middle area is assigned the highlight, our accent color pieColorMajor: Kirigami.Theme.textColor // As in the image above, the major area is assigned the text color } } } QML Delegate Multi Platform and Screen Guidelines OVOS Skill GUIs are designed to be multi-platform and screen friendly; to support this we always try to support both horizontal and vertical displays. Let's look at an example and a general approach to writing multi-resolution-friendly UIs. Let's look at these images below that represent a Delegate as seen on a horizontal screen: Let's look at these images below that represent a Delegate as seen on a vertical screen: When designing for different screens it is preferred to utilize Grids, GridLayouts and GridViews; this allows easier content placement, as one can control the number of columns and rows displayed on the screen. It is also recommended to use Flickables when you believe your content is not going to fit on the screen; this allows content to always be scrollable. To make it easier to design scrollable content, Mycroft GUI provides you with a ready-to-use Mycroft.ScrollableDelegate. It is also preferred to use the width vs height comparison on the root delegate item to know when the screen should be using a vertical layout vs horizontal layout. Let's look at this in QML: import ... import Mycroft 1.0 as Mycroft Mycroft.Delegate { id: root skillBackgroundSource: sessionData.exampleImage leftPadding: 0 rightPadding: 0 topPadding: 0 bottomPadding: 0 property bool horizontalMode: width >= height ?
1 : 0 // Using a ternary operator to detect whether the width of the delegate is greater than the height, which determines whether the delegate is in horizontal mode Rectangle { anchors.fill: parent // Setting margins that need to be left for the screen edges anchors.margins: Mycroft.Units.gridUnit * 2 // Setting a background dim using our primary theme / background color on top of our skillBackgroundSource image for better readability and contrast color: Qt.rgba(Kirigami.Theme.backgroundColor.r, Kirigami.Theme.backgroundColor.g, Kirigami.Theme.backgroundColor.b, 0.3) Kirigami.Heading { level: 2 text: \"An Example Pie Chart\" // Setting the text color to always follow the color scheme color: Kirigami.Theme.textColor } GridLayout { id: examplesGridView // If we are in horizontal mode we display two columns, as in the image above; in vertical mode we display a single column only columns: root.horizontalMode ? 2 : 1 Repeater { model: examplesModel delegate: ExamplesDelegate { ... } } } } }","title":"Qml guidelines"},{"location":"unused/qml_guidelines/#qml-design-guidelines","text":"An introduction to QML and additional documentation are available here The Mycroft-GUI framework provides you with some base delegates you should use when designing your QML GUI. The base delegates provide you with a basic presentation layer for your skill, with some property assignments that can help you set up background images and background dim, to give you the control you need for rendering an experience. Before we dive deeper into the Design Guidelines, let's look at some concepts that a GUI developer should learn about:","title":"QML Design Guidelines"},{"location":"unused/qml_guidelines/#units-theming","text":"","title":"Units & Theming"},{"location":"unused/qml_guidelines/#units","text":"Mycroft.Units.GridUnit is the fundamental unit of space that should be used for all sizing inside the QML UI, expressed in pixels. Each GridUnit is predefined as 16 pixels. // Usage in QML Components example width: Mycroft.Units.gridUnit * 2 // 32px Wide height: Mycroft.Units.gridUnit // 16px Tall","title":"Units:"},{"location":"unused/qml_guidelines/#theming","text":"OVOS Shell uses a custom Kirigami Platform Theme plugin to provide global theming to all our skills and user interfaces, which also allows our GUIs to be fully compatible with the system themes on platforms that are not running the OVOS Shell. The Kirigami Theme and Color Scheme guide is extensive and can be found here OVOS GUIs developed to follow the color scheme depend on only a subset of the available colors, mainly: Kirigami.Theme.backgroundColor = Primary Color (Background Color: This will always be a dark palette or light palette depending on the dark or light chosen color scheme) Kirigami.Theme.highlightColor = Secondary Color (Accent Color: This will always be a standout palette that defines the theme's dominant color and can be used for buttons, cards, borders, highlighted text etc.)
Kirigami.Theme.textColor = Text Color (This will always be an opposite palette to the selected primary color)","title":"Theming:"},{"location":"unused/qml_guidelines/#qml-delegate-design-best-practise","text":"Let's look at this image and QML example below; this is a representation of the Mycroft Delegate: When designing your first QML file, it is important to note the red triangles in the above image. These triangles represent the margin from the screen edge within which the GUI needs to be designed; these margins ensure your GUI content does not overlap with features like edge lighting and menus on platforms that support them, like OVOS-Shell. The content items and components all utilize the selected color scheme, where black is the primary background color, red is our accent color and white is our contrasting text color. Let's look at this in QML: import ... import Mycroft 1.0 as Mycroft Mycroft.Delegate { skillBackgroundSource: sessionData.exampleImage leftPadding: 0 rightPadding: 0 topPadding: 0 bottomPadding: 0 Rectangle { anchors.fill: parent // Setting margins that need to be left for the screen edges anchors.margins: Mycroft.Units.gridUnit * 2 // Setting a background dim using our primary theme / background color on top of our skillBackgroundSource image for better readability and contrast color: Qt.rgba(Kirigami.Theme.backgroundColor.r, Kirigami.Theme.backgroundColor.g, Kirigami.Theme.backgroundColor.b, 0.3) Kirigami.Heading { level: 2 text: \"An Example Pie Chart\" anchors.top: parent.top anchors.left: parent.left anchors.right: parent.right height: Mycroft.Units.gridUnit * 3 // Setting the text color to always follow the color scheme for this item displayed on the screen color: Kirigami.Theme.textColor } PieChart { anchors.centerIn: parent pieColorMinor: Kirigami.Theme.backgroundColor // As in the image above, the minor area of the pie chart uses our primary color pieColorMid: Kirigami.Theme.highlightColor // As in the image above, the middle area is assigned the highlight, our accent color pieColorMajor: Kirigami.Theme.textColor // As in the image above, the major area is assigned the text color } } }","title":"QML Delegate Design Best Practice"},{"location":"unused/qml_guidelines/#qml-delegate-multi-platform-and-screen-guidelines","text":"OVOS Skill GUIs are designed to be multi-platform and screen friendly; to support this we always try to support both horizontal and vertical displays. Let's look at an example and a general approach to writing multi-resolution-friendly UIs. Let's look at these images below that represent a Delegate as seen on a horizontal screen: Let's look at these images below that represent a Delegate as seen on a vertical screen: When designing for different screens it is preferred to utilize Grids, GridLayouts and GridViews; this allows easier content placement, as one can control the number of columns and rows displayed on the screen. It is also recommended to use Flickables when you believe your content is not going to fit on the screen; this allows content to always be scrollable. To make it easier to design scrollable content, Mycroft GUI provides you with a ready-to-use Mycroft.ScrollableDelegate. It is also preferred to use the width vs height comparison on the root delegate item to know when the screen should be using a vertical layout vs horizontal layout. Let's look at this in QML: import ...
import Mycroft 1.0 as Mycroft Mycroft.Delegate { id: root skillBackgroundSource: sessionData.exampleImage leftPadding: 0 rightPadding: 0 topPadding: 0 bottomPadding: 0 property bool horizontalMode: width >= height ? 1 : 0 // Using a ternary operator to detect whether the width of the delegate is greater than the height, which determines whether the delegate is in horizontal mode Rectangle { anchors.fill: parent // Setting margins that need to be left for the screen edges anchors.margins: Mycroft.Units.gridUnit * 2 // Setting a background dim using our primary theme / background color on top of our skillBackgroundSource image for better readability and contrast color: Qt.rgba(Kirigami.Theme.backgroundColor.r, Kirigami.Theme.backgroundColor.g, Kirigami.Theme.backgroundColor.b, 0.3) Kirigami.Heading { level: 2 text: \"An Example Pie Chart\" // Setting the text color to always follow the color scheme color: Kirigami.Theme.textColor } GridLayout { id: examplesGridView // If we are in horizontal mode we display two columns, as in the image above; in vertical mode we display a single column only columns: root.horizontalMode ? 2 : 1 Repeater { model: examplesModel delegate: ExamplesDelegate { ... } } } } }","title":"QML Delegate Multi Platform and Screen Guidelines"},{"location":"unused/scraps/","text":"from downloading image Buildroot SSH Details: Username: mycroft | password: mycroft Manjaro SSH Details for Respeaker Image: Username: mycroft | password: 12345 Manjaro SSH Details for Mark-2/DevKit Image: Username: ovos | password: ovos buildroot manjaro From Backend Admin Api (personal backend only!) Since the local backend does not provide a web UI, an admin API can be used to manage your devices. A backend is a service that provides your device with additional tools to function; these could range from managing your skill settings to configuring certain aspects of your device. OpenVoiceOS is all about choice. We currently support 3 backend types: Selene Backend The Mycroft backend connects your device to Mycroft's servers and allows you to use their web interface to manage your device. This requires pairing, and all your speech-to-text queries are processed via this backend. Personal Backend The personal backend is a choice for users who would like to self-host their own backend on the device or in their personal home network. This backend requires additional setup but also provides a cool web interface to configure your device and manage your settings. No Backend OpenVoiceOS by default comes with no backend; we do not really believe you need a backend to do anything. This is the best choice for your device if you wish to run completely locally. We provide you with a whole list of speech-to-text and text-to-speech online and offline options to choose from. All communication with the outside world happens from your own device, without data sharing. From First Run Setting up your device at first run.
At the first run of your OpenVoiceOS device, a first-run setup wizard is started that guides you through the process of setting up your device.","title":"Scraps"},{"location":"unused/scraps/#from-downloading-image","text":"Buildroot SSH Details: Username: mycroft | password: mycroft Manjaro SSH Details for Respeaker Image: Username: mycroft | password: 12345 Manjaro SSH Details for Mark-2/DevKit Image: Username: ovos | password: ovos buildroot manjaro From Backend","title":"from downloading image"},{"location":"unused/scraps/#admin-api-personal-backend-only","text":"Since the local backend does not provide a web UI, an admin API can be used to manage your devices. A backend is a service that provides your device with additional tools to function; these could range from managing your skill settings to configuring certain aspects of your device. OpenVoiceOS is all about choice. We currently support 3 backend types:","title":"Admin Api (personal backend only!)"},{"location":"unused/scraps/#selene-backend","text":"The Mycroft backend connects your device to Mycroft's servers and allows you to use their web interface to manage your device. This requires pairing, and all your speech-to-text queries are processed via this backend.","title":"Selene Backend"},{"location":"unused/scraps/#personal-backend","text":"The personal backend is a choice for users who would like to self-host their own backend on the device or in their personal home network. This backend requires additional setup but also provides a cool web interface to configure your device and manage your settings.","title":"Personal Backend"},{"location":"unused/scraps/#no-backend","text":"OpenVoiceOS by default comes with no backend; we do not really believe you need a backend to do anything. This is the best choice for your device if you wish to run completely locally. We provide you with a whole list of speech-to-text and text-to-speech online and offline options to choose from. All communication with the outside world happens from your own device, without data sharing.","title":"No Backend"},{"location":"unused/scraps/#from-first-run","text":"","title":"From First Run"},{"location":"unused/scraps/#setting-up-your-device-at-first-run","text":"At the first run of your OpenVoiceOS device, a first-run setup wizard is started that guides you through the process of setting up your device.","title":"Setting up your device at first run."},{"location":"unused/selecting_backend/","text":"","title":"Selecting backend"},{"location":"unused/server-stuff/","text":"Using a Personal Backend This is considered an advanced function and is unnecessary for normal usage. The default for ovos-core is no backend. You can go without a backend, run offline, and use our free proxy for API services with no accounts. This setup requires there to be a running personal-backend. Refer to this Github page for details. GUI Configuration If your installation has the skill-ovos-setup installed, you will have a GUI available to perform the setup of your device to use the personal backend that you set up. NOTE it is NOT advised to install this skill manually, as it can cause issues if OVOS was not configured to use it. Skip to the Manual Configuration section for headless devices or if this skill was not pre-installed. On first boot, you will be presented with a screen to choose a backend option. NOTE The Selene backend shown in the image is no longer available as an option Select Personal Backend from the options.
The next screen will allow you to enter the IP address of your personal backend server. Enter the IP address and port number of your personal backend, e.g. 192.168.1.xxx:6712 If everything is entered correctly, and your backend is running, you should see a screen showing that your connection was successful. You should now be able to configure your device with your backend. Manual Configuration This section requires shell access to the device, either with a direct connection or SSH . The local file ~/.config/mycroft/mycroft.conf contains local settings that the user has specified. This file may not exist, and will have to be created to continue. Open the file to edit it nano ~/.config/mycroft/mycroft.conf Add this section to your file. This file must be in valid JSON or YAML format. { \"server\": { \"url\": \"http://\", \"version\": \"v1\", \"update\": true, \"metrics\": true } } You will also have to make sure there is not an identity file already configured rm ~/.config/mycroft/identity/identity2.json Restart your device, and you should be connected to your backend.","title":"Server stuff"},{"location":"unused/server-stuff/#using-a-personal-backend","text":"This is considered an advanced function and is unnecessary for normal usage. The default for ovos-core is no backend. You can go without a backend, run offline, and use our free proxy for API services with no accounts. This setup requires there to be a running personal-backend. Refer to this Github page for details.","title":"Using a Personal Backend"},{"location":"unused/server-stuff/#gui-configuration","text":"If your installation has the skill-ovos-setup installed, you will have a GUI available to perform the setup of your device to use the personal backend that you set up. NOTE it is NOT advised to install this skill manually, as it can cause issues if OVOS was not configured to use it. Skip to the Manual Configuration section for headless devices or if this skill was not pre-installed. On first boot, you will be presented with a screen to choose a backend option. NOTE The Selene backend shown in the image is no longer available as an option Select Personal Backend from the options. The next screen will allow you to enter the IP address of your personal backend server. Enter the IP address and port number of your personal backend, e.g. 192.168.1.xxx:6712 If everything is entered correctly, and your backend is running, you should see a screen showing that your connection was successful. You should now be able to configure your device with your backend.","title":"GUI Configuration"},{"location":"unused/server-stuff/#manual-configuration","text":"This section requires shell access to the device, either with a direct connection or SSH . The local file ~/.config/mycroft/mycroft.conf contains local settings that the user has specified. This file may not exist, and will have to be created to continue. Open the file to edit it nano ~/.config/mycroft/mycroft.conf Add this section to your file. This file must be in valid JSON or YAML format. { \"server\": { \"url\": \"http://\", \"version\": \"v1\", \"update\": true, \"metrics\": true } } You will also have to make sure there is not an identity file already configured rm ~/.config/mycroft/identity/identity2.json Restart your device, and you should be connected to your backend.","title":"Manual Configuration"},
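Because a malformed config file is the most common failure here, it is worth validating the file before restarting. A small sketch, assuming the file was written as plain JSON (without comments) at the default user config path:

```python
# Sanity-check that ~/.config/mycroft/mycroft.conf parses as JSON
# and that the server section is present.
import json
from pathlib import Path

conf_path = Path.home() / ".config" / "mycroft" / "mycroft.conf"
with open(conf_path) as f:
    config = json.load(f)  # raises json.JSONDecodeError on a syntax error

print(config.get("server", "no server section configured"))
```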
{ \"server\": { \"url\": \"http://\", \"version\": \"v1\", \"update\": true, \"metrics\": true } } You will also have to make sure there is not an identity file already configured rm ~/.config/mycroft/identity/identity2.json Restart your device, and you should be connected to your backend.","title":"Manual Configuration"},{"location":"unused/setup_network/","text":"Setting up your Wi-Fi network connection Depending on which image you downloaded you will be greeted by the network configuration screen with either one or two option. The buildroot image supports setting up the network via two options. On a mobile device On the OpenVoiceOS device itself. You can also skip this step to configure it later or never ask it again if you want your device to run fully offline. (Bear in mind you need to configure your device to use local STT and TTS engines and obvious asking your device things that require internet will not work.) On Mobile Setup Choosing this option will create a temporarily open network - hotspot called \"OVOS\" to which you can connect from your mobile device. On your mobile device go into Settings -> Wi-Fi Settings and the \"OVOS\" open network will appear in its list. Connect your device with the OVOS network and a webpage will open to configure the Wi-Fi network on your OpenVoiceOS device. (NOTE: On newer mobile operating systems the captive portal capture has changed and the website will not automatically be opened. If this is the case you can open a browser manually and go to http://172.16.127.1 ) The following webpage will be shown; Select your Wi-Fi network from the list, insert your password and press the \"Connect\" button. If everything went fine, you will soon see the green \"connected\" screen on your OpenVoiceOS device. On Device Setup Choosing this option allows you to set up your Wi-Fi network straight on your OpenVoiceOS device. If selected a screen with the available networks will be shown on your OpenVoiceOS device. Select your network from the list and tap / click on it to allow you to insert your password. If you have a touch screen an on-screen keyboard will appear when you tap the password field. If not use a keyboard. When you have inserted your password, click / tap the connect button and after a short connecting animation, if all went fine you will see the green \"connected\" screen on your OpenVoiceOS device. (Re)configure your network from the drop-down menu If you have skipped the Wi-Fi network setup before, or you just want to reconfigure your network. On the homescreen of your OpenVoiceOS device swipe down the top menu and click the \"Wi-Fi\" icon. This brings you to the same on-device configuration screen. From here you can select another network or click the configuration icon on the right of connected network for details or to remove it from the configured networks.","title":"Setting up your Wi-Fi network connection"},{"location":"unused/setup_network/#setting-up-your-wi-fi-network-connection","text":"Depending on which image you downloaded you will be greeted by the network configuration screen with either one or two option. The buildroot image supports setting up the network via two options. On a mobile device On the OpenVoiceOS device itself. You can also skip this step to configure it later or never ask it again if you want your device to run fully offline. 
(Bear in mind you will need to configure your device to use local STT and TTS engines, and obviously asking your device things that require internet will not work.)","title":"Setting up your Wi-Fi network connection"},{"location":"unused/setup_network/#on-mobile-setup","text":"Choosing this option will create a temporary open network (hotspot) called \"OVOS\" to which you can connect from your mobile device. On your mobile device go into Settings -> Wi-Fi Settings and the \"OVOS\" open network will appear in its list. Connect your device to the OVOS network and a webpage will open to configure the Wi-Fi network on your OpenVoiceOS device. (NOTE: On newer mobile operating systems the captive portal capture has changed and the website will not automatically be opened. If this is the case you can open a browser manually and go to http://172.16.127.1 ) The following webpage will be shown; Select your Wi-Fi network from the list, insert your password and press the \"Connect\" button. If everything went fine, you will soon see the green \"connected\" screen on your OpenVoiceOS device.","title":"On Mobile Setup"},{"location":"unused/setup_network/#on-device-setup","text":"Choosing this option allows you to set up your Wi-Fi network straight on your OpenVoiceOS device. If selected, a screen with the available networks will be shown on your OpenVoiceOS device. Select your network from the list and tap / click on it to allow you to insert your password. If you have a touch screen, an on-screen keyboard will appear when you tap the password field. If not, use a keyboard. When you have inserted your password, click / tap the connect button and after a short connecting animation, if all went fine, you will see the green \"connected\" screen on your OpenVoiceOS device.","title":"On Device Setup"},{"location":"unused/setup_network/#reconfigure-your-network-from-the-drop-down-menu","text":"If you have skipped the Wi-Fi network setup before, or you just want to reconfigure your network, swipe down the top menu on the homescreen of your OpenVoiceOS device and click the \"Wi-Fi\" icon. This brings you to the same on-device configuration screen. From here you can select another network, or click the configuration icon on the right of the connected network for details or to remove it from the configured networks.","title":"(Re)configure your network from the drop-down menu"},{"location":"unused/setup_ovos/","text":"Setting up your device at first run. At the first run of your OpenVoiceOS device, a first-run setup wizard is started that guides you through the process of setting up your device. Selecting Your Backend What is a backend ?
A backend is a service that provides your device with additional tools to function; these could range from managing your skill settings to configuring certain aspects of your device. OpenVoiceOS is all about choice. We currently support 3 backend types: Selene Backend The Mycroft backend connects your device to Mycroft's servers and allows you to use their web interface to manage your device. This requires pairing, and all your speech-to-text queries are processed via this backend. Personal Backend The personal backend is a choice for users who would like to self-host their own backend on the device or in their personal home network. This backend requires additional setup but also provides a cool web interface to configure your device and manage your settings. No Backend OpenVoiceOS by default comes with no backend; we do not really believe you need a backend to do anything. This is the best choice for your device if you wish to run completely locally. We provide you with a whole list of speech-to-text and text-to-speech online and offline options to choose from. All communication with the outside world happens from your own device, without data sharing. Mycroft A.I. - Selene Backend The Pairing Process The GUI will now show you a Pairing Code. This pairing code needs to be entered on the Mycroft backend, which you can find online at https://account.mycroft.ai Create an account using your email ID on https://account.mycroft.ai Head over to https://account.mycroft.ai/devices/add Enter the pairing code, a unique device name, and location settings Click next on the web interface, and your device should now be paired No backend - No calling home Select A Text To Speech (TTS) Engine: A text-to-speech (TTS) system converts normal language text into speech; select an engine from the list Select A Speech To Text (STT) Engine: A speech-to-text (STT) system converts human speech to text; select an engine from the list Personal backend - Host your own The personal backend is a reverse-engineered alternative to Selene and requires the backend to be hosted locally.
Install and configure your personal backend; information is available at: https://github.com/OpenVoiceOS/ovos-personal-backend The GUI on the device will display a setup page to enter the host address of your hosted backend. Pairing with your personal backend happens automatically once you hit the confirm button with the correct host address.","title":"Setting up your device at first run."},{"location":"unused/setup_ovos/#setting-up-your-device-at-first-run","text":"At the first run of your OpenVoiceOS device, a first-run setup wizard is started that guides you through the process of setting up your device.","title":"Setting up your device at first run."},{"location":"unused/setup_ovos/#selecting-your-backend","text":"","title":"Selecting Your Backend"},{"location":"unused/setup_ovos/#what-is-a-backend","text":"A backend is a service that provides your device with additional tools to function; these could range from managing your skill settings to configuring certain aspects of your device. OpenVoiceOS is all about choice. We currently support 3 backend types:","title":"What is a backend ?"},{"location":"unused/setup_ovos/#selene-backend","text":"The Mycroft backend connects your device to Mycroft's servers and allows you to use their web interface to manage your device. This requires pairing, and all your speech-to-text queries are processed via this backend.","title":"Selene Backend"},{"location":"unused/setup_ovos/#personal-backend","text":"The personal backend is a choice for users who would like to self-host their own backend on the device or in their personal home network. This backend requires additional setup but also provides a cool web interface to configure your device and manage your settings.","title":"Personal Backend"},{"location":"unused/setup_ovos/#no-backend","text":"OpenVoiceOS by default comes with no backend; we do not really believe you need a backend to do anything. This is the best choice for your device if you wish to run completely locally. We provide you with a whole list of speech-to-text and text-to-speech online and offline options to choose from. All communication with the outside world happens from your own device, without data sharing.","title":"No Backend"},{"location":"unused/setup_ovos/#mycroft-ai-selene-backend","text":"The Pairing Process The GUI will now show you a Pairing Code. This pairing code needs to be entered on the Mycroft backend, which you can find online at https://account.mycroft.ai Create an account using your email ID on https://account.mycroft.ai Head over to https://account.mycroft.ai/devices/add Enter the pairing code, a unique device name, and location settings Click next on the web interface, and your device should now be paired","title":"Mycroft A.I. - Selene Backend"},{"location":"unused/setup_ovos/#no-backend-no-calling-home","text":"Select A Text To Speech (TTS) Engine: A text-to-speech (TTS) system converts normal language text into speech; select an engine from the list Select A Speech To Text (STT) Engine: A speech-to-text (STT) system converts human speech to text; select an engine from the list","title":"No backend - No calling home"},{"location":"unused/setup_ovos/#personal-backend-host-your-own","text":"The personal backend is a reverse-engineered alternative to Selene and requires the backend to be hosted locally.
Install and configure your personal backend; information is available at: https://github.com/OpenVoiceOS/ovos-personal-backend The GUI on the device will display a setup page to enter the host address of your hosted backend. Pairing with your personal backend happens automatically once you hit the confirm button with the correct host address.","title":"Personal backend - Host your own"},{"location":"unused/shell/","text":"OVOS Shell OVOS-shell is the OpenVoiceOS client implementation of the mycroft-gui library used in our embedded device images Plugins OVOS-shell is tightly coupled to PHAL ; the following companion plugins should be installed if you are using ovos-shell: ovos-PHAL-plugin-notification-widgets ovos-PHAL-plugin-network-manager ovos-PHAL-plugin-gui-network-client ovos-PHAL-plugin-wifi-setup ovos-PHAL-plugin-alsa ovos-PHAL-plugin-system ovos-PHAL-plugin-dashboard ovos-PHAL-plugin-brightness-control-rpi ovos-PHAL-plugin-color-scheme-manager ovos-PHAL-plugin-configuration-provider Alternative Clients Other distributions may offer alternative implementations, such as: mycroft-gui also hosts a client for developers on the desktop. plasma-bigscreen mycroft mark2 Configuration The Shell can be configured in a few ways. GUI Display settings Color Theme editor Shell Options ~/.config/OpenvoiceOS/OvosShell.conf can be edited to change shell options that may also be changed via the UI. An example config would look like: [General] fakeBrightness=1 menuLabels=true Themes Shell themes can be included in /usr/share/OVOS/ColorSchemes/ or ~/.local/share/OVOS/ColorSchemes/ in JSON format. Note that colors should include an alpha value (usually FF ). { \"name\": \"Neon Green\", \"primaryColor\": \"#FF072103\", \"secondaryColor\": \"#FF2C7909\", \"textColor\": \"#FFF1F1F1\" }","title":"OVOS Shell"},{"location":"unused/shell/#ovos-shell","text":"OVOS-shell is the OpenVoiceOS client implementation of the mycroft-gui library used in our embedded device images","title":"OVOS Shell"},{"location":"unused/shell/#plugins","text":"OVOS-shell is tightly coupled to PHAL ; the following companion plugins should be installed if you are using ovos-shell: ovos-PHAL-plugin-notification-widgets ovos-PHAL-plugin-network-manager ovos-PHAL-plugin-gui-network-client ovos-PHAL-plugin-wifi-setup ovos-PHAL-plugin-alsa ovos-PHAL-plugin-system ovos-PHAL-plugin-dashboard ovos-PHAL-plugin-brightness-control-rpi ovos-PHAL-plugin-color-scheme-manager ovos-PHAL-plugin-configuration-provider","title":"Plugins"},{"location":"unused/shell/#alternative-clients","text":"Other distributions may offer alternative implementations, such as: mycroft-gui also hosts a client for developers on the desktop. plasma-bigscreen mycroft mark2","title":"Alternative Clients"},{"location":"unused/shell/#configuration","text":"The Shell can be configured in a few ways.","title":"Configuration"},{"location":"unused/shell/#gui","text":"Display settings Color Theme editor","title":"GUI"},{"location":"unused/shell/#shell-options","text":"~/.config/OpenvoiceOS/OvosShell.conf can be edited to change shell options that may also be changed via the UI. An example config would look like: [General] fakeBrightness=1 menuLabels=true","title":"Shell Options"},{"location":"unused/shell/#themes","text":"Shell themes can be included in /usr/share/OVOS/ColorSchemes/ or ~/.local/share/OVOS/ColorSchemes/ in JSON format. Note that colors should include an alpha value (usually FF ). { \"name\": \"Neon Green\", \"primaryColor\": \"#FF072103\", \"secondaryColor\": \"#FF2C7909\", \"textColor\": \"#FFF1F1F1\" }","title":"Themes"},
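Since themes are just JSON files in a known directory, they can also be generated programmatically. A small sketch that writes a hypothetical color scheme into the user theme directory named above (the "Ocean Blue" name and color values are made up; keep the leading FF alpha):

```python
# Write a custom OVOS shell color scheme into the user theme directory.
# The theme name and colors here are illustrative placeholders.
import json
from pathlib import Path

theme = {
    "name": "Ocean Blue",
    "primaryColor": "#FF0A1B2A",    # background palette
    "secondaryColor": "#FF1E88E5",  # accent/highlight palette
    "textColor": "#FFF1F1F1",       # contrasting text palette
}

theme_dir = Path.home() / ".local" / "share" / "OVOS" / "ColorSchemes"
theme_dir.mkdir(parents=True, exist_ok=True)
(theme_dir / "ocean_blue.json").write_text(json.dumps(theme, indent=2))
```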
{ \"name\": \"Neon Green\", \"primaryColor\": \"#FF072103\", \"secondaryColor\": \"#FF2C7909\", \"textColor\": \"#FFF1F1F1\" }","title":"Themes"},{"location":"unused/speech_service/","text":"Speech Client The speech client is responsible for loading STT, VAD and Wake Word plugins Speech is transcribed into text and forwarded to the skills service Hotwords OVOS allows you to load any number of hot words in parallel and trigger different actions when they are detected each hotword can do one or more of the following: trigger listening, also called a wake_word play a sound emit a bus event take ovos-core out of sleep mode, also called a wakeup_word or standup_word take ovos-core out of recording mode, also called a stop_word To add a new hotword add its configuration under \"hotwords\" section. By default, all hotwords are disabled unless you set \"active\": true . Under the \"listener\" setting a main wake word and stand up word are defined, those will be automatically enabled unless you set \"active\": false . This is usually not desired unless you are looking to completely disabled wake word usage \"listener\": { // Default wake_word and stand_up_word will be automatically set to active // unless explicitly disabled under \"hotwords\" section \"wake_word\": \"hey mycroft\", \"stand_up_word\": \"wake up\" }, \"hotwords\": { \"hey mycroft\": { \"module\": \"ovos-ww-plugin-precise\", \"version\": \"0.3\", \"model\": \"https://github.com/MycroftAI/precise-data/raw/models-dev/hey-mycroft.tar.gz\", \"phonemes\": \"HH EY . M AY K R AO F T\", \"threshold\": 1e-90, \"lang\": \"en-us\", \"listen\": true, \"sound\": \"snd/start_listening.wav\" }, \"wake up\": { \"module\": \"ovos-ww-plugin-pocketsphinx\", \"phonemes\": \"W EY K . AH P\", \"threshold\": 1e-20, \"lang\": \"en-us\", \"wakeup\": true } }, STT Two STT plugins may be loaded at once, if the primary plugin fails for some reason the second plugin will be used. This allows you to have a lower accuracy offline model as fallback to account for internet outages, this ensures your device never becomes fully unusable \"stt\": { \"module\": \"ovos-stt-plugin-server\", \"fallback_module\": \"ovos-stt-plugin-vosk\", \"ovos-stt-plugin-server\": {\"url\": \"https://stt.openvoiceos.com/stt\"} }, Listener You can modify microphone settings and enable additional features under the listener section such as wake word / utterance recording / uploading \"listener\": { \"sample_rate\": 16000, // if enabled the noise level is saved to a ipc file, useful for // debuging if microphone is working but writes a lot to disk, // recommended that you set \"ipc_path\" to a tmpfs \"mic_meter_ipc\": true, // Set 'save_path' to configure the location of files stored if // 'record_wake_words' and/or 'save_utterances' are set to 'true'. // WARNING: Make sure that user 'mycroft' has write-access on the // directory! // \"save_path\": \"/tmp\", // Set 'record_wake_words' to save a copy of wake word triggers // as .wav files under: /'save_path'/mycroft_wake_words \"record_wake_words\": false, // Set 'save_utterances' to save each sentence sent to STT -- by default // they are only kept briefly in-memory. This can be useful for for // debugging or other custom purposes. Recordings are saved // under: /'save_path'/mycroft_utterances/.wav \"save_utterances\": false, \"wake_word_upload\": { \"disable\": false, \"url\": \"https://training.mycroft.ai/precise/upload\" }, // Override as SYSTEM or USER to select a specific microphone input instead of // the PortAudio default input. 
// \"device_name\": \"somename\", // can be regex pattern or substring // or // \"device_index\": 12, // Stop listing to the microphone during playback to prevent accidental triggering // This is enabled by default, but instances with good microphone noise cancellation // can disable this to listen all the time, allowing 'barge in' functionality. \"mute_during_output\" : true, // How much (if at all) to 'duck' the speaker output during listening. A // setting of 0.0 will not duck at all. A 1.0 will completely mute output // while in a listening state. Values in between will lower the volume // partially (this is optional behavior, depending on the enclosure). \"duck_while_listening\" : 0.3, // In milliseconds \"phoneme_duration\": 120, \"multiplier\": 1.0, \"energy_ratio\": 1.5, // Settings used by microphone to set recording timeout \"recording_timeout\": 10.0, \"recording_timeout_with_silence\": 3.0, // instant listen is an experimental setting, it removes the need for // the pause between \"hey mycroft\" and starting to speak the utterance, //however it might slightly downgrade STT accuracy depending on engine used \"instant_listen\": false }, VAD Voice Activity Detection is used by the speech client to determine when a user stopped speaking, this indicates the voice command is ready to be executed. Several VAD strategies are supported \"listener\": { // Voice Activity Detection is used to determine when speech ended \"VAD\": { // silence method defined the main vad strategy // valid values: // VAD_ONLY - Only use vad // RATIO_ONLY - Only use max/current energy ratio threshold // CURRENT_ONLY - Only use current energy threshold // VAD_AND_RATIO - Use vad and max/current energy ratio threshold // VAD_AND_CURRENT - Use vad and current energy threshold // ALL - Use vad, max/current energy ratio, and current energy threshold // NOTE: if a vad plugin is not available method will fallback to RATIO_ONLY \"silence_method\": \"vad_and_ratio\", // Seconds of speech before voice command has begun \"speech_seconds\": 0.1, // Seconds of silence before a voice command has finished \"silence_seconds\": 0.5, // Seconds of audio to keep before voice command has begun \"before_seconds\": 0.5, // Minimum length of voice command (seconds) // NOTE: max_seconds uses recording_timeout listener setting \"min_seconds\": 1, // Ratio of max/current energy below which audio is considered speech \"max_current_ratio_threshold\": 2, // Energy threshold above which audio is considered speech // NOTE: this is dynamic, only defining start value \"initial_energy_threshold\": 1000.0, // vad module can be any plugin, by default it is not used // recommended plugin: \"ovos-vad-plugin-silero\" \"module\": \"\", \"ovos-vad-plugin-silero\": {\"threshold\": 0.2}, \"ovos-vad-plugin-webrtcvad\": {\"vad_mode\": 3} } },","title":"Speech Client"},{"location":"unused/speech_service/#speech-client","text":"The speech client is responsible for loading STT, VAD and Wake Word plugins Speech is transcribed into text and forwarded to the skills service","title":"Speech Client"},{"location":"unused/speech_service/#hotwords","text":"OVOS allows you to load any number of hot words in parallel and trigger different actions when they are detected each hotword can do one or more of the following: trigger listening, also called a wake_word play a sound emit a bus event take ovos-core out of sleep mode, also called a wakeup_word or standup_word take ovos-core out of recording mode, also called a stop_word To add a new hotword add its configuration 
under \"hotwords\" section. By default, all hotwords are disabled unless you set \"active\": true . Under the \"listener\" setting a main wake word and stand up word are defined, those will be automatically enabled unless you set \"active\": false . This is usually not desired unless you are looking to completely disabled wake word usage \"listener\": { // Default wake_word and stand_up_word will be automatically set to active // unless explicitly disabled under \"hotwords\" section \"wake_word\": \"hey mycroft\", \"stand_up_word\": \"wake up\" }, \"hotwords\": { \"hey mycroft\": { \"module\": \"ovos-ww-plugin-precise\", \"version\": \"0.3\", \"model\": \"https://github.com/MycroftAI/precise-data/raw/models-dev/hey-mycroft.tar.gz\", \"phonemes\": \"HH EY . M AY K R AO F T\", \"threshold\": 1e-90, \"lang\": \"en-us\", \"listen\": true, \"sound\": \"snd/start_listening.wav\" }, \"wake up\": { \"module\": \"ovos-ww-plugin-pocketsphinx\", \"phonemes\": \"W EY K . AH P\", \"threshold\": 1e-20, \"lang\": \"en-us\", \"wakeup\": true } },","title":"Hotwords"},{"location":"unused/speech_service/#stt","text":"Two STT plugins may be loaded at once, if the primary plugin fails for some reason the second plugin will be used. This allows you to have a lower accuracy offline model as fallback to account for internet outages, this ensures your device never becomes fully unusable \"stt\": { \"module\": \"ovos-stt-plugin-server\", \"fallback_module\": \"ovos-stt-plugin-vosk\", \"ovos-stt-plugin-server\": {\"url\": \"https://stt.openvoiceos.com/stt\"} },","title":"STT"},{"location":"unused/speech_service/#listener","text":"You can modify microphone settings and enable additional features under the listener section such as wake word / utterance recording / uploading \"listener\": { \"sample_rate\": 16000, // if enabled the noise level is saved to a ipc file, useful for // debuging if microphone is working but writes a lot to disk, // recommended that you set \"ipc_path\" to a tmpfs \"mic_meter_ipc\": true, // Set 'save_path' to configure the location of files stored if // 'record_wake_words' and/or 'save_utterances' are set to 'true'. // WARNING: Make sure that user 'mycroft' has write-access on the // directory! // \"save_path\": \"/tmp\", // Set 'record_wake_words' to save a copy of wake word triggers // as .wav files under: /'save_path'/mycroft_wake_words \"record_wake_words\": false, // Set 'save_utterances' to save each sentence sent to STT -- by default // they are only kept briefly in-memory. This can be useful for for // debugging or other custom purposes. Recordings are saved // under: /'save_path'/mycroft_utterances/.wav \"save_utterances\": false, \"wake_word_upload\": { \"disable\": false, \"url\": \"https://training.mycroft.ai/precise/upload\" }, // Override as SYSTEM or USER to select a specific microphone input instead of // the PortAudio default input. // \"device_name\": \"somename\", // can be regex pattern or substring // or // \"device_index\": 12, // Stop listing to the microphone during playback to prevent accidental triggering // This is enabled by default, but instances with good microphone noise cancellation // can disable this to listen all the time, allowing 'barge in' functionality. \"mute_during_output\" : true, // How much (if at all) to 'duck' the speaker output during listening. A // setting of 0.0 will not duck at all. A 1.0 will completely mute output // while in a listening state. 
Values in between will lower the volume // partially (this is optional behavior, depending on the enclosure). \"duck_while_listening\" : 0.3, // In milliseconds \"phoneme_duration\": 120, \"multiplier\": 1.0, \"energy_ratio\": 1.5, // Settings used by microphone to set recording timeout \"recording_timeout\": 10.0, \"recording_timeout_with_silence\": 3.0, // instant listen is an experimental setting; it removes the need for // the pause between \"hey mycroft\" and starting to speak the utterance, // however it might slightly degrade STT accuracy depending on the engine used \"instant_listen\": false },","title":"Listener"},{"location":"unused/speech_service/#vad","text":"Voice Activity Detection is used by the speech client to determine when a user has stopped speaking; this indicates the voice command is ready to be executed. Several VAD strategies are supported. \"listener\": { // Voice Activity Detection is used to determine when speech ended \"VAD\": { // silence_method defines the main VAD strategy // valid values: // VAD_ONLY - Only use vad // RATIO_ONLY - Only use max/current energy ratio threshold // CURRENT_ONLY - Only use current energy threshold // VAD_AND_RATIO - Use vad and max/current energy ratio threshold // VAD_AND_CURRENT - Use vad and current energy threshold // ALL - Use vad, max/current energy ratio, and current energy threshold // NOTE: if a VAD plugin is not available the method will fall back to RATIO_ONLY \"silence_method\": \"vad_and_ratio\", // Seconds of speech before voice command has begun \"speech_seconds\": 0.1, // Seconds of silence before a voice command has finished \"silence_seconds\": 0.5, // Seconds of audio to keep before voice command has begun \"before_seconds\": 0.5, // Minimum length of voice command (seconds) // NOTE: max_seconds uses recording_timeout listener setting \"min_seconds\": 1, // Ratio of max/current energy below which audio is considered speech \"max_current_ratio_threshold\": 2, // Energy threshold above which audio is considered speech // NOTE: this is dynamic, only defining start value \"initial_energy_threshold\": 1000.0, // vad module can be any plugin, by default it is not used // recommended plugin: \"ovos-vad-plugin-silero\" \"module\": \"\", \"ovos-vad-plugin-silero\": {\"threshold\": 0.2}, \"ovos-vad-plugin-webrtcvad\": {\"vad_mode\": 3} } },","title":"VAD"},{"location":"unused/spotifyd/","text":"Spotifyd Spotifyd is able to advertise itself on the network without credentials, using zeroconf authentication from Spotify Connect on your mobile device. This is the default configuration shipped with the buildroot image. If for whatever reason zeroconf is not working properly on your network, or you want spotifyd to log in by itself, you can configure your username and password combination within its configuration file by uncommenting and configuring the username and password variables within ~/.config/spotifyd/spotifyd.conf and rebooting the device or running systemctl --user restart spotifyd .
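As a rough sketch of those spotifyd.conf entries (key names assumed from upstream spotifyd documentation; verify against the spotifyd version shipped on your image):

```ini
# ~/.config/spotifyd/spotifyd.conf -- illustrative snippet, not a full config
[global]
# uncomment these to have spotifyd log in by itself instead of using zeroconf
username = "your-spotify-username"
password = "your-spotify-password"
```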
Open Spotify on your mobile device and go to the Devices menu within the Settings, or tap the devices menu icon at the bottom left of the now playing screen. An OpenVoiceOS \"speaker\" device will be present, which you can select as the output device. When you play something on Spotify, the music will come from your OpenVoiceOS device, indicated by the \"OPENVOICEOS\" label on the device menu icon at the bottom of the now playing screen on your mobile device. As Spotifyd has full MPRIS support, including audio player controls, the full OCP now playing screen will be shown on your OpenVoiceOS device as shown below, just like playing something from YouTube as shown above.","title":"Spotifyd"},{"location":"unused/spotifyd/#spotifyd","text":"Spotifyd is able to advertise itself on the network without credentials, using zeroconf authentication from Spotify Connect on your mobile device. This is the default configuration shipped with the buildroot image. If for whatever reason zeroconf is not working properly on your network, or you want spotifyd to log in by itself, you can configure your username and password combination within its configuration file by uncommenting and configuring the username and password variables within ~/.config/spotifyd/spotifyd.conf and rebooting the device or running systemctl --user restart spotifyd . Open Spotify on your mobile device and go to the Devices menu within the Settings, or tap the devices menu icon at the bottom left of the now playing screen. An OpenVoiceOS \"speaker\" device will be present, which you can select as the output device. When you play something on Spotify, the music will come from your OpenVoiceOS device, indicated by the \"OPENVOICEOS\" label on the device menu icon at the bottom of the now playing screen on your mobile device. As Spotifyd has full MPRIS support, including audio player controls, the full OCP now playing screen will be shown on your OpenVoiceOS device as shown below, just like playing something from YouTube as shown above.","title":"Spotifyd"},{"location":"unused/standard_skills/","text":"Standard Skills Usage Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills; more about that at a later stage. Date / Time skill You can ask your device what time or date it is, just in case you lost your watch. Hey Mycroft, what time is it? Hey Mycroft, what is the date? Setting an alarm Having your OpenVoiceOS device know and show the time is great, but it is even better to be woken up in the morning by your device. Hey Mycroft, set an alarm for 8 AM. Setting of timers Sometimes you are just busy but want to be alerted after a certain time. For that, you can use timers. Hey Mycroft, set a timer for 5 minutes. You can always set more timers and even name them, so you know which timer is for what. Hey Mycroft, set another timer called rice cooking for 7 minutes. Asking the weather You can ask your device what the weather is or would be at any given time or place. Hey Mycroft, what is the weather like today? The weather skill actually uses multiple pages, indicated by the small dots at the bottom of the screen. File Browser The file browser allows you to browse the filesystem on your device and any connected media; you can view images and play music and videos. KDEConnect integration allows you to share files with your mobile devices.","title":"Standard Skills Usage"},{"location":"unused/standard_skills/#standard-skills-usage","text":"Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills; more about that at a later stage.","title":"Standard Skills Usage"},{"location":"unused/standard_skills/#date-time-skill","text":"You can ask your device what time or date it is, just in case you lost your watch. Hey Mycroft, what time is it?
Hey Mycroft, what is the date?","title":"Date / Time skill"},{"location":"unused/standard_skills/#setting-an-alarm","text":"Having your OpenVoiceOS device know and show the time is great, but it is even better to be woken up in the morning by your device. Hey Mycroft, set an alarm for 8 AM.","title":"Setting an alarm"},{"location":"unused/standard_skills/#setting-of-timers","text":"Sometimes you are just busy but want to be alerted after a certain time. For that, you can use timers. Hey Mycroft, set a timer for 5 minutes. You can always set more timers and even name them, so you know which timer is for what. Hey Mycroft, set another timer called rice cooking for 7 minutes.","title":"Setting of timers"},{"location":"unused/standard_skills/#asking-the-weather","text":"You can ask your device what the weather is or would be at any given time or place. Hey Mycroft, what is the weather like today? The weather skill actually uses multiple pages, indicated by the small dots at the bottom of the screen.","title":"Asking the weather"},{"location":"unused/standard_skills/#file-browser","text":"The file browser allows you to browse the filesystem on your device and any connected media; you can view images and play music and videos. KDEConnect integration allows you to share files with your mobile devices.","title":"File Browser"},{"location":"unused/statements/","text":"Statements Editor's Note This will probably move Speaking a statement One of OVOS's most important core capabilities is to convert text to speech, that is, to speak a statement. Within a Skill's Intent handler, you may pass a string of text to OVOS and OVOS will speak it. For example: self.speak('this is my statement') That's cool and fun to experiment with, but passing strings of text to Mycroft doesn't help to make Mycroft a multilingual product. Rather than hard-coded strings of text, OVOS has a design pattern for multilingualism. Multilingualism To support multilingualism, the text that OVOS speaks must come from a file. That file is called a dialog file. The dialog file contains statements (lines of text) that a listener in a particular language would consider to be equivalent. For instance, in USA English, the statements \"I am okay\" and \"I am fine\" are equivalent, and both of these statements might appear in a dialog file used for responding to the USA English question: \"How are you?\". By convention, the dialog filename is formed by dot-connected words and must end with \".dialog\". The dialog filename should be descriptive of the contents as a whole. Sometimes, the filename describes the question being answered, and other times, the filename describes the answer itself. For the example above, the dialog filename might be: how.are.you.dialog or i.am.fine.dialog . Multilingualism is accomplished by translating the dialog files into other languages, and storing them in their own directory named for the country and language. The filenames remain the same. Using the same filenames in separate language-dependent directories allows the Skills to be language agnostic; no hard-coded text strings. Adjust the language setting for your Device and OVOS uses the corresponding set of dialog files. If the desired file does not exist in the directory for that language, Mycroft will use the file from the USA English directory. As an example of the concept, the contents of how.are.you.dialog in the directory for the French language in France (fr-fr) might include the statement: \"Je vais bien\".
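As a sketch of this directory layout for the example above (folder naming follows the lang codes mentioned in this section; see also the File locations choices described below):

```
locale/
├── en-us/
│   └── how.are.you.dialog   # "I am okay" / "I am fine"
└── fr-fr/
    └── how.are.you.dialog   # "Je vais bien"
```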
The Tomato Skill Revisited To demonstrate the multilingualism design pattern, we examine the usage of the speak_dialog() method in the Tomato Skill . The Tomato Skill has two Intents: one demonstrates simple, straightforward statements, and the other demonstrates the use of variables within a statement. Simple statement The first Intent within the Tomato Skill, what.is.a.tomato.intent , handles inquiries about tomatoes, and the dialog file, tomato.description.dialog , provides the statements for OVOS to speak in reply to that inquiry. Sample contents of the Intent and dialog files: what.is.a.tomato.intent what is a tomato what would you say a tomato is describe a tomato what defines a tomato tomato.description.dialog The tomato is a fruit of the nightshade family A tomato is an edible berry of the plant Solanum lycopersicum A tomato is a fruit but nutritionists consider it a vegetable Observe the statements in the tomato.description.dialog file. They are all acceptable answers to the question: \"What is a tomato?\" Providing more than one statement in a dialog file is one way to make OVOS seem less robotic, more natural. OVOS will randomly select one of the statements. The Tomato Skill code snippet: @intent_handler('what.is.a.tomato.intent') def handle_what_is(self, message): \"\"\"Speaks a statement from the dialog file.\"\"\" self.speak_dialog('tomato.description') With the Tomato Skill installed, if the User utters \"Hey Mycroft, what is a tomato?\", the Intent handler method handle_what_is() will be called. Inside handle_what_is() , we find: self.speak_dialog('tomato.description') As you can probably guess, the parameter 'tomato.description' is the dialog filename without the \".dialog\" extension. Calling this method opens the dialog file, selects one of the statements, and converts that text to speech. OVOS will speak a statement from the dialog file. In this example, OVOS might say \"The tomato is a fruit of the nightshade family\". Remember, OVOS has a language setting that determines the directory from which the dialog file is read. File locations The Skill Structure section describes where to place the Intent file and dialog file. Basically, there are two choices: Put both files in locale/en-us Put the dialog file in dialog/en-us , and put the Intent file in vocab/en-us Statements with variables The second Padatious Intent, do.you.like.intent , demonstrates the use of variables in the Intent file and in one of the dialog files: do.you.like.intent do you like tomatoes do you like {type} tomatoes like.tomato.type.dialog I do like {type} tomatoes {type} tomatoes are my favorite like.tomato.generic.dialog I do like tomatoes tomatoes are my favorite Compare these two dialog files. The like.tomato.generic.dialog file contains only simple statements. The statements in the like.tomato.type.dialog file include a variable named type . The variable is a placeholder in the statement specifying where text may be inserted. The speak_dialog() method accepts a dictionary as an optional parameter. If that dictionary contains an entry for a variable named in the statement, then the value from the dictionary will be inserted at the placeholder's location. Dialog file variables are formed by surrounding the variable's name with curly braces. In OVOS parlance, curly braces are known as a mustache . For multi-line dialog files, be sure to include the same variable on all lines.
The Tomato Skill code snippet: @intent_handler('do.you.like.intent') def handle_do_you_like(self, message): tomato_type = message.data.get('type') if tomato_type is not None: self.speak_dialog('like.tomato.type', {'type': tomato_type}) else: self.speak_dialog('like.tomato.generic') When the User utters \"Hey Mycroft, do you like RED tomatoes?\", the second of the two Intent lines \"do you like {type} tomatoes\" is recognized by Mycroft, and the value 'RED' is returned in the message dictionary assigned to the 'type' entry when handle_do_you_like() is called. The line tomato_type = message.data.get('type') extracts the value from the dictionary for the entry 'type'. In this case, the variable tomato_type will receive the value 'RED', and speak_dialog() will be called with the 'like.tomato.type' dialog file, and a dictionary with 'RED' assigned to 'type'. The statement \"I do like {type} tomatoes\" might be randomly selected, and after insertion of the value 'RED' for the placeholder variable {type}, OVOS would say: \"I do like RED tomatoes\". Should the User utter \"Hey Mycroft, do you like tomatoes?\", the first line in the Intent file \"do you like tomatoes\" is recognized. There is no variable in this line, and when handle_do_you_like() is called, the dictionary in the message is empty. This means tomato_type is None , speak_dialog('like.tomato.generic') would be called, and Mycroft might reply with \"Yes, I do like tomatoes\". Waiting for speech By default, the speak_dialog() method is non-blocking. That is, any code following the call to speak_dialog() will execute whilst OVOS is talking. This is useful to allow your Skill to perform actions while it is speaking. Rather than telling the User that we are fetching some data, then going out to fetch it, we can do the two things simultaneously, providing a better experience. However, there are times when we need to wait until the statement has been spoken before doing something else. We have two options for this. Wait Parameter We can pass a wait=True parameter to our speak_dialog() method. This makes the method blocking and no other code will execute until the statement has been spoken. @intent_handler('what.is.a.tomato.intent') def handle_what_is(self, message): \"\"\"Speaks a statement from the dialog file. Waits (i.e. blocks) within speak_dialog() until the speaking has completed. \"\"\" self.speak_dialog('tomato.description', wait=True) self.log.info(\"I waited for you\") wait_while_speaking The mycroft.audio.wait_while_speaking() method allows us to execute some code, then wait for OVOS to finish speaking. @intent_handler('what.is.a.tomato.intent') def handle_what_is(self, message): \"\"\"Speaks a statement from the dialog file. Returns from speak_dialog() before the speaking has completed, and logs some info. Then it waits for the speech to complete. \"\"\" self.speak_dialog('tomato.description') self.log.info(\"I am executed immediately\") wait_while_speaking() self.log.info(\"But I waited for you\") Here we have executed one line of code immediately. Our Skill will then wait for the statement from tomato.description.dialog to be spoken before executing the final line of code. Using translatable resources There may be a situation where the dialog file and the speak_dialog() method do not give the Skill enough flexibility. For instance, there may be a need to manipulate the statement from the dialog file before having it spoken by OVOS. The MycroftSkill class provides four multilingual methods to address these needs. Each method uses a file, and multilingualism is accomplished using the country/language directory system. The translate() method returns a random string from a \".dialog\" file (modified by a data dictionary). The translate_list() method returns a list of strings from a \".list\" file (each modified by the data dictionary). Same as translate_template() , just with a different file extension. The translate_namedvalue() method returns a dictionary formed from CSV entries in a \".value\" file. The translate_template() method returns a list of strings from a \".template\" file (each modified by the data dictionary). Same as translate_list() , just with a different file extension.
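A minimal sketch of how the first two methods might be used inside an Intent handler; the resource file names and the {color} variable here are hypothetical, invented for illustration:

```python
@intent_handler('tomato.colors.intent')  # hypothetical intent file
def handle_tomato_colors(self, message):
    # translate() picks one random line from tomato.color.dialog (hypothetical)
    # and fills in the {color} mustache, so we can manipulate it before speaking
    statement = self.translate('tomato.color', {'color': 'red'})
    # translate_list() returns every line of tomato.types.list (hypothetical)
    tomato_types = self.translate_list('tomato.types')
    self.speak(statement)
    self.speak(', '.join(tomato_types))
```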
","title":"Statements"},{"location":"unused/statements/#statements","text":"Editor's Note This will probably move","title":"Statements"},{"location":"unused/statements/#speaking-a-statement","text":"One of OVOS's most important core capabilities is to convert text to speech, that is, to speak a statement. Within a Skill's Intent handler, you may pass a string of text to OVOS and OVOS will speak it. For example: self.speak('this is my statement') That's cool and fun to experiment with, but passing strings of text to Mycroft doesn't help to make Mycroft a multilingual product. Rather than hard-coded strings of text, OVOS has a design pattern for multilingualism.","title":"Speaking a statement"},{"location":"unused/statements/#multilingualism","text":"To support multilingualism, the text that OVOS speaks must come from a file. That file is called a dialog file. The dialog file contains statements (lines of text) that a listener in a particular language would consider to be equivalent. For instance, in USA English, the statements \"I am okay\" and \"I am fine\" are equivalent, and both of these statements might appear in a dialog file used for responding to the USA English question: \"How are you?\". By convention, the dialog filename is formed by dot-connected words and must end with \".dialog\". The dialog filename should be descriptive of the contents as a whole. Sometimes, the filename describes the question being answered, and other times, the filename describes the answer itself. For the example above, the dialog filename might be: how.are.you.dialog or i.am.fine.dialog . Multilingualism is accomplished by translating the dialog files into other languages, and storing them in their own directory named for the country and language. The filenames remain the same. Using the same filenames in separate language-dependent directories allows the Skills to be language agnostic; no hard-coded text strings. Adjust the language setting for your Device and OVOS uses the corresponding set of dialog files. If the desired file does not exist in the directory for that language, Mycroft will use the file from the USA English directory. As an example of the concept, the contents of how.are.you.dialog in the directory for the French language in France (fr-fr) might include the statement: \"Je vais bien\".","title":"Multilingualism"},{"location":"unused/statements/#the-tomato-skill-revisited","text":"To demonstrate the multilingualism design pattern, we examine the usage of the speak_dialog() method in the Tomato Skill .
The Tomato Skill has two Intents: one demonstrates simple, straightforward statements, and the other demonstrates the use of variables within a statement.","title":"The Tomato Skill Revisited"},{"location":"unused/statements/#simple-statement","text":"The first Intent within the Tomato Skill, what.is.a.tomato.intent , handles inquiries about tomatoes, and the dialog file, tomato.description.dialog , provides the statements for OVOS to speak in reply to that inquiry. Sample contents of the Intent and dialog files: what.is.a.tomato.intent what is a tomato what would you say a tomato is describe a tomato what defines a tomato tomato.description.dialog The tomato is a fruit of the nightshade family A tomato is an edible berry of the plant Solanum lycopersicum A tomato is a fruit but nutritionists consider it a vegetable Observe the statements in the tomato.description.dialog file. They are all acceptable answers to the question: \"What is a tomato?\" Providing more than one statement in a dialog file is one way to make OVOS seem less robotic, more natural. OVOS will randomly select one of the statements. The Tomato Skill code snippet: @intent_handler('what.is.a.tomato.intent') def handle_what_is(self, message): \"\"\"Speaks a statement from the dialog file.\"\"\" self.speak_dialog('tomato.description') With the Tomato Skill installed, if the User utters \"Hey Mycroft, what is a tomato?\", the Intent handler method handle_what_is() will be called. Inside handle_what_is() , we find: self.speak_dialog('tomato.description') As you can probably guess, the parameter 'tomato.description' is the dialog filename without the \".dialog\" extension. Calling this method opens the dialog file, selects one of the statements, and converts that text to speech. OVOS will speak a statement from the dialog file. In this example, OVOS might say \"The tomato is a fruit of the nightshade family\". Remember, OVOS has a language setting that determines the directory from which the dialog file is read.","title":"Simple statement"},{"location":"unused/statements/#file-locations","text":"The Skill Structure section describes where to place the Intent file and dialog file. Basically, there are two choices: Put both files in locale/en-us Put the dialog file in dialog/en-us , and put the Intent file in vocab/en-us","title":"File locations"},{"location":"unused/statements/#statements-with-variables","text":"The second Padatious Intent, do.you.like.intent , demonstrates the use of variables in the Intent file and in one of the dialog files: do.you.like.intent do you like tomatoes do you like {type} tomatoes like.tomato.type.dialog I do like {type} tomatoes {type} tomatoes are my favorite like.tomato.generic.dialog I do like tomatoes tomatoes are my favorite Compare these two dialog files. The like.tomato.generic.dialog file contains only simple statements. The statements in the like.tomato.type.dialog file include a variable named type . The variable is a placeholder in the statement specifying where text may be inserted. The speak_dialog() method accepts a dictionary as an optional parameter. If that dictionary contains an entry for a variable named in the statement, then the value from the dictionary will be inserted at the placeholder's location. Dialog file variables are formed by surrounding the variable's name with curly braces. In OVOS parlance, curly braces are known as a mustache . For multi-line dialog files, be sure to include the same variable on all lines.
The Tomato Skill code snippet: @intent_handler('do.you.like.intent') def handle_do_you_like(self, message): tomato_type = message.data.get('type') if tomato_type is not None: self.speak_dialog('like.tomato.type', {'type': tomato_type}) else: self.speak_dialog('like.tomato.generic') When the User utters \"Hey Mycroft, do you like RED tomatoes?\", the second of the two Intent lines \"do you like {type} tomatoes\" is recognized by Mycroft, and the value 'RED' is returned in the message dictionary assigned to the 'type' entry when handle_do_you_like() is called. The line tomato_type = message.data.get('type') extracts the value from the dictionary for the entry 'type'. In this case, the variable tomato_type will receive the value 'RED', and speak_dialog() will be called with the 'like.tomato.type' dialog file, and a dictionary with 'RED' assigned to 'type'. The statement \"I do like {type} tomatoes\" might be randomly selected, and after insertion of the value 'RED' for the placeholder variable {type}, OVOS would say: \"I do like RED tomatoes\". Should the User utter \"Hey Mycroft, do you like tomatoes?\", the first line in the Intent file \"do you like tomatoes\" is recognized. There is no variable in this line, and when handle_do_you_like() is called, the dictionary in the message is empty. This means tomato_type is None , speak_dialog('like.tomato.generic') would be called, and Mycroft might reply with \"Yes, I do like tomatoes\".","title":"Statements with variables"},{"location":"unused/statements/#waiting-for-speech","text":"By default, the speak_dialog() method is non-blocking. That is, any code following the call to speak_dialog() will execute whilst OVOS is talking. This is useful to allow your Skill to perform actions while it is speaking. Rather than telling the User that we are fetching some data, then going out to fetch it, we can do the two things simultaneously, providing a better experience. However, there are times when we need to wait until the statement has been spoken before doing something else. We have two options for this.","title":"Waiting for speech"},{"location":"unused/statements/#wait-parameter","text":"We can pass a wait=True parameter to our speak_dialog() method. This makes the method blocking and no other code will execute until the statement has been spoken. @intent_handler('what.is.a.tomato.intent') def handle_what_is(self, message): \"\"\"Speaks a statement from the dialog file. Waits (i.e. blocks) within speak_dialog() until the speaking has completed. \"\"\" self.speak_dialog('tomato.description', wait=True) self.log.info(\"I waited for you\")","title":"Wait Parameter"},{"location":"unused/statements/#wait_while_speaking","text":"The mycroft.audio.wait_while_speaking() method allows us to execute some code, then wait for OVOS to finish speaking. @intent_handler('what.is.a.tomato.intent') def handle_what_is(self, message): \"\"\"Speaks a statement from the dialog file. Returns from speak_dialog() before the speaking has completed, and logs some info. Then it waits for the speech to complete. \"\"\" self.speak_dialog('tomato.description') self.log.info(\"I am executed immediately\") wait_while_speaking() self.log.info(\"But I waited for you\") Here we have executed one line of code immediately.
Our Skill will then wait for the statement from tomato.description.dialog to be spoken before executing the final line of code.","title":"wait_while_speaking"},{"location":"unused/statements/#using-translatable-resources","text":"There may be a situation where the dialog file and the speak_dialog() method do not give the Skill enough flexibility. For instance, there may be a need to manipulate the statement from the dialog file before having it spoken by OVOS. The MycroftSkill class provides four multilingual methods to address these needs. Each method uses a file, and multilingualism is accomplished using the country/language directory system. The translate() method returns a random string from a \".dialog\" file (modified by a data dictionary). The translate_list() method returns a list of strings from a \".list\" file (each modified by the data dictionary). Same as translate_template() , just with a different file extension. The translate_namedvalue() method returns a dictionary formed from CSV entries in a \".value\" file. The translate_template() method returns a list of strings from a \".template\" file (each modified by the data dictionary). Same as translate_list() , just with a different file extension.","title":"Using translatable resources"},{"location":"unused/stt_backends/","text":"Speech to Text backends You can run a local NeMo instance using ovos-stt-server","title":"Speech to Text backends"},{"location":"unused/stt_backends/#speech-to-text-backends","text":"You can run a local NeMo instance using ovos-stt-server","title":"Speech to Text backends"},{"location":"unused/stt_plugins/","text":"STT Plugins STT plugins are responsible for converting spoken audio into text. List of STT plugins Plugin Offline Type ovos-stt-plugin-vosk yes FOSS ovos-stt-plugin-chromium no API (free) neon-stt-plugin-google_cloud_streaming no API (key) neon-stt-plugin-scribosermo yes FOSS neon-stt-plugin-silero yes FOSS neon-stt-plugin-polyglot yes FOSS neon-stt-plugin-deepspeech_stream_local yes FOSS ovos-stt-plugin-selene no API (free) ovos-stt-plugin-http-server no API (self hosted) ovos-stt-plugin-pocketsphinx yes FOSS Standalone Usage STT plugins can be used in your own projects as follows from speech_recognition import Recognizer, AudioFile plug = STTPlug() # verify lang is supported lang = \"en-us\" assert lang in plug.available_languages # read file with AudioFile(\"test.wav\") as source: audio = Recognizer().record(source) # transcribe AudioData object transcript = plug.execute(audio, lang) Plugin Template from ovos_plugin_manager.templates.stt import STT # base plugin class class MySTTPlugin(STT): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # read config settings for your plugin lm = self.config.get(\"language-model\") hmm = self.config.get(\"acoustic-model\") def execute(self, audio, language=None): # TODO - convert audio into text and return string transcript = \"You said this\" return transcript @property def available_languages(self): \"\"\"Return languages supported by this STT implementation in this state This property should be overridden by the derived class to advertise what languages that engine supports. Returns: set: supported languages \"\"\" # TODO - what langs can this STT handle?
return {\"en-us\", \"es-es\"} # sample valid configurations per language # \"display_name\" and \"offline\" provide metadata for UI # \"priority\" is used to calculate position in selection dropdown # 0 - top, 100-bottom # all other keys represent an example valid config for the plugin MySTTConfig = { lang: [{\"lang\": lang, \"display_name\": f\"MySTT ({lang}\", \"priority\": 70, \"offline\": True}] for lang in [\"en-us\", \"es-es\"] }","title":"STT Plugins"},{"location":"unused/stt_plugins/#stt-plugins","text":"STT plugins are responsible for converting spoken audio into text","title":"STT Plugins"},{"location":"unused/stt_plugins/#list-of-stt-plugins","text":"Plugin Offline Type ovos-stt-plugin-vosk yes FOSS ovos-stt-plugin-chromium no API (free) neon-stt-plugin-google_cloud_streaming no API (key) neon-stt-plugin-scribosermo yes FOSS neon-stt-plugin-silero yes FOSS neon-stt-plugin-polyglot yes FOSS neon-stt-plugin-deepspeech_stream_local yes FOSS ovos-stt-plugin-selene no API (free) ovos-stt-plugin-http-server no API (self hosted) ovos-stt-plugin-pocketsphinx yes FOSS","title":"List of STT plugins"},{"location":"unused/stt_plugins/#standalone-usage","text":"STT plugins can be used in your owm projects as follows from speech_recognition import Recognizer, AudioFile plug = STTPlug() # verify lang is supported lang = \"en-us\" assert lang in plug.available_languages # read file with AudioFile(\"test.wav\") as source: audio = Recognizer().record(source) # transcribe AudioData object transcript = plug.execute(audio, lang)","title":"Standalone Usage"},{"location":"unused/stt_plugins/#plugin-template","text":"from ovos_plugin_manager.templates.stt import STT # base plugin class class MySTTPlugin(STT): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # read config settings for your plugin lm = self.config.get(\"language-model\") hmm = self.config.get(\"acoustic-model\") def execute(self, audio, language=None): # TODO - convert audio into text and return string transcript = \"You said this\" return transcript @property def available_languages(self): \"\"\"Return languages supported by this STT implementation in this state This property should be overridden by the derived class to advertise what languages that engine supports. Returns: set: supported languages \"\"\" # TODO - what langs can this STT handle? return {\"en-us\", \"es-es\"} # sample valid configurations per language # \"display_name\" and \"offline\" provide metadata for UI # \"priority\" is used to calculate position in selection dropdown # 0 - top, 100-bottom # all other keys represent an example valid config for the plugin MySTTConfig = { lang: [{\"lang\": lang, \"display_name\": f\"MySTT ({lang}\", \"priority\": 70, \"offline\": True}] for lang in [\"en-us\", \"es-es\"] }","title":"Plugin Template"},{"location":"unused/stt_server/","text":"OpenVoiceOS STT HTTP Server Turn any OVOS STT plugin into a microservice! Install pip install ovos-stt-http-server Usage ovos-stt-server --help usage: ovos-stt-server [-h] [--engine ENGINE] [--port PORT] [--host HOST] options: -h, --help show this help message and exit --engine ENGINE stt plugin to be used --port PORT port number --host HOST host Companion plugin Use with OpenVoiceOS companion plugin Docker Template you can create easily create a docker file to serve any plugin FROM python:3.7 RUN pip3 install ovos-stt-http-server==0.0.1 RUN pip3 install {PLUGIN_HERE} ENTRYPOINT ovos-stt-http-server --engine {PLUGIN_HERE} build it docker build . 
Each plugin can provide its own Dockerfile in its repository using ovos-stt-http-server","title":"Docker Template"},{"location":"unused/translate_server/","text":"OpenVoiceOS Translate Server Turn any OVOS Language plugin into a microservice! Use with OpenVoiceOS companion plugin Install pip install ovos-translate-server Usage ovos-translate-server --help usage: ovos-translate-server [-h] [--tx-engine TX_ENGINE] [--detect-engine DETECT_ENGINE] [--port PORT] [--host HOST] optional arguments: -h, --help show this help message and exit --tx-engine TX_ENGINE translate plugin to be used --detect-engine DETECT_ENGINE lang detection plugin to be used --port PORT port number --host HOST host eg, to use the Google Translate plugin ovos-translate-server --tx-engine googletranslate_plug --detect-engine googletranslate_detection_plug then you can make GET requests http://0.0.0.0:9686/translate/en/o meu nome \u00e9 Casimiro (auto detect source lang) http://0.0.0.0:9686/translate/pt/en/o meu nome \u00e9 Casimiro (specify source lang) http://0.0.0.0:9686/detect/o meu nome \u00e9 Casimiro
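The same GET endpoints can also be scripted; a minimal sketch using Python's requests library (URL-encoding the text, host and port taken from the examples above):

```python
import requests
from urllib.parse import quote

base = "http://0.0.0.0:9686"
text = "o meu nome é Casimiro"

# translate to English, auto-detecting the source language
print(requests.get(f"{base}/translate/en/{quote(text)}").text)

# only detect the language
print(requests.get(f"{base}/detect/{quote(text)}").text)
```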
Docker Template you can easily create a Dockerfile to serve any plugin FROM python:3.7 RUN pip3 install ovos-utils==0.0.15 RUN pip3 install ovos-plugin-manager==0.0.4 RUN pip3 install ovos-translate-server==0.0.1 RUN pip3 install {PLUGIN_HERE} ENTRYPOINT ovos-translate-server --tx-engine {PLUGIN_HERE} --detect-engine {PLUGIN_HERE} build it docker build . -t my_ovos_translate_plugin run it docker run -p 8080:9686 my_ovos_translate_plugin Each plugin can provide its own Dockerfile in its repository using ovos-translate-server","title":"OpenVoiceOS Translate Server"},{"location":"unused/translate_server/#openvoiceos-translate-server","text":"Turn any OVOS Language plugin into a microservice! Use with OpenVoiceOS companion plugin","title":"OpenVoiceOS Translate Server"},{"location":"unused/translate_server/#install","text":"pip install ovos-translate-server","title":"Install"},{"location":"unused/translate_server/#usage","text":"ovos-translate-server --help usage: ovos-translate-server [-h] [--tx-engine TX_ENGINE] [--detect-engine DETECT_ENGINE] [--port PORT] [--host HOST] optional arguments: -h, --help show this help message and exit --tx-engine TX_ENGINE translate plugin to be used --detect-engine DETECT_ENGINE lang detection plugin to be used --port PORT port number --host HOST host eg, to use the Google Translate plugin ovos-translate-server --tx-engine googletranslate_plug --detect-engine googletranslate_detection_plug then you can make GET requests http://0.0.0.0:9686/translate/en/o meu nome \u00e9 Casimiro (auto detect source lang) http://0.0.0.0:9686/translate/pt/en/o meu nome \u00e9 Casimiro (specify source lang) http://0.0.0.0:9686/detect/o meu nome \u00e9 Casimiro","title":"Usage"},{"location":"unused/translate_server/#docker-template","text":"you can easily create a Dockerfile to serve any plugin FROM python:3.7 RUN pip3 install ovos-utils==0.0.15 RUN pip3 install ovos-plugin-manager==0.0.4 RUN pip3 install ovos-translate-server==0.0.1 RUN pip3 install {PLUGIN_HERE} ENTRYPOINT ovos-translate-server --tx-engine {PLUGIN_HERE} --detect-engine {PLUGIN_HERE} build it docker build . -t my_ovos_translate_plugin run it docker run -p 8080:9686 my_ovos_translate_plugin Each plugin can provide its own Dockerfile in its repository using ovos-translate-server","title":"Docker Template"},{"location":"unused/tts_backends/","text":"OpenVoiceOS TTS Server Turn any OVOS TTS plugin into a microservice! Install pip install ovos-tts-server Usage ovos-tts-server --help usage: ovos-tts-server [-h] [--engine ENGINE] [--port PORT] [--host HOST] [--cache] options: -h, --help show this help message and exit --engine ENGINE tts plugin to be used --port PORT port number --host HOST host --cache save every synth to disk eg, to use the GladosTTS plugin ovos-tts-server --engine neon-tts-plugin-glados --cache then make a GET request http://192.168.1.112:9666/synthesize/hello Companion plugin Use with OpenVoiceOS companion plugin Docker Template you can easily create a Dockerfile to serve any plugin FROM python:3.7 RUN pip3 install ovos-utils==0.0.15 RUN pip3 install ovos-plugin-manager==0.0.4 RUN pip3 install ovos-tts-server==0.0.1 RUN pip3 install {PLUGIN_HERE} ENTRYPOINT ovos-tts-server --engine {PLUGIN_HERE} --cache build it docker build . -t my_ovos_tts_plugin run it docker run -p 8080:9666 my_ovos_tts_plugin use it http://localhost:8080/synthesize/hello
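A minimal sketch of fetching the synthesized audio from the container above in Python rather than a browser (the /synthesize route and the port mapping come from the examples on this page):

```python
import requests

# host port 8080 is mapped to the server's 9666 by the docker run above
resp = requests.get("http://localhost:8080/synthesize/hello")

# save the returned audio for playback; assuming the wrapped plugin returns wav audio
with open("hello.wav", "wb") as f:
    f.write(resp.content)
```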
Each plugin can provide its own Dockerfile in its repository using ovos-tts-server","title":"OpenVoiceOS TTS Server"},{"location":"unused/tts_backends/#openvoiceos-tts-server","text":"Turn any OVOS TTS plugin into a microservice!","title":"OpenVoiceOS TTS Server"},{"location":"unused/tts_backends/#install","text":"pip install ovos-tts-server","title":"Install"},{"location":"unused/tts_backends/#usage","text":"ovos-tts-server --help usage: ovos-tts-server [-h] [--engine ENGINE] [--port PORT] [--host HOST] [--cache] options: -h, --help show this help message and exit --engine ENGINE tts plugin to be used --port PORT port number --host HOST host --cache save every synth to disk eg, to use the GladosTTS plugin ovos-tts-server --engine neon-tts-plugin-glados --cache then make a GET request http://192.168.1.112:9666/synthesize/hello","title":"Usage"},{"location":"unused/tts_backends/#companion-plugin","text":"Use with OpenVoiceOS companion plugin","title":"Companion plugin"},{"location":"unused/tts_backends/#docker-template","text":"you can easily create a Dockerfile to serve any plugin FROM python:3.7 RUN pip3 install ovos-utils==0.0.15 RUN pip3 install ovos-plugin-manager==0.0.4 RUN pip3 install ovos-tts-server==0.0.1 RUN pip3 install {PLUGIN_HERE} ENTRYPOINT ovos-tts-server --engine {PLUGIN_HERE} --cache build it docker build . -t my_ovos_tts_plugin run it docker run -p 8080:9666 my_ovos_tts_plugin use it http://localhost:8080/synthesize/hello Each plugin can provide its own Dockerfile in its repository using ovos-tts-server","title":"Docker Template"},{"location":"unused/tts_plugins/","text":"TTS Plugins TTS plugins are responsible for converting text into audio for playback. List of TTS plugins Plugin Offline Type ovos-tts-plugin-mimic yes FOSS ovos-tts-plugin-mimic2 no API (free) ovos-tts-plugin-mimic3 yes FOSS ovos-tts-plugin-marytts no API (self hosted) neon-tts-plugin-larynx_server no API (self hosted) ovos-tts-server-plugin no API (self hosted) ovos-tts-plugin-pico yes FOSS neon-tts-plugin-glados yes FOSS neon-tts-plugin-mozilla_local yes FOSS neon-tts-plugin-polly no API (key) ovos-tts-plugin-voicerss no API (key) ovos-tts-plugin-google-TX no API (free) ovos-tts-plugin-responsivevoice no API (free) neon-tts-plugin-mozilla_remote no API (self hosted) neon-tts-plugin-tacotron2 yes FOSS ovos-tts-plugin-espeakNG yes FOSS ovos-tts-plugin-cotovia yes FOSS ovos-tts-plugin-catotron no API (self hosted) ovos-tts-plugin-softcatala no API (self hosted) ovos-tts-plugin-SAM yes Abandonware ovos-tts-plugin-beepspeak yes Fun Plugin Template from ovos_plugin_manager.templates.tts import TTS # base plugin class class MyTTSPlugin(TTS): def __init__(self, *args, **kwargs): # in here you should specify if your plugin returns wav or mp3 files # you should also specify any valid ssml tags ssml_tags = [\"speak\", \"s\", \"w\", \"voice\", \"prosody\", \"say-as\", \"break\", \"sub\", \"phoneme\"] super().__init__(*args, **kwargs, audio_ext=\"wav\", ssml_tags=ssml_tags) # read config settings for your plugin if any self.pitch = self.config.get(\"pitch\", 0.5) def get_tts(self, sentence, wav_file): # TODO - create TTS audio @ wav_file (path) return wav_file, None @property def available_languages(self): \"\"\"Return languages supported by this TTS implementation in this state This property should be overridden by the derived class to advertise what languages that engine supports.
Returns: set: supported languages \"\"\" # TODO - what langs can this TTS handle? return {\"en-us\", \"es-es\"} # sample valid configurations per language # \"display_name\" and \"offline\" provide metadata for UI # \"priority\" is used to calculate position in selection dropdown # 0 - top, 100-bottom # all other keys represent an example valid config for the plugin MyTTSConfig = { lang: [{\"lang\": lang, \"display_name\": f\"MyTTS ({lang})\", \"priority\": 70, \"offline\": True}] for lang in [\"en-us\", \"es-es\"] }","title":"TTS Plugins"},{"location":"unused/tts_plugins/#tts-plugins","text":"TTS plugins are responsible for converting text into audio for playback.","title":"TTS Plugins"},{"location":"unused/tts_plugins/#list-of-tts-plugins","text":"Plugin Offline Type ovos-tts-plugin-mimic yes FOSS ovos-tts-plugin-mimic2 no API (free) ovos-tts-plugin-mimic3 yes FOSS ovos-tts-plugin-marytts no API (self hosted) neon-tts-plugin-larynx_server no API (self hosted) ovos-tts-server-plugin no API (self hosted) ovos-tts-plugin-pico yes FOSS neon-tts-plugin-glados yes FOSS neon-tts-plugin-mozilla_local yes FOSS neon-tts-plugin-polly no API (key) ovos-tts-plugin-voicerss no API (key) ovos-tts-plugin-google-TX no API (free) ovos-tts-plugin-responsivevoice no API (free) neon-tts-plugin-mozilla_remote no API (self hosted) neon-tts-plugin-tacotron2 yes FOSS ovos-tts-plugin-espeakNG yes FOSS ovos-tts-plugin-cotovia yes FOSS ovos-tts-plugin-catotron no API (self hosted) ovos-tts-plugin-softcatala no API (self hosted) ovos-tts-plugin-SAM yes Abandonware ovos-tts-plugin-beepspeak yes Fun","title":"List of TTS plugins"},{"location":"unused/tts_plugins/#plugin-template","text":"from ovos_plugin_manager.templates.tts import TTS # base plugin class class MyTTSPlugin(TTS): def __init__(self, *args, **kwargs): # in here you should specify if your plugin returns wav or mp3 files # you should also specify any valid ssml tags ssml_tags = [\"speak\", \"s\", \"w\", \"voice\", \"prosody\", \"say-as\", \"break\", \"sub\", \"phoneme\"] super().__init__(*args, **kwargs, audio_ext=\"wav\", ssml_tags=ssml_tags) # read config settings for your plugin if any self.pitch = self.config.get(\"pitch\", 0.5) def get_tts(self, sentence, wav_file): # TODO - create TTS audio @ wav_file (path) return wav_file, None @property def available_languages(self): \"\"\"Return languages supported by this TTS implementation in this state This property should be overridden by the derived class to advertise what languages that engine supports. Returns: set: supported languages \"\"\" # TODO - what langs can this TTS handle?
return {\"en-us\", \"es-es\"} # sample valid configurations per language # \"display_name\" and \"offline\" provide metadata for UI # \"priority\" is used to calculate position in selection dropdown # 0 - top, 100-bottom # all other keys represent an example valid config for the plugin MyTTSConfig = { lang: [{\"lang\": lang, \"display_name\": f\"MyTTS ({lang}\", \"priority\": 70, \"offline\": True}] for lang in [\"en-us\", \"es-es\"] }","title":"Plugin Template"},{"location":"unused/vad_plugins/","text":"VAD Plugins Voice Activity Detection is the process of determining when speech starts and ends in a piece of audio VAD plugins classify audio and report if it contains speech or not List of VAD plugins Plugin Type ovos-vad-plugin-silero model ovos-vad-plugin-webrtcvad model","title":"VAD Plugins"},{"location":"unused/vad_plugins/#vad-plugins","text":"Voice Activity Detection is the process of determining when speech starts and ends in a piece of audio VAD plugins classify audio and report if it contains speech or not","title":"VAD Plugins"},{"location":"unused/vad_plugins/#list-of-vad-plugins","text":"Plugin Type ovos-vad-plugin-silero model ovos-vad-plugin-webrtcvad model","title":"List of VAD plugins"},{"location":"unused/why/","text":"About OpenVoiceOS Introducing OpenVoiceOS - The Free and Open-Source Personal Assistant and Smart Speaker. OpenVoiceOS is a new player in the smart speaker market, offering a powerful and flexible alternative to proprietary solutions like Amazon Echo and Google Home. With OpenVoiceOS, you have complete control over your personal data and the ability to customize and extend the functionality of your smart speaker. Built on open-source software, OpenVoiceOS is designed to provide users with a seamless and intuitive voice interface for controlling their smart home devices, playing music, setting reminders, and much more. The platform leverages cutting-edge technology, including machine learning and natural language processing, to deliver a highly responsive and accurate experience. In addition to its voice capabilities, OpenVoiceOS features a touch-screen GUI made using QT5 and the KF5 framework. The GUI provides an intuitive, user-friendly interface that allows you to access the full range of OpenVoiceOS features and functionality. Whether you prefer voice commands or a more traditional touch interface, OpenVoiceOS has you covered. One of the key advantages of OpenVoiceOS is its open-source nature, which means that anyone with the technical skills can contribute to the platform and help shape its future. Whether you're a software developer, data scientist, or just someone with a passion for technology, you can get involved and help build the next generation of personal assistants and smart speakers. With OpenVoiceOS, you have the option to run the platform fully offline, giving you complete control over your data and ensuring that your information is never shared with third parties. This makes OpenVoiceOS the perfect choice for anyone who values privacy and security. So if you're looking for a personal assistant and smart speaker that gives you the freedom and control you deserve, be sure to check out OpenVoiceOS today! Disclaimer : This post was written in collaboration with ChatGPT","title":"About OpenVoiceOS"},{"location":"unused/why/#about-openvoiceos","text":"Introducing OpenVoiceOS - The Free and Open-Source Personal Assistant and Smart Speaker. 
OpenVoiceOS is a new player in the smart speaker market, offering a powerful and flexible alternative to proprietary solutions like Amazon Echo and Google Home. With OpenVoiceOS, you have complete control over your personal data and the ability to customize and extend the functionality of your smart speaker. Built on open-source software, OpenVoiceOS is designed to provide users with a seamless and intuitive voice interface for controlling their smart home devices, playing music, setting reminders, and much more. The platform leverages cutting-edge technology, including machine learning and natural language processing, to deliver a highly responsive and accurate experience. In addition to its voice capabilities, OpenVoiceOS features a touch-screen GUI made using QT5 and the KF5 framework. The GUI provides an intuitive, user-friendly interface that allows you to access the full range of OpenVoiceOS features and functionality. Whether you prefer voice commands or a more traditional touch interface, OpenVoiceOS has you covered. One of the key advantages of OpenVoiceOS is its open-source nature, which means that anyone with the technical skills can contribute to the platform and help shape its future. Whether you're a software developer, data scientist, or just someone with a passion for technology, you can get involved and help build the next generation of personal assistants and smart speakers. With OpenVoiceOS, you have the option to run the platform fully offline, giving you complete control over your data and ensuring that your information is never shared with third parties. This makes OpenVoiceOS the perfect choice for anyone who values privacy and security. So if you're looking for a personal assistant and smart speaker that gives you the freedom and control you deserve, be sure to check out OpenVoiceOS today! Disclaimer: This post was written in collaboration with ChatGPT","title":"About OpenVoiceOS"},{"location":"unused/ww_plugins/","text":"Wake Word Plugins WakeWord plugins classify audio and report whether a certain word or sound is present or not. These plugins usually correspond to the name of the voice assistant, \"hey mycroft\", but can also be used for other purposes. List of Wake Word plugins Plugin Type ovos-ww-plugin-pocketsphinx phonemes ovos-ww-plugin-vosk text samples ovos-ww-plugin-snowboy model ovos-ww-plugin-precise model ovos-ww-plugin-precise-lite model ovos-ww-plugin-nyumaya model ovos-ww-plugin-nyumaya-legacy model neon_ww_plugin_efficientwordnet model mycroft-porcupine-plugin model ovos-ww-plugin-hotkeys keyboard Standalone Usage First, let's get some boilerplate out of the way for the microphone-handling logic import pyaudio # helper class class CyclicAudioBuffer: def __init__(self, duration=0.98, initial_data=None, sample_rate=16000, sample_width=2): self.size = self.duration_to_bytes(duration, sample_rate, sample_width) initial_data = initial_data or self.get_silence(self.size) # Get at most size bytes from the end of the initial data self._buffer = initial_data[-self.size:] @staticmethod def duration_to_bytes(duration, sample_rate=16000, sample_width=2): return int(duration * sample_rate) * sample_width @staticmethod def get_silence(num_bytes): return b'\0' * num_bytes def append(self, data): \"\"\"Add new data to the buffer, and slide out data if the buffer is full Arguments: data (bytes): binary data to append to the buffer. If buffer size is exceeded the oldest data will be dropped.
\"\"\" buff = self._buffer + data if len(buff) > self.size: buff = buff[-self.size:] self._buffer = buff def get(self): \"\"\"Get the binary data.\"\"\" return self._buffer # pyaudio params FORMAT = pyaudio.paInt16 CHANNELS = 1 RATE = 16000 CHUNK = 1024 MAX_RECORD_SECONDS = 20 SAMPLE_WIDTH = pyaudio.get_sample_size(FORMAT) audio = pyaudio.PyAudio() # start Recording stream = audio.open(channels=CHANNELS, format=FORMAT, rate=RATE, frames_per_buffer=CHUNK, input=True) def load_plugin(): # Wake word initialization config = {\"model\": \"path/to/hey_computer.model\"} return MyHotWord(\"hey computer\", config=config) def listen_for_ww(plug): # TODO - see examples below return False plug = load_plugin() print(f\"Waiting for wake word {MAX_RECORD_SECONDS} seconds\") found = listen_for_ww(plug) if found: print(\"Found wake word!\") else: print(\"No wake word found\") # stop everything plug.stop() stream.stop_stream() stream.close() audio.terminate() new style plugins New style plugins expect to receive live audio, they may keep their own cyclic buffers internally def listen_for_ww(plug): for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)): data = stream.read(CHUNK) # feed data directly to streaming prediction engines plug.update(data) # streaming engines return result here found = plug.found_wake_word(data) if found: return True old style plugins (DEPRECATED) Old style plugins expect to receive ~3 seconds of audio data at once def listen_for_ww(plug): # used for old style non-streaming wakeword (deprecated) audio_buffer = CyclicAudioBuffer(plug.expected_duration, sample_rate=RATE, sample_width=SAMPLE_WIDTH) for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)): data = stream.read(CHUNK) # add data to rolling buffer, used by non-streaming engines audio_buffer.append(data) # non-streaming engines check the byte_data in audio_buffer audio_data = audio_buffer.get() found = plug.found_wake_word(audio_data) if found: return True new + old style plugins (backwards compatibility) if you are unsure what kind of plugin you will be using you can be compatible with both approaches like ovos-core def listen_for_ww(plug): # used for old style non-streaming wakeword (deprecated) audio_buffer = CyclicAudioBuffer(plug.expected_duration, sample_rate=RATE, sample_width=SAMPLE_WIDTH) for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)): data = stream.read(CHUNK) # old style engines will ignore the update plug.update(data) # streaming engines will ignore the byte_data audio_buffer.append(data) audio_data = audio_buffer.get() found = plug.found_wake_word(audio_data) if found: return True Plugin Template from ovos_plugin_manager.templates.hotwords import HotWordEngine from threading import Event class MyWWPlugin(HotWordEngine): def __init__(self, key_phrase=\"hey mycroft\", config=None, lang=\"en-us\"): super().__init__(key_phrase, config, lang) self.detection = Event() # read config settings for your plugin self.sensitivity = self.config.get(\"sensitivity\", 0.5) # TODO - plugin stuff # how does your plugin work? phonemes? text? models? self.engine = MyWW(key_phrase) def found_wake_word(self, frame_data): \"\"\"Check if wake word has been found. Checks if the wake word has been found. Should reset any internal tracking of the wake word state. Arguments: frame_data (binary data): Deprecated. Audio data for large chunk of audio to be processed. 
This should not be used to detect audio data instead use update() to incrementally update audio Returns: bool: True if a wake word was detected, else False \"\"\" detected = self.detection.is_set() if detected: self.detection.clear() return detected def update(self, chunk): \"\"\"Updates the hotword engine with new audio data. The engine should process the data and update internal trigger state. Arguments: chunk (bytes): Chunk of audio data to process \"\"\" if self.engine.found_it(chunk): # TODO - check for wake word self.detection.set() def stop(self): \"\"\"Perform any actions needed to shut down the wake word engine. This may include things such as unloading data or shutdown external processess. \"\"\" self.engine.bye() # TODO - plugin specific shutdown","title":"Wake Word Plugins"},{"location":"unused/ww_plugins/#wake-word-plugins","text":"WakeWord plugins classify audio and report if a certain word or sound is present or not These plugins usually correspond to the name of the voice assistant, \"hey mycroft\", but can also be used for other purposes","title":"Wake Word Plugins"},{"location":"unused/ww_plugins/#list-of-wake-word-plugins","text":"Plugin Type ovos-ww-plugin-pocketsphinx phonemes ovos-ww-plugin-vosk text samples ovos-ww-plugin-snowboy model ovos-ww-plugin-precise model ovos-ww-plugin-precise-lite model ovos-ww-plugin-nyumaya model ovos-ww-plugin-nyumaya-legacy model neon_ww_plugin_efficientwordnet model mycroft-porcupine-plugin model ovos-ww-plugin-hotkeys keyboard","title":"List of Wake Word plugins"},{"location":"unused/ww_plugins/#standalone-usage","text":"first lets get some boilerplate ouf of the way for the microphone handling logic import pyaudio # helper class class CyclicAudioBuffer: def __init__(self, duration=0.98, initial_data=None, sample_rate=16000, sample_width=2): self.size = self.duration_to_bytes(duration, sample_rate, sample_width) initial_data = initial_data or self.get_silence(self.size) # Get at most size bytes from the end of the initial data self._buffer = initial_data[-self.size:] @staticmethod def duration_to_bytes(duration, sample_rate=16000, sample_width=2): return int(duration * sample_rate) * sample_width @staticmethod def get_silence(num_bytes): return b'\\0' * num_bytes def append(self, data): \"\"\"Add new data to the buffer, and slide out data if the buffer is full Arguments: data (bytes): binary data to append to the buffer. If buffer size is exceeded the oldest data will be dropped. 
\"\"\" buff = self._buffer + data if len(buff) > self.size: buff = buff[-self.size:] self._buffer = buff def get(self): \"\"\"Get the binary data.\"\"\" return self._buffer # pyaudio params FORMAT = pyaudio.paInt16 CHANNELS = 1 RATE = 16000 CHUNK = 1024 MAX_RECORD_SECONDS = 20 SAMPLE_WIDTH = pyaudio.get_sample_size(FORMAT) audio = pyaudio.PyAudio() # start Recording stream = audio.open(channels=CHANNELS, format=FORMAT, rate=RATE, frames_per_buffer=CHUNK, input=True) def load_plugin(): # Wake word initialization config = {\"model\": \"path/to/hey_computer.model\"} return MyHotWord(\"hey computer\", config=config) def listen_for_ww(plug): # TODO - see examples below return False plug = load_plugin() print(f\"Waiting for wake word {MAX_RECORD_SECONDS} seconds\") found = listen_for_ww(plug) if found: print(\"Found wake word!\") else: print(\"No wake word found\") # stop everything plug.stop() stream.stop_stream() stream.close() audio.terminate() new style plugins New style plugins expect to receive live audio, they may keep their own cyclic buffers internally def listen_for_ww(plug): for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)): data = stream.read(CHUNK) # feed data directly to streaming prediction engines plug.update(data) # streaming engines return result here found = plug.found_wake_word(data) if found: return True old style plugins (DEPRECATED) Old style plugins expect to receive ~3 seconds of audio data at once def listen_for_ww(plug): # used for old style non-streaming wakeword (deprecated) audio_buffer = CyclicAudioBuffer(plug.expected_duration, sample_rate=RATE, sample_width=SAMPLE_WIDTH) for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)): data = stream.read(CHUNK) # add data to rolling buffer, used by non-streaming engines audio_buffer.append(data) # non-streaming engines check the byte_data in audio_buffer audio_data = audio_buffer.get() found = plug.found_wake_word(audio_data) if found: return True new + old style plugins (backwards compatibility) if you are unsure what kind of plugin you will be using you can be compatible with both approaches like ovos-core def listen_for_ww(plug): # used for old style non-streaming wakeword (deprecated) audio_buffer = CyclicAudioBuffer(plug.expected_duration, sample_rate=RATE, sample_width=SAMPLE_WIDTH) for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)): data = stream.read(CHUNK) # old style engines will ignore the update plug.update(data) # streaming engines will ignore the byte_data audio_buffer.append(data) audio_data = audio_buffer.get() found = plug.found_wake_word(audio_data) if found: return True","title":"Standalone Usage"},{"location":"unused/ww_plugins/#plugin-template","text":"from ovos_plugin_manager.templates.hotwords import HotWordEngine from threading import Event class MyWWPlugin(HotWordEngine): def __init__(self, key_phrase=\"hey mycroft\", config=None, lang=\"en-us\"): super().__init__(key_phrase, config, lang) self.detection = Event() # read config settings for your plugin self.sensitivity = self.config.get(\"sensitivity\", 0.5) # TODO - plugin stuff # how does your plugin work? phonemes? text? models? self.engine = MyWW(key_phrase) def found_wake_word(self, frame_data): \"\"\"Check if wake word has been found. Checks if the wake word has been found. Should reset any internal tracking of the wake word state. Arguments: frame_data (binary data): Deprecated. Audio data for large chunk of audio to be processed. 
This should not be used to detect audio data instead use update() to incrementally update audio Returns: bool: True if a wake word was detected, else False \"\"\" detected = self.detection.is_set() if detected: self.detection.clear() return detected def update(self, chunk): \"\"\"Updates the hotword engine with new audio data. The engine should process the data and update internal trigger state. Arguments: chunk (bytes): Chunk of audio data to process \"\"\" if self.engine.found_it(chunk): # TODO - check for wake word self.detection.set() def stop(self): \"\"\"Perform any actions needed to shut down the wake word engine. This may include things such as unloading data or shutdown external processess. \"\"\" self.engine.bye() # TODO - plugin specific shutdown","title":"Plugin Template"}]} \ No newline at end of file diff --git a/search/worker.js b/search/worker.js new file mode 100644 index 00000000..8628dbce --- /dev/null +++ b/search/worker.js @@ -0,0 +1,133 @@ +var base_path = 'function' === typeof importScripts ? '.' : '/search/'; +var allowSearch = false; +var index; +var documents = {}; +var lang = ['en']; +var data; + +function getScript(script, callback) { + console.log('Loading script: ' + script); + $.getScript(base_path + script).done(function () { + callback(); + }).fail(function (jqxhr, settings, exception) { + console.log('Error: ' + exception); + }); +} + +function getScriptsInOrder(scripts, callback) { + if (scripts.length === 0) { + callback(); + return; + } + getScript(scripts[0], function() { + getScriptsInOrder(scripts.slice(1), callback); + }); +} + +function loadScripts(urls, callback) { + if( 'function' === typeof importScripts ) { + importScripts.apply(null, urls); + callback(); + } else { + getScriptsInOrder(urls, callback); + } +} + +function onJSONLoaded () { + data = JSON.parse(this.responseText); + var scriptsToLoad = ['lunr.js']; + if (data.config && data.config.lang && data.config.lang.length) { + lang = data.config.lang; + } + if (lang.length > 1 || lang[0] !== "en") { + scriptsToLoad.push('lunr.stemmer.support.js'); + if (lang.length > 1) { + scriptsToLoad.push('lunr.multi.js'); + } + if (lang.includes("ja") || lang.includes("jp")) { + scriptsToLoad.push('tinyseg.js'); + } + for (var i=0; i < lang.length; i++) { + if (lang[i] != 'en') { + scriptsToLoad.push(['lunr', lang[i], 'js'].join('.')); + } + } + } + loadScripts(scriptsToLoad, onScriptsLoaded); +} + +function onScriptsLoaded () { + console.log('All search scripts loaded, building Lunr index...'); + if (data.config && data.config.separator && data.config.separator.length) { + lunr.tokenizer.separator = new RegExp(data.config.separator); + } + + if (data.index) { + index = lunr.Index.load(data.index); + data.docs.forEach(function (doc) { + documents[doc.location] = doc; + }); + console.log('Lunr pre-built index loaded, search ready'); + } else { + index = lunr(function () { + if (lang.length === 1 && lang[0] !== "en" && lunr[lang[0]]) { + this.use(lunr[lang[0]]); + } else if (lang.length > 1) { + this.use(lunr.multiLanguage.apply(null, lang)); // spread operator not supported in all browsers: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator#Browser_compatibility + } + this.field('title'); + this.field('text'); + this.ref('location'); + + for (var i=0; i < data.docs.length; i++) { + var doc = data.docs[i]; + this.add(doc); + documents[doc.location] = doc; + } + }); + console.log('Lunr index built, search ready'); + } + allowSearch = true; + postMessage({config: 
data.config}); + postMessage({allowSearch: allowSearch}); +} + +function init () { + var oReq = new XMLHttpRequest(); + oReq.addEventListener("load", onJSONLoaded); + var index_path = base_path + '/search_index.json'; + if( 'function' === typeof importScripts ){ + index_path = 'search_index.json'; + } + oReq.open("GET", index_path); + oReq.send(); +} + +function search (query) { + if (!allowSearch) { + console.error('Assets for search still loading'); + return; + } + + var resultDocuments = []; + var results = index.search(query); + for (var i=0; i < results.length; i++){ + var result = results[i]; + doc = documents[result.ref]; + doc.summary = doc.text.substring(0, 200); + resultDocuments.push(doc); + } + return resultDocuments; +} + +if( 'function' === typeof importScripts ) { + onmessage = function (e) { + if (e.data.init) { + init(); + } else if (e.data.query) { + postMessage({ results: search(e.data.query) }); + } else { + console.error("Worker - Unrecognized message: " + e); + } + }; +} diff --git a/selene_backend/index.html b/selene_backend/index.html new file mode 100644 index 00000000..400bd782 --- /dev/null +++ b/selene_backend/index.html @@ -0,0 +1,295 @@ + + + + + + + + Selene Backend - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Additional Services »
  • + +
  • +
  • +
+
+
+
+
+ +

Selene Backend

+

The Mycroft Selene backend is deprecated, but the software has been made available on GitHub.

+

https://github.com/MycroftAI/selene-backend

+

Coming Soon +Installing your own Selene backend

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 00000000..00b54a00 --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,708 @@ + + + + https://openvoiceos.github.io/community-docs/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/about/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/architecture/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/config/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/config_manual_config/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/config_ovos_config/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/faq/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/friends/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/glossary/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/hardware/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/hardware_audio/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/hardware_video/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_change_voice/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_config_homescreen/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_config_security/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_intro/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_listener_intro/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_mic/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_phal/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_skills/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_skills_config/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_skills_install/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_stt/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_stt_config/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_tts/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_tts_config/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_vad/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/ht_ww/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/images/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/images_buildroot/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/images_intro/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/install_gui/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/install_intro/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/install_ovos_core/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/install_ovos_docker/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/kdeconnect/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/license/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/members/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/mycroft/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/neonos/ + 2023-07-29 + daily + + + 
https://openvoiceos.github.io/community-docs/personal_backend/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/qs_first_boot/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/qs_intro/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/qs_next_steps/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/selene_backend/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/starting_intro/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/starting_modules/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/starting_systemd/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/troubleshooting_audio/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/troubleshooting_backend/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/troubleshooting_installation/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/troubleshooting_intro/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/troubleshooting_network/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/troubleshooting_ovos_core/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/troubleshooting_plugins/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/troubleshooting_skills/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/troubleshooting_stt/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/troubleshooting_tts/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/troubleshooting_wakeword/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/OCA/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/PHAL/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/airplay/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/arch_apis/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/arch_backend/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/arch_gui/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/arch_phal/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/arch_plugins/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/arch_services/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/arch_skills/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/audio_plugins/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/audio_service/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/backend/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/backend_client/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/backend_manager/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/bigscreen/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/btspeaker/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/buildroot/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/bus_service/ + 2023-07-29 + daily + + + 
https://openvoiceos.github.io/community-docs/unused/config_backend/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/config_gui/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/config_homescreen/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/config_oca/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/config_ovos_audio/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/config_ovos_core/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/config_security/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/config_skillls/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/config_wake_word/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/context/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/contributing/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/converse/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/core/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/dev/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/dev_ocp_skill/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/dinkum/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/first_boot/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/g2p_plugins/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/gs_first_boot/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/gs_installation_choices/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/gs_installing_image/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/gs_next_steps/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/gui_extensions/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/gui_intro/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/gui_protocol/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/gui_service/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/guidelines/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/howto_secure_ssh/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/images_manjaro/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/install_image/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/install_images/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/install_ovos_images/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/install_phal_plugins/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/install_raspbian/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/install_skills/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/intents/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/lang_plugins/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/linguistika/ + 2023-07-29 + daily 
+ + + https://openvoiceos.github.io/community-docs/unused/manjaro/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/playing_music/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/playing_video/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/plugin_intro/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/prompts/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/qml_guidelines/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/scraps/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/selecting_backend/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/server-stuff/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/setup_network/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/setup_ovos/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/shell/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/speech_service/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/spotifyd/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/standard_skills/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/statements/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/stt_backends/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/stt_plugins/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/stt_server/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/translate_server/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/tts_backends/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/tts_plugins/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/vad_plugins/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/why/ + 2023-07-29 + daily + + + https://openvoiceos.github.io/community-docs/unused/ww_plugins/ + 2023-07-29 + daily + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 00000000..edf24daa Binary files /dev/null and b/sitemap.xml.gz differ diff --git a/starting_intro/index.html b/starting_intro/index.html new file mode 100644 index 00000000..df53a750 --- /dev/null +++ b/starting_intro/index.html @@ -0,0 +1,301 @@ + + + + + + + + Introduction - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Starting OVOS »
  • + +
  • +
  • +
+
+
+
+
+ +

Starting OVOS

+

Being a modular system, OVOS can be started with several different methods.

+

If you installed an image, your device is already pre-configured to start all of the services automatically.

+

As of July 2023, both the Buildroot image and the Raspbian image use systemd service files to start, restart, and stop each OVOS module.

+

The typical command to restart the OVOS stack:

+

systemctl --user restart ovos.service

+

ovos.service is a special systemd service file that the rest of the stack follows. If you stop ovos.service, all of the services will stop; the same applies to start and restart. This makes it handy to restart the complete stack with one command after changes have been made.

+

The rest of this section describes this method, and others, in detail.

+

Starting as stand alone modules

+

Starting with systemd service files

+

Starting Docker

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/starting_modules/index.html b/starting_modules/index.html new file mode 100644 index 00000000..41c420a8 --- /dev/null +++ b/starting_modules/index.html @@ -0,0 +1,319 @@ + + + + + + + + Starting Modules - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Starting OVOS »
  • + +
  • +
  • +
+
+
+
+
+ +

Starting OVOS - modules

+

OVOS in its simplest form is just a Python module, and can be invoked as one. In fact, the systemd service method of starting OVOS uses a glorified version of this.

+

ovos-core

+

ovos-core is the brains of the device. Without it, you would have some cool software that does not work together. It controls the skills service and directs intents to the right skill.

+

Invoking the skills module

+

Open a command shell and type the following

+

ovos-core

+

You will see a bunch of lines from the logs, and at the end it will say WARNING - Message Bus Client will reconnect in 5.0 seconds. This is because we have not started the messagebus service, which is the nervous system of OVOS; the other parts cannot communicate without it.

+

ovos-messagebus

+

ovos-messagebus is the nervous system of OVOS. This is what makes everything work together.

+

NOTE The messagebus is an unsecured bus to your system and should NOT be exposed to the outside world.

+

firewall your messagebus WIP

+

Invoking the messagebus

+

With ovos-core running in one terminal shell, open another and type the command

+

ovos-messagebus

+

Once again, a whole bunch of log lines will scroll by, and at the end, it will say INFO - Message bus service started!

+

If you look back at the terminal running ovos-core, you will notice new log lines showing that ovos-core has connected to the messagebus.
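You can also talk to the running bus directly from Python. This is a minimal sketch, assuming the ovos-bus-client package is installed and the messagebus is listening on its default port (8181):
+
# Minimal sketch: connect to the local OVOS messagebus and ask it to speak.
+# Assumes ovos-bus-client is installed and ovos-messagebus is running.
+from ovos_bus_client import MessageBusClient, Message
+
+client = MessageBusClient()    # defaults to ws://127.0.0.1:8181/core
+client.run_in_thread()         # connect in a background thread
+client.connected_event.wait()  # block until the connection is up
+
+# emit a "speak" message; ovos-audio (if running) will voice it
+client.emit(Message("speak", {"utterance": "Hello from the bus"}))
+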

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/starting_systemd/index.html b/starting_systemd/index.html new file mode 100644 index 00000000..ee7b6f11 --- /dev/null +++ b/starting_systemd/index.html @@ -0,0 +1,405 @@ + + + + + + + + Starting Systemd - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Starting OVOS »
  • + +
  • +
  • +
+
+
+
+
+ +

Starting OVOS - systemd

+

OVOS, being a modular system, has several pieces that can start individually.

+

The OVOS team suggests doing this in a virtual environment. While not strictly necessary, it keeps OVOS dependencies from conflicting with other Python packages installed on the system.

+

Starting a Virtual Environment

+

We will assume that you are starting from your home directory. +Enter the following commands into a shell terminal.

+
python -m venv .venv
+
+. .venv/bin/activate
+
+

After a couple of seconds, your command prompt will change, with (.venv) at the front. +User systemd service files are the recommended way to start each module. Other methods exist, such as using the modules as a Python library, but those are advanced topics and not discussed here.

+

This is the preferred method to start the OVOS modules. If you have not used systemd before, there are many references on the web with more information; a full introduction is out of scope for this document. The following assumes the user ovos is being used.

+

A systemd service file and a systemd hook file are required for this to work. We will create both files for the ovos-messagebus service, because it is used by all other modules. The provided hook files need an additional Python package, sdnotify, to work as written.

+

pip install sdnotify

+

ovos.service

+

This is the main service file that is used to start the stack as a unit. This is not necessary, but helpful if more than one module should start together.

+

Create the service file

+

nano .config/systemd/user/ovos.service

+

This file should contain

+
[Unit]
+Description=OVOS A.I. Software stack.
+
+[Service]
+Type=oneshot
+ExecStart=/bin/true
+RemainAfterExit=yes
+
+[Install]
+WantedBy=default.target
+
+

There is no hook file needed for this service.

+

ovos-messagebus.service

+

The messagebus is the main nervous system for OVOS and is needed by all other modules to enable communication between them.

+

Create the service file

+

nano ~/.config/systemd/user/ovos-messagebus.service

+

And make it contain the following

+
[Unit]
+Description=OVOS Messagebus
+PartOf=ovos.service
+After=ovos.service
+
+[Service]
+Type=notify
+ExecStart=/home/ovos/.local/bin/ovos-systemd-messagebus
+TimeoutStartSec=1m
+TimeoutStopSec=1m
+Restart=on-failure
+StartLimitInterval=5min
+StartLimitBurst=4
+#StartLimitAction=reboot-force
+#WatchdogSec=30s
+
+[Install]
+WantedBy=ovos.service
+
+

Create the hook file

+

nano ~/.local/bin/ovos-systemd-messagebus

+

This file should contain

+
#!/usr/bin/env python
+##########################################################################
+# ovos-systemd_messagebus.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##########################################################################
+import sdnotify
+from mycroft.messagebus.service.__main__ import main
+
+n = sdnotify.SystemdNotifier()
+
+def notify_ready():
+    n.notify('READY=1')
+    print('Startup of Mycroft Messagebus service complete')
+
+def notify_stopping():
+    n.notify('STOPPING=1')
+    print('Stopping the Mycroft Messagebus service')
+
+main(ready_hook=notify_ready, stopping_hook=notify_stopping)
+
+

Reload the systemd daemon

+

systemctl --user daemon-reload

+

The service can now be started

+

systemctl --user start ovos-messagebus.service

+

To start the stack on boot, enable both of the services

+

systemctl --user enable ovos.service

+

systemctl --user enable ovos-messagebus.service

+

Now on every reboot, the OVOS system should start automatically.

+

NOTE: the systemd service and hook files are examples used in the raspbian-ovos repository.

+

Each module that needs to be started should have a similar pair of service and hook files. For example, a hook for ovos-audio could follow the same pattern, as sketched below.
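This is only a sketch; the ovos_audio import path is an assumption based on the ovos-audio package, so check the module's real entry point (the raspbian-ovos repository has the canonical versions):
+
#!/usr/bin/env python
+# Hypothetical hook for ovos-audio, following the same sdnotify pattern.
+# The import path below is an assumption - verify it before using.
+import sdnotify
+from ovos_audio.__main__ import main
+
+n = sdnotify.SystemdNotifier()
+
+def notify_ready():
+    n.notify('READY=1')
+    print('Startup of OVOS Audio service complete')
+
+def notify_stopping():
+    n.notify('STOPPING=1')
+    print('Stopping the OVOS Audio service')
+
+main(ready_hook=notify_ready, stopping_hook=notify_stopping)
+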

+

For a complete system, the following services need to be running: +- ovos-messagebus.service +- ovos-skills.service +- ovos-audio.service +- ovos-dinkum-listener.service +- ovos-phal.service

+

The ovos-admin-phal.service needs to run as a system service or as the root user: +- ovos-admin-phal.service

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/troubleshooting_audio/index.html b/troubleshooting_audio/index.html new file mode 100644 index 00000000..3a12f33f --- /dev/null +++ b/troubleshooting_audio/index.html @@ -0,0 +1,401 @@ + + + + + + + + Troubleshooting Audio - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Troubleshooting »
  • + +
  • +
  • +
+
+
+
+
+ +

Troubleshooting operating system audio

+

If audio isn't working for OpenVoiceOS, it's useful to verify that the operating system audio is working.

+

Architecture

+

ALSA

+

ALSA is the kernel-level sound mixer; it manages your sound card directly. ALSA is crap (seriously): it can only handle a few channels (sometimes just one). We generally don't have to deal with ALSA directly.

+

ALSA can be configured to use PulseAudio as its default device; that way, ALSA applications that are not PulseAudio-aware will still use PulseAudio via an indirection layer.

+

Pulse Audio

+

PulseAudio is a software mixer running in user space. When it runs, it takes over ALSA's channel and manages everything: mixing, devices, network devices, etc.

+

PulseAudio always uses ALSA as its backend, and on startup it opens all ALSA devices. Since most ALSA devices can't be opened multiple times, any ALSA application that tries to use an ALSA device directly while PulseAudio is running will fail. If you have a legacy application that for some reason doesn't work, you can use pasuspender to temporarily suspend PulseAudio while running that particular application.

+

Pulse Audio Modules

+

Troubleshooting Commands

+

List hardware cards: cat /proc/asound/cards +List playback and capture devices visible to ALSA (note the card number):

+
aplay -l
+arecord -l
+
+

This will list the cards, which can then be referenced in arecord using -D hw:X,0, where X is the card number:

+
arecord -f dat -r 16000 -D hw:4,0 -c 4 -d 10 test2.wav
+
+

You can then play the file back to test your speakers

+
aplay -D hw:2,0 test2.wav
+
+

If PulseAudio is installed, ALSA should be configured to use PulseAudio as its default. We don't change anything in ALSA; we configure our default sources and sinks in PulseAudio.

+

Verify that pulseaudio is installed

+
apt list pulseaudio
+
+

Verify that ALSA is using PulseAudio as the default

+
$ aplay -L | head -n9
+null
+    Discard all samples (playback) or generate zero samples (capture)
+default
+    Playback/recording through the PulseAudio sound server
+
+

PulseAudio

+

List sinks (speakers) and sources (microphones) visible to PulseAudio

+
pactl list sinks
+pactl list sources
+
+

This will list the sources that can be used to set the default PulseAudio source below.

+
pacmd set-default-source
+
+

e.g.

+
pacmd set-default-source alsa_input.usb-OmniVision_Technologies__Inc._USB_Camera-B4.09.24.1-01.multichannel-input
+
+

Test if OVOS is receiving mic input

+

You can test whether OVOS is receiving mic input using the ovos-cli-client.

+

Install the ovos-cli-client from github to ensure you have the latest version

+
cd ~
+git clone https://github.com/openvoiceos/ovos-cli-client
+pip install ./ovos-cli-client
+
+

Run the ovos-cli-client

+
ovos-cli-client
+
+

In the lower left you can observe the microphone levels; when you talk, the levels should increase. If they don't, OVOS is probably using the wrong microphone.
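To rule OVOS out entirely, you can also check the raw microphone levels with a short Python script. This is a minimal sketch, assuming PyAudio is installed (pip install pyaudio); note that the standard library audioop module it uses is deprecated in Python 3.11+ and removed in 3.13:
+
# Minimal sketch: list input devices, then print peak mic levels for ~5 seconds.
+import audioop  # deprecated in Python 3.11+, removed in 3.13
+import pyaudio
+
+pa = pyaudio.PyAudio()
+
+# list input-capable devices, to confirm which microphone is visible
+for i in range(pa.get_device_count()):
+    info = pa.get_device_info_by_index(i)
+    if info.get("maxInputChannels", 0) > 0:
+        print(i, info["name"])
+
+stream = pa.open(format=pyaudio.paInt16, channels=1, rate=16000,
+                 frames_per_buffer=1024, input=True)
+print("Speak now - printing peak levels for ~5 seconds")
+for _ in range(int(16000 / 1024 * 5)):
+    chunk = stream.read(1024)
+    print(audioop.max(chunk, 2))  # peak amplitude of this 16-bit chunk
+
+stream.stop_stream()
+stream.close()
+pa.terminate()
+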

+

Gather Data

+

Before submitting an issue or asking a question on Element, please gather the following data.

+

For Microphone issues:

+
arecord -l
+arecord -L | head -n9
+pactl list sources
+pacmd dump
+
+

For Speaker issues:

+
aplay -l
+aplay -L | head -n9
+pactl list sinks
+pacmd dump
+
+

Additional Resources

+ +

Problem/Fix

+
ERROR: pulseaudio sink always suspended
+

Try disabling suspend-on-idle in /etc/pulse/default.pa

+

Change this:

+
### Automatically suspend sinks/sources that become idle for too long
+load-module module-suspend-on-idle
+
+

to this:

+
### Automatically suspend sinks/sources that become idle for too long
+#load-module module-suspend-on-idle
+
+

and then restart PulseAudio. There is quite a lot of variation in how people do this, but killall pulseaudio is one option (it gets started again automatically). If you want to be sure, you can restart the system.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/troubleshooting_backend/index.html b/troubleshooting_backend/index.html new file mode 100644 index 00000000..6d4281e4 --- /dev/null +++ b/troubleshooting_backend/index.html @@ -0,0 +1,307 @@ + + + + + + + + Troubleshooting Backend - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Troubleshooting »
  • + +
  • +
  • +
+
+
+
+
+ +

Troubleshooting Backend Configuration

+

Architecture

+

Troubleshooting Commands

+

Problem/Fix

+
when I try to navigate to the backend-manager I get the error: "This personal backend instance does not have the admin interface exposed"
+

The personal backend has a config file. https://github.com/OpenVoiceOS/ovos-personal-backend#configuration

+

Set a value for "admin_key"

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/troubleshooting_installation/index.html b/troubleshooting_installation/index.html new file mode 100644 index 00000000..8a948bf3 --- /dev/null +++ b/troubleshooting_installation/index.html @@ -0,0 +1,317 @@ + + + + + + + + Troubleshooting Installation - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Troubleshooting »
  • + +
  • +
  • +
+
+
+
+
+ +

Troubleshooting OpenVoiceOS Installation

+

coming soon

+

Architecture

+

Troubleshooting Commands

+

Gather Data

+

Problem/Fix

+

Install fails to create OVOS wifi hotspot

+

There is a known issue with balena (the wifi access point app): when it detects a WPA3 network of any sort, it fails. +More Information

+

Raspbian OVOS stuck on "Generating SSH keys..."

+

It is not clear what causes this, but if you reboot the Pi (ctrl-alt-del) it will come up fine on the second boot.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/troubleshooting_intro/index.html b/troubleshooting_intro/index.html new file mode 100644 index 00000000..190dd244 --- /dev/null +++ b/troubleshooting_intro/index.html @@ -0,0 +1,311 @@ + + + + + + + + Introduction to Troubleshooting - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Troubleshooting »
  • + +
  • +
  • +
+
+
+
+
+ +

Introduction to Troubleshooting OpenVoiceOS

+

** Coming soon **

+

Architecture

+

Troubleshooting Commands

+

Problem/Fix

+

locale.Error: unsupported locale setting

+

This error could come from the ovos-cli-client, or other sources.

+

To resolve it, ensure that your locale is set correctly; try running raspi-config to set it if you're on Raspberry Pi OS (Raspbian).

+

Manually update /etc/default/locale

+

Use locale to verify your current locale, and locale -a to verify that the locale you've set is actually available. +Source
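You can reproduce the same check from Python; this snippet raises the same locale.Error when the configured locale is unsupported:
+
# Quick check: does the configured system locale resolve from Python?
+import locale
+
+print(locale.getlocale())  # the currently configured locale
+locale.setlocale(locale.LC_ALL, "")  # raises locale.Error if unsupported
+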

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/troubleshooting_network/index.html b/troubleshooting_network/index.html new file mode 100644 index 00000000..660317fc --- /dev/null +++ b/troubleshooting_network/index.html @@ -0,0 +1,301 @@ + + + + + + + + Troubleshooting Network - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Troubleshooting »
  • + +
  • +
  • +
+
+
+
+
+ +

Troubleshooting Networking

+

Architecture

+

Troubleshooting Commands

+

Problem/Fix

+

See Also Troubleshooting Installation

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/troubleshooting_ovos_core/index.html b/troubleshooting_ovos_core/index.html new file mode 100644 index 00000000..9a5c4599 --- /dev/null +++ b/troubleshooting_ovos_core/index.html @@ -0,0 +1,301 @@ + + + + + + + + Troubleshooting OVOS Core - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Troubleshooting »
  • + +
  • +
  • +
+
+
+
+
+ +

Troubleshooting OVOS Core

+

** Coming Soon **

+

Architecture

+

Troubleshooting Commands

+

Problem/Fix

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/troubleshooting_plugins/index.html b/troubleshooting_plugins/index.html new file mode 100644 index 00000000..d6d96ff3 --- /dev/null +++ b/troubleshooting_plugins/index.html @@ -0,0 +1,303 @@ + + + + + + + + Troubleshooting Plugins - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Troubleshooting »
  • + +
  • +
  • +
+
+
+
+
+ +

Troubleshooting Plugins

+

Architecture

+

Troubleshooting Commands

+

Gather Data

+

Problem/Fix

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/troubleshooting_skills/index.html b/troubleshooting_skills/index.html new file mode 100644 index 00000000..29b101d3 --- /dev/null +++ b/troubleshooting_skills/index.html @@ -0,0 +1,303 @@ + + + + + + + + Troubleshooting Skills - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Troubleshooting »
  • + +
  • +
  • +
+
+
+
+
+ +

Troubleshooting Skills

+

Architecture

+

Troubleshooting Commands

+

Gather Data

+

Problem/Fix

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/troubleshooting_stt/index.html b/troubleshooting_stt/index.html new file mode 100644 index 00000000..40fb6b7b --- /dev/null +++ b/troubleshooting_stt/index.html @@ -0,0 +1,305 @@ + + + + + + + + Troubleshooting STT - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Troubleshooting »
  • + +
  • +
  • +
+
+
+
+
+ +

Troubleshooting Speech To Text

+

Architecture

+

Troubleshooting Commands

+

Gather Data

+

Problem/Fix

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/troubleshooting_tts/index.html b/troubleshooting_tts/index.html new file mode 100644 index 00000000..3314a526 --- /dev/null +++ b/troubleshooting_tts/index.html @@ -0,0 +1,305 @@ + + + + + + + + Troubleshooting TTS - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Troubleshooting »
  • + +
  • +
  • +
+
+
+
+
+ +

Troubleshooting Text To Speech

+

Architecture

+

Troubleshooting Commands

+

Gather Data

+

Problem/Fix

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/troubleshooting_wakeword/index.html b/troubleshooting_wakeword/index.html new file mode 100644 index 00000000..bc4f86a2 --- /dev/null +++ b/troubleshooting_wakeword/index.html @@ -0,0 +1,350 @@ + + + + + + + + Troubleshooting Wakeword - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • +
  • Troubleshooting »
  • + +
  • +
  • +
+
+
+
+
+ +

Troubleshooting Wake Word

+

Architecture

+

OpenVoiceOS uses a plugin (or plugins) to recognize the wake word. In your mycroft.conf file you'll specify the plugin used and what the wakeword is.

+

Troubleshooting Commands

+

Gather Data

+

Verify it's the Wake Word and not the microphone

+

To verify that it is the Wake Word and not the microphone causing the issue, we will get OVOS to ask us a question that we can respond to. In the OVOS CLI, type the utterance "set timer". OVOS will then ask how long a timer you would like. Speaking now should result in your utterance being transcribed.

+

If your response is successfully transcribed, it is most likely the Wake Word engine causing the problem.

+

Check your mycroft.conf file to see which plugin and wake word are configured.
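You can also print the values OVOS actually loaded. This is a minimal sketch, assuming the ovos-config package is installed; the key paths follow the standard mycroft.conf layout:
+
# Minimal sketch: print the wake word configuration OVOS actually loaded.
+from ovos_config import Configuration
+
+conf = Configuration()  # merged view of all mycroft.conf files
+print("wake_word:", conf.get("listener", {}).get("wake_word"))
+print("hotwords:", conf.get("hotwords"))
+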

+

Determine which configuration files are being loaded

+
grep mycroft.conf /var/log/syslog
+
+

Look at your mycroft.conf file and verify how your wake word is configured. Look for the following lines (or similar):

+
 "wake_word": "hey_mycroft"  ## this will match a hotword listed later in the config
+ ...
+   "hotwords": {
+    "hey_mycroft": {                             ## matches the wake_word
+        "module": "ovos-ww-plugin-precise",      ## what plugin is used
+        "version": "0.3",
+        "model": "https://github.com/MycroftAI/precise-data/raw/models-dev/hey-mycroft.tar.gz",
+        "phonemes": "HH EY . M AY K R AO F T",
+        "threshold": 1e-90,
+        "lang": "en-us",
+        "listen": true,
+        "sound": "snd/start_listening.wav"
+    },
+
+

Verify the Wake Word plugin is loading properly

+

grep wake /var/log/syslog +Look for lines similar to:

+
voice - ovos_plugin_manager.wakewords:load_module:85 - INFO - Loading "wake_up" wake word via ovos-ww-plugin-pocketsphinx
+voice - ovos_plugin_manager.wakewords:load_module:92 - INFO - Loaded the Wake Word plugin ovos-ww-plugin-pocketsphinx
+voice - ovos_plugin_manager.wakewords:load_module:85 - INFO - Loading "hey_mycroft" wake word via ovos-ww-plugin-precise
+voice - ovos_plugin_manager.wakewords:load_module:92 - INFO - Loaded the Wake Word plugin ovos-ww-plugin-precise
+
+

If you see an error about "failed to load plugin", make sure the plugin is installed.

+

Use ovos-cli-client to see if the microphone is working and the wake word is being triggered: +ovos-cli-client +Look for:

+
 05:19:54.975 - voice - mycroft.listener.service:handle_wakeword:97 - INFO - Wakeword Detected: hey_mycroft
+ 05:19:55.555 - voice - mycroft.listener.service:handle_record_begin:71 - INFO - Begin Recording...
+ 05:19:57.052 - voice - mycroft.listener.service:handle_record_end:78 - INFO - End Recording...
+
+

Problem/Fix

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + « Previous + + + Next » + + +
+ + + + + + + + + diff --git a/unused/OCA/index.html b/unused/OCA/index.html new file mode 100644 index 00000000..5ff25053 --- /dev/null +++ b/unused/OCA/index.html @@ -0,0 +1,344 @@ + + + + + + + + OCA - OVOS Config Assistant - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OCA - OVOS Config Assistant

+

OCA is a user-facing interface to configure OVOS devices.

+

Web UI

+

OCA provides a local Web UI similar to ovos-backend-manager; here you can configure your device, view metrics, handle OAuth, and more.

+

CLI

+

A command-line interface providing functionality equivalent to the Web UI is planned, but not yet available.

+

Python utils

+
from ovos_config_assistant.module_helpers import pprint_core_module_info
+pprint_core_module_info()
+"""
+## Mycroft module info
+     can import mycroft     : True
+     is ovos-core           : True
+     mycroft module location: /home/user/ovos-core/mycroft
+
+## Downstream ovos.conf overrides
+Module: neon_core
+     can import neon_core     : False
+     neon_core module location: None
+     xdg compliance            : True
+     base xdg folder           : neon
+     mycroft config filename   : neon.conf
+     default mycroft.conf path :
+          /home/user/NeonCore/neon_core/configuration/neon.conf
+Module: hivemind
+     can import hivemind     : False
+     hivemind module location: None
+     xdg compliance            : True
+     base xdg folder           : hivemind
+     mycroft config filename   : hivemind.conf
+     default mycroft.conf path :
+          /home/user/PycharmProjects/ovos_workspace/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf
+
+## Downstream module overrides:
+Module: neon_speech
+     uses config from   : neon_core
+     can import neon_speech     : False
+     neon_speech module location: None
+Module: neon_audio
+     uses config from   : neon_core
+     can import neon_audio     : False
+     neon_audio module location: None
+Module: neon_enclosure
+     uses config from   : neon_core
+     can import neon_enclosure     : False
+     neon_enclosure module location: None
+Module: hivemind_voice_satellite
+     uses config from   : hivemind
+     can import hivemind_voice_satellite     : True
+     hivemind_voice_satellite module location: /home/user/HiveMind-voice-sat/hivemind_voice_satellite
+"""
+
+from ovos_config_assistant.config_helpers import pprint_ovos_conf
+pprint_ovos_conf()
+"""
+## OVOS Configuration
+ ovos.conf exists          : True
+      /home/user/.config/OpenVoiceOS/ovos.conf
+ xdg compliance            : True
+ base xdg folder           : mycroft
+ mycroft config filename   : mycroft.conf
+ default mycroft.conf path :
+      /home/user/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf
+"""
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/PHAL/index.html b/unused/PHAL/index.html new file mode 100644 index 00000000..d52be4a1 --- /dev/null +++ b/unused/PHAL/index.html @@ -0,0 +1,451 @@ + + + + + + + + PHAL - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

PHAL

+

PHAL is our Platform/Hardware Abstraction Layer; it completely replaces the +concept of the hardcoded "enclosure" from mycroft-core.

+

Any number of plugins providing functionality can be loaded and validated at runtime. Plugins can +be system integrations that handle things like reboot and +shutdown, or hardware drivers such as the mycroft mark2 plugin.

+

PHAL plugins can perform actions such as hardware detection before loading; e.g., the mark2 plugin will not load if it +does not detect the sj201 hat. This makes plugins safe to install and bundle by default in our base images.

+

Plugins

+

Platform/Hardware-specific integrations are loaded by PHAL; these plugins can handle all sorts of system activities.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
PluginDescription
ovos-PHAL-plugin-alsavolume control
ovos-PHAL-plugin-systemreboot / shutdown / factory reset
ovos-PHAL-plugin-mk1mycroft mark1 integration
ovos-PHAL-plugin-mk2mycroft mark2 integration
ovos-PHAL-plugin-respeaker-2micrespeaker 2mic hat integration
ovos-PHAL-plugin-respeaker-4micrespeaker 4mic hat integration
ovos-PHAL-plugin-wifi-setupwifi setup (central plugin)
ovos-PHAL-plugin-gui-network-clientwifi setup (GUI interface)
ovos-PHAL-plugin-balena-wifiwifi setup (hotspot)
ovos-PHAL-plugin-network-managerwifi setup (network manager)
ovos-PHAL-plugin-brightness-control-rpibrightness control
ovos-PHAL-plugin-ipgeoautomatic geolocation (IP address)
ovos-PHAL-plugin-gpsdautomatic geolocation (GPS)
ovos-PHAL-plugin-dashboarddashboard control (ovos-shell)
ovos-PHAL-plugin-notification-widgetssystem notifications (ovos-shell)
ovos-PHAL-plugin-color-scheme-managerGUI color schemes (ovos-shell)
ovos-PHAL-plugin-configuration-providerUI to edit mycroft.conf (ovos-shell)
ovos-PHAL-plugin-analog-media-devicesvideo/audio capture devices (OCP)
+

Developers

+

In mycroft-core the equivalent to PHAL plugins would usually be shipped as skills or hardcoded

+

In OVOS it may sometimes be unclear whether we should develop a skill or a plugin. There isn't a one-size-fits-all answer; in some circumstances it may make sense to create both a plugin and a companion skill.

+

flow

+

Template

+

PHAL plugins do not follow a strict template; they are usually event listeners that perform certain actions and integrate with other components.

+
from mycroft_bus_client import Message
+from ovos_plugin_manager.phal import PHALPlugin
+
+
+class MyPHALPluginValidator:
+    @staticmethod
+    def validate(config=None):
+        """ this method is called before loading the plugin.
+        If it returns False the plugin is not loaded.
+        This allows a plugin to run platform checks"""
+        return True
+
+
+class MyPHALPlugin(PHALPlugin):
+    validator = MyPHALPluginValidator
+
+    def __init__(self, bus=None, config=None):
+        super().__init__(bus=bus, name="ovos-PHAL-plugin-NAME", config=config)
+        # register events for plugin
+        self.bus.on("my.event", self.handle_event)
+
+    def handle_event(self, message):
+        # TODO plugin stuff
+        self.bus.emit(Message("my.event.response"))
+
+    def shutdown(self):
+        # cleanly remove any event listeners and perform shutdown actions
+        self.bus.remove("my.event", self.handle_event)
+        super().shutdown()
+
+

You can find plugin packaging documentation here

+

Admin PHAL

+

AdminPHAL performs the exact same function as PHAL, but plugins it loads will have root privileges.

+

This service is intended for handling any OS-level interactions requiring escalation of privileges. Be very careful when installing Admin plugins and scrutinize them closely

+

NOTE: Because this service runs as root, plugins it loads are responsible for not writing +configuration changes which would result in breaking config file permissions.

+

Service

+

To use AdminPHAL, create a launcher /usr/libexec/mycroft-systemd-admin-phal

+
import sdnotify
+from ovos_PHAL.admin import main
+
+n = sdnotify.SystemdNotifier()
+
+def notify_ready():
+    n.notify('READY=1')
+    print('Startup of Admin service complete')
+
+def notify_stopping():
+    n.notify('STOPPING=1')
+    print('Stopping Admin service')
+
+main(ready_hook=notify_ready, stopping_hook=notify_stopping)
+
+

and a system service file /usr/lib/systemd/user/mycroft-admin-phal.service

+
[Unit]
+Description=Admin PHAL
+PartOf=mycroft.service
+After=mycroft-messagebus.service
+
+[Service]
+Type=notify
+TimeoutStopSec=30
+Restart=always
+User=root
+ExecStart=/usr/libexec/mycroft-systemd-admin-phal
+
+[Install]
+WantedBy=mycroft.service
+
+

Plugins

+

Admin plugins are just like regular PHAL plugins, except they run with root privileges.

+

A plugin needs to identify itself as an admin plugin via its entry point; PHAL will not load Admin plugins, and AdminPHAL will not load regular plugins.

+

Admin plugins will only load if their configuration contains "enabled": true; all admin plugins need to be explicitly enabled.
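For illustration, a hypothetical admin plugin might be packaged like this. The plugin name and module are made up, and the entry point group name is an assumption based on ovos-plugin-manager conventions, so check it against the plugin packaging documentation linked below:
+
# Hypothetical setup.py for an admin PHAL plugin (names are illustrative).
+from setuptools import setup
+
+PLUGIN_ENTRY_POINT = "ovos-PHAL-plugin-example-admin=ovos_phal_example_admin:MyAdminPlugin"
+
+setup(
+    name="ovos-PHAL-plugin-example-admin",
+    version="0.1.0",
+    packages=["ovos_phal_example_admin"],
+    # the entry point group marks this as an *admin* plugin,
+    # so AdminPHAL (and only AdminPHAL) will load it
+    entry_points={"ovos.plugin.phal.admin": PLUGIN_ENTRY_POINT},
+)
+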

+

You can find plugin packaging documentation here

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/airplay/index.html b/unused/airplay/index.html new file mode 100644 index 00000000..ce878895 --- /dev/null +++ b/unused/airplay/index.html @@ -0,0 +1,290 @@ + + + + + + + + Airplay - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Airplay

+

By default, your OpenVoiceOS device advertises itself as an Airplay (currently v1) device on your network. This can be used from the iOS Airplay selection screen when you play some local files, as shown below. + +Tap/click the Airplay icon at the bottom middle of your music player, which opens the Airplay devices menu. It should pick up your OpenVoiceOS device automatically from the network. + +Select the OpenVoiceOS device to re-route your sound output to it. +

+

The Airplay selection menu is also available within other music clients, such as the Spotify app. + +And if that client also supports metadata over MPRIS, your OpenVoiceOS device will show it on its screen as well. +

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/arch_apis/index.html b/unused/arch_apis/index.html new file mode 100644 index 00000000..d7f76861 --- /dev/null +++ b/unused/arch_apis/index.html @@ -0,0 +1,285 @@ + + + + + + + + APIs - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

APIs

+

There are several APIs used by OVOS, including ones for weather inquiries, Wolfram Alpha, and others.

+

OVOS provides default keys that are used out of the box, so no further configuration is needed.

+

You can always provide your own API keys for each service, and even add more APIs for OVOS to use.

+

TODO fix link to api config +API Configuration

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/arch_backend/index.html b/unused/arch_backend/index.html new file mode 100644 index 00000000..80bb2dc1 --- /dev/null +++ b/unused/arch_backend/index.html @@ -0,0 +1,294 @@ + + + + + + + + OpenVoiceOS Backends - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Backends

+

A backend is a service that provides your device with additional tools to function; these range from managing your skill settings to configuring certain aspects of your device. The backend is optional and considered advanced usage in OVOS. By default, OVOS does not use a backend, and one is entirely unnecessary for basic usage. A local backend becomes useful if you have several devices around the house and would like a central place to configure them.

+

A backend can provide:

+
    +
  • A nice web UI to configure devices; this allows you to configure once and push the result to all your devices.
  • +
  • Free API services for weather and Wolfram Alpha
  • +
  • Collecting data, e.g. uploading wake word and utterance audio samples from all your devices
  • +
+

Available backends:

+ +

TODO fix link to personal backend

+

Setting Up a Personal Backend

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/arch_gui/index.html b/unused/arch_gui/index.html new file mode 100644 index 00000000..38b015fb --- /dev/null +++ b/unused/arch_gui/index.html @@ -0,0 +1,308 @@ + + + + + + + + GUI Architecture - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

GUI Architecture

+

Introduction

+

OVOS devices with displays provide skill developers the opportunity to create skills that can be empowered by both voice and screen interaction.

+

The display interaction technology is based on the QML user interface markup language, which gives you complete freedom to create in-depth, innovative interactions without boundaries, or provides you with simple templates within the Mycroft GUI framework that allow a minimalistic display of text and images based on your skill's specifics and preferences.

+

Framework

+

Mycroft-GUI is an open source visual and display framework for Mycroft running on top of KDE Plasma Technology and built using Kirigami, a lightweight user interface framework for convergent applications powered by Qt.

+

Introduction to QML

+

The QML user interface markup language is a declarative language, built on top of Qt's existing strengths, designed to describe the user interface of a program: both what it looks like and how it behaves. QML provides modules that consist of a sophisticated set of graphical and behavioral building blocks.

+

A collection of resources to familiarize you with QML and the Kirigami Framework.

+ +

GUI Extensions

+

OVOS Core supports a GUI Extension framework which allows the GUI service to incorporate additional behaviour for a +specific platform. GUI Extensions currently supported:

+

Smartspeaker Extension

+

This extension is responsible for managing the smartspeaker GUI interface behaviour; it supports homescreens and homescreen management. Enabling the smartspeaker GUI extension is sketched below:

+
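What this could look like in mycroft.conf; the "gui" key layout is an assumption, and the other extensions should be enabled the same way by swapping the extension name:
+"gui": {
+    "extension": "smartspeaker"
+}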

Bigscreen Extension

+

This extension is responsible for managing the Plasma Bigscreen GUI interface behaviour; it supports window management and window behaviour control on specific window managers like KWin. Enabling the Bigscreen GUI extension:

+

Mobile Extension

+

This extension is responsible for managing the mobile GUI interface behaviour; it supports homescreens and additionally adds support for global page back navigation. Enabling the Mobile GUI extension:

+

Generic Extension

+

This extension provides a generic GUI interface and does not add any additional behaviour, +it optionally supports homescreens if the platform or user manually enables it. +This extension is enabled by default when no other extension is specified.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/arch_phal/index.html b/unused/arch_phal/index.html new file mode 100644 index 00000000..13a5cdd1 --- /dev/null +++ b/unused/arch_phal/index.html @@ -0,0 +1,383 @@ + + + + + + + + PHAL - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

PHAL

+

** Editor's Note ** +Lots of cleanup coming here, just placeholder information for now.

+

PHAL is our Platform/Hardware Abstraction Layer; it completely replaces the concept of a hardcoded "enclosure" from mycroft-core

+

Any number of plugins providing functionality can be loaded and validated at runtime. Plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the mycroft mark2 plugin

+

PHAL plugins can perform actions such as hardware detection before loading; e.g., the mark2 plugin will not load if it does not detect the SJ201 HAT. This makes plugins safe to install and bundle by default in our base images (see the sketch below)

+
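For illustration only, a minimal PHAL plugin skeleton with such a hardware check might look like the sketch below; it assumes the PHALPlugin template from ovos_plugin_manager, and both detect_my_hat() and the validator wiring are hypothetical approximations rather than the definitive API:
+from ovos_plugin_manager.templates.phal import PHALPlugin
+
+
+def detect_my_hat() -> bool:
+    """Hypothetical hardware probe, e.g. scanning the I2C bus for a known address."""
+    return False  # replace with a real detection routine
+
+
+class MyHatValidator:
+    @staticmethod
+    def validate(config=None):
+        # only report the plugin as loadable when the hardware is present
+        return detect_my_hat()
+
+
+class MyHatPlugin(PHALPlugin):
+    validator = MyHatValidator
+
+    def __init__(self, bus=None, config=None):
+        super().__init__(bus=bus, name="ovos-phal-plugin-myhat", config=config)
+        # register bus event handlers / drive the hardware here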

Plugins

+

Platform/hardware-specific integrations are loaded by PHAL; these plugins can handle all sorts of system activities

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin | Description
ovos-PHAL-plugin-alsa | volume control
ovos-PHAL-plugin-system | reboot / shutdown / factory reset
ovos-PHAL-plugin-mk1 | mycroft mark1 integration
ovos-PHAL-plugin-mk2 | mycroft mark2 integration
ovos-PHAL-plugin-respeaker-2mic | respeaker 2mic hat integration
ovos-PHAL-plugin-respeaker-4mic | respeaker 4mic hat integration
ovos-PHAL-plugin-wifi-setup | wifi setup (central plugin)
ovos-PHAL-plugin-gui-network-client | wifi setup (GUI interface)
ovos-PHAL-plugin-balena-wifi | wifi setup (hotspot)
ovos-PHAL-plugin-network-manager | wifi setup (network manager)
ovos-PHAL-plugin-brightness-control-rpi | brightness control
ovos-PHAL-plugin-ipgeo | automatic geolocation (IP address)
ovos-PHAL-plugin-gpsd | automatic geolocation (GPS)
ovos-PHAL-plugin-dashboard | dashboard control (ovos-shell)
ovos-PHAL-plugin-notification-widgets | system notifications (ovos-shell)
ovos-PHAL-plugin-color-scheme-manager | GUI color schemes (ovos-shell)
ovos-PHAL-plugin-configuration-provider | UI to edit mycroft.conf (ovos-shell)
ovos-PHAL-plugin-analog-media-devices | video/audio capture devices (OCP)
+

flow

+

Admin PHAL

+

AdminPHAL performs the exact same function as PHAL, but plugins it loads will have root privileges.

+

This service is intended for handling any OS-level interactions requiring escalation of privileges. Be very careful when installing Admin plugins and scrutinize them closely

+

NOTE: Because this service runs as root, plugins it loads are responsible for not writing +configuration changes which would result in breaking config file permissions.

+

Admin Plugins

+

Admin plugins are just like regular PHAL plugins, except that they run with root privileges

+

Admin plugins will only load if their configuration contains "enabled": true. All admin plugins need to be explicitly enabled

+

You can find plugin packaging documentation here

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/arch_plugins/index.html b/unused/arch_plugins/index.html new file mode 100644 index 00000000..2c9c4771 --- /dev/null +++ b/unused/arch_plugins/index.html @@ -0,0 +1,281 @@ + + + + + + + + Plugin Architecture - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Plugin Architecture

+

** Coming Soon **

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/arch_services/index.html b/unused/arch_services/index.html new file mode 100644 index 00000000..341be764 --- /dev/null +++ b/unused/arch_services/index.html @@ -0,0 +1,311 @@ + + + + + + + + OpenVoiceOS Services - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Services

+

Editor's Note: Major revisions coming here; mostly placeholder information

+

Skills Service

+

The skills service is responsible for loading skills and intent parsers

+

All user queries are handled by the skills service; you can think of it as OVOS's brain. More Information

+

Speech Service

+

Speech Client

+

The speech client is responsible for loading STT, VAD and Wake Word plugins

+

Speech is transcribed into text and forwarded to the skills service

+

Hotwords

+

OVOS allows you to load any number of hotwords in parallel and trigger different actions when they are detected

+

Each hotword can do one or more of the following:

+
    +
  • trigger listening, also called a wake_word
  • +
  • play a sound
  • +
  • emit a bus event
  • +
  • take ovos-core out of sleep mode, also called a wakeup_word or standup_word
  • +
  • take ovos-core out of recording mode, also called a stop_word
  • +
+

To add a new hotword, add its configuration under the "hotwords" section; see the sketch below.

+

By default, all hotwords are disabled unless you set "listen": true. Under the "listener" setting, a main wake word and stand-up word are defined; those will be automatically enabled unless you set "active": false. This is usually not desired unless you are looking to completely disable wake word usage

+
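As a sketch, a hotword entry in mycroft.conf could look like the following; the plugin, phonemes and threshold values are borrowed from the admin API example elsewhere in these docs and are purely illustrative:
+"hotwords": {
+    "hey_mycroft": {
+        "module": "ovos-ww-plugin-pocketsphinx",
+        "phonemes": "HH EY . M AY K R AO F T",
+        "threshold": 1e-90,
+        "listen": true
+    }
+}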

STT

+

Two STT plugins may be loaded at once; if the primary plugin fails for some reason, the second plugin will be used.

+

This allows you to have a lower-accuracy offline model as a fallback to account for internet outages, ensuring your device never becomes fully unusable

+
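A hedged sketch of such an STT setup in mycroft.conf, assuming the fallback_module key mirrors the TTS mechanism shown elsewhere in these docs and using example plugin names:
+"stt": {
+    "module": "ovos-stt-plugin-server",
+    "fallback_module": "ovos-stt-plugin-vosk"
+}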

Listener

+

You can modify microphone settings and enable additional features, such as wake word / utterance recording and uploading, under the listener section

+

VAD

+

Voice Activity Detection is used by the speech client to determine when a user has stopped speaking, indicating that the voice command is ready to be executed. Several VAD strategies are supported
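As an illustrative sketch, selecting a VAD strategy under the listener section might look like this (the key layout and plugin name are assumptions):
+"listener": {
+    "VAD": {
+        "module": "ovos-vad-plugin-silero"
+    }
+}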

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/arch_skills/index.html b/unused/arch_skills/index.html new file mode 100644 index 00000000..acf042d7 --- /dev/null +++ b/unused/arch_skills/index.html @@ -0,0 +1,348 @@ + + + + + + + + OVOS Skills - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS Skills

+

What can a Skill do?

+

Skills give OVOS the ability to perform a variety of functions. They can be installed or removed by the user, and can be easily updated to expand functionality. To get a good idea of what skills to build, let’s talk about the best use cases for a voice assistant, and what types of things OVOS can do.

+

OVOS can run on a variety of platforms, from the Linux desktop to SBCs like the Raspberry Pi. Different devices will have slightly different use cases. Devices in the home are generally located in the living room or kitchen and are ideal for listening to the news, playing music, getting general information, using timers while cooking, checking the weather, and other similar activities that are easily accomplished hands-free.

+

Basic functions

+

We cover a lot of the basics with our Default Skills, things like Timers, Alarms, Weather, Time and Date, and more.

+

Information

+

We also call this General Question and Answer, and it covers all of those factual questions someone might think to ask a voice assistant. Questions like “who was the 32nd President of the United States?”, or “how tall is the Eiffel Tower?” Although the Default Skills cover a great number of questions, there is room for more. There are many topics that could use a specific skill, such as Science, Academics, Movie Info, TV Info, Music Info, etc.

+

Media

+

One of the biggest use cases for Smart Speakers is playing media. The reason media playback is so popular is that it makes playing a song so easy: all you have to do is say “Hey Mycroft, play the Beatles,” and you can be enjoying music without having to reach for a phone or remote. In addition to listening to music, there are skills that handle videos as well.

+

News

+

Much like listening to music, getting the latest news with a simple voice interaction is extremely convenient. OVOS +supports multiple news feeds, and has the ability to support multiple news skills.

+

Smart Home

+

Another popular use case for Voice Assistants is to control Smart Home and IoT products. Within the mycroft ecosystem there are skills for Home Assistant, Wink IoT, Lifx and more, but there are many products that we do not have a skill for yet. The open source community has been enthusiastically expanding OVOS's ability to voice control all kinds of smart home products.

+

Games

+

Voice games are becoming more and more popular, especially those that allow multiple users to play together. Trivia games are some of the most popular types of games to develop for voice assistants. There are several games already available for OVOS: native voice adventure games, ports of the popular text adventure games from Infocom, a Crystal Ball game, a Number Guessing game and much more!

+

OpenVoiceOS Standard Skills

+

Standard Skills Usage

+

Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills; more about that at a later stage.

+

Date / Time skill

+

You can ask your device what time or date it is just in case you lost your watch.

+
+

Hey Mycroft, what time is it?

+
+

Time

+
+

Hey Mycroft, what is the date?

+
+

Date

+

Setting an alarm

+

Having your OpenVoiceOS device know and show the time is great, but it is even better to be woken up in the morning by your device.

+
+

Hey Mycroft, set an alarm for 8 AM.

+
+

Alarm

+

Setting of timers

+

Sometimes you are just busy but want to be alerted after a certain time. For that you can use timers.

+
+

Hey Mycroft, set a timer for 5 minutes.

+
+

Timer

+

You can always set more timers and even name them, so you know which timer is for what.

+
+

Hey, Mycroft, set another timer called rice cooking for 7 minutes.

+
+

Timers

+

Asking the weather

+

You can ask your device what the weather is or will be at any given time or place.

+
+

Hey Mycroft, what is the weather like today?

+
+

Weather 1

+

The weather skill actually uses multiple pages, indicated by the small dots at the bottom of the screen.

+

Weather 2

+

File Browser

+

The file browser allows you to browse the filesystem in your device and any connected media, you can view images and play music and videos.

+

KDE Connect integration allows you to share files with your mobile devices

+

+

GUI Framework

+

Mycroft-GUI is an open source visual and display framework for Mycroft running on top of KDE Plasma Technology and built using Kirigami, a lightweight user interface framework for convergent applications powered by Qt.

+

OVOS uses the standard mycroft-gui framework; you can find the official documentation here

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/audio_plugins/index.html b/unused/audio_plugins/index.html new file mode 100644 index 00000000..16924950 --- /dev/null +++ b/unused/audio_plugins/index.html @@ -0,0 +1,309 @@ + + + + + + + + Audio Plugins - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Audio Plugins

+

Audio plugins are responsible for handling playback of media, like music and podcasts

+

If mycroft-gui is available, these plugins will rarely be used unless OVOS is explicitly configured to do so

+

List of Audio plugins

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin | Description
ovos-ocp-audio-plugin | framework + compatibility layer
ovos-audio-plugin-simple | sox / aplay / paplay / mpg123
ovos-vlc-plugin | vlc audio backend
ovos-mplayer-plugin | mplayer audio backend
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/audio_service/index.html b/unused/audio_service/index.html new file mode 100644 index 00000000..210501d6 --- /dev/null +++ b/unused/audio_service/index.html @@ -0,0 +1,322 @@ + + + + + + + + Audio Service - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Audio Service

+

The audio service is responsible for loading TTS and Audio plugins

+

All audio playback is handled by this service

+

Native playback

+

Usually playback is triggered by some originating bus message, e.g. "recognizer_loop:utterance"; this message contains metadata that is used to determine if playback should happen.

+

message.context may contain a source and a destination; playback is only triggered if the message destination is a native_source, or if the destination is missing (which is considered a broadcast).

+

This separation of native sources allows remote clients such as an android app to interact with OVOS without the actual device where ovos-core is running repeating all TTS and music playback out loud

+

You can learn more about message targeting here

+

By default, only utterances originating from the speech client and ovos cli are considered native

+

For legacy reasons, the names for the ovos cli and the speech client are "debug_cli" and "audio", respectively

+

TTS

+

Two TTS plugins may be loaded at once; if the primary plugin fails for some reason, the second plugin will be used.

+

This allows you to have a lower-quality offline voice as a fallback to account for internet outages, ensuring your device can always give you feedback

+
"tts": {
+    "pulse_duck": false,
+    "module": "ovos-tts-plugin-mimic2",
+    "fallback_module": "ovos-tts-plugin-mimic"
+},
+
+

Audio

+

You can enable additional Audio plugins and define the native sources described above under the "Audio" section of mycroft.conf

+

ovos-core uses OCP natively for media playback, you can learn more about OCP here

+

OCP will decide when to call the Audio service and what plugin to use; the main use case is for headless setups without a GUI

+

NOTE: mycroft-core has a "default-backend" config option, in ovos-core this option has been deprecated and is always OCP.

+
"Audio": {
+    "native_sources": ["debug_cli", "audio"],
+
+    "backends": {
+      "OCP": {
+        "type": "ovos_common_play",
+        "active": true
+      },
+      "simple": {
+        "type": "ovos_audio_simple",
+        "active": true
+      },
+      "vlc": {
+        "type": "ovos_vlc",
+        "active": true
+      }
+    }
+},
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/backend/index.html b/unused/backend/index.html new file mode 100644 index 00000000..af008881 --- /dev/null +++ b/unused/backend/index.html @@ -0,0 +1,363 @@ + + + + + + + + Available Backend Services - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Available Backend Services

+

Supported Backends

+

ovos-core supports multiple backends under a single unified interface

+
    +
  • Personal backend - self hosted
  • +
  • Selene - https://api.mycroft.ai
  • +
  • OpenVoiceOS API Service - https://api.openvoiceos.com
  • +
  • Offline - support for setting your own api keys and query services directly
  • +
+

Developers do not need to worry about backend details in their applications and skills

+

Identity Information

+

A unique uuid and pairing information generated by registering with Home is stored in: +~/.config/mycroft/identity/identity2.json <-- DO NOT SHARE THIS WITH OTHERS!

+

This file uniquely identifies your device and should be kept safe

+

STT Plugin

+

A companion STT plugin is available to use a backend as a remote STT provider

+

edit your configuration to use ovos-stt-plugin-selene

+
{
+  "stt": {
+    "module": "ovos-stt-plugin-selene"
+  }
+}
+
+

source code

+

Offline Backend

+

OVOS by default runs without a backend; in this case you will need to configure API keys manually

+

This can be done with OCA or by editing mycroft.conf

+

edit your configuration to use the offline backend

+
{
+  "server": {
+    "backend_type": "offline"
+  }
+}
+
+

Selene

+

The official mycroft home backend is called selene; users need to create an account and pair devices with the mycroft servers.

+

This backend is not considered optional by MycroftAI but is not used by OVOS unless explicitly enabled

+

Selene is AGPL licensed: +- backend source code +- frontend source code

+

edit your configuration to use the selene backend

+
{
+  "server": {
+    "backend_type": "selene",
+    "url": "https://api.mycroft.ai",
+    "version": "v1",
+    "update": true,
+    "metrics": true,
+    "sync_skill_settings": true
+  }
+}
+
+

Personal Backend

+

Personal backend is a reverse-engineered alternative to selene that predates it

+

It provides the same functionality for devices and packs some extra options

+

It is not intended to serve different users or thousands of devices, there are no user accounts!

+

This is currently the only way to run a vanilla mycroft-core device offline

+

edit your configuration to use your own personal backend instance

+
{
+  "server": {
+    "backend_type": "personal",
+    "url": "http://0.0.0.0:6712",
+    "version": "v1",
+    "update": true,
+    "metrics": true,
+    "sync_skill_settings": true
+  }
+}
+
+

source code

+

OVOS API Service

+

The OVOS API Service is not a full backend; it is a set of free proxy services hosted by the OVOS Team for use in default skills

+

Device management functionality and user accounts do not exist; offline mode will be used for these APIs

+

edit your configuration to use the OVOS backend

+
{
+  "server": {
+    "backend_type": "ovos",
+    "url": "https://api.openvoiceos.com"
+  }
+}
+
+

source code

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/backend_client/index.html b/unused/backend_client/index.html new file mode 100644 index 00000000..9867ccc5 --- /dev/null +++ b/unused/backend_client/index.html @@ -0,0 +1,520 @@ + + + + + + + + OVOS Backend Client - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OVOS Backend Client

+

Python client library for interaction with several supported backends under a single unified interface

+
    +
  • Personal backend - self hosted
  • +
  • Selene - https://api.mycroft.ai
  • +
  • OpenVoiceOS API Service - https://api.openvoiceos.com
  • +
  • Offline - support for setting your own api keys and query services directly
  • +
+

Backend Overview

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
API | Offline | Personal | Selene | OVOS
Admin | yes [1] | yes | no | no
Device | yes [2] | yes | yes | yes [4]
Metrics | yes [2] | yes | yes | yes [4]
Dataset | yes [2] | yes | yes | yes [4]
OAuth | yes [2] | yes | yes | yes [4]
Wolfram | yes [3] | yes | yes | yes
Geolocate | yes | yes | yes | yes
STT | yes [3] | yes | yes | yes
Weather | yes [3] | yes | yes | yes
Email | yes [3] | yes | yes | yes
+
[1] will update user level mycroft.conf
+[2] shared json database with personal backend for UI compat
+[3] needs additional configuration (eg. credentials)
+[4] uses offline_backend functionality
+
+

Geolocation

+
from ovos_backend_client.api import GeolocationApi
+
+geo = GeolocationApi()
+data = geo.get_geolocation("Lisbon Portugal")
+# {'city': 'Lisboa',
+# 'country': 'Portugal', 
+# 'latitude': 38.7077507, 
+# 'longitude': -9.1365919, 
+# 'timezone': 'Europe/Lisbon'}
+
+

Remote Settings

+

To interact with skill settings on selene

+
from ovos_backend_client.settings import RemoteSkillSettings
+
+# in ovos-core skill_id is deterministic and safe
+s = RemoteSkillSettings("skill.author")
+# in mycroft-core please ensure a valid remote_id
+# in MycroftSkill class you can use
+# remote_id = self.settings_meta.skill_gid
+# s = RemoteSkillSettings("skill.author", remote_id="@|whatever_msm_decided")
+s.download()
+
+s.settings["existing_value"] = True
+s.settings["new_value"] = "will NOT show up in UI"
+s.upload()
+
+# auto generate new settings meta for all new values before uploading
+s.settings["new_value"] = "will show up in UI"
+s.generate_meta()  # now "new_value" is in meta
+s.upload()
+
+
+
+

Selene Cloud

+

By hijacking skill settings we allow storing arbitrary data in selene and using it across devices and skills

+
from ovos_backend_client.cloud import SeleneCloud
+
+cloud = SeleneCloud()
+cloud.add_entry("test", {"secret": "NOT ENCRYPTED MAN"})
+data = cloud.get_entry("test")
+
+

An encrypted version is also supported if you don't trust selene!

+
from ovos_backend_client.cloud import SecretSeleneCloud
+
+k = "D8fmXEP5VqzVw2HE"  # you need this to read back the data
+cloud = SecretSeleneCloud(k)
+cloud.add_entry("test", {"secret": "secret data, selene cant read this"})
+data = cloud.get_entry("test")
+
+

+

OAuth

+

Retrieving the tokens in a skill does not depend on the selected backend; the mechanism to register a token is backend-specific

+

First you need to authorize the application; this can be done with ovos-config-assistant if running offline, or ovos-backend-manager if using a personal backend

+

If using selene there is no automated process to add a token; you need to contact support@mycroft.ai

+
from ovos_backend_client.api import OAuthApi, BackendType
+
+# api = OAuthApi()  # determine oauth backend from mycroft.conf
+api = OAuthApi(backend_type=BackendType.OFFLINE)  # explicitly use ovos-backend-manager oauth
+token_json = api.get_oauth_token("spotify")
+
+

OpenWeatherMap Proxy

+
from ovos_backend_client.api import OpenWeatherMapApi
+
+owm = OpenWeatherMapApi()
+data = owm.get_weather()
+# dict - see api docs from owm onecall api
+
+

Wolfram Alpha proxy

+
from ovos_backend_client.api import WolframAlphaApi
+
+wolf = WolframAlphaApi()
+answer = wolf.spoken("what is the speed of light")
+# The speed of light has a value of about 300 million meters per second
+
+data = wolf.full_results("2+2")
+# dict - see api docs from wolfram
+
+

STT

+

A companion STT plugin is available: ovos-stt-plugin-selene

+

Admin Api (personal backend only!)

+

Since the local backend does not provide a web UI, an admin API can be used to manage your devices

+
from ovos_backend_client.api import AdminApi
+
+admin = AdminApi("secret_admin_key")
+uuid = "..."  # check identity2.json in the device you want to manage
+
+# manually pair a device
+identity_json = admin.pair(uuid)
+
+# set device info
+info = {"opt_in": True,
+        "name": "my_device",
+        "device_location": "kitchen",
+        "email": "notifications@me.com",
+        "isolated_skills": False,
+        "lang": "en-us"}
+admin.set_device_info(uuid, info)
+
+# set device preferences
+prefs = {"time_format": "full",
+         "date_format": "DMY",
+         "system_unit": "metric",
+         "lang": "en-us",
+         "wake_word": "hey_mycroft",
+         "ww_config": {"phonemes": "HH EY . M AY K R AO F T",
+                       "module": "ovos-ww-plugin-pocketsphinx",
+                       "threshold": 1e-90},
+         "tts_module": "ovos-tts-plugin-mimic",
+         "tts_config": {"voice": "ap"}}
+admin.set_device_prefs(uuid, prefs)
+
+# set location data
+loc = {
+    "city": {
+        "code": "Lawrence",
+        "name": "Lawrence",
+        "state": {
+            "code": "KS",
+            "name": "Kansas",
+            "country": {
+                "code": "US",
+                "name": "United States"
+            }
+        }
+    },
+    "coordinate": {
+        "latitude": 38.971669,
+        "longitude": -95.23525
+    },
+    "timezone": {
+        "code": "America/Chicago",
+        "name": "Central Standard Time",
+        "dstOffset": 3600000,
+        "offset": -21600000
+    }
+}
+admin.set_device_location(uuid, loc)
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/backend_manager/index.html b/unused/backend_manager/index.html new file mode 100644 index 00000000..f56c8630 --- /dev/null +++ b/unused/backend_manager/index.html @@ -0,0 +1,280 @@ + + + + + + + + Backend manager - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ + + +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/bigscreen/index.html b/unused/bigscreen/index.html new file mode 100644 index 00000000..cf741d6f --- /dev/null +++ b/unused/bigscreen/index.html @@ -0,0 +1,287 @@ + + + + + + + + Bigscreen - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Introduction:

+

OpenVoiceOS GUI supports various Skills and PHAL plugins that share a voice application interface with Plasma Bigscreen. In order to enable key navigation on Plasma Bigscreen and Media Centers, the user needs to export an environment variable.

+

Exporting the Environment Variable:

+

In order to enable key navigation on Plasma Bigscreen and Media Centers, the user needs to export the environment variable QT_FILE_SELECTORS=mediacenter. This can be done by executing the following command in the terminal:

+
export QT_FILE_SELECTORS=mediacenter
+
+

This environment variable is enabled by default and added to the Plasma Bigscreen environment. To create your own media center environment, store the variable in /etc/environment or /etc/profile.d/99-ovos-media-center.sh

+

Exporting the environment variable QT_FILE_SELECTORS=mediacenter is a necessary step to enable key navigation on Plasma Bigscreen and Media Centers for the Open Voice OS project GUI. With this in place, the user can enjoy seamless key navigation while using the Skills and PHAL plugins on their Plasma Bigscreen and Media Centers.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/btspeaker/index.html b/unused/btspeaker/index.html new file mode 100644 index 00000000..7b78acd4 --- /dev/null +++ b/unused/btspeaker/index.html @@ -0,0 +1,284 @@ + + + + + + + + Bluetooth speaker - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Bluetooth speaker

+

The buildroot edition of OpenVoiceOS also acts as a Bluetooth speaker by default. It is discoverable from any (mobile) device within the Bluetooth pairing menu. You can pair with it and use your OpenVoiceOS device like any other Bluetooth speaker you might own. (NOTE: At the moment, pairing is broken but will be fixed as soon as we get to it.)

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/buildroot/index.html b/unused/buildroot/index.html new file mode 100644 index 00000000..39759673 --- /dev/null +++ b/unused/buildroot/index.html @@ -0,0 +1,379 @@ + + + + + + + + <img src='https://camo.githubusercontent.com/48b782bbddb51b97cf2971fda5817080075f7799/68747470733a2f2f7261772e6769746861636b2e636f6d2f466f7274417765736f6d652f466f6e742d417765736f6d652f6d61737465722f737667732f736f6c69642f636f67732e737667' width='50' height='50' style='vertical-align:bottom'/> Open Voice Operating System - Buildroot Edition - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Open Voice Operating System - Buildroot Edition

+

Auto detection and configuration of HAT's

+

The buildroot OpenVoiceOS edition is considered to be a consumer-friendly type of device or, as Mycroft A.I. would call it, a retail version. However, as we do not target a specific hardware platform and would like to support custom-made systems, we are implementing a smart way to detect and configure different types of Raspberry Pi HATs.

+

At boot, the system scans the I2C bus for known and supported HATs and, if one is found, configures the underlying Linux sound system. At the moment this is still very much in development; however, the HATs below are, or should soon be, supported by this system:
- ReSpeaker 2-mic HAT
- ReSpeaker 4-mic Square HAT
- ReSpeaker 4-mic linear / 6-mic HAT
- USB devices such as the PS EYE-2
- SJ-201 Dev Kits
- SJ-201 Mark2 retail device

+

Snapcast Client & Server

+

TODO - write docs

+

Remote shared folder access (SMB - Windows)

+

Your OpenVoiceOS device is accessible over the network from your Windows computer. This is still a work in progress, but you can open a file explorer and navigate to your OpenVoiceOS device. At the moment the following directories within the user's home directory are shared over the network:
- Documents
- Music
- Pictures
These folders are also used by the KDE Connect file transfer plugin and, for instance, the Camera skill (Hey Mycroft, take a selfie) and / or the Homescreen skill (Hey Mycroft, take a screenshot)

+

Remote shared folder access (NFS - Linux)

+

In the near future the above Windows network shares will also be made available over NFS for Linux clients. This is still a Work In Progress / To Do item.

+

Development.

+

At this moment development is in very early stages and focused on the Raspberry Pi 3B & 4. As soon as an initial workable version is created, other hardware might be added.

+

Source code: https://github.com/OpenVoiceOS/ovos-buildroot

+

Build Environment

+

Only use x86_64-based architecture / hardware to build the image.

+

The following example build environment has been tested:

+
    +
  • Architecture: x86_64
  • +
  • Hardware: Intel Core i5 processor, 8GB RAM, 240GB SSD (you can build on less RAM (2GB) and slower storage but more RAM, faster storage = quicker image building)
  • +
  • OS: Ubuntu 22.04 LTS desktop
  • +
+

Installing System Build Dependencies

+

The following system packages are required to build the image:

+
    +
  • gcc
  • +
  • subversion
  • +
  • qttools5-dev
  • +
  • qttools5-dev-tools
  • +
  • python
  • +
  • git
  • +
  • make
  • +
  • g++
  • +
  • curl
  • +
  • wget
  • +
  • qtdeclarative5-dev
  • +
+

The following firewall ports need to be allowed to the internet.

+

In addition to the usual http/https ports (tcp 80, tcp 443), a couple of other ports need to be allowed to the internet:
- tcp 9418 (git)
- tcp 21 (ftp PASV) and random ports for the DATA channel. This can be optional, but it is better to have this allowed along with the corresponding random data channel ports (knowledge of firewalls required)

+

Getting the code.

+

First, get the code on your system! The simplest method is via git. +
+- cd ~/ +- git clone --recurse-submodules https://github.com/OpenVoiceOS/OpenVoiceOS.git +- cd OpenVoiceOS

+

Patching Buildroot.

+

(ONLY at the first clean checkout/clone) If this is the very first time you are going to build an image, you need to execute the following command once; +
+- ./scripts/br-patches.sh +
+This will patch the Buildroot packages.

+

Building the image.

+

Building the image(s) can be done by utilizing a proper Makefile; +
+To see the available commands, just run: 'make help' +
As an example, to build the rpi4 version:
+- make clean +- make rpi4_64-gui-config +- make rpi4_64-gui

+

Now grab a cup of coffee, go for a walk, sleep and repeat, as the build process takes a long time pulling everything from source and cross-compiling everything for the device. The qtwebengine package in particular takes a LONG time. + (At the moment there is an outstanding issue which prevents the build from running completely to the end. The plasma-workspace package will error out, not finding libGLESv2 properly linked within Qt5Gui. When the build stops because of this error, edit the following file; +
+(At the moment there is an outstanding issue which prevents the build to run completely to the end. The plasma-workspace package will error out, not finding the libGLESv4 properly linked within QT5GUI. When the build stopped because of this error, edit the following file; +

+buildroot/output/host/aarch64-buildroot-linux-gnu/sysroot/usr/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake +

+at the bottom of the file replace this line; +

+_qt5gui_find_extra_libs(OPENGL "GLESv2" "" "") +

And replace it with this line;

+_qt5gui_find_extra_libs(OPENGL "${CMAKE_SYSROOT}/usr/lib/libGLESv2.so" "" "${CMAKE_SYSROOT}/usr/include/libdrm") +

+Then you can continue the build process by re-running the "make rpi4_64-gui" command. (DO NOT run "make clean" and/or "make rpi4_64-gui-config" again, or you will start from scratch !!!) +

+When everything goes fine, the xz-compressed image will be available within the release directory.

+

Booting the image from the SD card for the first time (setting up Wi-Fi and backend).

+

1. Ensure all required peripherals (mic, speakers, HDMI, USB mouse, etc.) are plugged in before powering on your RPI4 for the first time. +
2. Skip this step if the RPI4 is using an ethernet cable. Once powered on, the screen will present the Wi-Fi setup screen (a Wi-Fi hotspot is created). Connect to the hotspot (SSID: OVOS) from another device and follow the on-screen instructions to set up Wi-Fi. +
3. Once Wi-Fi is set up, a choice of Mycroft backend and Local backend is presented. Choose the Mycroft backend for now and follow the on-screen instructions; the Local backend is not ready to use yet. After the pairing process has completed and skills have downloaded, it's time to test / use it.

+

Accessing the CLI.

+
    +
  • SSH to the IP address of the RPI4
  • +
  • default credentials 'mycroft/mycroft'
  • +
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/bus_service/index.html b/unused/bus_service/index.html new file mode 100644 index 00000000..e6915f8c --- /dev/null +++ b/unused/bus_service/index.html @@ -0,0 +1,351 @@ + + + + + + + + Bus Service - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Bus Service

+

The bus service provides a websocket where all internal events travel

+

You can think of the bus service as OVOS's nervous system

+

The mycroft-bus is considered an internal and private websocket, external clients should not connect directly to it.

+

Exposing the bus

+

Please do not expose the messagebus to the outside world!

+

Anything connected to the bus can fully control OVOS, and OVOS usually has full control over the whole system!

+

You can read more about the security issues over at Nhoya/MycroftAI-RCE

+

If you need to connect to the bus from the outside world please check the companion project HiveMind

+

Lots of guides for mycroft tell you to expose the websocket; please (re)read the info and links above and think 10 times before blindly following steps

+

Message

+

A Message consists of a JSON payload; it contains a type, some data, and a context.

+

The context is considered to be metadata and might be changed at any time in transit; it can contain anything depending on where the message came from, and it is often completely empty.

+

You can think of the message context as a sort of session data for an individual interaction. In general, messages down the chain keep the context from the original message; most listeners (e.g., skills) will only care about type and data.

+
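A minimal sketch of building and emitting such a message from Python, assuming the ovos_bus_client package:
+from ovos_bus_client import Message, MessageBusClient
+
+bus = MessageBusClient()  # connects to the local messagebus websocket
+bus.run_in_thread()
+
+# type + data + context, the three parts described above
+msg = Message("my.custom.event",
+              data={"foo": "bar"},
+              context={"source": "debug_cli"})
+bus.emit(msg)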

Targeting Theory

+

ovos-core uses the message context to add metadata about the messages themselves: where they come from and what they are intended for.

+

The Message object provides the following methods (see the sketch after this list):

+
    +
  • message.forward method, keeps previous context.
      +
    • message continues going to same destination
    • +
    +
  • +
  • message.reply method swaps "source" with "destination"
      +
    • message goes back to source
    • +
    +
  • +
+
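A short sketch of the difference, assuming message is an incoming Message instance inside a handler:
+# .reply swaps "source" and "destination": the response goes back to the asker
+response = message.reply("my.skill.response", {"ok": True})
+
+# .forward keeps the previous context: the follow-up continues to the same destination
+follow_up = message.forward("my.skill.next_step", {"step": 2})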

The context destination parameter in the original message can be set to a list with any number of intended targets:

+
bus.emit(Message('recognizer_loop:utterance', data, 
+                 context={'destination': ['audio', 'kde'],
+                          'source': "remote_service"}))
+
+

Sources

+

ovos-core injects the context when it emits an utterance; this can be either typed in the ovos-cli-client or spoken via the STT service

+

STT will identify itself as audio

+

ovos-cli-client will identify itself as debug_cli

+

mycroft.conf contains a native_sources section you can configure to change how the audio service processes external requests

+

Destinations

+

Output-capable services are the CLI and TTS

+

The command line is a debug tool; it will ignore the destination

+

TTS checks the message context to see if it is the intended target for the message, and will only speak in the following conditions:

+
    +
  • Explicitly targeted i.e. the destination is "audio"
  • +
  • destination is set to None
  • +
  • destination is missing completely
  • +
+

The idea is that, for example, when the android app is used to access OpenVoiceOS, the device at home shouldn't start to speak.

+

TTS will be executed when "audio" or "debug_cli" are the destination

+

A missing destination, or a destination set to None, is interpreted as a multicast and should trigger all output-capable processes (be it the audio service, a web interface, the KDE plasmoid, or maybe the android app)

+

Internal routing

+
    +
  • intent service will .reply to the original utterance message
  • +
  • all skill/intent service messages are .forward (from previous intent service .reply)
  • +
  • skills sending their own messages might not respect this warning
  • +
  • converse/get_response support is limited, the context may be lost warning
  • +
  • in the context of the multiple users skills might keep a shared state between clients, eg. a client may enable parrot mode for everyone warning
  • +
  • scheduled events support is limited, the context is lost warning
  • +
+

Configuration

+

The messagebus has a dedicated section in mycroft.conf

+
"websocket": {
+    "host": "0.0.0.0",
+    "port": 8181,
+    "route": "/core",
+    "shared_connection": true
+}
+
+

Security

+

In mycroft-core all skills share a bus connection; this allows malicious skills to manipulate it and affect other skills

+

You can see a demonstration of this problem with BusBrickerSkill

+

"shared_connection": false ensures each skill gets its own websocket connection and avoids this problem

+

Additionally, it is recommended you change to "host": "127.0.0.1"; this will ensure no outside-world connections are allowed
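Putting both recommendations together, a hardened websocket section would look like:
+"websocket": {
+    "host": "127.0.0.1",
+    "port": 8181,
+    "route": "/core",
+    "shared_connection": false
+}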

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/config_backend/index.html b/unused/config_backend/index.html new file mode 100644 index 00000000..bb4f4c41 --- /dev/null +++ b/unused/config_backend/index.html @@ -0,0 +1,376 @@ + + + + + + + + OpenVoiceOS Backend Configuration - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Backend Configuration

+

Backend Manager

+

A simple UI for ovos-personal-backend, a utility to manage all your devices

+

If you are running ovos-core without a backend, OCA provides a similar local interface

+

+

Install

+

pip install ovos-backend-manager

+

or from source

+

pip install git+https://github.com/OpenVoiceOS/ovos-backend-manager

+

Usage

+

It needs to run on the same machine as the personal backend; it directly interacts with the databases and configuration files

+

ovos-backend-manager will be available in the command line after installing

+

Available Backend Services

+

Supported Backends

+

ovos-core supports multiple backends under a single unified interface

+
    +
  • Personal backend - self hosted
  • +
  • Selene - https://api.mycroft.ai
  • +
  • OpenVoiceOS API Service - https://api.openvoiceos.com
  • +
  • Offline - support for setting your own api keys and query services directly
  • +
+

Developers do not need to worry about backend details in their applications and skills

+

Identity Information

+

A unique uuid and pairing information generated by registering with Home is stored in: +~/.config/mycroft/identity/identity2.json <-- DO NOT SHARE THIS WITH OTHERS!

+

This file uniquely identifies your device and should be kept safe

+

STT Plugin

+

A companion STT plugin is available to use a backend as a remote STT provider

+

edit your configuration to use ovos-stt-plugin-selene

+
{
+  "stt": {
+    "module": "ovos-stt-plugin-selene"
+  }
+}
+
+

source code

+

Offline Backend

+

OVOS by default runs without a backend; in this case you will need to configure API keys manually

+

This can be done with OCA or by editing mycroft.conf

+

edit your configuration to use the offline backend

+
{
+  "server": {
+    "backend_type": "offline"
+  }
+}
+
+

Selene

+

The official mycroft home backend is called selene; users need to create an account and pair devices with the mycroft servers.

+

This backend is not considered optional by MycroftAI but is not used by OVOS unless explicitly enabled

+

Selene is AGPL licensed: +- backend source code +- frontend source code

+

edit your configuration to use the selene backend

+
{
+  "server": {
+    "backend_type": "selene",
+    "url": "https://api.mycroft.ai",
+    "version": "v1",
+    "update": true,
+    "metrics": true,
+    "sync_skill_settings": true
+  }
+}
+
+

Personal Backend

+

Personal backend is a reverse-engineered alternative to selene that predates it

+

It provides the same functionality for devices and packs some extra options

+

It is not intended to serve different users or thousands of devices, there are no user accounts!

+

This is currently the only way to run a vanilla mycroft-core device offline

+

edit your configuration to use your own personal backend instance

+
{
+  "server": {
+    "backend_type": "personal",
+    "url": "http://0.0.0.0:6712",
+    "version": "v1",
+    "update": true,
+    "metrics": true,
+    "sync_skill_settings": true
+  }
+}
+
+

source code

+

OVOS API Service

+

The OVOS API Service is not a full backend; it is a set of free proxy services hosted by the OVOS Team for use in default skills

+

Device management functionality and user accounts do not exist; offline mode will be used for these APIs

+

edit your configuration to use the OVOS backend

+
{
+  "server": {
+    "backend_type": "ovos",
+    "url": "https://api.openvoiceos.com"
+  }
+}
+
+

source code

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/config_gui/index.html b/unused/config_gui/index.html new file mode 100644 index 00000000..f68d5750 --- /dev/null +++ b/unused/config_gui/index.html @@ -0,0 +1,281 @@ + + + + + + + + OpenVoiceOS GUI Configuration - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS GUI Configuration

+

Work in Progress

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/config_homescreen/index.html b/unused/config_homescreen/index.html new file mode 100644 index 00000000..8c2c2a89 --- /dev/null +++ b/unused/config_homescreen/index.html @@ -0,0 +1,332 @@ + + + + + + + + OpenVoiceOS Home Screen - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Home Screen

+

The home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know like the current date, time and weather. The home screen contains various features and integrations which you can learn more about in the following sections.

+

+

Features

+

Night Mode Feature

+

The Night Mode feature lets you quickly switch your home screen into a dark standby clock, reducing the amount of light emitted by your device. This is especially useful if you are using your device in a dark room or at night. You can enable the night mode feature by tapping on the left edge pill button on the home screen.

+

+

Quick Actions Dashboard

+

The Quick Actions Dashboard provides you with a card-based interface to quickly access and add your most used actions. The Quick Actions dashboard comes with a variety of pre-defined actions, like the ability to quickly add a new alarm, start a new timer or add a new note. You can also add your own custom actions to the dashboard by tapping on the plus button in the top right corner of the dashboard. The Quick Actions dashboard is accessible by tapping on the right edge pill button on the home screen.

+

+

Application Launcher

+

OpenVoiceOS comes with support for dedicated voice applications. Voice Applications can be dedicated skills or PHAL plugins, providing their own dedicated user interface. The application launcher will show you a list of all available voice applications. You can access the application launcher by tapping on the center pill button on the bottom of the home screen.

+

+

Wallpapers

+

The home screen supports custom wallpapers and comes with a bunch of wallpapers to choose from. You can easily change your custom wallpaper by swiping from right to left on the home screen.

+

+

Widgets

+

+

Notifications Widget

+

The notifications widget provides you with a quick overview of all your notifications. The notifications bell icon will be displayed in the top left corner of the home screen. You can access the notifications overview by tapping on the bell icon when it is displayed.

+

Timer Widget

+

The timer widget is displayed in the top left corner, after the notifications bell icon. It will show up when you have an active timer running. Clicking on the timer widget will open the timers overview.

+

Alarm Widget

+

The alarm widget is displayed in the top left corner, after the timer widget. It will show up when you have an active alarm set. Clicking on the alarm widget will open the alarms overview.

+

Media Player Widget

+

The media player widget is displayed at the bottom of the home screen. It replaces the examples widget when a media player is active. The media player widget will show you the currently playing media and provide you with a quick way to pause, resume or skip the current media. You can also quickly access the media player by tapping the quick display media player button on the right side of the media player widget.

+

+

Configuration

+

Settings

+

The homescreen has several customizations available. This is a sample settings.json file with all of the options explained:

+
{
+    "__mycroft_skill_firstrun": false,
+    "weather_skill": "skill-weather.openvoiceos",
+    "datetime_skill": "skill-date-time.mycroftai",
+    "examples_skill": "ovos-skills-info.openvoiceos",
+    "wallpaper": "default.jpg",
+    "persistent_menu_hint": false,
+    "examples_enabled": true,
+    "randomize_examples": true,
+    "examples_prefix": true
+}
+
+
    +
  • __mycroft_skill_firstrun: This is automatically set on first load of skill
  • +
  • weather_skill: DEPRECATED and has no effect - PR pending
  • +
  • datetime_skill: Allows you to use a custom skill to display the date and time. Defaults to skill-ovos-date-time.openvoiceos
  • +
  • examples_skill: Allows use of a custom skill for the displayed examples. Defaults to ovos_skills_manager.utils.get_skills_example() function
  • +
  • wallpaper: Allows a custom wallpaper to be displayed. Use a complete url without any tilde ~
  • +
  • persistent_menu_hint: When true, displays a hint of the pull-down menu at the top of the page
  • +
  • examples_enabled: When false, the examples at the bottom of the screen will be hidden
  • +
  • randomize_examples: When false, the rotation of the examples will follow the way they are loaded
  • +
  • examples_prefix: When false, the prefix 'Ask Me' will NOT be displayed with the examples
  • +
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/config_oca/index.html b/unused/config_oca/index.html new file mode 100644 index 00000000..13f54cb2 --- /dev/null +++ b/unused/config_oca/index.html @@ -0,0 +1,345 @@ + + + + + + + + OCA - OVOS Config Assistant - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OCA - OVOS Config Assistant

+

OCA is a user-facing interface to configure OVOS devices

+

Web UI

+

OCA provides a local Web UI similar to ovos-backend-manager; here you can configure your device, view metrics, handle OAuth, and more

+

CLI

+

A command line interface is available using ovos-config

+

More Information

+

Python utils

+
from ovos_config_assistant.module_helpers import pprint_core_module_info
+pprint_core_module_info()
+"""
+## Mycroft module info
+     can import mycroft     : True
+     is ovos-core           : True
+     mycroft module location: /home/user/ovos-core/mycroft
+
+## Downstream ovos.conf overrides
+Module: neon_core
+     can import neon_core     : False
+     neon_core module location: None
+     xdg compliance            : True
+     base xdg folder           : neon
+     mycroft config filename   : neon.conf
+     default mycroft.conf path :
+          /home/user/NeonCore/neon_core/configuration/neon.conf
+Module: hivemind
+     can import hivemind     : False
+     hivemind module location: None
+     xdg compliance            : True
+     base xdg folder           : hivemind
+     mycroft config filename   : hivemind.conf
+     default mycroft.conf path :
+          /home/user/PycharmProjects/ovos_workspace/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf
+
+## Downstream module overrides:
+Module: neon_speech
+     uses config from   : neon_core
+     can import neon_speech     : False
+     neon_speech module location: None
+Module: neon_audio
+     uses config from   : neon_core
+     can import neon_audio     : False
+     neon_audio module location: None
+Module: neon_enclosure
+     uses config from   : neon_core
+     can import neon_enclosure     : False
+     neon_enclosure module location: None
+Module: hivemind_voice_satellite
+     uses config from   : hivemind
+     can import hivemind_voice_satellite     : True
+     hivemind_voice_satellite module location: /home/user/HiveMind-voice-sat/hivemind_voice_satellite
+"""
+
+from ovos_config_assistant.config_helpers import pprint_ovos_conf
+pprint_ovos_conf()
+"""
+## OVOS Configuration
+ ovos.conf exists          : True
+      /home/user/.config/OpenVoiceOS/ovos.conf
+ xdg compliance            : True
+ base xdg folder           : mycroft
+ mycroft config filename   : mycroft.conf
+ default mycroft.conf path :
+      /home/user/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf
+"""
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/config_ovos_audio/index.html b/unused/config_ovos_audio/index.html new file mode 100644 index 00000000..8848a596 --- /dev/null +++ b/unused/config_ovos_audio/index.html @@ -0,0 +1,471 @@ + + + + + + + + Config ovos audio - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Audio Service

+

The audio service is responsible for loading TTS and Audio plugins

+

All audio playback is handled by this service

+

Native playback

+

Usually playback is triggered by some originating bus message, e.g. "recognizer_loop:utterance"; this message contains metadata that is used to determine if playback should happen.

+

message.context may contain a source and a destination; playback is only triggered if the message destination is a native_source, or if the destination is missing (which is considered a broadcast).

+

This separation of native sources allows remote clients such as an android app to interact with OVOS without the actual device where ovos-core is running repeating all TTS and music playback out loud

+

You can learn more about message targeting here

+

By default, only utterances originating from the speech client and ovos cli are considered native

+

For legacy reasons, the names for the ovos cli and the speech client are "debug_cli" and "audio", respectively

+

TTS

+

Two TTS plugins may be loaded at once; if the primary plugin fails for some reason, the second plugin will be used.

+

This allows you to have a lower-quality offline voice as a fallback to account for internet outages, ensuring your device can always give you feedback

+
"tts": {
+    "pulse_duck": false,
+    "module": "ovos-tts-plugin-mimic2",
+    "fallback_module": "ovos-tts-plugin-mimic"
+},
+
+

Audio

+

You can enable additional Audio plugins and define the native sources described above under the "Audio" section of mycroft.conf

+

ovos-core uses OCP natively for media playback, you can learn more about OCP here

+

OCP will decide when to call the Audio service and what plugin to use; the main use case is for headless setups without a GUI

+

NOTE: mycroft-core has a "default-backend" config option, in ovos-core this option has been deprecated and is always OCP.

+
"Audio": {
+    "native_sources": ["debug_cli", "audio"],
+
+    "backends": {
+      "OCP": {
+        "type": "ovos_common_play",
+        "active": true
+      },
+      "simple": {
+        "type": "ovos_audio_simple",
+        "active": true
+      },
+      "vlc": {
+        "type": "ovos_vlc",
+        "active": true
+      }
+    }
+},
+
+

Configuration

+ +

Reading Configuration

+

ovos_config.config.Configuration is a singleton that loads a single config +object. The configuration files loaded are determined by ovos.conf as described below and can be in either json or +yaml format.

+

If Configuration() is called, the following configs will be loaded in this order:

+
    +
  • {core-path}/configuration/mycroft.conf
  • +
  • os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/mycroft/mycroft.conf
  • +
  • os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH/neon/web_cache.json
  • +
  • ~/.mycroft/mycroft.conf (Deprecated)
  • +
  • XDG_CONFIG_DIRS + /mycroft/mycroft.conf
  • +
  • /etc/xdg/mycroft/mycroft.conf
  • +
  • XDG_CONFIG_HOME (default ~/.config) + /mycroft/mycroft.conf
  • +
+

When the configuration loader starts, it looks in these locations in this order and loads ALL configurations. Keys that exist in multiple configuration files will be overridden by the last file to contain the value. This process results in a minimal amount of configuration being written for a specific device and user, without modifying default distribution files.

+
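
As a quick sketch (assuming the ovos-config package is installed), the merged result can be inspected from Python:

+
from ovos_config import Configuration
+
+config = Configuration()  # dict-like view of all merged config files
+print(config["lang"])           # e.g. "en-us"
+print(config["tts"]["module"])  # the configured TTS plugin
+
+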

Configuring Configuration

+

There are a couple of special configuration keys that change the way the configuration stack loads.

+
    +
  • Default config refers to the config specified at default_config_path in ovos.conf (#1 {core-path}/configuration/mycroft.conf in the stack above).
  • +
  • System config refers to the config at /etc/{base_folder}/{config_filename} (#2 /etc/mycroft/mycroft.conf in the stack above).
  • +
+

protected_keys

+

A "protected_keys" configuration section may be added to a Default or System Config file +(default /etc/mycroft/mycroft.conf). This configuration section specifies +other configuration keys that may not be specified in remote or user configurations. +Keys may specify nested parameters with . to exclude specific keys within nested dictionaries. +An example config could be:

+
{
+  "protected_keys": {
+    "remote": [
+      "gui_websocket.host",
+      "websocket.host"
+    ],
+    "user": [
+      "gui_websocket.host"
+    ]
+  }
+}
+
+

This example specifies that config['gui_websocket']['host'] may be specified in user configuration, but not remote. config['websocket']['host'] may not be specified in user or remote config, so only the default and system configurations will be considered.

+

disable_user_config

+

If this config parameter is set to True in the Default or System configuration, no user configurations will be loaded (no XDG configuration paths).

+

disable_remote_config

+

If this config parameter is set to True in the Default or System configuration, the remote configuration (web_cache.json) will not be loaded.

+

Meta Configuration

+

While mycroft.conf configures the voice assistant, ovos.conf configures the library.

+

What this means is that ovos.conf decides which files are loaded by the Configuration class described above; as an end user or skill developer you should never have to worry about this.

+

All XDG paths across OpenVoiceOS packages are built taking ovos.conf into consideration.

+

This feature is what allows downstream voice assistants such as neon-core to change their config files to neon.yaml.

+

Using the above example, if Configuration() is called from neon-core, the following configs would be loaded in this order:

+
    +
  • {core-path}/configuration/neon.yaml
  • +
  • os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/neon/neon.yaml
  • +
  • os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH/neon/web_cache.json
  • +
  • ~/.neon/neon.yaml (Deprecated)
  • +
  • XDG_CONFIG_DIRS + /neon/neon.yaml
  • +
  • /etc/xdg/neon/neon.yaml
  • +
  • XDG_CONFIG_HOME (default ~/.config) + /neon/neon.yaml
  • +
+

ovos.conf

+

The ovos_config package determines which config files to load based on ovos.conf. get_ovos_config will return default values that load mycroft.conf unless otherwise configured.

+
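
A minimal sketch of inspecting this metadata (assuming the ovos-config package, where get_ovos_config lives in ovos_config.meta):

+
from ovos_config.meta import get_ovos_config
+
+ovos = get_ovos_config()
+print(ovos["base_folder"])      # "mycroft" unless a downstream assistant overrides it
+print(ovos["config_filename"])  # "mycroft.conf" by default
+
+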

ovos.conf files are loaded in the following order, with later files taking priority over earlier ones in the list:

+
    +
  • /etc/OpenVoiceOS/ovos.conf
  • +
  • /etc/mycroft/ovos.conf (Deprecated)
  • +
  • XDG_CONFIG_DIRS + /OpenVoiceOS/ovos.conf
  • +
  • /etc/xdg/OpenVoiceOS/ovos.conf
  • +
  • XDG_CONFIG_HOME (default ~/.config) + /OpenVoiceOS/ovos.conf
  • +
+

A simple ovos.conf should have a structure like:

+
{
+  "base_folder": "mycroft",
+  "config_filename": "mycroft.conf",
+  "default_config_path": "<Absolute Path to Installed Core>/configuration/mycroft.conf",
+  "module_overrides": {},
+  "submodule_mappings": {}
+}
+
+
+

Note: default_config_path should always be an absolute path. This is generally detected automatically, but any manual override must specify an absolute path to a json or yaml config file.

+
+

Non-Mycroft modules may specify alternate config paths. A call to get_ovos_config from neon_core or neon_messagebus will return a configuration like:

+
{
+  "base_folder": "neon",
+  "config_filename": "neon.yaml",
+  "default_config_path": "/etc/example/config/neon.yaml",
+  "module_overrides": {
+    "neon_core": {
+      "base_folder": "neon",
+      "config_filename": "neon.yaml",
+      "default_config_path": "/etc/example/config/neon.yaml"
+    }
+  },
+  "submodule_mappings": {
+    "neon_messagebus": "neon_core",
+    "neon_speech": "neon_core",
+    "neon_audio": "neon_core",
+    "neon_gui": "neon_core"
+  }
+}
+
+

If get_ovos_config was called from mycroft with the same configuration file as the last example, the returned configuration would be:

+
{
+  "base_folder": "mycroft",
+  "config_filename": "mycroft.conf",
+  "default_config_path": "<Path to Installed Core>/configuration/mycroft.conf",
+  "module_overrides": {
+    "neon_core": {
+      "base_folder": "neon",
+      "config_filename": "neon.yaml",
+      "default_config_path": "/etc/example/config/neon.yaml"
+    }
+  },
+  "submodule_mappings": {
+    "neon_messagebus": "neon_core",
+    "neon_speech": "neon_core",
+    "neon_audio": "neon_core",
+    "neon_gui": "neon_core"
+  }
+}
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/config_ovos_core/index.html b/unused/config_ovos_core/index.html new file mode 100644 index 00000000..04f5cbc3 --- /dev/null +++ b/unused/config_ovos_core/index.html @@ -0,0 +1,437 @@ + + + + + + + + Configuration - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Configuration

+

ovos-core reads from several config files and is able to combine them into one main configuration to be used by all of the OVOS modules.

+

The default configuration is located at <python install directory>/site-packages/ovos-config/mycroft/mycroft.conf.

+

In this file, you can see all of the available configuration values and an explanation of their use.

+

The OVOS images also include a file at /etc/mycroft/mycroft.conf, and values set there will override the default values.

+

DO NOT EDIT THESE FILES. They contain the default values and will be overwritten on an update.

+

Next, OVOS checks for a file at ~/.config/mycroft/web_cache.json. This file contains values retrieved from a remote server and overrides the previous two files. It should also NOT be edited, as it too will be overwritten.

+

The user configuration file is located at ~/.config/mycroft/mycroft.conf. This is the file that you should use to change default values to custom ones. When this document says to "add this to config", this is the file that should be modified.

+

This file needs to be a valid json or yaml file. OVOS knows how to handle both.

+ +
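
As a hedged sketch, you can sanity-check that your user config parses before restarting any services (assuming PyYAML is available for the yaml case):

+
import json
+from pathlib import Path
+
+import yaml
+
+path = Path.home() / ".config/mycroft/mycroft.conf"
+text = path.read_text()
+
+# mycroft.conf may be written as JSON or YAML; try JSON first, then YAML
+try:
+    config = json.loads(text)
+except json.JSONDecodeError:
+    config = yaml.safe_load(text)
+
+print(config)
+
+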

Reading Configuration

+

ovos_config.config.Configuration is a singleton that loads a single config object. The configuration files loaded are determined by ovos.conf as described below and can be in either json or yaml format.

+

If Configuration() is called, the following configs will be loaded in this order:

+
    +
  • {core-path}/configuration/mycroft.conf
  • +
  • os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/mycroft/mycroft.conf
  • +
  • os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH/neon/web_cache.json
  • +
  • ~/.mycroft/mycroft.conf (Deprecated)
  • +
  • XDG_CONFIG_DIRS + /mycroft/mycroft.conf
  • +
  • /etc/xdg/mycroft/mycroft.conf
  • +
  • XDG_CONFIG_HOME (default ~/.config) + /mycroft/mycroft.conf
  • +
+

When the configuration loader starts, it looks in these locations in this order and loads ALL configurations. Keys that exist in multiple configuration files will be overridden by the last file to contain the value. This process results in a minimal amount of configuration being written for a specific device and user, without modifying default distribution files.

+
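
For example, a minimal sketch (assuming the ovos-config package) showing that the merged view reflects whichever file defined a key last:

+
from ovos_config import Configuration
+
+config = Configuration()  # merged view of the whole stack above
+# if your user config sets "tts"/"module", that value wins over the defaults
+print(config.get("tts", {}).get("module"))
+
+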

Configuring Configuration

+

There are a couple of special configuration keys that change the way the configuration stack loads.

+
    +
  • Default config refers to the config specified at default_config_path in ovos.conf (#1 {core-path}/configuration/mycroft.conf in the stack above).
  • +
  • System config refers to the config at /etc/{base_folder}/{config_filename} (#2 /etc/mycroft/mycroft.conf in the stack above).
  • +
+

protected_keys

+

A "protected_keys" configuration section may be added to a Default or System Config file +(default /etc/mycroft/mycroft.conf). This configuration section specifies +other configuration keys that may not be specified in remote or user configurations. +Keys may specify nested parameters with . to exclude specific keys within nested dictionaries. +An example config could be:

+
{
+  "protected_keys": {
+    "remote": [
+      "gui_websocket.host",
+      "websocket.host"
+    ],
+    "user": [
+      "gui_websocket.host"
+    ]
+  }
+}
+
+

This example specifies that config['gui_websocket']['host'] may be specified in user configuration, but not remote. config['websocket']['host'] may not be specified in user or remote config, so only the default and system configurations will be considered.

+

disable_user_config

+

If this config parameter is set to True in the Default or System configuration, no user configurations will be loaded (no XDG configuration paths).

+

disable_remote_config

+

If this config parameter is set to True in the Default or System configuration, the remote configuration (web_cache.json) will not be loaded.

+

Meta Configuration

+

While mycroft.conf configures the voice assistant, ovos.conf configures the library.

+

What this means is that ovos.conf decides which files are loaded by the Configuration class described above; as an end user or skill developer you should never have to worry about this.

+

All XDG paths across OpenVoiceOS packages are built taking ovos.conf into consideration.

+

This feature is what allows downstream voice assistants such as neon-core to change their config files to neon.yaml.

+

Using the above example, if Configuration() is called from neon-core, the following configs would be loaded in this order:

+
    +
  • {core-path}/configuration/neon.yaml
  • +
  • os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/neon/neon.yaml
  • +
  • os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH/neon/web_cache.json
  • +
  • ~/.neon/neon.yaml (Deprecated)
  • +
  • XDG_CONFIG_DIRS + /neon/neon.yaml
  • +
  • /etc/xdg/neon/neon.yaml
  • +
  • XDG_CONFIG_HOME (default ~/.config) + /neon/neon.yaml
  • +
+

ovos.conf

+

The ovos_config package determines which config files to load based on ovos.conf. get_ovos_config will return default values that load mycroft.conf unless otherwise configured.

+

ovos.conf files are loaded in the following order, with later files taking priority over earlier ones in the list:

+
    +
  • /etc/OpenVoiceOS/ovos.conf
  • +
  • /etc/mycroft/ovos.conf (Deprecated)
  • +
  • XDG_CONFIG_DIRS + /OpenVoiceOS/ovos.conf
  • +
  • /etc/xdg/OpenVoiceOS/ovos.conf
  • +
  • XDG_CONFIG_HOME (default ~/.config) + /OpenVoiceOS/ovos.conf
  • +
+

A simple ovos.conf should have a structure like:

+
{
+  "base_folder": "mycroft",
+  "config_filename": "mycroft.conf",
+  "default_config_path": "<Absolute Path to Installed Core>/configuration/mycroft.conf",
+  "module_overrides": {},
+  "submodule_mappings": {}
+}
+
+
+

Note: default_config_path should always be an absolute path. This is generally detected automatically, but any manual override must specify an absolute path to a json or yaml config file.

+
+

Non-Mycroft modules may specify alternate config paths. A call to get_ovos_config from neon_core or neon_messagebus will return a configuration like:

+
{
+  "base_folder": "neon",
+  "config_filename": "neon.yaml",
+  "default_config_path": "/etc/example/config/neon.yaml",
+  "module_overrides": {
+    "neon_core": {
+      "base_folder": "neon",
+      "config_filename": "neon.yaml",
+      "default_config_path": "/etc/example/config/neon.yaml"
+    }
+  },
+  "submodule_mappings": {
+    "neon_messagebus": "neon_core",
+    "neon_speech": "neon_core",
+    "neon_audio": "neon_core",
+    "neon_gui": "neon_core"
+  }
+}
+
+

If get_ovos_config was called from mycroft with the same configuration file as the last example, the returned configuration would be:

+
{
+  "base_folder": "mycroft",
+  "config_filename": "mycroft.conf",
+  "default_config_path": "<Path to Installed Core>/configuration/mycroft.conf",
+  "module_overrides": {
+    "neon_core": {
+      "base_folder": "neon",
+      "config_filename": "neon.yaml",
+      "default_config_path": "/etc/example/config/neon.yaml"
+    }
+  },
+  "submodule_mappings": {
+    "neon_messagebus": "neon_core",
+    "neon_speech": "neon_core",
+    "neon_audio": "neon_core",
+    "neon_gui": "neon_core"
+  }
+}
+
+

Configure Skills

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/config_security/index.html b/unused/config_security/index.html new file mode 100644 index 00000000..d3d90e23 --- /dev/null +++ b/unused/config_security/index.html @@ -0,0 +1,307 @@ + + + + + + + + OpenVoiceOS Security - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Security

+

Securing SSH

+

Most of our guides have you create a user called ovos with a password of ovos. While this makes installation easy, it's VERY insecure. As soon as possible, you should secure SSH using a key and disable password authentication.

+

When connecting from a Linux or macOS client

+

Create a keyfile (you can change ovos to whatever you want):

+
ssh-keygen -t ed25519 -f ~/.ssh/ovos
+
+

Copy it to the host (use the same filename as above, and specify the user and hostname you are using):

+
ssh-copy-id -i ~/.ssh/ovos  ovos@mycroft
+
+

On your desktop, edit ~/.ssh/config and add the following lines:

+
Host mycroft
+  User ovos
+  IdentityFile ~/.ssh/ovos
+
+

On your ovos system, edit /etc/ssh/sshd_config and add or uncomment the following line:

+
PasswordAuthentication no
+
+

Restart sshd or reboot:

+
sudo systemctl restart sshd
+
+

Message Bus Security

+

Anything connected to the bus can fully control OVOS, and OVOS usually has full control over the whole system!

+

You can read more about the security issues over at Nhoya/MycroftAI-RCE

+

In mycroft-core all skills share a bus connection; this allows malicious skills to manipulate it and affect other skills.

+

You can see a demonstration of this problem with BusBrickerSkill.

+

"shared_connection": false ensures each skill gets its own websocket connection and avoids this problem

+

Additionally, it is recommended you set "host": "127.0.0.1"; this ensures no outside connections are allowed.

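+

A hedged sketch to verify your effective bus settings after editing mycroft.conf (assuming the ovos-config package; the exact location of the shared_connection key may vary by core version):

+
from ovos_config import Configuration
+
+ws = Configuration().get("websocket", {})
+print(ws.get("host"))               # 127.0.0.1 keeps the bus local-only
+print(ws.get("shared_connection"))  # False gives each skill its own connection
+
+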
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/config_skillls/index.html b/unused/config_skillls/index.html new file mode 100644 index 00000000..3e826d4e --- /dev/null +++ b/unused/config_skillls/index.html @@ -0,0 +1,290 @@ + + + + + + + + Skill Settings - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Skill Settings

+

Each skill will have its own settings file, usually located at ~/.local/share/mycroft/skills/<skill_id>.

+

Skill settings provide the ability for users to configure a Skill using the command line or a web-based interface.

+

This is often used to:

+
    +
  • Change default behaviors - such as the sound used for a user's alarms.
  • +
  • Authenticate with external services - such as Spotify
  • +
  • Enter longer data as text rather than by voice - such as the IP address of the user's Home Assistant server.
  • +
+

Skill settings are completely optional.

+

Refer to each skill repository for valid configuration values.

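+

As a minimal sketch of how a skill reads these values at runtime (the "alarm_sound" key and its default are made up for this example; real keys vary per skill):

+
from ovos_workshop.skills import OVOSSkill
+
+
+class MySkill(OVOSSkill):
+    def initialize(self):
+        # self.settings is a dict-like view of this skill's settings file;
+        # using .get() with a default keeps the skill working when unset
+        alarm_sound = self.settings.get("alarm_sound", "default-alarm.wav")
+        self.log.info("alarm sound set to %s", alarm_sound)
+
+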
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/config_wake_word/index.html b/unused/config_wake_word/index.html new file mode 100644 index 00000000..610fad30 --- /dev/null +++ b/unused/config_wake_word/index.html @@ -0,0 +1,417 @@ + + + + + + + + Wake Word Plugins - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Wake Word Plugins

+

WakeWord plugins classify audio and report if a certain word or sound is present or not.

+

These plugins usually correspond to the name of the voice assistant, e.g. "hey mycroft", but can also be used for other purposes.

+

Unlike the original Mycroft assistant, OVOS supports multiple wakewords in any combination of engines.

+


+

List of Wake Word plugins

+
Plugin                           Type
ovos-ww-plugin-pocketsphinx      phonemes
ovos-ww-plugin-vosk              text samples
ovos-ww-plugin-snowboy           model
ovos-ww-plugin-precise           model
ovos-ww-plugin-precise-lite      model
ovos-ww-plugin-nyumaya           model
ovos-ww-plugin-nyumaya-legacy    model
neon_ww_plugin_efficientwordnet  model
mycroft-porcupine-plugin         model
ovos-ww-plugin-hotkeys           keyboard
+

Overview of Most Common Plugins

+

The default wake words for OVOS generally use one of two different plugins, Precise-lite (referred to here as Precise) or Vosk. Precise is typically the more accurate of the two because it is trained on recordings and uses an ML model. Vosk translates sounds to phonemes and will generally be more sensitive and prone to error.

+

Vosk

+

The primary benefit of Vosk wakewords is that they require no training or downloaded models. You can simply configure the wakeword and it will work. The downside is that you will usually get many false wakes, especially with short and common phonemes. Something like "Hey Neon" will trigger almost every time the "ee" sound is pronounced in English, while "Hey Ziggy" is much less likely to trigger because the phonemes are less common.

+

Note that Vosk wakewords consume a large amount of memory. Configuring multiple Vosk wakewords on a device with limited memory, like the Mycroft Mark 2, can cause performance issues.

+

To create a Vosk wakeword on your OVOS device, open the user configuration (defaults to ~/.config/mycroft/mycroft.conf) in your text editor of choice and add the following lines. This will enable wakewords for both "Hey Neon" and "Hey Ziggy".

+
"hotwords": {
+  "hey_neon": {
+    "module": "ovos-ww-plugin-vosk",
+    "active": true,
+    "listen": true,
+    "sound": "snd/start_listening.wav",
+    "debug": false,
+    "rule": "fuzzy",
+    "lang": "en",
+    "samples": ["hey neon"]
+  },
+  "hey_ziggy": {
+    "module": "ovos-ww-plugin-vosk",
+    "listen": true,
+    "active": true,
+    "sound": "snd/start_listening.wav",
+    "debug": false,
+    "rule": "fuzzy",
+    "lang": "en",
+    "samples": ["hey ziggy", "hay ziggy"]
+  }
+}
+
+

If you already have a hotwords section in your user configuration, the first and last lines are not necessary. The most important setting is "active": true, which tells the assistant to use the wakeword. If you want to disable a wakeword, you can set this to false. When enabling a wakeword, be sure to also set "listen": true.

+

Another important setting is "debug": true, which will print the phonemes to the logs when the wakeword is triggered. This can be useful for debugging issues. It can also tell you what combinations the speech-to-text engine is picking up when you try to activate it, so you can add them to the samples array.

+

Those are two common default wakewords. You can also create a completely custom wakeword as follows:

+
"hotwords": {
+  "k9": {
+    "module": "ovos-ww-plugin-vosk",
+    "active": true,
+    "listen": true,
+    "sound": "snd/start_listening.wav",
+    "debug": true,
+    "rule": "fuzzy",
+    "lang": "en",
+    "samples": ["k9", "k 9", "kay nine", "k nine", "kay nein", "k nein"]
+  }
+}
+
+

OVOS community members have used Vosk for very creative wakewords. Please feel free to share your custom wakewords in the OVOS Matrix chat!

+

Precise-Lite (Precise)

+

NOTE: The original Precise engine is not actively maintained and is not recommended for new installations. Precise-Lite is a fork of Precise that is actively maintained. Please use that instead.

+

Precise-Lite wakewords require a pre-trained .tflite model to operate. OVOS maintains several pre-trained models of commonly requested wakewords. To use them, try this configuration:

+
"hotwords": {
+  "computer": {
+    "module": "ovos-ww-plugin-precise-lite",
+    "model": "https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/computer.tflite",
+    "active": true,
+    "listen": true,
+    "sound": "snd/start_listening.wav",
+    "expected_duration": 3,
+    "trigger_level": 3,
+    "sensitivity": 0.5
+   }
+}
+
+

Your OVOS device will automatically download the model if it isn't already on the device.

+

OVOS maintains the following models:

+
    +
  • android: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/android.tflite
  • +
  • computer: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/computer.tflite
  • +
  • hey_chatterbox: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_chatterbox.tflite
  • +
  • hey_firefox: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_firefox.tflite
  • +
  • hey_k9: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_k9.tflite
  • +
  • hey_kit: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_kit.tflite
  • +
  • hey_moxie: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_moxie.tflite
  • +
  • hey_mycroft: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_mycroft.tflite
  • +
  • hey_scout: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/hey_scout.tflite
  • +
  • marvin: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/marvin.tflite
  • +
  • o_sauro: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/o_sauro.tflite
  • +
  • sheila: https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/sheila.tflite
  • +
+

To use them, replace the model section with the link to the model you want to use. Then replace the key with the name of the model, e.g. instead of computer use android or marvin or whichever model you chose.

+

Community Precise Wakeword Models

+

In addition to the Precise wakeword models that OVOS maintains, the community has created many more models, and additional model requests are welcome! If you have a model you would like to see created, please open an issue with the name of the wakeword. The OVOS team will generate some synthetic samples and add it to the list of models to be created.

+

These synthetic models perform fairly well out of the box, but always work better with community-contributed recordings. Please see the README on the repo above for instructions on how to contribute recordings, and consider contributing to as many as you can!

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/context/index.html b/unused/context/index.html new file mode 100644 index 00000000..fab391d6 --- /dev/null +++ b/unused/context/index.html @@ -0,0 +1,408 @@ + + + + + + + + Conversational Context - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Conversational Context

+

NOTE: Conversational context is currently only available with the Adapt Intent Parser, and is not yet available for Padatious

+
+

How tall is John Cleese?

+
+

"John Cleese is 196 centimeters"

+
+

Where's he from?

+
+

"He's from England"

+

Context is added manually by the Skill creator using either the self.set_context() method or the @adds_context() decorator.

+

Consider the following intent handlers:

+
    @intent_handler(IntentBuilder().require('PythonPerson').require('Length'))
+    def handle_length(self, message):
+        python = message.data.get('PythonPerson')
+        self.speak('{} is {} cm tall'.format(python, length_dict[python]))
+
+    @intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
+    def handle_from(self, message):
+        python = message.data.get('PythonPerson')
+        self.speak('{} is from {}'.format(python, from_dict[python]))
+
+

To interact with the above handlers the user would need to say

+
User: How tall is John Cleese?
+Mycroft: John Cleese is 196 centimeters
+User: Where is John Cleese from?
+Mycroft: He's from England
+
+

To get a more natural response the functions can be changed to let OVOS know which PythonPerson we're talking about by using the self.set_context() method to give context:

+
    @intent_handler(IntentBuilder().require('PythonPerson').require('Length'))
+    def handle_length(self, message):
+        # PythonPerson can be any of the Monty Python members
+        python = message.data.get('PythonPerson')
+        self.speak('{} is {} cm tall'.format(python, length_dict[python]))
+        self.set_context('PythonPerson', python)
+
+    @intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
+    def handle_from(self, message):
+        # PythonPerson can be any of the Monty Python members
+        python = message.data.get('PythonPerson')
+        self.speak('He is from {}'.format(from_dict[python]))
+        self.set_context('PythonPerson', python)
+
+

When either of these methods is called, the PythonPerson keyword is added to OVOS's context, which means that if there is a match with Length but PythonPerson is missing, OVOS will assume the last mention of that keyword. The interaction can now become the one described at the top of the page.

+
+

User: How tall is John Cleese?

+
+

OVOS detects the Length keyword and the PythonPerson keyword

+
+

OVOS: 196 centimeters

+
+

John Cleese is added to the current context

+
+

User: Where's he from?

+
+

OVOS detects the WhereFrom keyword but no PythonPerson keyword. The Context Manager is activated and returns the latest entry of PythonPerson, which is John Cleese.

+
+

OVOS: He's from England

+
+

The context isn't limited to the keywords provided by the current Skill. For example:

+
    @intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
+    def handle_from(self, message):
+        # PythonPerson can be any of the Monty Python members
+        python = message.data.get('PythonPerson')
+        self.speak('He is from {}'.format(from_dict[python]))
+        self.set_context('PythonPerson', python)
+        self.set_context('Location', from_dict[python])
+
+

Enables conversations with other Skills as well.

+
User: Where is John Cleese from?
+Mycroft: He's from England
+User: What's the weather like over there?
+Mycroft: Raining and 14 degrees...
+
+

Using context to enable Intents

+

To make sure certain Intents can't be triggered unless some previous stage in a conversation has occurred. Context can be used to create "bubbles" of available intent handlers.

+
User: Hey Mycroft, bring me some Tea
+Mycroft: Of course, would you like Milk with that?
+User: No
+Mycroft: How about some Honey?
+User: All right then
+Mycroft: Here you go, here's your Tea with Honey
+
+
from mycroft.skills.context import adds_context, removes_context
+
+class TeaSkill(MycroftSkill):
+    @intent_handler(IntentBuilder('TeaIntent').require("TeaKeyword"))
+    @adds_context('MilkContext')
+    def handle_tea_intent(self, message):
+        self.milk = False
+        self.speak('Of course, would you like Milk with that?',
+                   expect_response=True)
+
+    @intent_handler(IntentBuilder('NoMilkIntent').require("NoKeyword").
+                                  require('MilkContext').build())
+    @removes_context('MilkContext')
+    @adds_context('HoneyContext')
+    def handle_no_milk_intent(self, message):
+        self.speak('all right, any Honey?', expect_response=True)
+
+    @intent_handler(IntentBuilder('YesMilkIntent').require("YesKeyword").
+                                  require('MilkContext').build())
+    @removes_context('MilkContext')
+    @adds_context('HoneyContext')
+    def handle_yes_milk_intent(self, message):
+        self.milk = True
+        self.speak('What about Honey?', expect_response=True)
+
+    @intent_handler(IntentBuilder('NoHoneyIntent').require("NoKeyword").
+                                  require('HoneyContext').build())
+    @removes_context('HoneyContext')
+    def handle_no_honey_intent(self, message):
+        if self.milk:
+            self.speak("Here's your Tea with a dash of Milk")
+        else:
+            self.speak("Here's your Tea, straight up")
+
+    @intent_handler(IntentBuilder('YesHoneyIntent').require("YesKeyword").
+                                require('HoneyContext').build())
+    @removes_context('HoneyContext')
+    def handle_yes_honey_intent(self, message):
+        if self.milk:
+            self.speak("Here's your Tea with Milk and Honey")
+        else:
+            self.speak("Here's your Tea with Honey")
+
+

When starting up, only the TeaIntent will be available. Once that has been triggered and MilkContext has been added, the YesMilkIntent and NoMilkIntent become available since the MilkContext is set. When a yes or no is received, the MilkContext is removed and can't be accessed. In its place the HoneyContext is added, making the YesHoneyIntent and NoHoneyIntent available.

+

You can find an example Tea Skill using conversational context on Github.

+

As you can see, Conversational Context lends itself well to implementing a dialog tree or conversation tree.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/contributing/index.html b/unused/contributing/index.html new file mode 100644 index 00000000..3aaa92c9 --- /dev/null +++ b/unused/contributing/index.html @@ -0,0 +1,290 @@ + + + + + + + + Contributing to OpenVoiceOS - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Contributing to OpenVoiceOS

+

If this is your first PR, or you're not sure where to get started, say hi in OpenVoiceOS Chat and a team member will be happy to mentor you. Join the Discussions for questions and answers.

+ + + +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/converse/index.html b/unused/converse/index.html new file mode 100644 index 00000000..bda79080 --- /dev/null +++ b/unused/converse/index.html @@ -0,0 +1,333 @@ + + + + + + + + Converse - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Converse

+

Each Skill may define a converse() method. This method will be called anytime the Skill has been recently active and a new utterance is processed.

+

The converse method expects a single argument which is a standard Mycroft Message object. This is the same object an intent handler receives.

+

Converse methods must return a Boolean value. True if an utterance was handled, otherwise False.

+

Basic usage

+

Let's use a version of the Ice Cream Skill we've been building up and add a converse method to catch any brief statements of thanks that might directly follow an order.

+
from mycroft import MycroftSkill, intent_handler
+
+
+class IceCreamSkill(MycroftSkill):
+    def __init__(self):
+        MycroftSkill.__init__(self)
+        self.flavors = ['vanilla', 'chocolate', 'mint']
+
+    @intent_handler('request.icecream.intent')
+    def handle_request_icecream(self):
+        self.speak_dialog('welcome')
+        selection = self.ask_selection(self.flavors, 'what.flavor')
+        self.speak_dialog('coming-right-up', {'flavor': selection})
+
+    def converse(self, message):
+        if self.voc_match(message.data['utterances'][0], 'Thankyou'):
+            self.speak_dialog("you-are-welcome")
+            return True
+        return False
+
+
+def create_skill():
+    return IceCreamSkill()
+
+

In this example:

+
    +
  1. A User might request an ice cream which is handled by handle_request_icecream()
  2. +
  3. The Skill would be added to the system Active Skill list for up to 5 minutes.
  4. +
  5. Any utterance received by OVOS would trigger this Skill's converse method whilst it is considered active.
  6. +
  7. If the User followed up with a pleasantry such as "Hey Mycroft, thanks" - the converse method would match this vocab against the Thankyou.voc file in the Skill and speak the contents of the you-are-welcome.dialog file. The method would return True and the utterance would be consumed, meaning the intent parsing service would never be triggered.
  8. +
  9. Any utterance that did not match would be silently ignored and allowed to continue on to other converse methods and finally to the intent parsing service.
  10. +
+

Active Skill List

+

A Skill is considered active if it has been called in the last 5 minutes.

+

Skills are called in order of when they were last active. For example, if a user spoke the following commands:

+
+

Hey Mycroft, set a timer for 10 minutes

+

Hey Mycroft, what's the weather

+
+

Then the utterance "what's the weather" would first be sent to the Timer Skill's converse() method, then to the intent service for normal handling where the Weather Skill would be called.

+

As the Weather Skill was called it has now been added to the front of the Active Skills List. Hence, the next utterance received will be directed to:

+
    +
  1. WeatherSkill.converse()
  2. +
  3. TimerSkill.converse()
  4. +
  5. Normal intent parsing service
  6. +
+

Making a Skill Active

+

There are occasions where a Skill has not been triggered by the User, but it should still be considered "Active".

+

In the case of our Ice Cream Skill - we might have a function that will execute when the customer's order is ready. At this point, we also want to be responsive to the customer's thanks, so we call self.make_active() to manually add our Skill to the front of the Active Skills List.

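+

A minimal sketch of that idea (the on_order_ready callback and the order-ready dialog are made up for this example):

+
from mycroft import MycroftSkill
+
+
+class IceCreamSkill(MycroftSkill):
+    def on_order_ready(self):
+        # hypothetical callback fired when an order is finished
+        self.speak_dialog('order-ready')
+        # push this Skill to the front of the Active Skills List so that
+        # converse() will catch the customer's thanks
+        self.make_active()
+
+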
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/core/index.html b/unused/core/index.html new file mode 100644 index 00000000..25ec68a8 --- /dev/null +++ b/unused/core/index.html @@ -0,0 +1,319 @@ + + + + + + + + ovos-core - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

ovos-core

+

OpenVoiceOS is an open source platform for smart speakers and other voice-centric devices.

+

OVOS-core is a backwards-compatible descendant of Mycroft-core, the central component of Mycroft. It contains extensions and features not present upstream.

+

All Mycroft Skills and Plugins should work normally with OVOS-core.

+

OVOS-core is fully modular. Furthermore, common components have been repackaged as plugins. That means it isn't just a great assistant on its own, but also a pretty small library!

+

Getting Started

+

ovos-core is very modular; depending on where you are running it, you may want to run only a subset of the services.

+

By default ovos-core only installs the minimum components common to all services; for the purposes of this document we will assume you want a full install.

+

If you want to fine-tune the components, replace [all] in the commands below with the subset of desired extras, e.g. [skills,bus].

+

Installing ovos-core

+

ovos-core can be installed from PyPI or from source.

+

If the install fails you may need to install some system dependencies; how to do this will depend on your distro.

+
sudo apt install build-essential python3-dev swig libssl-dev libfann-dev portaudio19-dev libpulse-dev
+
+

Note: MycroftAI's dev_setup.sh does not exist in OVOS-core.

+

from source

+

We suggest you do this in a virtualenv:

+

pip install "ovos-core[all] @ git+https://github.com/OpenVoiceOS/ovos-core"

+

from pypi

+

pip install ovos-core[all]

+

Running ovos-core

+

Developer launcher script

+

start-mycroft.sh is available to perform common tasks.

+

Assuming you installed ovos-core in your home directory, run:

+
    +
  • cd ~/ovos-core
  • +
  • ./start-mycroft.sh debug
  • +
+

The "debug" command will start the background services (microphone listener, skill, messagebus, and audio subsystems) as +well as bringing up a text-based Command Line Interface (CLI) you can use to interact with Mycroft and see the contents +of the various logs. Alternatively you can run ./start-mycroft.sh all to begin the services without the command line +interface. Later you can bring up the CLI using ./start-mycroft.sh cli.

+

The background services can be stopped as a group with:

+
    +
  • ./stop-mycroft.sh
  • +
+

Automatically on boot

+

We recommend you create system services to manage ovos instead of depending on the launcher script above.

+

A good explanation can be found at https://github.com/j1nx/mycroft-systemd.

+

A reference implementation can be found in ovos-buildroot

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/dev/index.html b/unused/dev/index.html new file mode 100644 index 00000000..be62c73a --- /dev/null +++ b/unused/dev/index.html @@ -0,0 +1,483 @@ + + + + + + + + Developer FAQ - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Developer FAQ

+ +

How do I know what is currently happening in the GUI?

+
from ovos_utils.gui import GUITracker
+from ovos_workshop.skills import OVOSSkill
+from mycroft import intent_handler
+
+
+class MyGUIEventTracker(GUITracker):
+    # GUI event handlers
+    # skill can/should subclass this
+
+    def on_idle(self, namespace):
+        print("IDLE", namespace)
+        timestamp = self.idle_ts
+
+    def on_active(self, namespace):
+        # NOTE: page has not been loaded yet
+        # event will fire right after this one
+        print("ACTIVE", namespace)
+        # check namespace values, they should all be set before this event
+        values = self.gui_values[namespace]
+
+    def on_new_page(self, page, namespace, index):
+        print("NEW PAGE", namespace, index, namespace)
+        # check all loaded pages
+        for n in self.gui_pages:  # list of named tuples
+            nspace = n.name  # namespace / skill_id
+            pages = n.pages  # ordered list of page uris
+
+    def on_gui_value(self, namespace, key, value):
+        # WARNING this will pollute logs quite a lot, and you will get
+        # duplicates, better to check values on a different event,
+        # demonstrated in on_active
+        print("VALUE", namespace, key, value)
+
+
+class MySkill(OVOSSkill): 
+    def initialize(self):
+        self.tracker = MyGUIEventTracker(bus=self.bus)
+
+    @intent_handler("gui.status.intent")
+    def handle_status_intent(self, message):
+        print("device has screen:", self.tracker.can_display())
+        print("mycroft-gui installed:", self.tracker.is_gui_installed())
+        print("gui connected:", self.tracker.is_gui_connected())
+        # TODO - speak or something
+
+    @intent_handler("list.idle.screens.intent")
+    def handle_idle_screens_intent(self, message):
+        # check registered idle screens
+        print("Registered idle screens:")
+        for name in self.tracker.idle_screens:
+            skill_id = self.tracker.idle_screens[name]
+            print("   - ", name, ":", skill_id)
+            # TODO - speak or something
+
+

How do I stop an intent mid execution?

+

Sometimes you want to abort a running intent immediately, and the stop method may not be enough in some circumstances. For this we provide a killable_intent decorator in ovos_workshop that can be used to abort a running intent on the spot.

+

A common use case is GUI interfaces where the same action may be performed by voice or by clicking buttons; in this case you may need to abort a running get_response loop.

+
from ovos_workshop.skills import OVOSSkill
+from ovos_workshop.decorators import killable_intent
+from mycroft import intent_handler
+from time import sleep
+
+
+class Test(OVOSSkill):
+    """
+    send "mycroft.skills.abort_question" and confirm only get_response is aborted
+    send "mycroft.skills.abort_execution" and confirm the full intent is aborted, except intent3
+    send "my.own.abort.msg" and confirm intent3 is aborted
+    say "stop" and confirm all intents are aborted
+    """
+    def __init__(self):
+        super(Test, self).__init__("KillableSkill")
+        self.my_special_var = "default"
+
+    def handle_intent_aborted(self):
+        self.speak("I am dead")
+        # handle any cleanup the skill might need, since intent was killed
+        # at an arbitrary place of code execution some variables etc. might
+        # end up in unexpected states
+        self.my_special_var = "default"
+
+    @killable_intent(callback=handle_intent_aborted)
+    @intent_handler("test.intent")
+    def handle_test_abort_intent(self, message):
+        self.my_special_var = "changed"
+        while True:
+            sleep(1)
+            self.speak("still here")
+
+    @intent_handler("test2.intent")
+    @killable_intent(callback=handle_intent_aborted)
+    def handle_test_get_response_intent(self, message):
+        self.my_special_var = "CHANGED"
+        ans = self.get_response("question", num_retries=99999)
+        self.log.debug("get_response returned: " + str(ans))
+        if ans is None:
+            self.speak("question aborted")
+
+    @killable_intent(msg="my.own.abort.msg", callback=handle_intent_aborted)
+    @intent_handler("test3.intent")
+    def handle_test_msg_intent(self, message):
+        if self.my_special_var != "default":
+            self.speak("someone forgot to cleanup")
+        while True:
+            sleep(1)
+            self.speak("you can't abort me")
+
+

How do I send files over the bus?

+

Sometimes you may want to send files or binary data over the messagebus; ovos_utils provides some tools to make this easy.

+

Sending a file

+
from ovos_utils.messagebus import send_binary_file_message, decode_binary_message
+from ovos_workshop.skills import OVOSSkill
+
+
+class MySkill(OVOSSkill): 
+    def initialize(self):
+        self.add_event("mycroft.binary.file", self.receive_file)
+
+    def receive_file(self, message):
+        print("Receiving file")
+        path = message.data["path"]  # file path, extract filename if needed
+        binary_data = decode_binary_message(message)
+        # TODO process data somehow
+
+    def send_file(self, my_file_path):
+        send_binary_file_message(my_file_path)
+
+

Sending binary data directly

+
from ovos_utils.messagebus import send_binary_data_message, decode_binary_message
+from ovos_workshop.skills import OVOSSkill
+
+
+class MySkill(OVOSSkill):
+    def initialize(self):
+        self.add_event("mycroft.binary.data", self.receive_binary)
+
+    def send_data(self, binary_data):
+        send_binary_data_message(binary_data)
+
+    def receive_binary(self, message):
+        print("Receiving binary data")
+        binary_data = decode_binary_message(message)
+         # TODO process data somehow
+
+

How do I manage remote skill settings?

+

To interact with skill settings via the DeviceApi:

+
from ovos_backend_client.settings import RemoteSkillSettings
+
+# in ovos-core skill_id is deterministic and safe
+s = RemoteSkillSettings("skill.author")
+# in mycroft-core please ensure a valid remote_id
+# in MycroftSkill class you can use
+# remote_id = self.settings_meta.skill_gid
+# s = RemoteSkillSettings("skill.author", remote_id="@|whatever_msm_decided")
+s.download()
+
+s.settings["existing_value"] = True
+s.settings["new_value"] = "will NOT show up in UI"
+s.upload()
+
+# auto generate new settings meta for all new values before uploading
+s.settings["new_value"] = "will show up in UI"
+s.generate_meta()  # now "new_value" is in meta
+s.upload()
+
+
+
+

How do I share data between devices?

+

By hijacking skill settings we allow storing arbitrary data via the DeviceApi and using it across devices and skills.

+
from ovos_backend_client.cloud import SeleneCloud
+
+cloud = SeleneCloud()
+cloud.add_entry("test", {"secret": "NOT ENCRYPTED MAN"})
+data = cloud.get_entry("test")
+
+

An encrypted version is also supported if you don't trust the backend!

+
from ovos_backend_client.cloud import SecretSeleneCloud
+
+k = "D8fmXEP5VqzVw2HE"  # you need this to read back the data
+cloud = SecretSeleneCloud(k)
+cloud.add_entry("test", {"secret": "secret data, selene cant read this"})
+data = cloud.get_entry("test")
+
+

+

How do I use Geolocation backend services?

+
from ovos_backend_client.api import GeolocationApi
+
+geo = GeolocationApi()
+data = geo.get_geolocation("Lisbon Portugal")
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/dev_ocp_skill/index.html b/unused/dev_ocp_skill/index.html new file mode 100644 index 00000000..f4573b92 --- /dev/null +++ b/unused/dev_ocp_skill/index.html @@ -0,0 +1,437 @@ + + + + + + + + Developing OCP Skills - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Developing OCP Skills

+

OVOS Common Play (OCP) is a full-fledged media player, compatible with the MPRIS standard. Developing a skill for OCP is similar to writing any other OVOS-compatible skill except basic intents and playing media are handled for the developer. This documentation is a quick start guide for developers hoping to write an OCP skill.

+

General Steps

+
    +
  • Create a skill class extending the OCP base class
  • +
  • In the __init__ method indicate the media types you want to handle
  • +
  • self.voc_match(phrase, "skill_name") to handle specific requests for your skill
  • +
  • self.remove_voc(phrase, "skill_name") to remove matched phrases from the search request
  • +
  • Implement the ocp_search decorator, as many as you want (they run in parallel)
  • +
  • The decorated method can return a list or be an iterator of result_dict (track or playlist)
  • +
  • The search function can be entirely inline or call another Python library, like pandorinha or plexapi
  • +
  • self.extend_timeout() to not let OCP call for a Generic search too soon
  • +
  • Place one in each search function so it's extended every time the skill is called
  • +
  • Implement a confidence score formula
  • +
  • Values are between 0 and 100
  • +
  • High confidence scores cancel other OCP skill searches
  • +
  • ocp_featured_media, return a playlist for the OCP menu if selected from GUI
  • +
  • Create a requirements.txt file with third-party package requirements
  • +
  • Create a skills.json file for skill metadata
  • +
+

The general interface that OCP expects to receive looks something like the following:

+
class OVOSAudioTrack(TypedDict):
+    uri: str  # URL/URI of media, OCP will handle formatting and file handling
+    title: str
+    media_type: ovos_plugin_common_play.MediaType
+    playback: ovos_plugin_common_play.PlaybackType
+    match_confidence: int  # 0-100
+    album: str | None  # Parsed even for movies and TV shows
+    artist: str | None  # Parsed even for movies and TV shows
+    length: int | str | None  # in milliseconds, if present
+    image: str | None
+    bg_image: str | None
+    skill_icon: str | None  # Optional filename for skill icon
+    skill_id: str | None  # Optional ID of skill to distinguish where results came from
+
+

OCP Skill Template

+
from os.path import join, dirname
+
+from ovos_plugin_common_play.ocp import MediaType, PlaybackType
+from ovos_utils.parse import fuzzy_match
+from ovos_workshop.skills.common_play import OVOSCommonPlaybackSkill, \
+    ocp_search
+
+
+class MySkill(OVOSCommonPlaybackSkill):
+    def __init__(self, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+        self.supported_media = [MediaType.GENERIC,
+                                MediaType.MUSIC]   # <- these are the only media_types that will be sent to your skill
+        self.skill_icon = join(dirname(__file__), "ui", "pandora.jpeg")
+
+    # score
+    @staticmethod
+    def calc_score(phrase, match, base_score=0, exact=False):
+        # implement your own logic here, assign a value from 0 - 100 per result
+        if exact:
+            # this requires that the result is related
+            if phrase.lower() in match["title"].lower():
+                match["match_confidence"] = max(match["match_confidence"], 80)
+            elif phrase.lower() in match["artist"].lower():
+                match["match_confidence"] = max(match["match_confidence"], 85)
+            elif phrase.lower() == match["station"].lower():
+                match["match_confidence"] = max(match["match_confidence"], 70)
+            else:
+                return 0
+
+        score = base_score
+        title_score = 100 * fuzzy_match(phrase.lower(),
+                                        match["title"].lower())
+        artist_score = 100 * fuzzy_match(phrase.lower(),
+                                         match["artist"].lower())
+        if artist_score > 85:
+            score += artist_score * 0.85 + title_score * 0.15
+        elif artist_score > 70:
+            score += artist_score * 0.6 + title_score * 0.4
+        elif artist_score > 50:
+            score += title_score * 0.5 + artist_score * 0.5
+        else:
+            score += title_score * 0.8 + artist_score * 0.2
+        score = min((100, score))
+        return score
+
+    @ocp_search()
+    def search_my_skill(self, phrase, media_type=MediaType.GENERIC):
+        # match the request media_type
+        base_score = 0
+        if media_type == MediaType.MUSIC:
+            base_score += 10
+        else:
+            base_score -= 15  # some penalty for proof of concept
+
+        explicit_request = False
+        if self.voc_match(phrase, "mySkillNameVoc"):
+            # explicitly requested our skill
+            base_score += 50
+            phrase = self.remove_voc(phrase, "mySkillNameVoc")  # clean up search str
+            explicit_request = True
+            self.extend_timeout(1)  # we know our skill is slow, ask OCP for more time
+
+        for r in self.search_my_results(phrase):
+            yield {
+                "match_confidence": self.calc_score(phrase, r, base_score,
+                                                    exact=not explicit_request),
+                "media_type": MediaType.MUSIC,
+                "length": r["duration"] * 1000,  # seconds to milliseconds
+                "uri": r["uri"],
+                "playback": PlaybackType.AUDIO,
+                "image": r["image"],
+                "bg_image": r["bg_image"],
+                "skill_icon": self.skill_icon,
+                "title": r["title"],
+                "artist": r["artist"],
+                "album": r["album"],
+                "skill_id": self.skill_id
+            }
+
+
+

skill.json template

+
{
+  "title": "Plex OCP Skill",
+  "url": "https://github.com/d-mcknight/skill-plex",
+  "summary": "[OCP](https://github.com/OpenVoiceOS/ovos-ocp-audio-plugin) skill to play media from [Plex](https://plex.tv).",
+  "short_description": "[OCP](https://github.com/OpenVoiceOS/ovos-ocp-audio-plugin) skill to play media from [Plex](https://plex.tv).",
+  "description": "",
+  "examples": [
+    "Play Charles Mingus",
+    "Play Jamie Cullum on Plex",
+    "Play the movie Ghostbusters",
+    "Play the movie Ghostbusters on Plex",
+    "Play Star Trek the Next Generation on Plex",
+    "Play the tv show Star Trek the Next Generation on Plex"
+  ],
+  "desktopFile": false,
+  "warning": "",
+  "systemDeps": false,
+  "requirements": {
+    "python": ["plexapi~=4.13", "ovos-workshop~=0.0.11"],
+    "system": {},
+    "skill": []
+  },
+  "incompatible_skills": [],
+  "platforms": ["i386", "x86_64", "ia64", "arm64", "arm"],
+  "branch": "master",
+  "license": "BSD-3-Clause",
+  "icon": "https://freemusicarchive.org/legacy/fma-smaller.jpg",
+  "category": "Music",
+  "categories": ["Music", "Daily"],
+  "tags": ["music", "NeonAI", "NeonGecko Original", "OCP", "Common Play"],
+  "credits": ["NeonGeckoCom", "NeonDaniel"],
+  "skillname": "skill-plex",
+  "authorname": "d-mcknight",
+  "foldername": null
+}
+
+

Installing an OCP Skill

+

OCP Skills are installed like any other OVOS skill. The preferred pattern is to release a pip package for your OCP skill and install it directly, but skills may also be installed directly from any pip-supported source such as git+https://github.com/OpenVoiceOS/skill-ovos-youtube-music.

+

Once a skill has been installed, a restart of the mycroft-skills, ovos-skills, or neon-skills service will be required.

+

Need Help?

+

Say hi in OpenVoiceOS Chat and a team member would be happy to assist you.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/dinkum/index.html b/unused/dinkum/index.html new file mode 100644 index 00000000..f7dc22d7 --- /dev/null +++ b/unused/dinkum/index.html @@ -0,0 +1,331 @@ + + + + + + + + Dinkum - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Dinkum

+

What is Dinkum

+

The Mycroft Mark 2 shipped with a new version of mycroft called "dinkum"; this is a total overhaul of mycroft-core and is incompatible with it.

+

mycroft-core is now referred to as "Classic Core" by MycroftAI.

+

MycroftAI now provides what they call sandbox images. To add to the confusion, those only work on the Mark 2, and "Classic Core" means the mark-ii/latest branch of mycroft-core; this is a derivative version of the branch that was used in the dev kits (mark-ii/qa) and is also backwards incompatible. Changes in this branch were not done via PRs and had no review or community input.

+

Mark2 useful links:

+ +

Dinkum vs ovos-core

+

You can find Mycroft's guide to porting skills to dinkum at https://mycroft-ai.gitbook.io/mark-ii/differences-to-classic-core/porting-classic-core-skills.

+

The mark-ii/qa branch brought some changes to mycroft-core, not all of them backwards compatible and some of them contentious within the community.

+
    +
  • VAD - VAD has been added to the mark-ii, but it is hardcoded to silero; this feature has been adopted via OPM and is an important part of the ovos-core listening modes introduced in version 0.0.5
  • +
  • self.resources - resource file loading was overhauled; this feature has been improved (ovos-core/pull/130 ovos-core/pull/131 ovos-core/pull/135 ovos-core/pull/170) and ported to OVOS, and is also available in the OVOSSkill class (OVOS-workshop/pull/30) for usage in classic core
  • +
  • audio hal - audio playback was rewritten from scratch and audio plugin support has been removed; OVOS will not adopt this new approach but will keep improving the previous one
  • +
  • skill states - the converse method introduced skill states, which changed some core assumptions behind the converse method and active skills; OVOS will not adopt skill states, see the community discussion in mycroft-core/pull/2901 and mycroft-core/pull/2906
  • +
  • pure regex intents - pure regex intents have been introduced; we strongly recommend you use padatious instead if this is desired, as regex makes language support really hard; let the intent engines do their jobs
  • +
  • adapt fork - a fork of adapt is used in the mark2; it introduces the exactly and excludes methods. excludes will be added upstream in adapt/pull/156. Any skill using these new methods will be incompatible with most core versions
  • +
  • activities - an activity is just a set of bus messages to indicate something started and ended; it is a reimplementation of an already existing feature, and in ovos we use the native events from the self.add_event skill method
  • +
+

Dinkum contains all the changes above and also brings further changes to the table:

+
    +
  • sessions - in dinkum session handling is done by skills; it completely ignores the message.context mechanism and the existing session_id. In ovos we believe the session should come in the message and be handled by the clients (e.g. a chat user or a hivemind client); in ovos we are expanding the original session concept in ovos-core/pull/160
  • +
  • dbus-hal - a dbus service specific to the mk2 has been introduced; in ovos we have a generic PHAL service and companion plugins to interface with mk2 hardware instead. This component is mark2 specific and should be ignored in the ovos ecosystem
  • +
+

Any skills using these new "features" will not work outside the Mark 2.

+

FAQ

+

Do OVOS skills run in dinkum?

+

No, not even classic core skills run in dinkum. We have no plans to support this.

+

Do Dinkum skills run in ovos?

+

No. Dinkum is designed in a very incompatible way; the mycroft module is not always mycroft-core and the MycroftSkill class is not always a MycroftSkill. We have no intention of transparently loading dinkum skills in ovos-core.

+

We have a small proof of concept tool to convert a dinkum skill into an ovos/classic core compatible skill; see https://github.com/OpenVoiceOS/undinkumfier.

+

Does OCP work in dinkum?

+

No. Audio plugin support has been removed; you can run OCP standalone, but you will be missing the compatibility layers and can't load OCP skills anyway.

+

It could be made to work but this is not in the roadmap, PRs will be accepted and reviewed

+

Does PHAL work in dinkum?

+

It should! We don't explicitly target or test it with dinkum, but it is a fairly standalone component

+

Does OPM work in dinkum?

+

STT, TTS and WW plugins should work. We don't explicitly target or test compatibility; PRs will be accepted and reviewed

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/first_boot/index.html b/unused/first_boot/index.html new file mode 100644 index 00000000..5ae37500 --- /dev/null +++ b/unused/first_boot/index.html @@ -0,0 +1,286 @@ + + + + + + + + Booting your OpenVoiceOS device. - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Booting your OpenVoiceOS device.

+

Depending on which image you downloaded, you will first see the boot splash, which indicates the Operating System is booting. For the Buildroot edition the boot splash below will be shown.

+

+

If this is the very first time you boot your device, booting might take a bit longer than normal as the system is preparing its local filesystem and extending it over the full size of the sdcard/USB device. Eventually the progress bar will fill up, indicating the Operating System has fully booted, after which the ovos-shell animated loading screen will be shown.

+

+

Again, if this is the first time you boot your device, this might take a bit longer as the ovos-core configuration is populated and skills are being set up.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/g2p_plugins/index.html b/unused/g2p_plugins/index.html new file mode 100644 index 00000000..dc85e793 --- /dev/null +++ b/unused/g2p_plugins/index.html @@ -0,0 +1,400 @@ + + + + + + + + Grapheme to Phoneme Plugins - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Grapheme to Phoneme Plugins

+

Grapheme to Phoneme is the process of converting text into a set of "sound units" called phonemes

+

These plugins are used to auto-generate mouth movements / visemes in the TTS stage; they can also be used to help configure wake words or to facilitate training of TTS systems

+

These plugins can provide phonemes in either the ARPA or IPA alphabets; an automatic conversion will happen behind the scenes when needed

+

Mouth movements are generated via a mapping of ARPA to visemes.

+

Visemes are predefined mouth positions; timing per phoneme defaults to 0.4 seconds if the plugin does not report a duration

+

visemes

+

Mapping is based on the Jeffers phoneme-to-viseme map, seen in table 1, and partially based on the "12 mouth shapes" visuals seen here

+

List of G2P plugins

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Plugin | Type
neon-g2p-cmudict-plugin | ARPA
neon-g2p-phoneme-guesser-plugin | ARPA
neon-g2p-mimic-plugin | ARPA
neon-g2p-mimic2-plugin | ARPA
neon-g2p-espeak-plugin | IPA
neon-g2p-gruut-plugin | IPA
+
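The plugins listed above are regular Python packages. Assuming the package names in the table are published on PyPI (check each project's repository to be sure), installation would look like this, using the gruut plugin as an example:

pip install neon-g2p-gruut-plugin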

Standalone Usage

+

All G2P plugins can be used as follows

+

+utterance = "hello world"
+word = "hello"
+lang="en-us"
+
+plug = G2pPlugin()
+
+# convert a word into a list of phonemes
+phones = plug.get_ipa(word, lang)
+assert phones == ['h', 'ʌ', 'l', 'oʊ']
+
+phones = plug.get_arpa(word, lang)
+assert phones == ['HH', 'AH', 'L', 'OW']
+
+# convert a utterance into a list of phonemes
+phones = plug.utterance2arpa(utterance, lang)
+assert phones == ['HH', 'AH', 'L', 'OW', '.', 'W', 'ER', 'L', 'D']
+
+phones = plug.utterance2ipa(utterance, lang)
+assert phones == ['h', 'ʌ', 'l', 'oʊ', '.', 'w', 'ɝ', 'l', 'd']
+
+# convert a utterance into a list of viseme, duration pairs
+visemes = plug.utterance2visemes(utterance, lang)
+assert visemes == [('0', 0.0775), ('0', 0.155), ('3', 0.2325), ('2', 0.31), ('2', 0.434), ('2', 0.558), ('3', 0.682), ('3', 0.806)]
+
+

Plugin Template

+
from ovos_plugin_manager.templates.g2p import Grapheme2PhonemePlugin
from ovos_utils.lang.visimes import VISIMES

# base plugin class
class MyARPAG2PPlugin(Grapheme2PhonemePlugin):
    def __init__(self, config=None):
        self.config = config or {}

    def get_arpa(self, word, lang, ignore_oov=False):
        phones = []  # TODO implement
        return phones

    def get_durations(self, utterance, lang="en", default_dur=0.4):
        words = utterance.split()
        phones = [self.get_arpa(w, lang) for w in words]
        dur = default_dur  # TODO this is plugin specific
        return [(pho, dur) for pho in phones]

    def utterance2visemes(self, utterance, lang="en", default_dur=0.4):
        phonemes = self.get_durations(utterance, lang, default_dur)
        return [(VISIMES.get(pho[0].lower(), '4'), float(pho[1]))
                for pho in phonemes]
+
+
+

If your plugin uses IPA instead of ARPA simply replace get_arpa with get_ipa

+
from ovos_plugin_manager.templates.g2p import Grapheme2PhonemePlugin
from ovos_utils.lang.visimes import VISIMES

# base plugin class
class MyIPAG2PPlugin(Grapheme2PhonemePlugin):
    def __init__(self, config=None):
        self.config = config or {}

    def get_ipa(self, word, lang, ignore_oov=False):
        phones = []  # TODO implement
        return phones

    def get_durations(self, utterance, lang="en", default_dur=0.4):
        # get_arpa is auto converted from get_ipa by the base class
        phones = [self.get_arpa(w, lang) for w in utterance.split()]
        dur = default_dur  # TODO this is plugin specific
        return [(pho, dur) for pho in phones]

    def utterance2visemes(self, utterance, lang="en", default_dur=0.4):
        phonemes = self.get_durations(utterance, lang, default_dur)
        return [(VISIMES.get(pho[0].lower(), '4'), float(pho[1]))
                for pho in phonemes]
+
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/gs_first_boot/index.html b/unused/gs_first_boot/index.html new file mode 100644 index 00000000..d1c5abb9 --- /dev/null +++ b/unused/gs_first_boot/index.html @@ -0,0 +1,339 @@ + + + + + + + + Booting your OpenVoiceOS device. - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Booting your OpenVoiceOS device.

+

Each image has its own first boot process.

+

Buildroot

+

When you first boot the Buildroot image, you will be greeted with an OVOS splash screen as shown below.

+

+

As this is the first time you have booted your device, it might take a bit longer than normal as the system prepares its local filesystem and extends it over the full size of the sdcard/USB device. Eventually the progress bar will fill up, indicating the Operating System has fully booted, after which the ovos-shell animated loading screen will be shown.

+

+

Again, since this is your first boot, this might take a bit longer as the ovos-core configuration is populated and skills are being set up.

+

Raspbian

+

The Raspbian image is headless, and therefore you will not see these images. You can still monitor the boot process by attaching a screen and following the boot process from the command line.

+

Setting up your Wi-Fi network connection

+

The Buildroot image supports setting up the network via two options:

+

+
    +
  • On a mobile device
  • +
  • On the OpenVoiceOS device itself.
  • +
+

You can also skip this step to configure it later, or choose to never be asked again if you want your device to run fully offline. (Bear in mind you will need to configure your device to use local STT and TTS engines, and obviously, asking your device things that require internet will not work. This includes the date and time, as a Raspberry Pi does not have a Real Time Clock and therefore does not know this data without being told.)

+

On Mobile Setup

+

This is the default option for the headless images

+

Choosing this option will create a temporary open network (hotspot) called "OVOS" to which you can connect from your mobile device. The Raspbian image will give a voice prompt to connect to the hotspot and direct you to a webpage that will allow you to connect your device to WiFi.

+

+

On your mobile device go into Settings -> Wi-Fi Settings and the "OVOS" open network will appear in its list.

+

+

Connect your device to the OVOS network and a webpage will open to configure the Wi-Fi network on your OpenVoiceOS device. (NOTE: on newer mobile operating systems the captive portal handling has changed and the website will not be opened automatically. If this is the case, you can open a browser manually and go to http://start.OpenVoiceOS.com) The following webpage will be shown:

+

+

Select your Wi-Fi network from the list, insert your password and press the "Connect" button.

+

+

If everything went fine, you will soon see the green "connected" screen (Buildroot only) on your OpenVoiceOS device. The Raspbian image does NOT have a completion prompt.

+

On Device Setup

+

Not available on headless images

+

Choosing this option allows you to set up your Wi-Fi network directly on your OpenVoiceOS device. If selected, a screen with the available networks will be shown on your OpenVoiceOS device.

+

+

Select your network from the list and tap / click on it to insert your password. If you have a touch screen, an on-screen keyboard will appear when you tap the password field. If not, use a keyboard.

+

+

When you have inserted your password, click / tap the connect button; after a short connecting animation, if all went fine, you will see the green "connected" screen on your OpenVoiceOS device.

+

(Re)configure your network from the drop-down menu

+

If you have skipped the Wi-Fi network setup before, or you just want to reconfigure your network, swipe down the top menu on the homescreen of your OpenVoiceOS device and click the "Wi-Fi" icon. This brings you to the same on-device configuration screen.

+

+

From here you can select another network, or click the configuration icon on the right of the connected network for details or to remove it from the configured networks.

+

+

Selecting Your Backend

+

What is a backend?

+

No backend - No calling home

+

This is the suggested method and is the default with the headless images

+

Only the Buildroot image will have these options; no further action is required for the headless images

+

Select A Text To Speech (TTS) Engine: A text-to-speech (TTS) system converts normal language text into speech; select an engine from the list

+

TTS

+

Select A Speech To Text (STT) Engine: A speech-to-text (STT) system converts human speech to text; select an engine from the list

+

STT

+
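Behind the scenes, these selections end up in the "tts" and "stt" sections of mycroft.conf. As a minimal sketch (the module names below are placeholders, not real plugin identifiers; use the name reported by whichever plugin you installed):

"tts": {
    "module": "<your-tts-plugin>"
},
"stt": {
    "module": "<your-stt-plugin>"
}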

Personal backend - Host your own

+

Personal Backend

+

The personal backend is a reverse-engineered alternative to Selene and requires the backend to be hosted locally.

+
    +
• Install and configure your personal backend; information is available at: https://github.com/OpenVoiceOS/ovos-personal-backend (see the install sketch after this list)
  • +
• The GUI on the device will display a setup page to enter the host address of your hosted backend
  • +
  • Pairing with your personal backend happens automatically once you hit the confirm button with the correct host address
  • +
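As a minimal sketch of the first step above, assuming the backend is installed from PyPI into a Python environment on the machine that will host it (see the repository README for configuration and the exact way to run it as a service):

pip install ovos-personal-backend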
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/gs_installation_choices/index.html b/unused/gs_installation_choices/index.html new file mode 100644 index 00000000..175b0d87 --- /dev/null +++ b/unused/gs_installation_choices/index.html @@ -0,0 +1,290 @@ + + + + + + + + Welcome To OpenVoiceOS - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Welcome To OpenVoiceOS

+

Where to get OVOS?

+

OVOS is in its early stages. We publish our Raspberry Pi images for download, but expect new bugs and new fixes on every release; we are not yet stable!

+

These images are development images in alpha stage, bugs and incomplete features are guaranteed.

+

Installation choices

+

You can install OVOS either as an image, container, or manually.

+

There are currently two image choices for OVOS: Buildroot and Raspbian. You can also build images from scratch for both options. (Details in the works)

+

Docker containers are available for Windows, macOS, and Linux

+

In most cases images provide the easiest install if your hardware is supported. Check the Quick Start Guide for help getting running quickly.

+

Building your own image can provide a completely custom build in a package, but is a more involved process. If you're familiar with Docker, that option can provide a quick install.

+
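For readers taking the Docker route, the general shape is the usual pull-and-run pattern. The image name below is only a placeholder, not a real published tag; see the OVOS Docker documentation for the actual images and compose files:

docker run -d --name ovos <ovos-core-image>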

Guides on building images are located in our technical docs

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/gs_installing_image/index.html b/unused/gs_installing_image/index.html new file mode 100644 index 00000000..71314210 --- /dev/null +++ b/unused/gs_installing_image/index.html @@ -0,0 +1,301 @@ + + + + + + + + Welcome To OpenVoice OS Getting Started User Guide (Images) - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Welcome To OpenVoice OS Getting Started User Guide (Images)

+

Which Image should I choose?

+

OVOS provides a couple of different images specifically for the Raspberry Pi. The most advanced and featureful is the Buildroot image. If you want a GUI, this is currently your best choice.

+

Buildroot Latest Image

+ +

Raspbian Latest Images

+ +

Default users

+
    +
  • BuildRoot: Username: mycroft | password: mycroft
  • +
  • Raspbian: Username: ovos | password: ovos
  • +
+
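With the default credentials above, a first login over SSH looks like this (replace the placeholder with your device's hostname or IP address):

ssh mycroft@<device-ip>   # Buildroot
ssh ovos@<device-ip>      # Raspbian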

Flashing your image

+

Flashing your image to your sdcard or USB drive is no different from flashing any other image. For non-technical users we advise using the flashing utility from the Raspberry Pi Foundation, which you can find here. Under "CHOOSE OS", select custom at the very bottom of the list and browse to the downloaded image file. It is not required to unzip / unpack the image, as the Raspberry Pi Imager software can do that for you on the fly. Under "CHOOSE STORAGE", select your sdcard or USB device.

+

Specific instructions for each image can be found on their respective GitHub pages

+

If you have a Raspberry Pi 4, we recommend using a good USB3.1 device. If you have a Raspberry Pi 3, use a proper sdcard. (From fast to slow: USB3.1 - sdcard - USB2)

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/gs_next_steps/index.html b/unused/gs_next_steps/index.html new file mode 100644 index 00000000..8607a5ce --- /dev/null +++ b/unused/gs_next_steps/index.html @@ -0,0 +1,304 @@ + + + + + + + + OpenVoiceOS Quickstart - Next Steps - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Quickstart - Next Steps

+

Woo Woo!! You Have A Running OVOS Device!! Now what?

+

Prebuilt images come with a default set of skills installed, including, but not limited to the date/time, and weather. Give them a shot.

+

Speak these commands and enjoy the spoils

+

Hey Mycroft, what time is it?

+

Hey Mycroft, what is today's date?

+

Hey Mycroft, what is the weather today?

+

Hey Mycroft, will it rain today?

+

While there are several default skills installed, there are many more available to be used. The link below will show you how to find and install more skills.

+ +

But wait, there's more!!

+

OVOS ships with a default TTS (Text to Speech) engine which speaks in the original Alan Pope voice that Mycroft used. There are MANY more to choose from. The following link will help you choose and configure a different voice for your assistant.

+ +

Your OVOS assistant uses a "wake word" which lets it know it is time to start listening to your commands. By default, the phrase to wake OVOS is Hey Mycroft. This, like most things in OVOS is totally configurable. Follow the link to learn more.

+ +
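Under the hood, the wake word is set in the "listener" section of mycroft.conf. As a minimal sketch (the value shown is the default; swap it for the name of a wake word you have configured):

"listener": {
    "wake_word": "hey_mycroft"
}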

OVOS ships with default services available for the public to use. These include public TTS and STT servers, a weather API, access to Wolfram, and more. As part of being an open and private system, you can also change these to your own preferences.

+ + +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/gui_extensions/index.html b/unused/gui_extensions/index.html new file mode 100644 index 00000000..65a5d369 --- /dev/null +++ b/unused/gui_extensions/index.html @@ -0,0 +1,317 @@ + + + + + + + + GUI Extensions - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

GUI Extensions

+

OVOS Core supports a GUI Extension framework which allows the GUI service to incorporate additional behaviour for a specific platform. GUI Extensions currently supported:

+

Smartspeaker Extension

+

This extension is responsible for managing the smartspeaker GUI interface behaviour; it supports homescreens and homescreen management. Enabling the smartspeaker GUI extension:

+
"gui": {
+    "extension": "smartspeaker",
+    "idle_display_skill": "skill-ovos-homescreen.openvoiceos"
+}
+
+

Bigscreen Extension

+

This extension is responsible for managing the Plasma Bigscreen GUI interface behaviour; it supports window management and window behaviour control on specific window managers like KWin. Enabling the Bigscreen GUI extension:

+
"gui": {
+    "extension": "bigscreen"
+}
+
+

Mobile Extension

+

This extension is responsible for managing the mobile GUI interface behaviour; it supports homescreens and additionally adds support for global page back navigation. Enabling the Mobile GUI extension:

+
"gui": {
+    "extension": "mobile",
+    "idle_display_skill": "skill-android-homescreen.openvoiceos",
+}
+
+

Generic Extension

+

This extension provides a generic GUI interface and does not add any additional behaviour; it optionally supports homescreens if the platform or user manually enables it. This extension is enabled by default when no other extension is specified.

+
"gui": {
+    "idle_display_skill": "skill-ovos-homescreen.openvoiceos",
+    "extension": "generic",
+    "generic": {
+        "homescreen_supported": false
+    }
+}
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/gui_intro/index.html b/unused/gui_intro/index.html new file mode 100644 index 00000000..f629d59f --- /dev/null +++ b/unused/gui_intro/index.html @@ -0,0 +1,291 @@ + + + + + + + + Introduction - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Introduction

+

OVOS devices with displays provide skill developers the opportunity to create skills that can be empowered by both voice and screen interaction.

+

The display interaction technology is based on the QML user interface markup language, which gives you complete freedom to create in-depth, innovative interactions without boundaries, or provides you with simple templates within the Mycroft GUI framework that allow minimalistic display of text and images based on your skill development specifics and preferences.

+

Framework

+

Mycroft-GUI is an open source visual and display framework for Mycroft, running on top of KDE Plasma technology and built using Kirigami, a lightweight user interface framework for convergent applications powered by Qt.

+

Introduction to QML

+

The QML user interface markup language is a declarative language, built on top of Qt's existing strengths, designed to describe the user interface of a program: both what it looks like and how it behaves. QML provides modules that consist of a sophisticated set of graphical and behavioural building elements.

+

A collection of resources to familiarize you with QML and Kirigami Framework.

+ + +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/gui_protocol/index.html b/unused/gui_protocol/index.html new file mode 100644 index 00000000..a4b96736 --- /dev/null +++ b/unused/gui_protocol/index.html @@ -0,0 +1,286 @@ + + + + + + + + Gui protocol - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Protocol

+

The GUI service in ovos-core exposes a websocket to the GUI clients, following the protocol outlined here

+

The transport protocol works between the GUI service and the GUI clients; mycroft-core does not use the protocol directly, but instead communicates with the GUI service via the standard mycroft bus

+

OVOS images are powered by ovos-shell, the client-side implementation of the GUI protocol

+

The GUI library which implements the protocol lives in the mycroft-gui repository.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/gui_service/index.html b/unused/gui_service/index.html new file mode 100644 index 00000000..5bf1d06a --- /dev/null +++ b/unused/gui_service/index.html @@ -0,0 +1,302 @@ + + + + + + + + GUI Service - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

GUI Service

+

OVOS uses the standard mycroft-gui framework; you can find the official documentation here

+

The GUI service provides a websocket for GUI clients to connect to; it is responsible for implementing the GUI protocol under ovos-core.

+

You can find in-depth documentation in the dedicated GUI section of these docs

+

Configuration

+

The gui service has a few sections in mycroft.conf

+
"gui": {
+    "idle_display_skill": "skill-ovos-homescreen.openvoiceos",
+    "extension": "generic",
+    "generic": {
+        "homescreen_supported": false
+    }
+},
+
+"gui_websocket": {
+    "host": "0.0.0.0",
+    "base_port": 18181,
+    "route": "/gui",
+    "ssl": false
+},
+
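With the defaults above, the GUI websocket listens on port 18181 under the /gui route. Purely as an illustration of where the socket lives (this assumes the third-party websocat tool is installed, and glosses over the handshake messages a real GUI client exchanges after connecting):

websocat ws://<device-ip>:18181/gui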
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/guidelines/index.html b/unused/guidelines/index.html new file mode 100644 index 00000000..9ecf8274 --- /dev/null +++ b/unused/guidelines/index.html @@ -0,0 +1,845 @@ + + + + + + + + Voice User Interface Design Guidelines - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Voice User Interface Design Guidelines

+

Through these guidelines you will learn how to use principles of Voice User Interface Design to build more effective skills. These tools will help define and validate the features of the skill before diving deep into development.

+

This guide will cover some methods to use that can help plan, prototype and test your skill during the early design stages.

+

The full original guide can be found over at the mycroft documentation

+

Interactions

+

Intents

+

Let's start with an example. A user in Melbourne, Australia might want to know about the weather. To ask for this information, they might say:

+
+

"Hey Mycroft, what's today's weather like?"

+

"Hey Mycroft, what's the weather like in Melbourne?"

+

"Hey Mycroft, weather"

+
+

Even though these are three different expressions, for most of us they probably have roughly the same meaning. In each case we would assume the user expects OVOS to respond with today's weather for their current location.

+

It is up to us as Skill creators to teach OVOS the variety of ways that a user might express the same intent. This is a key part of the design process. It is the key difference between a Skill that kind of works if you know what to say, and a Skill that feels intuitive and natural to talk to.

+

This is handled by an intent parser, whose job it is to learn from your Skill what intents it can handle, and to extract from the user's speech any key information that might be useful for your Skill. In this case it might include the specified date and location.

+
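For example, the weather requests above could all be captured by listing variations in a Padatious intent file, one utterance per line, with optional words in parentheses and entity slots in curly braces (the file and entity names here are illustrative):

weather.intent:
what's (today's |the )weather( like|)
what's the weather like in {location}
weather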

Statements and Prompts

+

You can think of Prompts as questions and Statements as providing information to the user that does not need a follow-up response. For example, a weather forecast like this would be considered a statement:

+
+

Today’s forecast is sunny with a high of 60 and a low of 45.

+
+

Statements

+

For a lot of skills the conversation might end with a simple statement from OVOS, and no further action is necessary. Try to imagine what the user is trying to accomplish; if a simple statement gets the job done, there is no reason to keep the conversation rolling, and in fact a follow-up might annoy the user with unnecessary interaction.

+

Prompts

+

It may be tempting to always give users specific instructions like traditional automated phone systems (Interactive Voice Response). Many phone systems are notorious for being too verbose and difficult to follow.

+

With OVOS we're trying to break that mold and make the interaction natural. If you follow the phone system method, you may be giving the user the exact phrasing to say, but you're also taking up valuable time and training them to think the system is very narrow in capability. In the event that the user does give a response that your skill cannot handle, create follow-up prompts that steer the user back on track.

+

Remember, there are no errors in a cooperative conversation.

+

Avoid

+ + + + + + + + + + + + + +
Speaker | Utterance
Mycroft | How many people are playing? For example, you can say 2 players.
+

Better

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Speaker | Utterance
Mycroft | How many players?
User | My Brother and Myself
Mycroft | I'm sorry, what was the number of players?
User | Two
+

In the first example the user is told explicitly what they can say, but the prompt is unnatural; we don't typically suggest responses to a conversation partner in real life. These long-winded prompts can become tiresome if they are used repeatedly throughout the skill. Remember the phone system example: typically, poorly designed automated phone systems inundate the user with many options and additional instructions at every step of the interaction.

+

In the second example we see a better prompt. Although the user gives a response that is easy for a human to understand, it is more difficult to design a skill to understand it. Instead, the skill designer can apply a re-prompt strategy that steers the user back on track and doesn't require them to start the interaction over. In this case the re-prompt changes the wording slightly to make it clear that the user needs to say a number. The next time the user interacts with this Skill, they will likely say a number in their first interaction.

+

Determining whether to respond to the user with a statement or a prompt can be a bit tricky. When the user is somewhat vague, it is tempting to assume the user's intent in order to speed along the interaction. Sometimes this is the right approach and sometimes it is not. If you are very confident in the user's input, then it is alright to assume, and respond with a statement. For example, in the Wikipedia Skill the Wikipedia API is used to confidently select the best response.

+

Wikipedia Skill

+ + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Tell me about Abraham Lincoln
Mycroft | Abraham Lincoln was an American statesman and lawyer who served as the 16th president of the United States from 1861 to 1865. Lincoln led the nation through its greatest moral, constitutional, and political crisis in the American Civil War.
+

In contrast, let's look at an example where a follow-up prompt is a better approach. You're working on a skill that gives important information about birds, such as wingspan, laden and unladen airspeed, etc.

+

Avoid

+ + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | What is the airspeed velocity of an unladen swallow?
Mycroft | The unladen airspeed of the European swallow is 20.1 miles per hour.
+

Better

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | What is the airspeed velocity of an unladen Swallow?
Mycroft | What do you mean, African or European Swallow?
User | European Swallow.
Mycroft | The unladen airspeed of the European Swallow is 20.1 miles per hour.
+

In the first example Mycroft assumes the user is referring to the European Swallow; however, there is nothing to give confidence to that assumption. In this case it's dangerous to assume the user meant the European Swallow, since it is just as likely they meant the African Swallow. When there is no confidence in one answer versus another, OVOS should follow up with a prompt, as demonstrated in the second example.

+

Another approach to solving this problem is to offer the most likely answer and inform the user that there are other answers available with a statement.

+

Confirmations

+

Confirmation approaches can also be defined by Statements or Prompts, but when we talk about them in the context of confirmations we call them Implicit and Explicit.

+

Implicit Confirmation

+

This type of confirmation is also a statement. The idea is to parrot the information back to the user to confirm that it was correct, without requiring additional input from the user. The implicit confirmation can be used in a majority of situations. For example, Mycroft's Timer Skill uses implicit confirmation when creating new timers.

+

Timer Skill

+ + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Hey Mycroft, set a timer for 5 minutes
Mycroft | Alright, I'm starting a 5 minute timer
+

Avoid

+ + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Hey Mycroft, add bananas to my shopping list
Mycroft | done
+

Better

+ + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Hey Mycroft, add bananas to my shopping list
Mycroft | Adding bananas
+

Explicit Confirmation

+

This type of confirmation requires input from the user to verify everything is correct. Only use explicit confirmation when needed, as it can be tedious for the user to always confirm their input. This strategy is best when there are multiple steps and it would be difficult to fix a misstep without starting over, or when deleting or modifying an existing multistep input.

+

Avoid

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Hey Mycroft, set an alarm for 9:00 am tomorrow
Mycroft | Ok, I've set an alarm for 9:00 am tomorrow, is that correct?
User | Yes
Mycroft | 9:00 am alarm set
+

Better

+ + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Hey Mycroft, set an alarm for 9:00 am tomorrow
Mycroft | I've set an alarm for tomorrow at 9:00 am, 16 hours from now
+

The first example requires the user to confirm the alarm. Since the interaction is quite short, it isn't necessary to use explicit confirmation.

+

Avoid

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Hey Mycroft, set a calendar event for a meeting with Chris Stevens
Mycroft | When is the event?
User | Tomorrow at 11:00 am
Mycroft | Should I send an invite to Chris Stevens?
User | Yes
Mycroft | Alright, I've created a meeting for Tomorrow at 11:00 am with Chris Stevens
User | Cancel my 11:00 am meeting tomorrow
Mycroft | Done
+

Better

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Hey Mycroft, set a calendar event for a meeting with Chris Stevens
Mycroft | When is the event?
User | Tomorrow at 11:00 am
Mycroft | Should I send an invite to Chris Stevens?
User | Yes
Mycroft | Alright, I've created a meeting for Tomorrow at 11:00 am with Chris Stevens
User | Cancel my 11:00 am meeting tomorrow
Mycroft | Are you sure you want to delete your meeting tomorrow at 11:00 am?
User | Yes
+

Both examples use explicit and implicit confirmation correctly until the end of the interaction, when the user wants to cancel the event. Unlike setting a simple timer or an alarm, creating this calendar event requires multiple steps and would require reasonable effort to recreate. There may also be significant consequences if the request was misinterpreted and the meeting was cancelled unintentionally.

+

Conversations

+

Any time the user needs to input a lot of information, or needs to sort through a variety of options, a conversation will be needed. Users may be used to systems that require them to separate input into different chunks. For example, many personal navigation devices (GPS units) require users to input the street number, street name, city, and state separately, like this:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Speaker | Utterance
Navigation Device | What is the street number
User | 123
Navigation Device | What is the street name
User | Main Street
Navigation Device | What is the city
User | Springfield
Navigation Device | What is the state
User | Oregon
+

This was a limitation of the technology; both the STT and intent parser were not capable of handling an entire address. With OVOS you do not need to separate the address into multiple inputs. It's also easy to think like a traditional GUI and translate what are typically separate fields into prompts for the user, but that will only slow down the interaction.

+

The best solution is to allow the user to input information both ways: all at once, or in pieces. For example, the Alarm skill allows the user to set a complicated alarm all in one go, or in steps.

+

In Steps:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Hey Mycroft, set a recurring alarm
Mycroft | For what day?
User | weekdays
Mycroft | For what time?
User | 7 am
Mycroft | I've set a recurring Alarm for 7 am on weekdays
+

All at once:

+ + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Hey Mycroft, set a recurring alarm for weekdays at 7 am
Mycroft | I've set a recurring Alarm for 7 am on weekdays
+

Context

+

Conversational context is a given when speaking to another person, but not always when interacting with a computer-based system. Many skills are designed with only a quick back-and-forth interaction in mind and can operate well without conversational context, but conversational context can be a powerful addition to a skill and can make the interaction much more lifelike. Here are some examples of how to use context in conversation.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Who is Shaquille O'Neal
Mycroft | Shaquille O'Neal is an American former professional basketball player who is a sports analyst on the television program Inside the NBA on TNT. He is considered one of the greatest players in National Basketball Association history
User | How tall is he?
Mycroft | Seven foot 1 inches tall
User | When did he retire?
Mycroft | 2011
+

Error Handling

+

Inevitably, the user will say something that your skill can't handle. It's best not to think of these as errors on the part of the user; remember, there aren't really errors in conversations. Conversations are naturally cooperative, with both parties seeking to continue to a successful outcome. Keeping that in mind, you should do your best to avoid error handling that is a dead end.

+

Avoid

+ + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Timer
Mycroft | I'm sorry I can't help you with that
+

Better

+ + + + + + + + + + + + + + + + + + + + + +
Speaker | Utterance
User | Timer
Mycroft | A timer for how long?
User | 5 minutes
+

In the first example OVOS does not give the user any options to finish the job they set out to do. Try to avoid situations where the user has to start the conversation over.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/howto_secure_ssh/index.html b/unused/howto_secure_ssh/index.html new file mode 100644 index 00000000..c8092c24 --- /dev/null +++ b/unused/howto_secure_ssh/index.html @@ -0,0 +1,299 @@ + + + + + + + + How to Secure SSH - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

How to Secure SSH

+

Most of our guides have you create a user called ovos with a password of ovos. While this makes installation easy, it's VERY insecure. As soon as possible, you should secure SSH using a key and disable password authentication.

+

When connecting from a Linux or MacOS client

+

Create a keyfile (you can change ovos to whatever you want)

+
ssh-keygen -t ed25519 -f ~/.ssh/ovos
+
+

Copy the key to the host (use the same filename as above, and specify the user and hostname you are using)

+
ssh-copy-id -i ~/.ssh/ovos  ovos@mycroft
+
+

On your desktop, edit ~/.ssh/config and add the following lines (rp2 is an example alias; replace the HostName placeholder with your device's hostname or IP address):

+
Host rp2
  HostName <hostname-or-IP-of-your-device>
  User ovos
  IdentityFile ~/.ssh/ovos
+
+

On your ovos system, edit /etc/ssh/sshd_config and add or uncomment the following line:

+
PasswordAuthentication no
+
+

Restart sshd or reboot:

+
sudo systemctl restart sshd
+
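With the entry from ~/.ssh/config above in place, connecting to the device is now simply:

ssh rp2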
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/images_manjaro/index.html b/unused/images_manjaro/index.html new file mode 100644 index 00000000..f30efb52 --- /dev/null +++ b/unused/images_manjaro/index.html @@ -0,0 +1,377 @@ + + + + + + + + ovos-image-arch-recipe - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

ovos-image-arch-recipe

+

NOTE: This repo has been archived and will no longer be updated

+

Make a Manjaro-based OpenVoiceOS image

+

source code: https://github.com/OpenVoiceOS/ovos-image-arch-recipe/

+

Building

+

Docker Automated Image Building

+

The included Dockerfile can be used to build a default image in a Docker environment.

+

The following dependencies must be installed on the build system before running the container:

+ +

First, create the Docker container:

+
docker build . -t ovos-image-builder
+
+

Then, run the container to create an OVOS image. Set CORE_REF to the branch of ovos-core that you want to build and RECIPE_REF to the branch of ovos-image-recipe you want to use. Set MAKE_THREADS to the number of threads to use for make processes.

+
docker run \
-v /home/${USER}/output:/output:rw \
-v /run/systemd/resolve:/run/systemd/resolve \
-e CORE_REF=${CORE_REF:-dev} \
-e RECIPE_REF=${RECIPE_REF:-master} \
-e MAKE_THREADS=${MAKE_THREADS:-4} \
--privileged \
--network=host \
--name=ovos-image-builder \
ovos-image-builder
+
+

The entire build process will generally take several hours; it takes 1-2 hours on a build server with 2x Xeon Gold 5118 CPUs (48T total).

+

Interactive Image Building

+

The scripts in the automation directory are available to help automate building a default image. For building an image interactively:

+
bash automation/prepare.sh
bash /tmp/run_scripts.sh
+
+

The below documentation describes how to manually build an image using the individual scripts in this repository.

+

Steps

+

Getting Started

+

The scripts and overlay files in this repository are designed to be applied to a base image as the root user. It is recommended to apply these scripts to a clean base image. Instructions are available at opensource.com.

+
+

Note: The GUI shell is not installable under some base images

+
+

For each step except boot_overlay, the directory corresponding to the step should be copied to the mounted image and the script run from a terminal chroot-ed into the image. If running scripts from a booted image, they should be run as root.

+

Preparation

+

From the host system where this repository is cloned, running prepare.sh <base_image> will copy boot overlay files, mount the image, mount DNS resolver config from the host system, copy all other image overlay files to /tmp, and chroot into the image. From here, you can run any/all of the following scripts to prepare the image before cleaning up.

+
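For example, with a base image downloaded into the working directory (the image path below is a placeholder; these scripts run as root):

sudo bash prepare.sh <path-to-base-image>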

core_configuration

+

Configures user accounts and base functionality for the RPi. The ovos user is created with proper permissions here.

+

At this stage, a booted image should resize its file system to fill the drive it is flashed to. Local login and ssh connections should use ovos/ovos to authenticate, and will be prompted to change the password on login.

+

network_manager

+

Adds Balena wifi-connect to enable a portal for connecting the Pi device to a Wi-Fi network.

+

A booted image will now be ready to connect to a network via SSID OVOS.

+

sj201

+

For SJ201 board support, the included script will build/install drivers, add required overlays, install required system packages, and add a systemd service to flash the SJ201 chip on boot. This will modify PulseAudio and potentially overwrite any previous settings.

+
+

Note: Running this script grants GPIO permissions to the gpio group. Any user that interfaces with the SJ201 board should be a member of the gpio group. Group permissions are not modified by this script.

+
+

Audio devices should now show up with pactl list. Audio devices can be tested in the image by recording a short audio clip and playing it back.

+
parecord test.wav
paplay test.wav
+
+

embedded_shell

+

Installs ovos-shell and mycroft-gui-app. Adds and enables ovos-gui.service to start the shell on system boot.

+

The image should now boot to the GUI shell.

+

ovos_core

+

Installs ovos-core and dependencies. Configures services for core modules.

+

At this stage, the image is complete and when booted should start OVOS.

+

dashboard

+

Installs the OVOS Dashboard and a service to start the dashboard from the GUI.

+

From the GUI Settings -> Developer Settings menu, Enable Dashboard will now start the dashboard for remote access to device diagnostics.

+

camera

+

Installs libcamera and other dependencies for using a CSI camera.

+

The default camera skill can be used to take a photo; libcamera-apps are also installed for testing via CLI.

+

splash_screen

+

Enables a custom splash screen and disables on-device TTY at boot.

+

On boot, a static image should be shown until the GUI Shell starts.

+

Clean Up

+

cleanup.sh removes any temporary files from the mounted image before unmounting it. After running cleanup.sh, the image is ready to burn to a drive and boot.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/install_image/index.html b/unused/install_image/index.html new file mode 100644 index 00000000..cf595578 --- /dev/null +++ b/unused/install_image/index.html @@ -0,0 +1,281 @@ + + + + + + + + OpenVoiceOS Architetcture - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Architecture

+

Work in Progress

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/install_images/index.html b/unused/install_images/index.html new file mode 100644 index 00000000..b2e8dedc --- /dev/null +++ b/unused/install_images/index.html @@ -0,0 +1,313 @@ + + + + + + + + Installing OVOS using a prebuilt image - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Installing OVOS using a prebuilt image

+

Prebuilt images

+

OVOS provides a couple of prebuilt images to use with a Raspberry Pi

+

Buildroot

+

This is the most advanced image that OVOS provides and is intended to be used as a complete system with a GUI. This image auto-detects a large amount of hardware, including but not limited to ReSpeaker microphones and the SJ201 sound board used by the Mark 2.

+

Buildroot images are available for download here. Decompress this file and continue to the next section, Burning the image to a SD card or USB drive

+

Raspbian-ovos

+

This is a new image created as a headless device. It is optimized to run on less powerful devices such as an RPi3, which does not have the power to run a GUI. This image is still in heavy development; expect some bugs while using it.

+

Raspbian-ovos images are available for download here. Unzip this file and continue to the next section, Burning the image to a SD card or USB drive

+

Burning an image

+

There are a few ways to burn your image to a drive for use in a Raspberry Pi. Both methods described below will work for either OVOS image that you would like to try.

+

Raspberry PI Imager

+

This method can be used with a Linux or Windows host machine.

+

The people at Raspberry Pi provide a great little program made for burning an image to a device. You can get it here. The team at OVOS has tried to make the setup as simple as possible, and if you follow the steps below, you should have no problems on your first boot.

+
    +
  • Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with sudo raspi-imager.
  • +
  • For "Choose OS", select "Use custom" and select the OVOS image file downloaded from the step above.
  • +
  • For "Choose Storage", select your removable boot media, probably something like "Internal SD Card Reader".
  • +
  • Then select "Write". Do not click the cog. Do not change the username. Username is ovos and is built in. Do not enter WiFi credentials, you will add them at first boot.
  • +
+

Linux dd command

+

Be careful with the dd command; you can easily render your computer useless if the command is entered wrong

+
    +
  • Find the location of your boot medium with the lsblk command.
  • +
• It should show up as something like sdb. If you are unsure which drive it is, remove it and run the command again. The one that is missing is the drive you want to use.
  • +
  • Write the image to the disk with the dd command.
  • +
+

WARNING: Please make sure you are writing to the correct location, as this is the step that can screw everything up

+
    +
  • sudo dd if=<path_to_unzipped_image> of=<path_to_boot_medium> bs=4M status=progress
  • +
  • This step will take several minutes to complete.
  • +
• When the command is done and the prompt appears again, finish the process with the command sudo sync
  • +
+

With either method used, you should now have a bootable disk to use with your Raspberry Pi

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/install_ovos_images/index.html b/unused/install_ovos_images/index.html new file mode 100644 index 00000000..34f509dc --- /dev/null +++ b/unused/install_ovos_images/index.html @@ -0,0 +1,313 @@ + + + + + + + + Installing OVOS using a prebuilt image - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +


+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/install_phal_plugins/index.html b/unused/install_phal_plugins/index.html new file mode 100644 index 00000000..8908c760 --- /dev/null +++ b/unused/install_phal_plugins/index.html @@ -0,0 +1,280 @@ + + + + + + + + Install phal plugins - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ + + +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/install_raspbian/index.html b/unused/install_raspbian/index.html new file mode 100644 index 00000000..98f1fbc1 --- /dev/null +++ b/unused/install_raspbian/index.html @@ -0,0 +1,420 @@ + + + + + + + + raspbian-ovos (headless) - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

raspbian-ovos (headless)

+

Purpose of this guide

+

This guide describes two ways to create a headless OVOS system suitable for running on a Raspberry Pi 3 or 4. You can either download and burn a prebuilt image to an installation medium like an SD card, or you can use your own installation of the Raspberry Pi OS and run an OVOS install script.

+

The RPi3 does not have the processing power to reliably run ovos-shell, the GUI system for OVOS, but has plenty to run the rest of the stack.

+

By the end of the guide, you should have a running OVOS stack (messagebus, PHAL, skills, listener, and audio), along with a "lite" version of Raspberry Pi OS, which means you also have a package manager (apt) available to you.

+

OVOS source files used by this guide can be found at raspbian-ovos. Any issues or pull requests should be made in this repository.

+

Raspberry Pi Imager is available here. There have been issues when using Raspberry Pi Imager to burn pre-built images. From Linux we have had success starting the imager with the command sudo raspi-imager.

+

Use a pre-built OVOS/PI Environment image saved to an SD card

+

Download a pre-built OVOS/PI image from our raspbian-ovos download site.

+

Here are two methods to install your OVOS/PI image file onto your SD card.

+

Method 1: Write the pre-built OVOS/PI Environment image using Raspberry Pi Imager

+
    +
  • Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with "sudo raspi-imager".
  • +
  • For "Choose OS", select "Use custom" and select the OVOS/PI image file downloaded from the OpenVoiceOS Downloads Site.
  • +
  • For "Choose Storage", select your removable boot media, probably something like "Internal SD Card Reader".
  • +
  • Then select "Write". Do not click the cog. Do not change the username. Username is ovos and is built in. Do not enter WiFi credentials, you will add them at first boot.
  • +
+

Upon completion, you should have a bootable SD card or USB drive.

+

Method 2: Write the OVOS/PI Environment image using the Linux dd command

+

Be careful with the dd command; you can easily render your computer useless

+
    +
• If your downloaded image is zipped, unzip the image to the directory of your choice: unzip <path-to-zipped-image>
  • +
• Check where your sd card or usb drive is located using the lsblk command.
  • +
  • It should be visible as sdxx
  • +
• Write the unzipped image to the disk: sudo dd if=<path-to-unzipped-image> of=<path-to-sd-card> bs=4M status=progress
  • +
+

Upon completion, you should have a bootable SD card or USB drive.

+

First Boot and Login to the pre-built OVOS/PI Environment image

+

Insert the SD card, hook up your audio, and turn on your OVOS Pi.

+

This image comes with a predefined user, ovos with password ovos. It is recommended that you change your password on first login.

+

sudo passwd ovos

+

Enter your new password twice.

+

On first boot, you will be voice-prompted to connect to SSID OVOS and go to the website start.openvoiceos.com. This is not the official OVOS website but a local hotspot that the image has created on your Raspberry Pi.

+

Then from a computer that supports wireless networking, connect to the SSID 'OVOS' and go to the website 'start.openvoiceos.com'. There you can enter the credentials of your WiFi network. If your sound isn't working, no worries, you can keep scanning your computer's list of nearby SSIDs until you see OVOS, and then connect to the network without hearing the verbal prompt.

+

This image on a RPi3B takes several minutes to boot before you hear the audio prompt, and several more minutes to finish booting. If you don't think it's working, please wait up to 3 minutes each time before thinking something went wrong. You can also follow progress in the OVOS log files found in ~/.local/state/mycroft/*.log.

+

If for some reason this method does not work, sudo raspi-config and nmtui are also available.

+

The section below is for advanced usage and is NOT currently recommended

+

Use an OVOS/PI environment created by cloning a git repository into your own Raspberry Pi OS environment

+

Step 1: Two ways to create a Raspberry Pi OS Lite (64 bit) boot SD card

+

There are lots of guides, but this one is the official guide. Our experience on Linux is to invoke the imager with sudo raspi-imager.

+
    +
  • Insert your boot media into your PC.
  • +
  • Start up the Raspberry Pi Imager. On Linux, start Raspberry Pi Imager with "sudo raspi-imager".
  • +
  • For "Choose OS", select "Raspberry Pi OS Other" and then "Raspberry Pi OS Lite (64 bit)".
  • +
  • For "Choose Storage", select your removable boot media, probably something like "Internal SD Card Reader" or "USB Flash Disk".
  • +
+

From here you must choose one of two methods:

+

Method 1: Tried, tested, and reliable; the Pi needs a monitor to complete the install

+

Here we use the Raspberry Pi Imager to write your media without selecting any Advanced Options.

+
    +
  • Click "Write".
  • +
  • After the write completes, insert your SD card in your Pi and boot your newly created medium.
  • +
  • Create user ovos with a password of your choosing.
  • +
  • The system will reboot and ask you to log in. Log in as ovos.
  • +
+

Run the command sudo raspi-config to enter the Pi setup utility.

+

We will be running the OVOS environment as a regular user, and we want OVOS to start at boot time, so we want to autologin.

+

Enter the System Options page.

+

Enter the Boot / Autologin page.
• Use the second option in the menu, Console Autologin.
• This enables OVOS to start up automatically at boot time.

+

Now we will enable a few interface options. This will allow us to access our device from an ssh shell and prep the Pi for other devices that may be used. Some microphone hats require SPI or I2C (ReSpeaker, AIY-Voicebonnet, etc).

+

Go back to the main menu and enter the Interface Options page.
• Enable SSH, SPI, and I2C.
• After SSH is enabled, the rest of the guide can be done from a remote computer.

+

Go back to the main menu and enter the Localisation Options page.
• Configure Locale, Timezone, WLAN Country.

+

You will need an internet connection to complete the rest of the guide

+

Optional: Set up WiFi

+
    +
  • Return to the main menu and enter System Options again.
  • +
  • Enter the Wireless LAN section and follow the prompts.
  • +
  • Further down from Wireless Lan is Hostname. Choose a name for your OVOS device and enter it there.
  • +
  • Exit out of the raspi-config tool. Next find your IP address. The command is the same if you used the WiFi setup or have a LAN connected.
  • +
  • Enter the command ip addr.
  • +
+

In the output, if things were configured correctly, there will be one or more relevant lines. Find the device that you used to connect: WiFi will start with something like wlan, and a LAN connection should begin with eth or enp or something similar. In the device section, there is an inet entry. The number located there is your local IP address. It should be in the format 192.168.x.xxx or something similar. Write this down or remember it. You will be using it to log in with an SSH shell.

+

Now the device setup is done. Exit raspi-config and reboot.

+

sudo reboot now

+

Method 2: Use Raspberry Pi Imager advanced options to install without a monitor

+

Here we use the Raspberry Pi Imager to write your media and let the Imager handle your network and SSH setup.

+

If your network cannot locate computers by their hostnames, this method will not easily work for you. In other words, if you cannot ping a network connection with a host name, and you need to use an IP address to ping other network computers, use Method 1 described above. If you are comfortable looking up the OVOS computer's IP address using your router or other network software, Method 2 will still work.
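As a quick check (a sketch assuming you set the hostname to ovos in the next step; some networks resolve hostnames only via mDNS with a .local suffix):

ping ovos
# or, if plain hostnames do not resolve on your network:
ping ovos.local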

+

Instead of selecting "Write", click on the cog in the lower right of the Imager panel to open Raspberry Pi Imager advanced options.

+

In this new panel, check the boxes for:

+
    +
  • "Set hostname" - hostname for your OVOS device goes here.
  • +
  • "Enable SSH" - this is how you will log into your headless Raspberry Pi. Use password authentication.
  • +
  • "Enter a username and password". For this installation we are using ovos as the username.
  • +
  • "Configure wireless LAN" (if you're using a wireless connection). Enter your SSID and wireless password.
  • +
+

Click "Save", then click "Write". Once writing is complete, move the SD card to your OVOS device and boot up.

+

After logging in as ovos, run the command sudo raspi-config to enter the Pi setup utility.

+

We will be running the OVOS environment as a regular user, and we want OVOS to start at boot time, so we want to enable autologin.

+

Enter the System Options page.

+

Enter the Boot / Autologin page.

+
    +
  • Use the second option in the menu, Console Autologin.
  • +
  • This enables OVOS to start up automatically at boot time.
  • +
+

Now we will enable a few interface options. This will allow us to access our device from an SSH shell and prep the Pi for other devices that may be used. Some microphone HATs require SPI or I2C (ReSpeaker, AIY Voice Bonnet, etc.).

+

Go back to the main menu and enter the Interface Options page.

+
    +
  • Enable SPI, and I2C.
  • +
+

Go back to the main menu and enter the Localisation Options page.

+
    +
  • Configure Locale, Timezone, WLAN Country.
  • +
+

Now the device setup is done. Exit raspi-config and reboot.

+

sudo reboot now

+

Step 2: ssh to your OVOS device

+

**From this point on, you should be able to access your device from any SSH terminal.**

+

For a guide on how to do this, see the Raspberry Pi documentation on remote access.

+

From a Linux machine, open a terminal and enter the command ssh ovos@<your-remembered-IP-address> or ssh ovos@<your-hostname>. There will be a warning making sure you want to connect to this device. Enter yes, and when asked, enter the password for ovos that you made earlier in the setup.

+

As a final configuration step, make sure your system is up to date.

+

sudo apt -y update && sudo apt -y upgrade

+

We should be done with the basic setup now. You should have a running Raspberry Pi OS device with the user ovos.

+

Step 3: Install ovos-core

+

There are some recommendations to use a venv for OVOS. This guide DOES NOT do that. The OVOS headless stack is about all a RPi3 can handle. It is assumed that this is a dedicated OVOS device, therefore no venv is required.

+

We will be cloning code from a git repository, so before starting we need to install git.

+

sudo apt install git

+

We will also be installing everything to the user environment instead of system wide. As ovos is the only user, this should be fine.

+

Although not strictly necessary, we assume that we're starting in the ovos home directory.

+

cd ~

+

Clone the repository

+

git clone https://github.com/OpenVoiceOS/raspbian-ovos.git

+

cd raspbian-ovos

+

Run the install script and follow the prompts. It's fine to say yes "Y" to everything.

+

./manual_user_install.sh

+

You should now have a running OVOS device!!

+

Check your installation with

+

systemctl --user status ovos-*

+

The full OVOS stack can take a few minutes to load (especially on a Pi 3), but the processes should all eventually say active (running), except for ovos.service, which should say active (exited).

+

You can also track progress by watching the files in ~/.local/state/mycroft/*.log. Once things slow down you can try saying "Hey Mycroft". In a few seconds (the first time is slow) you should hear a 'ding' from the system. Then say "What day is it". After a delay you should hear information about today's date.

+

Final thoughts

+

Often the audio can take some tuning, which in general is not covered here. PulseAudio should be running; check with systemctl --user status pulseaudio. Each piece of hardware is different to set up; I am sure there is a guide somewhere for your hardware. One thing to mention: this is a full Raspbian install, so installing drivers should work also.

+

Once the OVOS processes are running, if you don't hear a 'ding' after two or three times saying "Hey Mycroft", start up alsamixer and make sure your microphone is recognized and the volume is turned up. At least one USB microphone (mine) defaults to "Auto Gain Control" which needs to be turned off and replaced by turning up the microphone volume. You may also need to turn up the speaker volume.
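A quick microphone sanity check from the shell (a minimal sketch assuming default ALSA devices; record five seconds, then play it back):

arecord -d 5 -f cd test.wav
aplay test.wav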

+

This installation of ovos-core only has a few default skills shipped with it. Check this page for more information on skills.

+

Please enter suggestions and support requests on our raspbian-ovos github page. Thank you!


Installing New Skills

+

There are a few ways to install skills in OVOS. The preferred way is with pip and a setup.py file.

+

Finding Skills

+

Most skills are found throughout github. The official skills can be found with a simple search in the OVOS github page. There are a few other places they can be found: Neon AI has several skills, and a search through github will surely find more.

+

Installing a found skill

+

pip install

+

The preferred method is with pip. If a skill has a setup.py file, it can be installed this way.

+

The syntax is pip install git+<github/repository.git>.

+

ex. pip install git+https://github.com/OpenVoiceOS/skill-ovos-date-time.git should install the ovos-date-time skill

+

They can be installed locally also.

+

Clone the repository

+

git clone https://github.com/OpenVoiceOS/skill-ovos-date-time

+

pip install ./skill-ovos-date-time

+

After installing skills this way, the ovos-skills service needs to be restarted.

+

systemctl --user restart ovos-skills

+

git install

+

This is NOT the preferred method and is here for backward compatibility with the original mycroft-core skills.

+

Skills can also be directly cloned to the skill directory, usually located at ~/.local/share/mycroft/skills/

+

enter the skill directory

+

cd ~/.local/share/mycroft/skills

+

and clone the found skill here with git

+

git clone <github/repository.git>

+

ex. git clone https://github.com/OpenVoiceOS/skill-ovos-date-time.git will install the ovos-date-time skill.

+

A restart of the ovos-skills service is not required when installing this way.

+

Information on Skills

+

Configuring Skills

+

Deprecated

+

The OVOS skills manager is in need of some love, and when official skill stores are created, this will be updated to use the new methods. Until then, this method is NOT recommended and NOT supported. The following is included just as reference.

+

OVOS skills manager

+

Install skills from any appstore!

+

An alternative to mycroft-skills-manager that is not vendor-locked; this means you must use it responsibly!

+

Do not install random skills, different appstores have different policies!

+

Keep in mind any skill you install can modify mycroft-core at runtime, and very likely has root access if you are running on a Raspberry Pi.

+

Supported stores

+
    +
  • OVOS - this one is really a proof of concept for now, stay tuned!
  • +
  • Mycroft Marketplace - the official mycroft skills store, all skills are reviewed by humans!
  • +
  • Pling - the official plasma bigscreen skills store, skills are accepted by default and only removed if flagged as malicious
  • +
  • Andlo's skill list - not a real appstore; this is a web-scraped, automatically generated list of 900+ skills from all over github. There is no review at all, so it may include malicious skills
  • +
+

OpenVoiceOS Skill Manager

+
pip install ovos-skills-manager
+
+

Enable a skill store

+
osm enable --appstore [ovos|mycroft|pling|andlo|all]
+
+

Search for a skill and install it

+
osm install --search
+
+

See more osm commands

+
osm --help
+osm install --help
+
+

More Information


Intents

+

A user can accomplish the same task by expressing their intent in multiple ways. The role of the intent parser is to extract from the user's speech key data elements that specify their intent in more detail. This data can then be passed to other services, such as Skills, to help the user accomplish their intended task.

+

Example: Julie wants to know about today's weather in her current location, which is Melbourne, Australia.

+
+

"hey mycroft, what's today's weather like?"

+

"hey mycroft, what's the weather like in Melbourne?"

+

"hey mycroft, weather"

+
+

Even though these are three different expressions, for most of us they probably have roughly the same meaning. In each case we would assume the user expects OVOS to respond with today's weather for their current location. The role of an intent parser is to determine what this intent is.

+

In the example above, we might extract data elements like:

+
    +
  • weather - we know that Julie wants to know about the weather, but she has not been specific about the type of weather, such as wind, precipitation, snowfall or the risk of fire danger from bushfires. Melbourne, Australia rarely experiences snowfall, but falls under bushfire risk every summer.
  • location - Julie has stipulated her location as Melbourne, but she does not state that she means Melbourne, Australia. How do we distinguish this from Melbourne, Florida, United States?
  • date - Julie has been specific about the timeframe she wants weather data for - today. But how do we know what today means in Julie's timezone? Melbourne, Australia is between 14-18 hours ahead of the United States. We don't want to give Julie yesterday's weather, particularly as Melbourne is renowned for having changeable weather.
  • +
+

OVOS has two separate intent-parsing engines, each with their own strengths. Either can be used in most situations; however, they will process the utterance in different ways.

+

Example-based intents are trained on whole phrases. These intents are generally more accurate; however, they require you to include sample phrases that cover the breadth of ways that a User may ask about something.

+

**Keyword / rule-based** intents look for specific required keywords. They are more flexible, but since these are essentially rule-based, this can result in a lot of false matches; a badly designed intent may totally throw the intent parser off guard. The main advantage of keyword-based intents is the integration with conversational context: they facilitate continuous dialogs. A sketch of both styles follows the list below.

+

OVOS is moving towards a plugin system for intent engines; currently only the default MycroftAI intent parsers are supported:

+
    +
  • Padatious is a light-weight neural network that is trained on whole phrases. You can find the official documentation here
  • +
  • Adapt is a keyword based parser. You can find the official documentation here
  • +
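As a rough sketch of the difference (the intent, vocab and dialog file names here are illustrative, not from any shipped skill): a Padatious intent is defined by a .intent file of whole sample phrases, while an Adapt intent is built from keyword vocabularies in code, in the classic mycroft-core style used elsewhere in these docs.

# weather.intent (Padatious): whole example phrases, one per line
#     what is the weather like
#     what's the weather (in|at) {location}

# Adapt: rule-based keyword matching
from adapt.intent import IntentBuilder
from mycroft import MycroftSkill, intent_handler


class WeatherSkill(MycroftSkill):
    # requires a Weather.voc keyword file; Location.voc is optional
    @intent_handler(IntentBuilder('CurrentWeather')
                    .require('Weather')
                    .optionally('Location'))
    def handle_current_weather(self, message):
        location = message.data.get('Location')
        self.speak_dialog('current.weather', {'location': location})


def create_skill():
    return WeatherSkill()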

Language Detection/Translation Plugins

+

These plugins can be used to detect the language of text and to translate it

+

They are not used internally by ovos-core but are integrated with external tools

+

neon-core also makes heavy use of OPM language plugins

+

List of Language plugins

| Plugin                            | Detect | Tx  | Offline | Type              |
|-----------------------------------|--------|-----|---------|-------------------|
| neon-lang-plugin-cld2             | yes    | no  | yes     | FOSS              |
| neon-lang-plugin-cld3             | yes    | no  | yes     | FOSS              |
| neon-lang-plugin-langdetect       | yes    | no  | yes     | FOSS              |
| neon-lang-plugin-fastlang         | yes    | no  | yes     | FOSS              |
| neon-lang-plugin-lingua_podre     | yes    | no  | yes     | FOSS              |
| neon-lang-plugin-libretranslate   | yes    | yes | no      | API (self hosted) |
| neon-lang-plugin-apertium         | no     | yes | no      | API (self hosted) |
| neon-lang-plugin-amazon_translate | yes    | yes | no      | API (key)         |
| neon-lang-plugin-google_translate | yes    | yes | no      | API (key)         |
+



Open Linguistika

+

Open Linguistika is a tool to allow Mycroft Skill developers working on GUIs to easily translate their GUIs to other languages.

+

Mycroft's GUI, the UI layer whose QML files you can find under the UI directory of skills, is based on Qt. Mycroft GUI uses Qt's translation mechanism to translate GUIs to other languages.

+

Getting your skill's GUI translated and ready for other languages involves several manual steps, from running Qt tools like lupdate against each QML UI file for each translatable language, to running Qt's lrelease tool for specific language targets to compile a language the Qt environment can understand. To make your developer experience smarter and easier, the OpenVoiceOS team is introducing an all-in-one toolkit for GUI language translations.

+

The Open Linguistika toolkit allows developers to use auto-translate from various supported translator providers, and additionally support more languages, with the possibility for manual translations without having to go through the different Qt tools and command chain required to manually support a skill GUI for a different language.

+

As a GUI skill developer, the only know-how you need is how to add the translation calls to your skill's QML files. Developers can get more information about adding them in Internationalization and Localization with Qt Quick | Qt 6.3.

+

The “TLDR” version is that for every hard-coded string in your QML UI skill file you need to decorate your strings with the qsTr() decorator and your model list elements with the QT_TR_NOOP() decorator.
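A minimal sketch of what that looks like in a skill QML file (the strings and ids here are purely illustrative):

import QtQuick 2.12

Item {
    // Plain UI strings: wrap them in qsTr() so lupdate can extract them
    Text { text: qsTr("Good morning") }

    // Strings in list models cannot call qsTr() directly; mark them with
    // QT_TR_NOOP() so they are extracted, then translate at the point of use
    ListModel {
        id: flavorsModel
        ListElement { label: QT_TR_NOOP("Vanilla") }
    }
    // e.g. a delegate would display the element with: text: qsTr(model.label)
}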

+

Open Linguistika, when installed by default on your distribution of choice, currently supports 6 European languages and 2 auto-translation providers.

+

The tool provides extensibility through its JSON configuration interface: using a simple JSON language-addition mechanism, you can extend the tool to support additional languages for your skill's UI. You can read more about adding additional languages on the tool's GitHub repository.

+

How-To-Use Demo: Watch the video


ovos-image-arch-recipe

+

Make a Manjaro-based OpenVoiceOS image

+

source code: https://github.com/OpenVoiceOS/ovos-image-arch-recipe/

+

Building

+

Docker Automated Image Building

+

The included Dockerfile can be used to build a default image in a Docker environment.

+

The following dependencies must be installed on the build system before running the container:

+ +

First, create the Docker container:

+
docker build . -t ovos-image-builder
+
+

Then, run the container to create an OVOS image. Set CORE_REF to the branch of ovos-core that you want to build and RECIPE_REF to the branch of ovos-image-recipe you want to use. Set MAKE_THREADS to the number of threads to use for make processes.

+
docker run \
+-v /home/${USER}/output:/output:rw \
+-v /run/systemd/resolve:/run/systemd/resolve \
+-e CORE_REF=${CORE_REF:-dev} \
+-e RECIPE_REF=${RECIPE_REF:-master} \
+-e MAKE_THREADS=${MAKE_THREADS:-4} \
+--privileged \
+--network=host \
+--name=ovos-image-builder \
+ovos-image-builder
+
+

The entire build process will generally take several hours; it takes 1-2 hours on a build server with 2x Xeon Gold 5118 CPUs (48T total).

+

Interactive Image Building

+

The scripts in the automation directory are available to help automate building a default image. +For building an image interactively:

+
bash automation/prepare.sh
+bash /tmp/run_scripts.sh
+
+

The below documentation describes how to manually build an image using the individual scripts in this repository.

+

Steps

+

Getting Started

+

The scripts and overlay files in this repository are designed to be applied to a base image as the root user. It is recommended to apply these scripts to a clean base image. Instructions are available at opensource.com.

+
+

Note: The GUI shell is not installable under some base images

+
+

For each step except boot_overlay, the directory corresponding to the step should be copied to the mounted image and the script run from a terminal chroot-ed into the image. If running scripts from a booted image, they should be run as root.

+

Preparation

+

From the host system where this repository is cloned, running prepare.sh <base_image> will copy boot overlay files, mount the image, mount DNS resolver config from the host system, copy all other image overlay files to /tmp, and chroot into the image. From here, you can run any/all of the following scripts to prepare the image before cleaning up.

+

core_configuration

+

Configures user accounts and base functionality for RPi. ovos user is created with proper permissions here.

+

At this stage, a booted image should resize its file system to fill the drive it is flashed to. Local login and ssh connections should use ovos/ovos to authenticate and be prompted to change password on login.

+

network_manager

+

Adds Balena wifi-connect to enable a portal for connecting the Pi device to a Wi-Fi network.

+

A booted image will now be ready to connect to a network via SSID OVOS.

+

sj201

+

For SJ201 board support, the included script will build/install drivers, add required overlays, install required system packages, and add a systemd service to flash the SJ201 chip on boot. This will modify pulseaudio and potentially overwrite any previous settings.

+
+

Note: Running this script grants GPIO permissions to the gpio group. Any user that interfaces with the SJ201 board should be a member of the gpio group. Group permissions are not modified by this script.

+
+

Audio devices should now show up with pactl list. Audio devices can be tested in the image by recording a short audio clip and playing it back.

+
parecord test.wav
+paplay test.wav
+
+

embedded_shell

+

Installs ovos-shell and mycroft-gui-app. Adds and enables ovos-gui.service to start the shell on system boot.

+

The image should now boot to the GUI shell.

+

ovos_core

+

Installs ovos-core and dependencies. Configures services for core modules.

+

At this stage, the image is complete and when booted should start OVOS.

+

dashboard

+

Installs the OVOS Dashboard and a service to start the dashboard from the GUI.

+

From the GUI Settings -> Developer Settings menu, Enable Dashboard will now start the dashboard for remote access to device diagnostics.

+

camera

+

Installs libcamera and other dependencies for using a CSI camera.

+

The default camera skill can be used to take a photo; libcamera-apps are also installed for testing via CLI.

+

splash_screen

+

Enables a custom splash screen and disables on-device TTY at boot.

+

On boot, a static image should be shown until the GUI Shell starts.

+

Clean Up

+

cleanup.sh removes any temporary files from the mounted image before unmounting it. After running cleanup.sh, the image is ready to burn to a drive and boot.


Playing music

+

For playing music (and video, as discussed in the next chapter), OpenVoiceOS uses OCP (OpenVoiceOS Common Play), which is basically a full-fledged multimedia player in its own right, designed around open standards like MPRIS and with the vision of being fully integrated within the OpenVoiceOS software stack.

+

Skills designed for OCP provide search results for OCP (think of them as media providers/catalogs/scrapers); OCP will play the best search result for you. OpenVoiceOS comes with a few OCP skills pre-installed, but more can be installed just like any other OVOS skill.

+

You can find more OCP skills in the awesome-ocp-skills list

+

Youtube Music

+

A voice assistant with smart speaker functionality should be able to play music straight out of the box. For that reason the buildroot edition of OpenVoiceOS comes with the Youtube Music OCP skill pre-installed. Just asking it to play something will start playback from Youtube, assuming the requested song is present on Youtube, of course.

+
+

Hey Mycroft, play disturbed sound of silence

+
+

This should just start playing, utilizing OCP as shown below. More information about the full functionality of OCP can be found in its own chapter.

+

Play the news

+

Nothing is more relaxing, after waking up and cancelling the alarm set on your OpenVoiceOS device, than listening to your favorite news station while drinking some coffee (no, OpenVoiceOS cannot make you that coffee yet).

+
+

Hey Mycroft, play the BBC news

+
+

+

Some more features that come out of the box

+

The whole OCP framework has some benefits and features that are not skill specific, such as "Playlists" and a view of the search results. You can access those by swiping to the right when something is playing.

+

+

Homescreen widget

+

The homescreen skill that comes pre-installed with OpenVoiceOS also comes with a widget for the OCP framework.

+


Playing video

+

Although the screen used on your OpenVoiceOS device might be small, the whole OCP media playback framework does support video.

+

You can find video OCP skills in the same awesome-ocp-skills list. The fourth column, "playback type", shows which type of player is used for each specific skill.

+

If you use a skill that utilizes the "video player", the screen below will be shown on your OpenVoiceOS device's display during playback.

+


Prompts

+

Here we look at how to implement the most common types of prompts. For more information on conversation design see the Voice User Interface Design Guidelines.

+

Get Response

+

Any Skill can request a response from the user - making a statement or asking a question before the microphone is activated to record the User's response.

+

The base implementation of this is the get_response() method.

+

To see it in action, let's create a simple Skill that asks the User what their favorite flavor of ice cream is.

+
from mycroft import MycroftSkill, intent_handler
+
+
+class IceCreamSkill(MycroftSkill):
+    @intent_handler('set.favorite.intent')
+    def handle_set_favorite(self):
+        favorite_flavor = self.get_response('what.is.your.favorite.flavor')
+        self.speak_dialog('confirm.favorite.flavor', {'flavor': favorite_flavor})
+
+
+def create_skill():
+    return IceCreamSkill()
+
+

In this Skill we have used get_response() and passed it the name of our dialog file 'what.is.your.favorite.flavor.dialog'. This is the simplest form of this method. It will speak dialog from the given file, then activate the microphone for 3-10 seconds allowing the User to respond. The transcript of their response will then be assigned to our variable favorite_flavor. To confirm that we have heard the User correctly we then speak a confirmation dialog passing the value of favorite_flavor to be spoken as part of that dialog.

+

Optional Arguments

+

The get_response() method also takes the following optional arguments, which are used together in the sketch after this list:

+
    +
  • data (dict) - used to populate the dialog file, just like speak_dialog()
  • +
  • validator (function) - returns a boolean to define whether the response meets some criteria for success
  • +
  • on_fail (function) - returns a string that will be spoken if the validator returns False
  • +
  • num_retries (int) - number of times the system should repeat the question to get a successful result
  • +
+
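Putting these together, here is a sketch of the earlier flavor prompt with validation (the dialog file names and spoken strings are illustrative):

from mycroft import MycroftSkill, intent_handler


class IceCreamSkill(MycroftSkill):
    @intent_handler('set.favorite.intent')
    def handle_set_favorite(self):
        def is_flavor(utterance):
            # validator: only accept responses naming a known flavor
            return any(f in utterance for f in ['vanilla', 'chocolate', 'mint'])

        def on_fail(utterance):
            # returns the string spoken when the validator rejects a response
            return 'Sorry, that is not a flavor I know.'

        favorite_flavor = self.get_response('what.is.your.favorite.flavor',
                                            validator=is_flavor,
                                            on_fail=on_fail,
                                            num_retries=2)
        if favorite_flavor:
            self.speak_dialog('confirm.favorite.flavor',
                              {'flavor': favorite_flavor})


def create_skill():
    return IceCreamSkill()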

Yes / No Questions

+

ask_yesno() checks if the response contains "yes" or "no" like phrases.

+

The vocab for this check is sourced from the Skill's yes.voc and no.voc files (if they exist), as well as mycroft-core's defaults (contained within mycroft-core/res/text/en-us/yes.voc). A longer phrase containing the required vocab is considered successful, e.g. both "yes" and "yeah that would be great thanks" would be considered a successful "yes".

+

If "yes" or "no" responses are detected, then the method will return the string "yes" or "no". If the response does not contain "yes" or "no" vocabulary then the entire utterance will be returned. If no speech was detected indicating the User did not respond, then the method will return None.

+

Let's add a new intent to our IceCreamSkill to see how this works.

+
from mycroft import MycroftSkill, intent_handler
+
+
+class IceCreamSkill(MycroftSkill):
+    @intent_handler('do.you.like.intent')
+    def handle_do_you_like(self):
+        likes_ice_cream = self.ask_yesno('do.you.like.ice.cream')
+        if likes_ice_cream == 'yes':
+            self.speak_dialog('does.like')
+        elif likes_ice_cream == 'no':
+            self.speak_dialog('does.not.like')
+        else:
+            self.speak_dialog('could.not.understand')
+
+
+def create_skill():
+    return IceCreamSkill()
+
+

In this example we have asked the User if they like ice cream. We then speak different dialog whether they respond yes or no. We also speak some error dialog if neither yes nor no are returned.

+

Providing a list of options

+

ask_selection() provides a list of options to the User for them to select from. The User can respond with either the name of one of these options or select with a numbered ordinal, e.g. "the third".

+

This method automatically manages fuzzy matching of the user's response against the list of options provided.

+

Let's jump back into our IceCreamSkill to give the User a list of options to choose from.

+
from mycroft import MycroftSkill, intent_handler
+
+
+class IceCreamSkill(MycroftSkill):
+    def __init__(self):
+        MycroftSkill.__init__(self)
+        self.flavors = ['vanilla', 'chocolate', 'mint']
+
+    @intent_handler('request.icecream.intent')
+    def handle_request_icecream(self):
+        self.speak_dialog('welcome')
+        selection = self.ask_selection(self.flavors, 'what.flavor')
+        self.speak_dialog('coming.right.up', {'flavor': selection})
+
+
+def create_skill():
+    return IceCreamSkill()
+
+

In this example we first speak some welcome.dialog. The list of flavors is then spoken, followed by the what.flavor.dialog. Finally, we confirm the User's selection by speaking coming.right.up.dialog.

+

Optional arguments

+

There are two optional arguments for this method.

+

min_conf (float) defines the minimum confidence level for fuzzy matching the user's response against the list of options. numeric (bool), if set to True, will speak the options as a numbered list, e.g. "One, vanilla. Two, chocolate. Or three, mint".
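For example, inside the handler above (a sketch; the confidence value is illustrative):

selection = self.ask_selection(self.flavors, 'what.flavor',
                               min_conf=0.6, numeric=True)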

+

Returning responses to the intent parser

+

So far we have looked at ways to prompt the User, and return their response directly to our Skill. It is also possible to speak some dialog, and activate the listener, directing the response back to the standard intent parsing engine. We may do this to let the user trigger another Skill, or because we want to make use of our own intents to handle the response.

+

To do this, we use the expect_response parameter of the speak_dialog() method.

+
from mycroft import MycroftSkill, intent_handler
+
+
+class IceCreamSkill(MycroftSkill):
+    def __init__(self):
+        MycroftSkill.__init__(self)
+        self.flavors = ['vanilla', 'chocolate', 'mint']
+
+    @intent_handler('request.icecream.intent')
+    def handle_request_icecream(self):
+        self.speak_dialog('welcome')
+        selection = self.ask_selection(self.flavors, 'what.flavor')
+        self.speak_dialog('coming.right.up', {'flavor': selection})
+        self.speak_dialog('now.what', expect_response=True)
+
+
+def create_skill():
+    return IceCreamSkill()
+
+

Here we have added a new dialog after confirming the User's selection. We may use it to tell the User other things they can do with their OVOS device while they enjoy their delicious ice cream.


QML Design Guidelines

+

An introduction to QML and additional documentation are available here

+

The Mycroft-GUI framework provides you with some base delegates you should use when designing your QML GUI. The base delegates provide a basic presentation layer for your skill, with property assignments that help you set up background images and background dim, giving you the control you need for rendering an experience.

+

Before we dive deeper into the design guidelines, let's look at some concepts that a GUI developer should learn about:

+

Units & Theming

+

Units:

+

Mycroft.Units.GridUnit is the fundamental unit of space that should be used for all sizing inside the QML UI, expressed in pixels. Each GridUnit is predefined as 16 pixels

+
// Usage in QML Components example
+width: Mycroft.Units.gridUnit * 2 // 32px Wide
+height: Mycroft.Units.gridUnit // 16px Tall
+
+

Theming:

+

OVOS Shell uses a custom Kirigami Platform Theme plugin to provide global theming to all our skills and user interfaces, which also allows our GUIs to be fully compatible with the system themes on platforms that are not running the OVOS Shell.

+

Kirigami Theme and Color Scheme guide is extensive and can be found here

+

OVOS GUIs developed to follow the color scheme depend on only a subset of available colors, mainly:

+
    +
  1. Kirigami.Theme.backgroundColor = Primary Color (Background Color: this will always be a dark palette or light palette depending on the dark or light chosen color scheme)
  2. Kirigami.Theme.highlightColor = Secondary Color (Accent Color: this will always be a standout palette that defines the theme's dominating color and can be used for buttons, cards, borders, highlighted text, etc.)
  3. Kirigami.Theme.textColor = Text Color (this will always be an opposite palette to the selected primary color)
+

QML Delegate Design Best Practise

+

Let's look at the image and QML example below; this is a representation of the Mycroft Delegate:

+
    +
  1. When designing your first QML file, it is important to note the red triangles in the above image; these triangles represent the margin from the screen edge the GUI needs to be designed within. These margins ensure your GUI content does not overlap with features like edge lighting and menus on the platforms that support them, like OVOS-Shell.
  2. The content items and components all utilize the selected color scheme, where black is the primary background color, red is our accent color and white is our contrasting text color.
+

Let's look at this in QML:

+
import ...
+import Mycroft 1.0 as Mycroft
+
+Mycroft.Delegate {
+    skillBackgroundSource: sessionData.exampleImage
+    leftPadding: 0
+    rightPadding: 0
+    topPadding: 0
+    bottomPadding: 0
+
+    Rectangle {
+        anchors.fill: parent
+        // Setting margins that need to be left for the screen edges
+        anchors.margins: Mycroft.Units.gridUnit * 2
+
+        //Setting a background dim using our primary theme / background color on top of our skillBackgroundSource image for better readability and contrast
+        color: Qt.rgba(Kirigami.Theme.backgroundColor.r, Kirigami.Theme.backgroundColor.g, Kirigami.Theme.backgroundColor.b, 0.3)
+
+        Kirigami.Heading {
+            level: 2
+            text: "An Example Pie Chart"
+            anchors.top: parent.top
+            anchors.left: parent.left
+            anchors.right: parent.right
+            height: Mycroft.Units.gridUnit * 3
+            // Setting the text color to always follow the color scheme for this item displayed on the screen
+            color: Kirigami.Theme.textColor
+        }
+
+        PieChart {
+            anchors.centerIn: parent
+            pieColorMinor: Kirigami.Theme.backgroundColor // As in the image above the minor area of the pie chart uses our primary color
+            pieColorMid: Kirigami.Theme.highlightColor // As in the image above the middle area is assigned the highlight or our accent color
+            pieColorMajor: Kirigami.Theme.textColor // As in the image above the major area is assigned the text color
+        }
+    }
+}
+
+

QML Delegate Multi Platform and Screen Guidelines

+

OVOS skill GUIs are designed to be multi-platform and screen friendly; to support this we always try to support both horizontal and vertical displays. Let's look at an example and a general approach to writing multi-resolution-friendly UIs.

+

Let's look at the images below, which represent a Delegate as seen on a horizontal screen:

+

Let's look at the images below, which represent a Delegate as seen on a vertical screen:

+
    +
  1. When designing for different screens it is preferred to utilize Grids, GridLayouts and GridViews; this allows easier content placement, as one can control the number of columns and rows displayed on the screen.
  2. It is also recommended to use Flickables when you believe your content is not going to fit on the screen; this allows for content to always be scrollable. To make it easier to design scrollable content, Mycroft GUI provides you with a ready-to-use Mycroft.ScrollableDelegate.
  3. It is also preferred to use the width vs height comparison on the root delegate item to know when the screen should be using a vertical layout vs horizontal layout.
+

Let's look at this in QML:

+
import ...
+import Mycroft 1.0 as Mycroft
+
+Mycroft.Delegate {
+    id: root
+    skillBackgroundSource: sessionData.exampleImage
+    leftPadding: 0
+    rightPadding: 0
+    topPadding: 0
+    bottomPadding: 0
+    property bool horizontalMode: width >= height  ? 1 : 0 // Using a ternary operator to detect if width of the delegate is greater than the height, which provides if the delegate is in horizontalMode
+
+    Rectangle {
+        anchors.fill: parent
+        // Setting margins that need to be left for the screen edges
+        anchors.margins: Mycroft.Units.gridUnit * 2
+
+        //Setting a background dim using our primary theme / background color on top of our skillBackgroundSource image for better readability and contrast
+        color: Qt.rgba(Kirigami.Theme.backgroundColor.r, Kirigami.Theme.backgroundColor.g, Kirigami.Theme.backgroundColor.b, 0.3)
+
+        Kirigami.Heading {
+            level: 2
+            text: "An Example Pie Chart"
+            // Setting the text color to always follow the color scheme
+            color: Kirigami.Theme.textColor
+        }
+
+        GridLayout {
+            id: examplesGridView
+            // Checking if we are in horizontal mode, we should display two columns to display the items in the image above, or if we are in vertical mode, we should display a single column only
+            columns: root.horizontalMode ? 2 : 1 
+
+            Repeater {
+                model: examplesModel
+                delegate: ExamplesDelegate {
+                    ...
+                }
+            }
+        }
+    }
+}
+

from downloading image

+
    +
  • Buildroot SSH Details: Username: mycroft | password: mycroft
  • Manjaro SSH Details for Respeaker Image: Username: mycroft | password: 12345
  • Manjaro SSH Details for Mark-2/DevKit Image: Username: ovos | password: ovos
  • buildroot
  • manjaro
+

From Backend

+

Admin Api (personal backend only!)

+

Since the local backend does not provide a web UI, an admin API can be used to manage your devices.

+

A backend is a service that provides your device with additional tools to function; these could range from managing your skill settings to configuring certain aspects of your device. OpenVoiceOS is all about choice. We currently support 3 backend types.

+

Select Backend

+

Selene Backend

+

The mycroft backend connects your device to mycroft servers and allows you to use their web interface to manage your device; this requires pairing, and all your speech-to-text queries are processed via this backend.

+

Personal Backend

+

The personal backend is a choice for users who would like to self-host their own backend on the device or in their personal home network; this backend requires additional setup but also provides a cool web interface to configure your device and manage your settings.

+

No Backend

+

Open Voice OS by default comes with no backend; we do not really believe you need a backend to do anything. This is the best choice for your device if you wish to run completely locally. We provide you with a whole list of speech-to-text and text-to-speech online and offline options to choose from, and all communication to the outside world happens from your own device without data sharing.

+

From First Run

+

Setting up your device at first run.

+

At first run of your OpenVoiceOS device a first run setup wizard is started that guides you through the process of setting up your device.

+

Welcome Screen


Using a Personal Backend

+

This is considered an advanced function and is unnecessary for normal usage.

+

The default for ovos-core is no backend.

+

You can go without a backend, run fully offline, or use our free proxy for API services with no accounts.

+

This setup requires there to be a running personal-backend. Refer to this Github page for details.

+

GUI Configuration

+

If your installation has skill-ovos-setup installed, you will have a GUI available to perform the setup of your device to use the personal backend that you set up.

+

NOTE it is NOT advised to install this skill manually, as it can cause issues if OVOS was not configured to use it. Skip to the Manual Configuration section for headless devices or if this skill was not pre-installed.

+

On first boot, you will be presented with a screen to choose a backend option.

+

NOTE The Selene backend shown in the image is no longer available as an option

+

+

Select Personal Backend from the options. The next screen will allow you to enter the IP address of your personal backend server.

+

+

Enter the IP address and Port number of your personal backend

+

eg. 192.168.1.xxx:6712

+

If everything is entered correctly, and your backend is running, you should see a screen showing that your connection was successful. You should now be able to configure your device with your backend.

+

Manual Configuration

+

This section requires shell access to the device, either with direct connection, or SSH.

+

The local file ~/.config/mycroft/mycroft.conf contains local settings that the user has specified. This file may not exist, and will have to be created to continue.

+

Open the file to edit it

+

nano ~/.config/mycroft/mycroft.conf

+

Add this section to your file. This file must be in valid JSON or YAML format.

+
{
+    "server": {
+        "url": "http://<your_server_IP_address:port_number>",
+        "version": "v1",
+        "update": true,
+        "metrics": true
+        }
+}
+
+

You will also have to make sure there is not an identity file already configured

+

rm ~/.config/mycroft/identity/identity2.json

+

Restart your device, and you should be connected to your backend.


Setting up your Wi-Fi network connection

+

Depending on which image you downloaded, you will be greeted by the network configuration screen with either one or two options. The buildroot image supports setting up the network via two options:

+

+
    +
  • On a mobile device
  • +
  • On the OpenVoiceOS device itself.
  • +
+

You can also skip this step to configure it later, or choose to never be asked again if you want your device to run fully offline. (Bear in mind you need to configure your device to use local STT and TTS engines, and obviously asking your device things that require internet will not work.)

+

On Mobile Setup

+

Choosing this option will create a temporary open network (hotspot) called "OVOS", to which you can connect from your mobile device.

+

+

On your mobile device go into Settings -> Wi-Fi Settings and the "OVOS" open network will appear in its list.

+

+

Connect your device to the "OVOS" network and a webpage will open to configure the Wi-Fi network on your OpenVoiceOS device. (NOTE: On newer mobile operating systems the captive portal detection has changed and the website will not automatically be opened. If this is the case you can open a browser manually and go to http://172.16.127.1.) The following webpage will be shown:

+

+

Select your Wi-Fi network from the list, insert your password and press the "Connect" button.

+

+

If everything went fine, you will soon see the green "connected" screen on your OpenVoiceOS device.

+

On Device Setup

+

Choosing this option allows you to set up your Wi-Fi network straight on your OpenVoiceOS device. If selected a screen with the available networks will be shown on your OpenVoiceOS device.

+

+

Select your network from the list and tap / click on it to allow you to insert your password. If you have a touch screen, an on-screen keyboard will appear when you tap the password field. If not, use a keyboard.

+

+

When you have inserted your password, click / tap the connect button and after a short connecting animation, if all went fine you will see the green "connected" screen on your OpenVoiceOS device.

+

(Re)configure your network from the drop-down menu

+

If you have skipped the Wi-Fi network setup before, or you just want to reconfigure your network. On the homescreen of your OpenVoiceOS device swipe down the top menu and click the "Wi-Fi" icon. This brings you to the same on-device configuration screen.

+

+

From here you can select another network or click the configuration icon on the right of connected network for details or to remove it from the configured networks.

+


Setting up your device at first run.

+

At first run of your OpenVoiceOS device a first run setup wizard is started that guides you through the process of setting up your device.

+

Welcome Screen

+

Selecting Your Backend

+

What is a backend?

+

A backend is a service that provides your device with additional tools to function; these could range from managing your skill settings to configuring certain aspects of your device. OpenVoiceOS is all about choice. We currently support 3 backend types.

+

Select Backend

+

Selene Backend

+

The mycroft backend connects your device to mycroft servers and allows you to use their web interface to manage your device; this requires pairing, and all your speech-to-text queries are processed via this backend.

+

Personal Backend

+

The personal backend is a choice for users who would like to self-host their own backend on the device or in their personal home network; this backend requires additional setup but also provides a cool web interface to configure your device and manage your settings.

+

No Backend

+

Open Voice OS by default comes with no backend; we do not really believe you need a backend to do anything. This is the best choice for your device if you wish to run completely locally. We provide you with a whole list of speech-to-text and text-to-speech online and offline options to choose from, and all communication to the outside world happens from your own device without data sharing.

+

Mycroft A.I. - Selene Backend

+

The Pairing Process

+

Selene

+

The GUI will now show you a pairing code. This pairing code needs to be entered on the mycroft backend, which you can find online at https://account.mycroft.ai.

+
    +
  • Create an account using your email id on https://account.mycroft.ai
  • +
  • Head over to https://account.mycroft.ai/devices/add
  • +
  • Enter the pairing code, a unique device name, and location settings
  • +
  • Click next on the web interface, your device should now be paired
  • +
+

No backend - No calling home

+

Select a Text To Speech (TTS) Engine: A text-to-speech (TTS) system converts normal language text into speech; select an engine from the list.

+

TTS

+

Select a Speech To Text (STT) Engine: A speech-to-text (STT) system converts human speech to text; select an engine from the list.

+

STT

+

Personal backend - Host your own

+

Personal Backend

+

The personal backend is a reverse-engineered alternative to Selene and requires the backend to be hosted locally.

+
    +
  • Install and configure your personal backend; information is available at: https://github.com/OpenVoiceOS/ovos-personal-backend
  • +
  • The GUI on the device will display a setup page where you enter the host address of your hosted backend
  • +
  • Pairing with your personal backend happens automatically once you hit the confirm button with the correct host address
  • +

OVOS Shell

+

OVOS-shell is the OpenVoiceOS client implementation of the mycroft-gui library used in our embedded device images

+

Plugins

+

OVOS-shell is tightly coupled to PHAL; the following companion plugins should be installed if you are using ovos-shell:

+ +

Alternative Clients

+

Other distributions may offer alternative implementations such as:

+ +

Configuration

+

The Shell can be configured in a few ways.

+

GUI

+

Display settings

+

+

Color Theme editor

+

+

Shell Options

+

~/.config/OpenvoiceOS/OvosShell.conf can be edited to change shell options that may also be changed via the UI. An example config would look like:

+
[General]
+fakeBrightness=1
+menuLabels=true
+
+

Themes

+

Shell themes can be included in /usr/share/OVOS/ColorSchemes/ or ~/.local/share/OVOS/ColorSchemes/ in json format. Note that colors should include an alpha value (usually FF).

+
{
+  "name": "Neon Green",
+  "primaryColor": "#FF072103",
+  "secondaryColor": "#FF2C7909",
+  "textColor": "#FFF1F1F1"
+}
+

Speech Client

+

The speech client is responsible for loading STT, VAD and Wake Word plugins

+

Speech is transcribed into text and forwarded to the skills service

+

Hotwords

+

OVOS allows you to load any number of hot words in parallel and trigger different actions when they are detected.

+

Each hotword can do one or more of the following:

+
    +
  • trigger listening, also called a wake_word
  • +
  • play a sound
  • +
  • emit a bus event
  • +
  • take ovos-core out of sleep mode, also called a wakeup_word or standup_word
  • +
  • take ovos-core out of recording mode, also called a stop_word
  • +
+

To add a new hotword, add its configuration under the "hotwords" section.

+

By default, all hotwords are disabled unless you set "active": true. Under the "listener" setting a main wake word and stand-up word are defined; those will be automatically enabled unless you set "active": false. Disabling them is usually not desired unless you are looking to completely disable wake word usage.

+
"listener": {
+    // Default wake_word and stand_up_word will be automatically set to active
+    // unless explicitly disabled under "hotwords" section
+    "wake_word": "hey mycroft",
+    "stand_up_word": "wake up"
+},
+
+"hotwords": {
+    "hey mycroft": {
+        "module": "ovos-ww-plugin-precise",
+        "version": "0.3",
+        "model": "https://github.com/MycroftAI/precise-data/raw/models-dev/hey-mycroft.tar.gz",
+        "phonemes": "HH EY . M AY K R AO F T",
+        "threshold": 1e-90,
+        "lang": "en-us",
+        "listen": true,
+        "sound": "snd/start_listening.wav"
+    },
+    "wake up": {
+        "module": "ovos-ww-plugin-pocketsphinx",
+        "phonemes": "W EY K . AH P",
+        "threshold": 1e-20,
+        "lang": "en-us",
+        "wakeup": true
+    }
+},
+
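The same section can also define hotwords that do not trigger listening at all, for example one that just plays a sound and emits a bus message. The sketch below is illustrative only: the hotword name, sample phrase and event name are made up, and you should check the ovos-core documentation for the exact keys your version and wake word plugin support.

"hotwords": {
    "lights on": {
        "module": "ovos-ww-plugin-vosk",
        // sample phrase for the vosk plugin (assumed syntax)
        "samples": ["lights on"],
        // do not trigger STT, just react
        "listen": false,
        "sound": "snd/acknowledge.wav",
        // bus message emitted on detection (assumed key)
        "bus_event": "example.lights.on",
        "active": true
    }
}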
+

STT

+

Two STT plugins may be loaded at once; if the primary plugin fails for some reason, the second plugin will be used.

+

This allows you to have a lower-accuracy offline model as a fallback to account for internet outages, ensuring your device never becomes fully unusable.

+
"stt": {
+    "module": "ovos-stt-plugin-server",
+    "fallback_module": "ovos-stt-plugin-vosk",
+    "ovos-stt-plugin-server": {"url": "https://stt.openvoiceos.com/stt"}
+},
+
+

Listener

+

You can modify microphone settings and enable additional features, such as wake word / utterance recording and uploading, under the listener section.

+
"listener": {
+    "sample_rate": 16000,
+
+    // if enabled the noise level is saved to an ipc file, useful for
+    // debugging if the microphone is working, but writes a lot to disk;
+    // recommended that you set "ipc_path" to a tmpfs
+    "mic_meter_ipc": true,
+
+    // Set 'save_path' to configure the location of files stored if
+    // 'record_wake_words' and/or 'save_utterances' are set to 'true'.
+    // WARNING: Make sure that user 'mycroft' has write-access on the
+    // directory!
+    // "save_path": "/tmp",
+    // Set 'record_wake_words' to save a copy of wake word triggers
+    // as .wav files under: /'save_path'/mycroft_wake_words
+    "record_wake_words": false,
+    // Set 'save_utterances' to save each sentence sent to STT -- by default
+    // they are only kept briefly in-memory.  This can be useful for
+    // debugging or other custom purposes.  Recordings are saved
+    // under: /'save_path'/mycroft_utterances/<TIMESTAMP>.wav
+    "save_utterances": false,
+    "wake_word_upload": {
+      "disable": false,
+      "url": "https://training.mycroft.ai/precise/upload"
+    },
+
+    // Override as SYSTEM or USER to select a specific microphone input instead of
+    // the PortAudio default input.
+    //   "device_name": "somename",  // can be regex pattern or substring
+    //       or
+    //   "device_index": 12,
+
+    // Stop listening to the microphone during playback to prevent accidental triggering
+    // This is enabled by default, but instances with good microphone noise cancellation
+    // can disable this to listen all the time, allowing 'barge in' functionality.
+    "mute_during_output" : true,
+
+    // How much (if at all) to 'duck' the speaker output during listening.  A
+    // setting of 0.0 will not duck at all.  A 1.0 will completely mute output
+    // while in a listening state.  Values in between will lower the volume
+    // partially (this is optional behavior, depending on the enclosure).
+    "duck_while_listening" : 0.3,
+
+    // In milliseconds
+    "phoneme_duration": 120,
+    "multiplier": 1.0,
+    "energy_ratio": 1.5,
+
+    // Settings used by microphone to set recording timeout
+    "recording_timeout": 10.0,
+    "recording_timeout_with_silence": 3.0,
+
+    // instant listen is an experimental setting, it removes the need for
+    // the pause between "hey mycroft" and starting to speak the utterance,
+    // however it might slightly downgrade STT accuracy depending on engine used
+    "instant_listen": false
+},
+
+

VAD

+

Voice Activity Detection is used by the speech client to determine when a user has stopped speaking, which indicates the voice command is ready to be executed. Several VAD strategies are supported.

+
"listener": {
+
+    // Voice Activity Detection is used to determine when speech ended
+    "VAD": {
+        // silence_method defines the main vad strategy
+        // valid values:
+        //   VAD_ONLY - Only use vad
+        //   RATIO_ONLY - Only use max/current energy ratio threshold
+        //   CURRENT_ONLY - Only use current energy threshold
+        //   VAD_AND_RATIO - Use vad and max/current energy ratio threshold
+        //   VAD_AND_CURRENT - Use vad and current energy threshold
+        //   ALL - Use vad, max/current energy ratio, and current energy threshold
+        // NOTE: if a vad plugin is not available method will fallback to RATIO_ONLY
+        "silence_method": "vad_and_ratio",
+        // Seconds of speech before voice command has begun
+        "speech_seconds": 0.1,
+        // Seconds of silence before a voice command has finished
+        "silence_seconds": 0.5,
+        // Seconds of audio to keep before voice command has begun
+        "before_seconds": 0.5,
+        // Minimum length of voice command (seconds)
+        // NOTE: max_seconds uses recording_timeout listener setting
+        "min_seconds": 1,
+        // Ratio of max/current energy below which audio is considered speech
+        "max_current_ratio_threshold": 2,
+        // Energy threshold above which audio is considered speech
+        // NOTE: this is dynamic, only defining start value
+        "initial_energy_threshold": 1000.0,
+        // vad module can be any plugin, by default it is not used
+        // recommended plugin: "ovos-vad-plugin-silero"
+        "module": "",
+        "ovos-vad-plugin-silero": {"threshold": 0.2},
+        "ovos-vad-plugin-webrtcvad": {"vad_mode": 3}
+    }
+},
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/spotifyd/index.html b/unused/spotifyd/index.html new file mode 100644 index 00000000..71e207c1 --- /dev/null +++ b/unused/spotifyd/index.html @@ -0,0 +1,287 @@ + + + + + + + + Spotifyd - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Spotifyd

+

Spotifyd is able to advertise itself on the network without credentials, using zeroconf authentication from Spotify Connect on your mobile device. This is the default configuration shipped with the buildroot image. If for whatever reason zeroconf is not working properly on your network, or you want spotifyd to log in by itself, you can configure your username and password within its configuration file by uncommenting and setting the username and password variables in ~/.config/spotifyd/spotifyd.conf, then reboot the device or run systemctl --user restart spotifyd.

+
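
As a rough sketch, the relevant part of ~/.config/spotifyd/spotifyd.conf would look like the following once uncommented (exact key names can vary between spotifyd versions, so check the comments in the shipped file):

+
[global]
+username = "your_spotify_username"
+password = "your_spotify_password"
+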

Open Spotify on your mobile device and go to the Devices menu within the Settings, or tap the devices menu icon at the bottom left of the now-playing screen. An OpenVoiceOS "speaker" device will be present, which you can select as the output device.

+

When you play something on Spotify, the music will come from your OpenVoiceOS device, indicated by the "OPENVOICEOS" label on the devices menu icon at the bottom of the now-playing screen on your mobile device.

+

As Spotifyd has full MPRIS support, including audio player controls, the full OCP now-playing screen will be shown on your OpenVoiceOS device, just like when playing something from YouTube.

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/standard_skills/index.html b/unused/standard_skills/index.html new file mode 100644 index 00000000..f225aace --- /dev/null +++ b/unused/standard_skills/index.html @@ -0,0 +1,320 @@ + + + + + + + + Standard Skills Usage - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Standard Skills Usage

+

Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills; more about that at a later stage.

+

Date / Time skill

+

You can ask your device what the time or date is, just in case you lost your watch.

+
+

Hey Mycroft, what time is it?

+
+

Time

+
+

Hey Mycroft, what is the date?

+
+

Date

+

Setting an alarm

+

Having your OpenVoiceOS device know and show the time is great, but it is even better to be woken up in the morning by your device.

+
+

Hey Mycroft, set an alarm for 8 AM.

+
+

Alarm

+

Setting of timers

+

Sometimes you are just busy but want to be alerted after a certain time. For that you can use timers.

+
+

Hey Mycroft, set a timer for 5 minutes.

+
+

Timer

+

You can always set more timers and even name them, so you know which timer is for what.

+
+

Hey Mycroft, set another timer called rice cooking for 7 minutes.

+
+

Timers

+

Asking the weather

+

You can ask your device what the weather is or will be at any given time or place.

+
+

Hey Mycroft, what is the weather like today?

+
+

Weather 1

+

The weather skill actually uses multiple pages indicated by the small dots at the bottom of the screen.

+

Weather 2

+

File Browser

+

The file browser allows you to browse the filesystem on your device and any connected media; you can view images and play music and videos.

+

KDE Connect integration allows you to share files with your mobile devices.

+

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/statements/index.html b/unused/statements/index.html new file mode 100644 index 00000000..27e62fb2 --- /dev/null +++ b/unused/statements/index.html @@ -0,0 +1,390 @@ + + + + + + + + Statements - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Statements

+

Editor's Note: This will probably move.

+

Speaking a statement

+

One of OVOS's most important core capabilities is to convert text to speech, that is, to speak a statement.

+

Within a Skill's Intent handler, you may pass a string of text to OVOS and OVOS will speak it. For example: self.speak('this is my statement'). That's cool and fun to experiment with (a minimal handler is sketched below), but passing hard-coded strings of text doesn't help to make OVOS a multilingual product. Rather than hard-coded strings of text, OVOS has a design pattern for multilingualism.

+
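
A minimal sketch of such a handler (the skill and intent names here are hypothetical):

+
from mycroft import MycroftSkill, intent_handler
+
+
+class StatementSkill(MycroftSkill):
+    @intent_handler("make.a.statement.intent")
+    def handle_statement(self, message):
+        # a hard-coded string: fine for experiments,
+        # but it cannot be translated to other languages
+        self.speak("this is my statement")
+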

Multilingualism

+

To support multilingualism, the text that OVOS speaks must come from a file. That file is called a dialog file. The dialog file contains statements (lines of text) that a listener in a particular language would consider to be equivalent. For instance, in USA English, the statements "I am okay" and "I am fine" are equivalent, and both of these statements might appear in a dialog file used for responding to the USA English question: "How are you?".

+

By convention, the dialog filename is formed by dot connected words and must end with ".dialog". The dialog filename should be descriptive of the contents as a whole. Sometimes, the filename describes the question being answered, and other times, the filename describes the answer itself. For the example above, the dialog filename might be: how.are.you.dialog or i.am.fine.dialog.

+

Multilingualism is accomplished by translating the dialog files into other languages and storing them in their own directory named for the country and language. The filenames remain the same. Using the same filenames in separate language-dependent directories allows the Skills to be language agnostic; no hard-coded text strings. Adjust the language setting for your Device, and OVOS uses the corresponding set of dialog files. If the desired file does not exist in the directory for that language, OVOS will use the file from the USA English directory.

+

As an example of the concept, the contents of how.are.you.dialog in the directory for the French language in France (fr-fr) might include the statement: "Je vais bien".

+

The Tomato Skill Revisited

+

To demonstrate the multilingualism design pattern, we examine the usage of the speak_dialog() method in the Tomato Skill.

+

The Tomato Skill has two Intents: one demonstrates simple, straightforward statements, and the other demonstrates the use of variables within a statement.

+

Simple statement

+

The first Intent within the Tomato Skill, what.is.a.tomato.intent, handles inquiries about tomatoes, and the dialog file, tomato.description.dialog, provides the statements for OVOS to speak in reply to that inquiry.

+

Sample contents of the Intent and dialog files:

+

what.is.a.tomato.intent

+
what is a tomato
+what would you say a tomato is
+describe a tomato
+what defines a tomato
+
+

tomato.description.dialog

+
The tomato is a fruit of the nightshade family
+A tomato is an edible berry of the plant Solanum lycopersicum
+A tomato is a fruit but nutritionists consider it a vegetable
+
+

Observe the statements in the tomato.description.dialog file. They are all acceptable answers to the question: "What is a tomato?" Providing more than one statement in a dialog file is one way to make OVOS seem less robotic and more natural. OVOS will randomly select one of the statements.

+

The Tomato Skill code snippet:

+
@intent_handler('what.is.a.tomato.intent')
+def handle_what_is(self, message):
+    """Speaks a statement from the dialog file."""
+    self.speak_dialog('tomato.description')
+
+

With the Tomato Skill installed, if the User utters "Hey Mycroft, what is a tomato?", the Intent handler method handle_what_is() will be called.

+

Inside handle_what_is(), we find: self.speak_dialog('tomato.description')

+

As you can probably guess, the parameter 'tomato.description' is the dialog filename without the ".dialog" extension. Calling this method opens the dialog file, selects one of the statements, and converts that text to speech. OVOS will speak a statement from the dialog file. In this example, OVOS might say "The tomato is a fruit of the nightshade family".

+

Remember, OVOS has a language setting that determines from which directory to find the dialog file.

+

File locations

+

The Skill Structure section describes where to place the Intent file and dialog file. Basically, there are two choices (a sketch of the first layout follows the list):

+
    +
  1. Put both files in locale/en-us
  2. +
  2. Put the dialog file in dialog/en-us, and put the Intent file in vocab/en-us
  4. +
+
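
A sketch of the first layout, using the Tomato Skill files from above (the skill directory name is hypothetical):

+
tomato-skill/
+├── __init__.py
+└── locale/
+    └── en-us/
+        ├── what.is.a.tomato.intent
+        └── tomato.description.dialog
+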

Statements with variables

+

The second Padatious Intent, do.you.like.intent, demonstrates the use of variables in the Intent file and in one of the dialog files:

+

do.you.like.intent

+
do you like tomatoes
+do you like {type} tomatoes
+
+

like.tomato.type.dialog

+
I do like {type} tomatoes
+{type} tomatoes are my favorite
+
+

like.tomato.generic.dialog

+
I do like tomatoes
+tomatoes are my favorite
+
+

Compare these two dialog files. The like.tomato.generic.dialog file contains only simple statements. The statements in the like.tomato.type.dialog file include a variable named type. The variable is a placeholder in the statement specifying where text may be inserted. The speak_dialog() method accepts a dictionary as an optional parameter. If that dictionary contains an entry for a variable named in the statement, then the value from the dictionary will be inserted at the placeholder's location.

+

Dialog file variables are formed by surrounding the variable's name with curly braces. In OVOS parlance, curly braces are known as a mustache.

+

For multi-line dialog files, be sure to include the same variable on all lines.

+

The Tomato Skill code snippet:

+
@intent_handler('do.you.like.intent')
+def handle_do_you_like(self, message):
+    tomato_type = message.data.get('type')
+    if tomato_type is not None:
+        self.speak_dialog('like.tomato.type',
+                          {'type': tomato_type})
+    else:
+        self.speak_dialog('like.tomato.generic')
+
+

When the User utters "Hey Mycroft, do you like RED tomatoes?", the second of the two Intent lines, "do you like {type} tomatoes", is recognized, and when handle_do_you_like() is called, the value 'RED' is assigned to the 'type' entry in the message dictionary.

+

The line tomato_type = message.data.get('type') extracts the value from the dictionary for the entry 'type'. In this case, the variable tomato_type will receive the value 'RED', and speak_dialog() will be called with the 'like.tomato.type' dialog file and a dictionary with 'RED' assigned to 'type'. The statement "I do like {type} tomatoes" might be randomly selected, and after insertion of the value 'RED' for the placeholder variable {type}, OVOS would say: "I do like RED tomatoes".

+

Should the User utter "Hey Mycroft, do you like tomatoes?", the first line in the Intent file, "do you like tomatoes", is recognized. There is no variable in this line, and when handle_do_you_like() is called, the dictionary in the message is empty. This means tomato_type is None, speak_dialog('like.tomato.generic') would be called, and OVOS might reply with "Yes, I do like tomatoes".

+

Waiting for speech

+

By default, the speak_dialog() method is non-blocking. That is, any code following the call to speak_dialog() will execute whilst OVOS is talking. This is useful to allow your Skill to perform actions while it is speaking.

+

Rather than telling the User that we are fetching some data and then going out to fetch it, we can do the two things simultaneously, providing a better experience.

+

However, there are times when we need to wait until the statement has been spoken before doing something else. We have two options for this.

+

Wait Parameter

+

We can pass a wait=True parameter to our speak_dialog() method. This makes the method blocking, and no other code will execute until the statement has been spoken.

+
@intent_handler('what.is.a.tomato.intent')
+def handle_what_is(self, message):
+    """Speaks a statement from the dialog file.
+       Waits (i.e. blocks) within speak_dialog() until
+       the speaking has completed. """
+    self.speak_dialog('tomato.description', wait=True)
+    self.log.info("I waited for you")
+
+

wait_while_speaking

+

The mycroft.audio.wait_while_speaking() method allows us to execute some code, then wait for OVOS to finish speaking.

+
from mycroft.audio import wait_while_speaking
+
+@intent_handler('what.is.a.tomato.intent')
+def handle_what_is(self, message):
+    """Speaks a statement from the dialog file.
+       Returns from speak_dialog() before the
+       speaking has completed, and logs some info.
+       Then it waits for the speech to complete. """
+    self.speak_dialog('tomato.description')
+    self.log.info("I am executed immediately")
+    wait_while_speaking()
+    self.log.info("But I waited for you")
+
+

Here we have executed one line of code immediately. Our Skill will then wait for the statement from tomato.description.dialog to be spoken before executing the final line of code.

+

Using translatable resources

+

There may be a situation where the dialog file and the speak_dialog() method do not give the Skill enough flexibility. For instance, there may be a need to manipulate the statement from the dialog file before having it spoken by OVOS.

+

The MycroftSkill class provides four multilingual methods to address these needs. Each method uses a file, and multilingualism is accomplished using the country/language directory system.

+

The translate() method returns a random string from a ".dialog" file (modified by a data dictionary).

+

The translate_list() method returns a list of strings from a ".list" file (each modified by the data dictionary). Same as translate_template() just with a different file extension.

+

The translate_namedvalue() method returns a dictionary formed from CSV entries in a ".value" file.

+

The translate_template() method returns a list of strings from a ".template" file (each modified by the data dictionary). Same as translate_list() just with a different file extension.
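
A hedged sketch of these methods inside a skill handler; the resource file names (flavors.list, codes.value, confirm.dialog) and the skill itself are hypothetical:

+
from mycroft import MycroftSkill
+
+
+class IceCreamSkill(MycroftSkill):
+    def handle_order(self, message):
+        # flavors.list: one statement per line, e.g. "I have {flavor}"
+        flavors = self.translate_list("flavors", data={"flavor": "vanilla"})
+        self.log.info(flavors)
+
+        # codes.value: CSV pairs, e.g. "vanilla,1" -> {"vanilla": "1"}
+        codes = self.translate_namedvalue("codes")
+        self.log.info(codes)
+
+        # confirm.dialog: one random statement as a string, which we
+        # can manipulate before speaking it
+        text = self.translate("confirm", data={"flavor": "vanilla"})
+        self.speak(text.upper())
+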

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/stt_backends/index.html b/unused/stt_backends/index.html new file mode 100644 index 00000000..9d07cd09 --- /dev/null +++ b/unused/stt_backends/index.html @@ -0,0 +1,281 @@ + + + + + + + + Speech to Text backends - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Speech to Text backends

+

You can run a local NeMo instance using ovos-stt-server
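
For example, a sketch (the NeMo plugin package name is an assumption here; check the plugin's own README for the exact name):

+
pip install ovos-stt-http-server ovos-stt-plugin-nemo
+ovos-stt-server --engine ovos-stt-plugin-nemo
+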

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/stt_plugins/index.html b/unused/stt_plugins/index.html new file mode 100644 index 00000000..230325e1 --- /dev/null +++ b/unused/stt_plugins/index.html @@ -0,0 +1,402 @@ + + + + + + + + STT Plugins - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

STT Plugins

+

STT plugins are responsible for converting spoken audio into text

+

List of STT plugins

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
PluginOfflineType
ovos-stt-plugin-voskyesFOSS
ovos-stt-plugin-chromiumnoAPI (free)
neon-stt-plugin-google_cloud_streamingnoAPI (key)
neon-stt-plugin-scribosermoyesFOSS
neon-stt-plugin-sileroyesFOSS
neon-stt-plugin-polyglotyesFOSS
neon-stt-plugin-deepspeech_stream_localyesFOSS
ovos-stt-plugin-selenenoAPI (free)
ovos-stt-plugin-http-servernoAPI (self hosted)
ovos-stt-plugin-pocketsphinxyesFOSS
+

Standalone Usage

+

STT plugins can be used in your own projects as follows:

+
from speech_recognition import Recognizer, AudioFile
+
+# STTPlug is a placeholder for an installed STT plugin class,
+# imported from the plugin's own package
+plug = STTPlug()
+
+# verify lang is supported
+lang = "en-us"
+assert lang in plug.available_languages
+
+# read file
+with AudioFile("test.wav") as source:
+    audio = Recognizer().record(source)
+
+# transcribe AudioData object
+transcript = plug.execute(audio, lang)
+
+

Plugin Template

+
from ovos_plugin_manager.templates.stt import STT
+
+
+# base plugin class
+class MySTTPlugin(STT):
+    def __init__(self, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+        # read config settings for your plugin
+        lm = self.config.get("language-model")
+        hmm = self.config.get("acoustic-model")
+
+    def execute(self, audio, language=None):
+        # TODO - convert audio into text and return string
+        transcript = "You said this"
+        return transcript
+
+    @property
+    def available_languages(self):
+        """Return languages supported by this STT implementation in this state
+        This property should be overridden by the derived class to advertise
+        what languages that engine supports.
+        Returns:
+            set: supported languages
+        """
+        # TODO - what langs can this STT handle?
+        return {"en-us", "es-es"}
+
+
+# sample valid configurations per language
+# "display_name" and "offline" provide metadata for UI
+# "priority" is used to calculate position in selection dropdown 
+#       0 - top, 100-bottom
+# all other keys represent an example valid config for the plugin 
+MySTTConfig = {
+    lang: [{"lang": lang,
+            "display_name": f"MySTT ({lang}",
+            "priority": 70,
+            "offline": True}]
+    for lang in ["en-us", "es-es"]
+}
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/stt_server/index.html b/unused/stt_server/index.html new file mode 100644 index 00000000..46af84c5 --- /dev/null +++ b/unused/stt_server/index.html @@ -0,0 +1,312 @@ + + + + + + + + OpenVoiceOS STT HTTP Server - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS STT HTTP Server

+

Turn any OVOS STT plugin into a microservice!

+

Install

+

pip install ovos-stt-http-server

+

Usage

+
ovos-stt-server --help
+usage: ovos-stt-server [-h] [--engine ENGINE] [--port PORT] [--host HOST]
+
+options:
+  -h, --help       show this help message and exit
+  --engine ENGINE  stt plugin to be used
+  --port PORT      port number
+  --host HOST      host
+
+

Companion plugin

+

Use with OpenVoiceOS companion plugin

+

Docker Template

+

You can easily create a Dockerfile to serve any plugin:

+
FROM python:3.7
+
+RUN pip3 install ovos-stt-http-server==0.0.1
+
+RUN pip3 install {PLUGIN_HERE}
+
+ENTRYPOINT ovos-stt-http-server --engine {PLUGIN_HERE}
+
+

build it

+
docker build . -t my_ovos_stt_plugin
+
+

run it

+
docker run -p 8080:9666 my_ovos_stt_plugin
+
+

Each plugin can provide its own Dockerfile in its repository using ovos-stt-http-server

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/translate_server/index.html b/unused/translate_server/index.html new file mode 100644 index 00000000..de9f79fd --- /dev/null +++ b/unused/translate_server/index.html @@ -0,0 +1,325 @@ + + + + + + + + OpenVoiceOS Translate Server - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS Translate Server

+

Turn any OVOS Language plugin into a microservice!

+

Use with OpenVoiceOS companion plugin

+

Install

+

pip install ovos-translate-server

+

Usage

+
ovos-translate-server --help
+usage: ovos-translate-server [-h] [--tx-engine TX_ENGINE]
+                   [--detect-engine DETECT_ENGINE] [--port PORT] [--host HOST]
+
+optional arguments:
+  -h, --help            show this help message and exit
+  --tx-engine TX_ENGINE
+                        translate plugin to be used
+  --detect-engine DETECT_ENGINE
+                        lang detection plugin to be used
+  --port PORT           port number
+  --host HOST           host
+
+
+

E.g., to use the Google Translate plugin: ovos-translate-server --tx-engine googletranslate_plug --detect-engine googletranslate_detection_plug

+

Then you can make GET requests (a Python sketch follows the list):

+
    +
  • http://0.0.0.0:9686/translate/en/o meu nome é Casimiro (auto detect source lang)
  • +
  • http://0.0.0.0:9686/translate/pt/en/o meu nome é Casimiro (specify source lang)
  • +
  • http://0.0.0.0:9686/detect/o meu nome é Casimiro
  • +
+
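
A minimal Python sketch of the same requests; it assumes the server above is running locally and returns plain-text responses:

+
import requests
+
+BASE = "http://0.0.0.0:9686"
+
+# auto-detect the source language, translate to English
+print(requests.get(f"{BASE}/translate/en/o meu nome é Casimiro").text)
+
+# explicit source (pt) and target (en) languages
+print(requests.get(f"{BASE}/translate/pt/en/o meu nome é Casimiro").text)
+
+# language detection only
+print(requests.get(f"{BASE}/detect/o meu nome é Casimiro").text)
+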

Docker Template

+

You can easily create a Dockerfile to serve any plugin:

+
FROM python:3.7
+
+RUN pip3 install ovos-utils==0.0.15
+RUN pip3 install ovos-plugin-manager==0.0.4
+RUN pip3 install ovos-translate-server==0.0.1
+
+RUN pip3 install {PLUGIN_HERE}
+
+ENTRYPOINT ovos-translate-server --tx-engine {PLUGIN_HERE} --detect-engine {PLUGIN_HERE}
+
+

build it

+
docker build . -t my_ovos_translate_plugin
+
+

run it

+
docker run -p 8080:9686 my_ovos_translate_plugin
+
+

Each plugin can provide its own Dockerfile in its repository using ovos-translate-server

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/tts_backends/index.html b/unused/tts_backends/index.html new file mode 100644 index 00000000..2b756a95 --- /dev/null +++ b/unused/tts_backends/index.html @@ -0,0 +1,318 @@ + + + + + + + + OpenVoiceOS TTS Server - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

OpenVoiceOS TTS Server

+

Turn any OVOS TTS plugin into a microservice!

+

Install

+

pip install ovos-tts-server

+

Usage

+
ovos-tts-server --help
+usage: ovos-tts-server [-h] [--engine ENGINE] [--port PORT] [--host HOST] [--cache]
+
+options:
+  -h, --help       show this help message and exit
+  --engine ENGINE  tts plugin to be used
+  --port PORT      port number
+  --host HOST      host
+  --cache          save every synth to disk
+
+

E.g., to use the GladosTTS plugin: ovos-tts-server --engine neon-tts-plugin-glados --cache

+

Then make a GET request: http://192.168.1.112:9666/synthesize/hello
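
A minimal Python sketch of that request, saving the synthesized audio to disk (it assumes the configured engine returns wav audio):

+
import requests
+
+resp = requests.get("http://192.168.1.112:9666/synthesize/hello")
+
+with open("hello.wav", "wb") as f:
+    f.write(resp.content)  # raw audio bytes
+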

+

Companion plugin

+

Use with OpenVoiceOS companion plugin

+

Docker Template

+

You can easily create a Dockerfile to serve any plugin:

+
FROM python:3.7
+
+RUN pip3 install ovos-utils==0.0.15
+RUN pip3 install ovos-plugin-manager==0.0.4
+RUN pip3 install ovos-tts-server==0.0.1
+
+RUN pip3 install {PLUGIN_HERE}
+
+ENTRYPOINT ovos-tts-server --engine {PLUGIN_HERE} --cache
+
+

build it

+
docker build . -t my_ovos_tts_plugin
+
+

run it

+
docker run -p 8080:9666 my_ovos_tts_plugin
+
+

Use it: http://localhost:8080/synthesize/hello

+

Each plugin can provide its own Dockerfile in its repository using ovos-tts-server

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/tts_plugins/index.html b/unused/tts_plugins/index.html new file mode 100644 index 00000000..5badebaa --- /dev/null +++ b/unused/tts_plugins/index.html @@ -0,0 +1,443 @@ + + + + + + + + TTS Plugins - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

TTS Plugins

+

TTS plugins are responsible for converting text into audio for playback

+

List of TTS plugins

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
PluginOfflineType
ovos-tts-plugin-mimicyesFOSS
ovos-tts-plugin-mimic2noAPI (free)
ovos-tts-plugin-mimic3yesFOSS
ovos-tts-plugin-maryttsnoAPI (self hosted)
neon-tts-plugin-larynx_servernoAPI (self hosted)
ovos-tts-server-pluginnoAPI (self hosted)
ovos-tts-plugin-picoyesFOSS
neon-tts-plugin-gladosyesFOSS
neon-tts-plugin-mozilla_localyesFOSS
neon-tts-plugin-pollynoAPI (key)
ovos-tts-plugin-voicerssnoAPI (key)
ovos-tts-plugin-google-TXnoAPI (free)
ovos-tts-plugin-responsivevoicenoAPI (free)
neon-tts-plugin-mozilla_remotenoAPI (self hosted)
neon-tts-plugin-tacotron2yesFOSS
ovos-tts-plugin-espeakNGyesFOSS
ovos-tts-plugin-cotoviayesFOSS
ovos-tts-plugin-catotronnoAPI (self hosted)
ovos-tts-plugin-softcatalanoAPI (self hosted)
ovos-tts-plugin-SAMyesAbandonware
ovos-tts-plugin-beepspeakyesFun
+

Plugin Template

+
from ovos_plugin_manager.templates.tts import TTS
+
+
+# base plugin class
+class MyTTSPlugin(TTS):
+    def __init__(self, *args, **kwargs):
+        # in here you should specify if your plugin returns wav or mp3 files
+        # you should also specify any valid ssml tags
+        ssml_tags = ["speak", "s", "w", "voice", "prosody", 
+                     "say-as", "break", "sub", "phoneme"]
+        super().__init__(*args, **kwargs, audio_ext="wav", ssml_tags=ssml_tags)
+        # read config settings for your plugin if any
+        self.pitch = self.config.get("pitch", 0.5)
+
+    def get_tts(self, sentence, wav_file):
+        # TODO - create TTS audio @ wav_file (path)
+        return wav_file, None
+
+    @property
+    def available_languages(self):
+        """Return languages supported by this TTS implementation in this state
+        This property should be overridden by the derived class to advertise
+        what languages that engine supports.
+        Returns:
+            set: supported languages
+        """
+        # TODO - what langs can this TTS handle?
+        return {"en-us", "es-es"}
+
+
+
+# sample valid configurations per language
+# "display_name" and "offline" provide metadata for UI
+# "priority" is used to calculate position in selection dropdown 
+#       0 - top, 100-bottom
+# all other keys represent an example valid config for the plugin 
+MyTTSConfig = {
+    lang: [{"lang": lang,
+            "display_name": f"MyTTS ({lang}",
+            "priority": 70,
+            "offline": True}]
+    for lang in ["en-us", "es-es"]
+}
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/vad_plugins/index.html b/unused/vad_plugins/index.html new file mode 100644 index 00000000..b9932dd5 --- /dev/null +++ b/unused/vad_plugins/index.html @@ -0,0 +1,301 @@ + + + + + + + + VAD Plugins - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

VAD Plugins

+

Voice Activity Detection is the process of determining when speech starts and ends in a piece of audio

+

VAD plugins classify audio and report whether it contains speech or not; an example configuration is sketched below.
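
To select a VAD plugin, configure it under the "listener" section of your mycroft.conf, mirroring the speech client documentation; e.g. for the silero plugin:

+
"listener": {
+    "VAD": {
+        "module": "ovos-vad-plugin-silero",
+        "ovos-vad-plugin-silero": {"threshold": 0.2}
+    }
+},
+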

+

List of VAD plugins

+ + + + + + + + + + + + + + + + + +
PluginType
ovos-vad-plugin-sileromodel
ovos-vad-plugin-webrtcvadmodel
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/why/index.html b/unused/why/index.html new file mode 100644 index 00000000..0a3b638f --- /dev/null +++ b/unused/why/index.html @@ -0,0 +1,293 @@ + + + + + + + + About OpenVoiceOS - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

About OpenVoiceOS

+

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/unused/ww_plugins/index.html b/unused/ww_plugins/index.html new file mode 100644 index 00000000..0e09d478 --- /dev/null +++ b/unused/ww_plugins/index.html @@ -0,0 +1,509 @@ + + + + + + + + Wake Word Plugins - OVOS User Documentation + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • »
  • + +
  • +
  • +
+
+
+
+
+ +

Wake Word Plugins

+

WakeWord plugins classify audio and report if a certain word or sound is present or not

+

These plugins usually listen for the name of the voice assistant, e.g. "hey mycroft", but they can also be used for other purposes.

+

ww

+

List of Wake Word plugins

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
PluginType
ovos-ww-plugin-pocketsphinxphonemes
ovos-ww-plugin-vosktext samples
ovos-ww-plugin-snowboymodel
ovos-ww-plugin-precisemodel
ovos-ww-plugin-precise-litemodel
ovos-ww-plugin-nyumayamodel
ovos-ww-plugin-nyumaya-legacymodel
neon_ww_plugin_efficientwordnetmodel
mycroft-porcupine-pluginmodel
ovos-ww-plugin-hotkeyskeyboard
+

Standalone Usage

+

First, let's get some boilerplate out of the way for the microphone handling logic.

+
import pyaudio
+
+# helper class
+class CyclicAudioBuffer:
+    def __init__(self, duration=0.98, initial_data=None,
+                 sample_rate=16000, sample_width=2):
+        self.size = self.duration_to_bytes(duration, sample_rate, sample_width)
+        initial_data = initial_data or self.get_silence(self.size)
+        # Get at most size bytes from the end of the initial data
+        self._buffer = initial_data[-self.size:]
+
+    @staticmethod
+    def duration_to_bytes(duration, sample_rate=16000, sample_width=2):
+        return int(duration * sample_rate) * sample_width
+
+    @staticmethod
+    def get_silence(num_bytes):
+        return b'\0' * num_bytes
+
+    def append(self, data):
+        """Add new data to the buffer, and slide out data if the buffer is full
+        Arguments:
+            data (bytes): binary data to append to the buffer. If buffer size
+                          is exceeded the oldest data will be dropped.
+        """
+        buff = self._buffer + data
+        if len(buff) > self.size:
+            buff = buff[-self.size:]
+        self._buffer = buff
+
+    def get(self):
+        """Get the binary data."""
+        return self._buffer
+
+# pyaudio params
+FORMAT = pyaudio.paInt16
+CHANNELS = 1
+RATE = 16000
+CHUNK = 1024
+MAX_RECORD_SECONDS = 20
+SAMPLE_WIDTH = pyaudio.get_sample_size(FORMAT)
+audio = pyaudio.PyAudio()
+
+# start Recording
+stream = audio.open(channels=CHANNELS, format=FORMAT,
+    rate=RATE, frames_per_buffer=CHUNK, input=True)
+
+
+def load_plugin():
+    # Wake word initialization
+    config = {"model": "path/to/hey_computer.model"}
+    return MyHotWord("hey computer", config=config)
+
+
+def listen_for_ww(plug):
+    # TODO - see examples below
+    return False
+
+plug = load_plugin()
+
+print(f"Waiting for wake word {MAX_RECORD_SECONDS} seconds")
+found = listen_for_ww(plug)
+
+if found:
+    print("Found wake word!")
+else:
+    print("No wake word found")
+
+# stop everything
+plug.stop()
+stream.stop_stream()
+stream.close()
+audio.terminate()
+
+

new style plugins

+

New style plugins expect to receive live audio; they may keep their own cyclic buffers internally.

+

+def listen_for_ww(plug):
+    for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)):
+        data = stream.read(CHUNK)
+        # feed data directly to streaming prediction engines
+        plug.update(data)
+        # streaming engines return result here
+        found = plug.found_wake_word(data)
+        if found:
+            return True
+
+

old style plugins (DEPRECATED)

+

Old style plugins expect to receive ~3 seconds of audio data at once

+
def listen_for_ww(plug):
+    # used for old style non-streaming wakeword (deprecated)
+    audio_buffer = CyclicAudioBuffer(plug.expected_duration,
+                                     sample_rate=RATE, sample_width=SAMPLE_WIDTH)
+    for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)):
+        data = stream.read(CHUNK)
+        # add data to rolling buffer, used by non-streaming engines
+        audio_buffer.append(data)
+        # non-streaming engines check the byte_data in audio_buffer
+        audio_data = audio_buffer.get()
+        found = plug.found_wake_word(audio_data)
+        if found:
+            return True
+
+

new + old style plugins (backwards compatibility)

+

If you are unsure what kind of plugin you will be using, you can be compatible with both approaches, like ovos-core:

+
def listen_for_ww(plug):
+    # used for old style non-streaming wakeword (deprecated)
+    audio_buffer = CyclicAudioBuffer(plug.expected_duration,
+                                     sample_rate=RATE, sample_width=SAMPLE_WIDTH)
+    for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)):
+        data = stream.read(CHUNK)
+        # old style engines will ignore the update
+        plug.update(data)
+        # streaming engines will ignore the byte_data
+        audio_buffer.append(data)
+        audio_data = audio_buffer.get()
+        found = plug.found_wake_word(audio_data)
+        if found:
+            return True
+
+

Plugin Template

+
from ovos_plugin_manager.templates.hotwords import HotWordEngine
+from threading import Event
+
+
+class MyWWPlugin(HotWordEngine):
+    def __init__(self, key_phrase="hey mycroft", config=None, lang="en-us"):
+        super().__init__(key_phrase, config, lang)
+        self.detection = Event()
+        # read config settings for your plugin
+        self.sensitivity = self.config.get("sensitivity", 0.5)
+        # TODO - plugin stuff
+        # how does your plugin work? phonemes? text? models?
+        self.engine = MyWW(key_phrase) 
+
+    def found_wake_word(self, frame_data):
+        """Check if wake word has been found.
+
+        Checks if the wake word has been found. Should reset any internal
+        tracking of the wake word state.
+
+        Arguments:
+            frame_data (binary data): Deprecated. Audio data for large chunk
+                                      of audio to be processed. This should not
+                                      be used to detect audio data instead
+                                      use update() to incrementally update audio
+        Returns:
+            bool: True if a wake word was detected, else False
+        """
+        detected = self.detection.is_set()
+        if detected:
+            self.detection.clear()
+        return detected
+
+    def update(self, chunk):
+        """Updates the hotword engine with new audio data.
+
+        The engine should process the data and update internal trigger state.
+
+        Arguments:
+            chunk (bytes): Chunk of audio data to process
+        """
+        if self.engine.found_it(chunk): # TODO - check for wake word
+            self.detection.set()
+
+    def stop(self):
+        """Perform any actions needed to shut down the wake word engine.
+
+        This may include things such as unloading data or shutting down
+        external processes.
+        """
+        self.engine.bye()  # TODO - plugin specific shutdown
+
+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + +