Introducing OpenVoiceOS - The Free and Open-Source Personal Assistant and Smart Speaker.
+OpenVoiceOS is a new player in the smart speaker market, offering a powerful and flexible alternative to proprietary solutions like Amazon Echo and Google Home.
+With OpenVoiceOS, you have complete control over your personal data and the ability to customize and extend the functionality of your smart speaker.
+Built on open-source software, OpenVoiceOS is designed to provide users with a seamless and intuitive voice interface for controlling their smart home devices, playing music, setting reminders, and much more.
+The platform leverages cutting-edge technology, including machine learning and natural language processing, to deliver a highly responsive and accurate experience.
+In addition to its voice capabilities, OpenVoiceOS features a touch-screen GUI made using QT5 and the KF5 framework.
+The GUI provides an intuitive, user-friendly interface that allows you to access the full range of OpenVoiceOS features and functionality.
+Whether you prefer voice commands or a more traditional touch interface, OpenVoiceOS has you covered.
+One of the key advantages of OpenVoiceOS is its open-source nature, which means that anyone with the technical skills can contribute to the platform and help shape its future.
+Whether you're a software developer, data scientist, someone with a passion for technology, or just a casual user that would like to experience what OVOS has to offer, you can get involved and help build the next generation of personal assistants and smart speakers.
+With OpenVoiceOS, you have the option to run the platform fully offline, giving you complete control over your data and ensuring that your information is never shared with third parties. This makes OpenVoiceOS the perfect choice for anyone who values privacy and security.
+So if you're looking for a personal assistant and smart speaker that gives you the freedom and control you deserve, be sure to check out OpenVoiceOS today!
+Disclaimer: This post was written in collaboration with ChatGPT
+ +This section can be a bit technical, but is included for reference. It is not necessary to read this section for day-to-day usage of OVOS.
+OVOS is a collection of modular services that work together to provide a seamless, private, open source voice assistant.
The suggested way to start OVOS is with systemd service files. Most of the images run these services as a normal user instead of system-wide. If you get an error when using the user services, try running them as system services instead.
NOTE: The ovos.service is just a wrapper to control the other OVOS services. It is used here as an example showing --user vs system usage.
systemctl --user status ovos.service
systemctl status ovos.service
This service provides the main instance for OVOS and handles all of the skill loading and intent processing.
All user queries are handled by the skills service. You can think of it as OVOS's brain.
+typical systemd command
+systemctl --user status ovos-skills
systemctl --user restart ovos-skills
C++ version
NOTE: This is an alpha version and mostly a Proof of Concept. It has been known to crash often.
You can think of the bus service as OVOS's nervous system.
The ovos-bus is considered an internal and private websocket; external clients should not connect directly to it. Please do not expose the messagebus to the outside world!
typical systemd command
+systemctl --user start ovos-messagebus
The listener service is used to detect your voice. It controls the WakeWord, STT (Speech To Text), and VAD (Voice Activity Detection) plugins. You can modify microphone settings and enable additional features, such as wake word and utterance recording or uploading, under the listener section of your mycroft.conf file, as sketched below.
ovos-dinkum-listener is the new OVOS listener that replaced the original ovos-listener and has many more options. The older listeners still work but are not recommended.
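A minimal sketch of those recording options in the user's mycroft.conf (assuming the record_wake_words and save_utterances keys inherited from mycroft-core; exact key names can vary between listener versions):
{
  "listener": {
    "record_wake_words": false,
    "save_utterances": false
  }
}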
+ +typical systemd command
+systemctl --user start ovos-dinkum-listener
This is where speech is transcribed into text and forwarded to the skills service.
+Two STT plugins may be loaded at once. If the primary plugin fails, the second will be used.
Having a lower-accuracy offline model as a fallback accounts for internet outages, ensuring your device never becomes fully unusable.
+Several different STT (Speech To Text) plugins are available for use. OVOS provides a number of public services using the ovos-stt-plugin-server plugin which are hosted by OVOS trusted members (Members hosting services). No additional configuration is required.
OVOS uses "Hotwords" to trigger any number of actions. You can load any number of hotwords in parallel and trigger different actions when they are detected, such as starting active listening (a wake word), playing a sound, or emitting a message bus event.
A wake word is what OVOS uses to activate the device; by default, OVOS uses Hey Mycroft. Like other things in the OVOS ecosystem, this is configurable.
VAD plugins detect when you are actually speaking to the device and when you stop talking.
Most of the time, this will not need to be changed. If your microphone is having trouble hearing you, or does not stop listening when you are done talking, you might change this and see if it helps.
+ + +The audio service handles the output of all audio. It is how you hear the voice responses, music, or any other sound from your OVOS device.
+ +TTS (Text To Speech) is the verbal response from OVOS. There are several plugins available that support different engines. Multiple languages and voices are available to use.
+OVOS provides a set of public TTS servers hosted by OVOS trusted members (Members hosting services). It uses the ovos-tts-server-plugin, and no additional configuration is needed.
PHAL stands for Plugin-based Hardware Abstraction Layer. It allows different hardware devices to use the OVOS software stack and completely replaces the concept of the hardcoded "enclosure" from mycroft-core.
Any number of plugins providing functionality can be loaded and validated at runtime. Plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the mycroft mark 1 plugin.
Similar to regular PHAL, but used when sudo or a privileged user is needed.
Be extremely careful when adding admin-phal plugins. They give OVOS administrative (root) privileges on your operating system.
OVOS uses the standard mycroft-gui framework; you can find the official documentation here.
The GUI service provides a websocket for GUI clients to connect to. It is responsible for implementing the GUI protocol under ovos-core.
You can find in-depth documentation here.
+OVOS provides a number of helper scripts to allow the user to control the device at the command line.
ovos-say-to: sends an utterance to OVOS, e.g. ovos-say-to "what time is it"
ovos-listen: opens the microphone for listening, just as if you had said the WakeWord. It expects a verbal command, e.g. "what time is it"
ovos-speak: runs the provided text through the TTS (Text To Speech) engine and speaks it, e.g. ovos-speak "hello world" will output "hello world" in the configured TTS voice
ovos-config: a command line interface that allows you to view and set configuration values
When you first start OVOS, there should not be any configuration needed to have a working device.
NOTE: To continue with the examples, you will need access to a shell on your device. This can be achieved with SSH. Connect to your device with the command ssh ovos@<device_ip_address> and enter the password ovos.
This password is EXTREMELY insecure; change it or use SSH keys for logging in.
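To change the password right away, log in and run:
passwd
For key-based logins and disabling password authentication entirely, see the SSH hardening section later in these docs.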
+ +This section will explain how the configuration works, and how to do basic configuration changes.
+The rest of this section will assume you have shell access to your device.
OVOS will load configuration files from several locations and combine them into a single configuration that is used throughout the software. The file that is loaded last is the one the user should modify; it is usually located at ~/.config/mycroft/mycroft.conf.
The locations, in load order:
- {ovos-config-path}/mycroft.conf: the default configuration bundled with ovos-core, at <python_install_path>/site-packages/ovos_config/mycroft.conf
- os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/mycroft/mycroft.conf
- os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH/mycroft/web_cache.json
- ~/.mycroft/mycroft.conf (deprecated)
- XDG_CONFIG_DIRS + /mycroft/mycroft.conf
- /etc/xdg/mycroft/mycroft.conf
- XDG_CONFIG_HOME (default ~/.config) + /mycroft/mycroft.conf
When the configuration loader starts, it looks in these locations in this order and loads ALL of them. Keys that exist in multiple configuration files are overridden by the last file to contain them. This process results in a minimal amount being written for a specific device and user, without modifying default distribution files.
OVOS provides a command line tool, ovos-config, for viewing and changing configuration values.
Values can also be set manually in config files instead of using the CLI tool.
+These methods will be used later in the How To section of these Docs.
User configuration should be set in the file under XDG_CONFIG_HOME, usually ~/.config/mycroft/mycroft.conf. This file may or may not exist by default. If it does NOT exist, create it.
mkdir -p ~/.config/mycroft
touch ~/.config/mycroft/mycroft.conf
Now you can edit that file. To continue with the previous example, we will change the host of the TTS server by adding the value manually to the user's mycroft.conf file.
Open the file for editing. It is not uncommon for this file to exist, but be empty.
+nano ~/.config/mycroft/mycroft.conf
Enter the following into the file. NOTE: this file must be valid JSON or YAML; OVOS knows how to read both.
+{
+ "tts": {
+ "module": "ovos-tts-plugin-server",
+ "ovos-tts-plugin-server": {
+ "host": "https://pipertts.ziggyai.online"
+ }
+ }
+}
+
You can check the formatting of your file with the jq command.
cat ~/.config/mycroft/mycroft.conf | jq
If your distribution does not include jq, it can be installed with the command sudo apt install jq or the equivalent for your distro.
If there are no errors, jq will output the complete file. On error, it will output the line where the error is. You can also use an online JSON checker.
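Since OVOS accepts YAML as well as JSON, the same TTS override from above could equally be written in YAML form (a sketch of the identical settings):
tts:
  module: ovos-tts-plugin-server
  ovos-tts-plugin-server:
    host: https://pipertts.ziggyai.online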
+ + +OVOS provides a small command line tool, ovos-config, for viewing and setting configuration values in the OVOS ecosystem.
NOTE: The CLI of this script is new and may contain some bugs. Please report issues to the ovos-config GitHub page.
ovos-config --help will show a list of commands to use with this tool.
ovos-config show will display a table representing all of the current configuration values.
To get the values of a specific section:
ovos-config show --section tts will show just the "tts" section of the configuration:
ovos-config show --section tts
+┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
+┃ Configuration keys (Configuration: Joined, Section: tts) ┃ Value ┃
+┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
+│ pulse_duck │ False │
+│ module │ ovos-tts-plugin-server │
+│ fallback_module │ ovos-tts-plugin-mimic │
+├──────────────────────────────────────────────────────────────┼────────────────────────┤
+│ ovos-tts-plugin-server │ │
+│ host │ │
+└──────────────────────────────────────────────────────────────┴────────────────────────┘
+
+We will continue with the example above, TTS.
+Change the host of the TTS server:
ovos-config set -k tts will show a table of values that can be edited:
set -k tts
+┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┓
+┃ # ┃ Path ┃ Value ┃
+┡━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━┩
+│ 0 │ tts/pulse_duck │ False │
+│ 1 │ tts/ovos-tts-plugin-server/host │ │
+└───┴─────────────────────────────────┴───────┘
+Which value should be changed? (2='Exit') [0/1/2]:
+
Enter 1 to change the value of tts/ovos-tts-plugin-server/host.
Please enter the value to be stored (type: str):
Enter the value for the TTS server that you want OVOS to use:
https://pipertts.ziggyai.online
Use ovos-config show --section tts to check your results:
ovos-config show --section tts
+┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
+┃ Configuration keys (Configuration: Joined, Section: tts) ┃ Value ┃
+┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
+│ pulse_duck │ False │
+│ module │ ovos-tts-plugin-server │
+│ fallback_module │ ovos-tts-plugin-mimic │
+├──────────────────────────────────────────────────────────────┼─────────────────────────────────┤
+│ ovos-tts-plugin-server │ │
+│ host │ https://pipertts.ziggyai.online │
+└──────────────────────────────────────────────────────────────┴─────────────────────────────────┘
+
+This can be done for any of the values in the configuration stack.
OVOS aims to be a full operating system that is free and open source. The Open Voice Operating System consists of OVOS packages (programs specifically released by the OVOS Project) as well as free software released by third parties, such as skills and plugins. OVOS makes it possible to voice-enable technology without software that would trample your freedom.
Historically, "OVOS" has been used to refer to several things: the team, the GitHub organization, and the reference buildroot implementation.
+OVOS started as MycroftOS, you can find the original mycroft forums +thread here.
+Over time more mycroft community members joined the project, and it was renamed to OpenVoiceOS to avoid trademark issues.
Initially OVOS was focused on bundling mycroft-core and creating only companion software, but because contributions were not being accepted upstream, we now maintain an enhanced reference fork of mycroft-core with extra functionality, while keeping all companion software mycroft-core (dev branch) compatible.
+You can think of OVOS as the unsanctioned "Mycroft Community Edition"
Everyone on the OVOS team is a long-term mycroft community member with experience working on the mycroft code base.
Meet the team:
+Both projects are fully independent, initially OVOS was focused on wrapping mycroft-core with a minimal OS, but as both +projects matured, ovos-core was created to include extra functionality and make OVOS development faster and more +efficient. OVOS has been committed to keeping our components compatible with Mycroft and many of our changes are +submitted to Mycroft to include in their projects at their discretion.
We don't; OVOS is a volunteer project with no source of income or business model.
However, we want to acknowledge Blue Systems and NeonGeckoCom; a lot of the work in OVOS is done on paid company time from these projects.
+We provide essential skills and those are bundled in all our reference images.
ovos-core does not manage your skills. Unlike mycroft, it won't install or update anything by itself. If you installed ovos-core manually, you also need to install skills manually.
+By default ovos-core does not require a backend internet server to operate. Some skills can be accessed (via command line) entirely offline. The default speech-to-text (STT) engine currently requires an internet connection, though some self-hosted, offline options are available. Individual skills and plugins may require internet, and most of the time you will want to use those.
No! You can integrate ovos-core with selene or personal backend, but that is fully optional.
We provide some microservices for some of our skills, but you can also use your own API keys.
Hundreds! Nearly everything in OVOS is modular and configurable, and that includes Text To Speech.
Voices depend on language and the plugins you have installed; you can find a non-exhaustive list of plugins in the ovos plugins awesome list.
Yes, ovos-core supports several wake word plugins.
Additionally, OVOS allows you to load any number of hot words in parallel and trigger different actions when they are detected.
Each hotword can do one or more of several actions, such as starting active listening, playing a sound, or emitting a message bus event.
Mostly yes, depending on exactly what you mean by this question.
OVOS can run without any wake word configured; in this case you will only be able to interact via CLI or button press. This is best for privacy, but not so great for a smart speaker.
ovos-core also provides a couple of experimental settings. If you enable continuous listening, VAD will be used to detect speech and no wake word is needed; just speak to mycroft and it should answer! However, this setting is experimental for a reason: you may find that mycroft answers your TV or even tries to answer itself if your hardware does not have AEC (acoustic echo cancellation).
Another experimental setting is hybrid mode. With hybrid mode you can ask follow-up questions up to 45 seconds after the last mycroft interaction; if you do not interact with mycroft, it will go back to waiting for a wake word.
By default, to answer a request, OVOS waits for the wake word, records your speech until VAD detects you have stopped talking, transcribes the recording with the STT engine, matches the transcript to a skill via intent parsing, and finally speaks the skill's response through the TTS engine.
Throughout this process there are a number of factors that can affect the perceived speed of responses, such as the speed of the STT engine, network latency for any cloud services, external API calls made by the skill, and TTS generation time.
Many schools, universities and workplaces run a proxy on their network. If you need to type in a username and password to access the external internet, then you are likely behind a proxy.
If you plan to use OVOS behind a proxy, then you will need to do an additional configuration step.
NOTE: In order to complete this step, you will need to know the hostname and port of the proxy server. Your network administrator can provide these details. They may also want information on what type of traffic OVOS will be using: we use https traffic on port 443, primarily for accessing REST-based APIs.
If you are using OVOS behind a proxy without authentication, add the following environment variables, changing proxy_hostname.com and proxy_port to the values for your network. These commands are executed from the Linux command line interface (CLI).
$ export http_proxy=http://proxy_hostname.com:proxy_port
$ export https_proxy=http://proxy_hostname.com:proxy_port
$ export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,0.0.0.0,::1"
If you are behind a proxy which requires authentication, add the following environment variables, changing proxy_hostname.com and proxy_port to the values for your network. These commands are executed from the Linux command line interface (CLI).
$ export http_proxy=http://user:password@proxy_hostname.com:proxy_port
$ export https_proxy=http://user:password@proxy_hostname.com:proxy_port
$ export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,0.0.0.0,::1"
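Note that export only lasts for the current shell session. One common way to persist the settings system-wide (distro-dependent; shown here for illustration) is to append them to /etc/environment:
$ echo 'http_proxy=http://user:password@proxy_hostname.com:proxy_port' | sudo tee -a /etc/environment
$ echo 'https_proxy=http://user:password@proxy_hostname.com:proxy_port' | sudo tee -a /etc/environment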
OpenVoiceOS is part of a larger ecosystem of FOSS voice technology; we work closely with the following projects.
+HiveMind is a community-developed superset or extension of OpenVoiceOS
+With HiveMind, you can extend one (or more, but usually just one!) instance of Mycroft to as many devices as you want, +including devices that can't ordinarily run Mycroft!
+HiveMind's developers have successfully connected to Mycroft from a PinePhone, a 2009 MacBook, and a Raspberry Pi 0, +among other devices. Mycroft itself usually runs on our desktop computers or our home servers, but you can use any +Mycroft-branded device, or OpenVoiceOS, as your central unit.
You can find the website here and the source code here.
Plasma Bigscreen integrates and uses OpenVoiceOS as its voice framework stack to serve voice queries and voice applications (skills with a homescreen). One can easily enable mycroft/ovos integration in the bigscreen launcher by installing ovos-core and the required services, then enabling the integration switch in the bigscreen KCM.
You can find the website here and the source code here.
Neon was one of the first projects ever to adopt ovos-core as a library to build their own voice assistant. Neon works closely with OVOS and both projects are mostly compatible.
You can find the website here and the source code here.
Mycroft AI started it all. It was one of the first ever FOSS voice assistants and is the project OVOS descends from.
Most applications made for mycroft will work in OVOS, and vice-versa.
You can find the website here and the source code here.
+Secret Sauce AI is a coordinated community of tech minded AI enthusiasts working together on projects to identify +blockers and improve the basic open source tools and pipeline components in the AI (voice) assistant pipeline (wakeword, +ASR, NLU, NLG, TTS). The focus is mostly geared toward deployment on edge devices and self-hosted solutions. This is not +a voice assistant project in and of itself, rather Secret Sauce AI helps AI (voice) assistant projects come together as +individuals and solve basic problems faced by the entire community.
+ + +Editor's Note +Some of the more detailed definitions will be moved to other pages, it's just here to keep track of the information for now.
+All the repositories under OpenVoiceOS organization
+The team behind OVOS
Confirmation approaches can also be defined by Statements or Prompts, but when we talk about them in the context of confirmations we call them Implicit and Explicit.
+This type of confirmation is also a statement. The idea is to parrot the information back to the user to confirm that it +was correct, but not require additional input from the user. The implicit confirmation can be used in a majority of +situations.
+This type of confirmation requires an input from the user to verify everything is correct.
Any time the user needs to input a lot of information or sort through a variety of options, a conversation will be needed. Users may be used to systems that require them to separate input into different chunks.
+Allows for natural conversation by having skills set a "context" that can be used by subsequent handlers. Context could be anything from person to location. Context can also create "bubbles" of available intent handlers, to make sure certain Intents can't be triggered unless some previous stage in a conversation has occurred.
+You can find an example Tea Skill using conversational context on Github.
+As you can see, Conversational Context lends itself well to implementing a dialog tree or conversation tree.
+All of the letters and letter combinations that represent a phoneme.
+The OpenVoiceOS home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know like the current date, time and weather. The home screen contains various features and integrations which you can learn more about in the following sections.
+When an utterance is classified for its action and entities (e.g. 'turn on the kitchen lights' -> skill: home assistant, action: turn on/off, entity: kitchen lights)
(Media Player Remote Interfacing Specification) is a standard D-Bus interface which aims to provide a common programmatic API for controlling media players. More Information
Primary configuration file for the voice assistant. Possible locations:
- /home/ovos/.local/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf
- /etc/mycroft/mycroft.conf
- /home/ovos/.config/mycroft/mycroft.conf
- /etc/xdg/mycroft/mycroft.conf
- /home/ovos/.mycroft/mycroft.conf
More Information
OCP stands for OpenVoiceOS Common Play; it is a full-fledged media player.
OCP is an OVOSAbstractApplication; this means it is a standalone but native OVOS application with full voice integration.
OCP differs from mycroft-core's media handling in several aspects.
+The central repository where the voice assistant "brain" is developed
OPM is the OVOS Plugin Manager; this base package provides arbitrary plugins to the OVOS ecosystem.
OPM plugins import their base classes from OPM, making them portable and independent from core; plugins can be used in your standalone projects.
By using OPM you can ensure a standard interface to plugins and easily make them configurable in your project. Plugin code and example configurations are mapped to a string via Python entrypoints in setup.py.
Some projects using OPM are ovos-core, hivemind-voice-sat, ovos-personal-backend, ovos-stt-server and ovos-tts-server.
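As an illustration of that entrypoint mapping, a hypothetical OPM STT plugin's setup.py might look like this (the package and class names are made up; the "mycroft.plugin.stt" entrypoint group follows the naming convention OPM scans):
from setuptools import setup

setup(
    name="ovos-stt-plugin-example",  # hypothetical plugin package
    version="0.1.0",
    packages=["ovos_stt_plugin_example"],
    entry_points={
        # OPM discovers plugins through entrypoint groups like this one
        "mycroft.plugin.stt": [
            "ovos-stt-plugin-example=ovos_stt_plugin_example:ExampleSTT"
        ]
    },
)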
The GUI service in ovos-core exposes a websocket to the GUI client following the protocol outlined here.
The GUI library which implements the protocol lives in the mycroft-gui repository. The repository also hosts a development client for skill developers wanting to develop on the desktop.
OVOS-shell is the OpenVoiceOS client implementation of the mycroft-gui library used in our embedded device images; other distributions may offer alternative implementations such as plasma-bigscreen or mycroft mark2.
OVOS-shell is tightly coupled to PHAL; the following companion plugins should be installed if you are using ovos-shell.
Physical Hardware Abstraction Layer
PHAL is our Platform/Hardware Abstraction Layer; it completely replaces the concept of the hardcoded "enclosure" from mycroft-core.
Any number of plugins providing functionality can be loaded and validated at runtime. Plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the mycroft mark2 plugin.
PHAL plugins can perform actions such as hardware detection before loading; e.g., the mark2 plugin will not load if it does not detect the sj201 hat. This makes plugins safe to install and bundle by default in our base images.
+The smallest phonetic unit in a language that is capable of conveying a distinction in meaning, as the m of mat and the b of bat in English.
+Snapcast is a multiroom client-server audio player, where all clients are time synchronized with the server to play perfectly synced audio. It's not a standalone player, but an extension that turns your existing audio player into a Sonos-like multiroom solution. +More Information
+You can think of Prompts as questions and Statements as providing information to the user that does not need a follow-up response.
+Qt Markup Language, the language for Qt Quick UIs. More Information
+The Mycroft GUI Framework uses QML.
+Speech To Text +Also known as ASR, automated speech recognition, the process of converting audio into words
+Text To Speech +The process of generating the audio with the responses
+Command, question, or query from a user (eg 'turn on the kitchen lights')
A specific word or phrase trained to activate the STT (eg 'hey mycroft')
+XDG stands for "Cross-Desktop Group", and it's a way to help with compatibility between systems. More Information
OVOS has been confirmed to run on several devices, with more to come.
+Recommendations and notes on speakers and microphones
+Most audio devices are available to use with the help of Plugins and should for the most part work by default.
If your device does not work, pop into our Matrix support channel, create an issue, or start a discussion about your device.
Most USB devices should work without any issues, but not all devices are created equal.
+HDMI audio should work without issues if your device supports it.
+ +Analog output to headphones, or external speakers should work also. There may be some configuration needed on some devices.
+Audio Troubleshooting - Analog
There are several HATs available: some with just a microphone, others that also play audio out. Several are supported and tested; others should work with the proper configuration (some require a /boot/config.txt modification).
Some special sound boards are also supported:
Mark 1 custom sound board (requires a /boot/config.txt modification)
If your device supports video out, you can use a screen with your device. (RPI 3/3b/3b+ will not run the OVOS GUI, ovos-shell, due to lack of processing power, but you can access a command prompt on a locally connected screen.)
OVOS supports touchscreen interaction, but not all touchscreens are created equal. On some USB touchscreens, the touch matrix is not synced with the OVOS display, and an X11 setup with a window manager is required to adjust the settings.
+ + +WIP
+ +The home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know like the current date, time and weather. The home screen contains various features and integrations which you can learn more about in the following sections.
+ +The Night Mode feature lets you quickly switch your home screen into a dark standby clock, reducing the amount of light emitted by your device. This is especially useful if you are using your device in a dark room or at night. You can enable the night mode feature by tapping on the left edge pill button on the home screen.
+ +The Quick Actions Dashboard provides you with a card-based interface to quickly access and add your most used action. The Quick Actions dashboard comes with a variety of pre-defined actions like the ability to quickly add a new alarm, start a new timer or add a new note. You can also add your own custom actions to the dashboard by tapping on the plus button in the top right corner of the dashboard. The Quick Actions dashboard is accessible by tapping on the right edge pill button on the home screen.
+ +OpenVoiceOS comes with support for dedicated voice applications. Voice Applications can be dedicated skills or PHAL plugins, providing their own dedicated user interface. The application launcher will show you a list of all available voice applications. You can access the application launcher by tapping on the center pill button on the bottom of the home screen.
+ +The home screen supports custom wallpapers and comes with a bunch of wallpapers to choose from. You can easily change your custom wallpaper by swiping from right to left on the home screen.
+ +The notifications widget provides you with a quick overview of all your notifications. The notifications bell icon will be displayed in the top left corner of the home screen. You can access the notifications overview by tapping on the bell icon when it is displayed.
The timer widget is displayed in the top left corner after the notifications bell icon. It will show up when you have an active timer running. Clicking on the timer widget will open the timers overview.
The alarm widget is displayed in the top left corner after the timer widget. It will show up when you have an active alarm set. Clicking on the alarm widget will open the alarms overview.
The media player widget is displayed at the bottom of the home screen. It replaces the examples widget when a media player is active. The media player widget shows the currently playing media and provides a quick way to pause, resume or skip the current media. You can also quickly access the media player by tapping the quick display media player button on the right side of the media player widget.
The homescreen has several customizations available. This is a sample settings.json file with all of the options explained:
{
+ "__mycroft_skill_firstrun": false,
+ "weather_skill": "skill-weather.openvoiceos",
+ "datetime_skill": "skill-date-time.mycroftai",
+ "examples_skill": "ovos-skills-info.openvoiceos",
+ "wallpaper": "default.jpg",
+ "persistent_menu_hint": false,
+ "examples_enabled": true,
+ "randomize_examples": true,
+ "examples_prefix": true
+}
+
- datetime_skill: can also be set to e.g. skill-ovos-date-time.openvoiceos
- examples: example utterances are retrieved via the ovos_skills_manager.utils.get_skills_example() function
Most of our guides have you create a user called ovos with a password of ovos. While this makes installation easy, it is VERY insecure. As soon as possible, you should secure SSH using a key and disable password authentication.
+Create a keyfile (you can change ovos to whatever you want)
+ssh-keygen -t ed25519 -f ~/.ssh/ovos
+
+Copy to host (use the same filename as above, specify the user and hostname you are using)
+ssh-copy-id -i ~/.ssh/ovos ovos@mycroft
+
On your desktop, edit ~/.ssh/config and add the following lines
+Host rp2
+ user ovos
+ IdentityFile ~/.ssh/ovos
+
+On your ovos system, edit /etc/ssh/sshd_config and add or uncomment the following line:
+PasswordAuthentication no
+
+restart sshd or reboot
+sudo systemctl restart sshd
+
+Anything connected to the bus can fully control OVOS, and OVOS usually has full control over the whole system!
+You can read more about the security issues over at Nhoya/MycroftAI-RCE
In mycroft-core, all skills share a bus connection; this allows malicious skills to manipulate the bus and affect other skills.
You can see a demonstration of this problem with BusBrickerSkill.
"shared_connection": false ensures each skill gets its own websocket connection and avoids this problem.
Additionally, it is recommended you set "host": "127.0.0.1"; this ensures no outside-world connections are allowed.
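A sketch of those two recommendations in the user's mycroft.conf (assuming your build reads the message bus settings from the websocket section, as mycroft-core derivatives do):
{
  "websocket": {
    "host": "127.0.0.1",
    "shared_connection": false
  }
}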
This section is provided as a basic Q&A for common questions.
+ + + + + + + + +The listener is responsible for loading STT, VAD and Wake Word plugins
+Speech is transcribed into text and forwarded to the skills service.
+The newest listener that OVOS uses is ovos-dinkum-listener. It is a version of the listener from the Mycroft Dinkum software for the Mark 2 modified for use with OVOS.
+ +OVOS uses microphone plugins to support different setups and devices.
+NOTE only ovos-dinkum-listener has this support.
+The default plugin that OVOS uses is ovos-microphone-plugin-alsa and for most cases should work fine.
+If you are running OVOS on a Mac, you need a different plugin to access the audio. ovos-microphone-plugin-sounddevice
+OVOS microphone plugins are available on PyPi
+pip install ovos-microphone-plugin-sounddevice
or
+pip install --pre ovos-microphone-plugin-sounddevice
for the latest alpha
versions.
NOTE The alpha
versions may be needed until the release of ovos-core 0.1.0
| Plugin | Usage |
| --- | --- |
| ovos-microphone-plugin-alsa | Default plugin - should work in most cases |
| ovos-microphone-plugin-sounddevice | Needed when running OVOS on a Mac, but also works on other platforms |
| ovos-microphone-plugin-socket | Used to connect a websocket microphone for remote usage |
| ovos-microphone-plugin-files | Uses a file as the voice input instead of a microphone |
| ovos-microphone-plugin-pyaudio | Uses PyAudio for audio processing |
| ovos-microphone-plugin-arecord | Uses arecord to get input from the microphone; in some cases this may be faster than the default alsa |
Microphone plugin configuration is located under the top-level listener section:
{
+ "listener": {
+ "microphone": {
+ "module": "ovos-microphone-plugin-alsa",
+ "ovos-microphone-plugin-alsa": {
+ "device": "default"
+ }
+ }
+ }
+}
+
The only required key is "module"; the plugin will then use its default values.
The "device" value is used if you have several microphones attached; it specifies which one to use.
Specific plugins may have other values that can be set. Check the GitHub repo of each plugin for more details.
PHAL is our Platform/Hardware Abstraction Layer; it completely replaces the concept of the hardcoded "enclosure" from mycroft-core.
Any number of plugins providing functionality can be loaded and validated at runtime. Plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as the Mycroft Mark 2 plugin.
PHAL plugins can perform actions such as hardware detection before loading; e.g., the mark2 plugin will not load if it does not detect the sj201 hat. This makes plugins safe to install and bundle by default in our base images.
+Platform/Hardware specific integrations are loaded by PHAL, these plugins can handle all sorts of system activities.
| Plugin | Description |
| --- | --- |
| ovos-PHAL-plugin-alsa | volume control |
| ovos-PHAL-plugin-system | reboot / shutdown / factory reset |
| ovos-PHAL-plugin-mk1 | mycroft mark1 integration |
| ovos-PHAL-plugin-mk2 | mycroft mark2 integration |
| ovos-PHAL-plugin-respeaker-2mic | respeaker 2mic hat integration |
| ovos-PHAL-plugin-respeaker-4mic | respeaker 4mic hat integration |
| ovos-PHAL-plugin-wifi-setup | wifi setup (central plugin) |
| ovos-PHAL-plugin-gui-network-client | wifi setup (GUI interface) |
| ovos-PHAL-plugin-balena-wifi | wifi setup (hotspot) |
| ovos-PHAL-plugin-network-manager | wifi setup (network manager) |
| ovos-PHAL-plugin-brightness-control-rpi | brightness control |
| ovos-PHAL-plugin-ipgeo | automatic geolocation (IP address) |
| ovos-PHAL-plugin-gpsd | automatic geolocation (GPS) |
| ovos-PHAL-plugin-dashboard | dashboard control (ovos-shell) |
| ovos-PHAL-plugin-notification-widgets | system notifications (ovos-shell) |
| ovos-PHAL-plugin-color-scheme-manager | GUI color schemes (ovos-shell) |
| ovos-PHAL-plugin-configuration-provider | UI to edit mycroft.conf (ovos-shell) |
| ovos-PHAL-plugin-analog-media-devices | video/audio capture devices (OCP) |
AdminPHAL performs the exact same function as PHAL, but the plugins it loads have root privileges.
This service is intended for handling any OS-level interactions requiring escalation of privileges. Be very careful when installing Admin plugins and scrutinize them closely
+NOTE: Because this service runs as root, plugins it loads are responsible for not writing configuration changes which would result in breaking config file permissions.
Admin plugins are just like regular PHAL plugins, but run with root privileges.
Admin plugins will only load if their configuration contains "enabled": true. All admin plugins need to be explicitly enabled.
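As a sketch, enabling an admin plugin in mycroft.conf might look like this (assuming admin plugin settings nest under a PHAL/admin section; check each plugin's README for its exact key):
{
  "PHAL": {
    "admin": {
      "ovos-PHAL-plugin-system": {"enabled": true}
    }
  }
}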
You can find plugin packaging documentation here.
+ +Skills give OVOS the ability to perform a variety of functions. They can be installed or removed by the user, and can be easily updated to expand functionality. To get a good idea of what skills to build, let’s talk about the best use cases for a voice assistant, and what types of things OVOS can do.
+OVOS can run on a variety of platforms from the Linux Desktop to Single Board Computers (SBCs) like the Raspberry Pi. Different devices will have slightly different use cases. Devices in the home are generally located in the living room or kitchen and are ideal for listening to the news, playing music, general information, using timers while cooking, checking the weather, and other similar activities that are easily accomplished hands-free.
+We cover a lot of the basics with our Default Skills, things like Timers, Alarms, Weather, Time and Date, and more.
We also call this General Question and Answer, and it covers all of those factual questions someone might think to ask a voice assistant. Questions like "who was the 32nd President of the United States?" or "how tall is the Eiffel Tower?" Although the Default Skills cover a great deal of questions, there is room for more. Many topics could use a specific skill, such as Science, Academics, Movie Info, TV Info, and Music Info.
+ +One of the biggest use cases for Smart Speakers is playing media. The reason media playback is so popular is that it makes playing a song so easy, all you have to do is say “Hey Mycroft play the Beatles,” and you can be enjoying music without having to reach for a phone or remote. In addition to listening to music, there are skills that handle videos as well.
+Much like listening to music, getting the latest news with a simple voice interaction is extremely convenient. OVOS supports multiple news feeds, and has the ability to support multiple news skills.
+Another popular use case for Voice Assistants is to control Smart Home and IoT products. Within the OVOS ecosystem there are skills for Home Assistant, Wink IoT, Lifx and more, but there are many products that we do not have skill for yet. The open source community has been enthusiastically expanding OVOS's ability to voice control all kinds of smart home products.
+Voice games are becoming more and more popular, especially those that allow multiple users to play together. Trivia games are some of the most popular types of games to develop for voice assistants. There are several games already available for OVOS. There are native voice adventure games, ports of the popular text adventure games from infocom, a Crystal Ball game, a Number Guessing game and much more!
+Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills however more about that at a later stage.
+You can ask your device what time or date it is just in case you lost your watch.
> Hey Mycroft, what time is it?

> Hey Mycroft, what is the date?
Having your OpenVoiceOS device show the time is great, but it is even better to be woken up in the morning by your device.
> Hey Mycroft, set an alarm for 8 AM.
Sometimes you are just busy but want to be alerted after a certain time. For that you can use timers.
> Hey Mycroft, set a timer for 5 minutes.
You can always set more timers and even name them, so you know which timer is for what.
> Hey Mycroft, set another timer called rice cooking for 7 minutes.
You can ask your device what the weather is or will be at any given time or place.
> Hey Mycroft, what is the weather like today?
The weather skill actually uses multiple pages indicated by the small dots at the bottom of the screen.
There are more skills installed; just try them. If you don't get the response you expected, see the section on installing new skills.
Each skill will have its own config file, usually located at ~/.local/share/mycroft/skills/<skill_id>/settings.json.
Skill settings provide the ability for users to configure a Skill using the command line or a web-based interface.
+This is often used to:
+Skill settings are completely optional.
+Refer to each skill repository for valid configuration values.
+ +This section will help you to understand what a skill is and how to install and use skills with OVOS.
+OVOS official skills can be found on PyPi and the latest stable version can be installed with a pip install
command.
pip install ovos-skill-naptime
If you have issues installing with this command, you may need to use the alpha
versions. Pip has a command line flag for this --pre
.
pip install --pre ovos-skill-naptime
will install the latest alpha
version. This should fix dependency issues with the stable versions.
Most skills are found throughout GitHub. The official skills can be found with a simple search on the OVOS GitHub page. There are a few other places they can be found: Neon AI has several skills, and a search through GitHub will surely find more.
There are a few ways to install skills in OVOS. The preferred way is with pip and a setup.py file.
The preferred method is with pip. If a skill has a setup.py file, it can be installed this way.
The syntax is pip install git+<github/repository.git>.
e.g. pip install git+https://github.com/OpenVoiceOS/skill-ovos-date-time.git should install the ovos-date-time skill.
Skills can be installed from a local file also.
+Clone the repository.
+git clone https://github.com/OpenVoiceOS/skill-ovos-date-time
pip install ./skill-ovos-date-time
After installing skills this way, ovos skills service needs to be restarted.
+systemctl --user restart ovos-skills
This is NOT the preferred method; it is here for backward compatibility with the original mycroft-core skills.
Skills can also be cloned directly into the skill directory, usually located at ~/.local/share/mycroft/skills/.
Enter the skill directory:
cd ~/.local/share/mycroft/skills
and clone the skill there with git:
git clone <github/repository.git>
e.g. git clone https://github.com/OpenVoiceOS/skill-ovos-date-time.git will install the ovos-date-time skill.
A restart of the ovos-skills service is not required when installing this way.
+The OVOS skills manager is in need of some love, and when official skill stores are created, this will be updated to use the new methods. Until then, this method is NOT recommended, and NOT supported. The following is included just as reference.
+Install skills from any appstore!
+The mycroft-skills-manager alternative that is not vendor locked, this means you must use it responsibly!
+Do not install random skills, different appstores have different policies!
+Keep in mind any skill you install can modify mycroft-core at runtime, and very likely has root access if you are running on a Raspberry Pi.
+pip install ovos-skills-manager
+
+Enable a skill store
+osm enable --appstore [ovos|mycroft|pling|andlo|all]
+
+Search for a skill and install it
+osm install --search
+
+See more osm commands
+osm --help
+osm install --help
+
+
+
+ STT (Speech to Text) is what converts your voice into commands that OVOS recognizes, then converts to an intent that is used to activate skills.
+There are several STT engines available and OVOS uses ovos-stt-plugin-server and a list of public servers hosted by OVOS community members by default.
| Plugin | Offline | Type |
| --- | --- | --- |
| ovos-stt-plugin-vosk | yes | FOSS |
| ovos-stt-plugin-chromium | no | API (free) |
| neon-stt-plugin-google_cloud_streaming | no | API (key) |
| neon-stt-plugin-scribosermo | yes | FOSS |
| neon-stt-plugin-silero | yes | FOSS |
| neon-stt-plugin-polyglot | yes | FOSS |
| neon-stt-plugin-deepspeech_stream_local | yes | FOSS |
| ovos-stt-plugin-selene | no | API (free) |
| ovos-stt-plugin-http-server | no | API (self hosted) |
| ovos-stt-plugin-pocketsphinx | yes | FOSS |
Several STT engines have different configuration settings for optimizing their use; some offer different models, or let you specify a specific STT server to use.
We will cover basic configuration of the default STT engine, ovos-stt-plugin-server.
All changes will be made in the user configuration file, e.g. ~/.config/mycroft/mycroft.conf.
Open the file for editing: nano ~/.config/mycroft/mycroft.conf
If your file is empty or does not have an "stt" section, you need to create it. Add this to your config:
{
    "stt": {
        "module": "ovos-stt-plugin-server",
        "fallback_module": "ovos-stt-plugin-vosk",
        "ovos-stt-plugin-server": {
            "url": "https://fasterwhisper.ziggyai.online/stt"
        },
        "ovos-stt-plugin-vosk": {}
    }
}
By default, the language configured in OVOS will be used, but OVOS should (WIP) detect the spoken language and convert it as necessary.
+"module"
- This is where you specify what STT module to use.
"fallback_module"
- If by chance your first STT engine fails, OVOS will try to use this one. It is usually configured to use an on device
engine so that you always have some output even if you are disconnected from the internet.
"ovos-tts-server-plugin"
"ovos-tts-plugin-piper"
- Specify specific plugin settings here. Multiple entries are allowed. If an empty dict is provided, {}
, the plugin will use its default values.
Refer to the STT plugin GitHub repository for specifications on each plugin
TTS plugins are responsible for converting text into audio for playback. Several options are available, each with different attributes and supported languages. Some can run on device; others need an internet connection to work.
As with most OVOS packages, the TTS plugins are available on PyPI and can be installed with pip install:
pip install ovos-tts-plugin-piper will install the latest stable version. If there are installation errors, you can install the latest alpha versions of the plugins:
pip install --pre ovos-tts-plugin-piper
By default, OVOS uses ovos-tts-server-plugin and a series of public TTS servers, provided by OVOS community members, to send speech to your device. If you host your own TTS server, or this option is not acceptable to you, there are many other options to use.
+TTS plugins are responsible for converting text into audio for playback.
+ +Advanced TTS Plugin Documentation
+Several TTS engines have different configuration settings for optimizing its use. Several have different voices to use, or you can specify a TTS server to use.
+We will cover basic configuration of the default TTS engine ovos-tts-server-plugin
.
All changes will be made in the User configuration file, eg. ~/.config/mycroft/mycroft.conf
.
Open the file for editing. nano ~/.config/mycroft/mycroft.conf
.
If your file is empty, or does not have a "tts"
section, you need to create it. Add this to your config
{
    "tts": {
        "module": "ovos-tts-server-plugin",
        "fallback_module": "ovos-tts-plugin-piper",
        "ovos-tts-server-plugin": {
            "host": "https://pipertts.ziggyai.online",
            "voice": "alan-low"
        },
        "ovos-tts-plugin-piper": {}
    }
}
+"module"
- This is where you specify what TTS plugin to use.
+- ovos-tts-server-plugin in this example.
+ - This plugin, by default, uses a random selection of public TTS servers provided by the OVOS community. With no "host"
provided, one of those will be used.
+ - You can still change your voice without changing the "host"
. The default voice is "alan-low"
, or the Mycroft original voice `"Alan Pope".
Changing your assistant's voice
+"fallback_module"
+- If by chance your first TTS engine fails, OVOS will try to use this one. It is usually configured to use an on device
engine so that you always have some output even if you are disconnected from the internet.
"ovos-tts-server-plugin"
"ovos-tts-plugin-piper"
+- Specify specific plugin settings here. Multiple entries are allowed. If an empty dict is provided, {}
, the plugin will use its default values.
Refer to the TTS github repository for specifications on each plugin
+OVOS uses "wakewords" to activate the system. This is what "hey Google" or "Alexa" is on proprietary devices. By default, OVOS uses the WakeWord "hey Mycroft".
OVOS "hotwords" is the configuration section that specifies what each hotword does. Multiple "hotwords" can be used to do a variety of things, from putting OVOS into active listening mode (a WakeWord like "hey Mycroft") to issuing a command such as "stop" or "wake up".
+As with everything else, this too can be changed, and several plugins are available. Some work better than others.
| Plugin | Type | Description |
| --- | --- | --- |
| ovos-ww-plugin-precise-lite | Model | The most accurate plugin available, as it uses pretrained models; community models are also available |
| ovos-ww-plugin-openWakeWord | Model | Uses openWakeWord for detection |
| ovos-ww-plugin-vosk | Full Word | Uses full word detection from a loaded model |
| ovos-ww-plugin-pocketsphinx | Phonemes | Probably the least accurate, but can be used on almost any device |
| ovos-ww-plugin-hotkeys | Model | Uses a keyboard or button input to emulate a wake word being said; useful for privacy, but not so much for a smart speaker |
| ovos-ww-plugin-snowboy | Model | Uses the snowboy wakeword engine |
| ovos-ww-plugin-nyumaya | Model | WakeWord plugin using Nyumaya |
The configuration for wake words is in the "listener" section of mycroft.conf, and the configuration of hotwords is in the "hotwords" section of the same file.
This example will use the vosk plugin and change the wake word to "hey Ziggy".
Add the following to your ~/.config/mycroft/mycroft.conf file:
{
    "listener": {
        "wake_word": "hey_ziggy"
    },
    "hotwords": {
        "hey_ziggy": {
            "module": "ovos-ww-plugin-vosk",
            "listen": true,
            "active": true,
            "sound": "snd/start_listening.wav",
            "debug": false,
            "rule": "fuzzy",
            "lang": "en",
            "samples": [
                "hey ziggy",
                "hay ziggy"
            ]
        }
    }
}
+
The most important value is "wake_word": "hey_ziggy" in the "listener" section.
This tells OVOS what the default wake word should be.
In the "hotwords" section, "active": true is only used if multiple wake words are being used. By default, whatever wake_word is set in the listener section is automatically set to true.
If you want to disable a wake word, you can set this to false.
If enabling a wake word, be sure to also set "listen": true.
Multiple hotwords can be configured at the same time, even the same word with different plugins. This allows more accurate plugins to be tried before less accurate ones, but only if the plugin is installed.
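For example, a second hotword could be added alongside hey_ziggy (a sketch borrowing the classic mycroft-core "wake up" entry; it assumes the pocketsphinx plugin is installed, and the wakeup flag, which pulls the assistant out of sleep mode, may vary between listener versions):
{
  "hotwords": {
    "wake up": {
      "module": "ovos-ww-plugin-pocketsphinx",
      "phonemes": "W EY K . AH P",
      "wakeup": true,
      "listen": false
    }
  }
}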
OpenVoiceOS ready-to-use images come in two flavours: the Buildroot version, a minimal consumer-type image, and the Manjaro version, a full distribution that is easier to develop on.
| | OpenVoiceOS (Buildroot) | OpenVoiceOS (Manjaro) | Neon AI | Mark II (Dinkum) | Mycroft A.I. (PiCroft) |
|---|---|---|---|---|---|
| **Software - Architecture** | | | | | |
| Core | ovos-core | ovos-core | neon-core | Dinkum | mycroft-core |
| GUI | ovos-shell (mycroft-gui based) | ovos-shell (mycroft-gui based) | ovos-shell (mycroft-gui based) | plasma-nano (mycroft-gui based) | N/A |
| Services | systemd user session | systemd system session | systemd system session | systemd system session | N/A |
| **Hardware - Compatibility** | | | | | |
| Raspberry Pi | 3/3b/3b+/4 | 4 | 4 | Mark II (only) | 3/3b/3b+/4 |
| X86_64 | planned | No | WIP | No | No |
| Virtual Appliance | planned | No | Unknown | No | No |
| Docker | No (possibly in future) | Yes | Yes | No | No |
| Mark-1 | Yes (WIP) | No | No | No | No |
| Mark-2 | Yes (Dev-Kit, Retail WIP) | Yes (Dev-Kit, Retail) | Yes (Dev-Kit, Retail) | Yes (Retail ONLY) | No |
| **Hardware - Peripherals** | | | | | |
| ReSpeaker | 2-mic, 4-mic squared, 4-mic linear, 6-mic | 2-mic, 4-mic squared, 4-mic linear, 6-mic | Unknown | No | Yes (manual installation?) |
| USB | Yes | Yes | Unknown | No | Yes (manual installation) |
| SJ-201 | Yes | Yes | Yes | Yes | No (sandbox image maybe) |
| Google AIY v1 | Yes (manual configuration) | Yes (manual installation) | Unknown | No | No (manual installation?) |
| Google AIY v2 | No (perhaps in the future) | Yes (manual installation) | Unknown | No | No (manual installation?) |
| **Screen - GUI** | | | | | |
| GUI supported (showing a GUI if a screen is attached) | Yes (ovos-shell on eglfs) | Yes (ovos-shell on eglfs) | Yes (ovos-shell on eglfs) | Yes (plasma-nano on X11) | No |
| **Network Setup - Options** | | | | | |
| Mobile WiFi Setup (easy device "hotspot" to connect to a preset network from phone or pad) | Yes | No | No | Yes | No |
| On-device WiFi Setup (configure the WiFi connection on the device itself) | Yes | Yes | Yes | No | No |
| On-screen keyboard | Yes | Yes | Yes | Yes | No |
| Reconfigure network (easy way to change the network settings) | Yes | Yes | Yes | No | No |
| **Configuration - Options** | | | | | |
| Data privacy | Yes | Yes | Yes | Partial | Partial |
| Offline mode | Yes | Yes | Yes | No | No |
| Color theming | Yes | Yes | Yes | No | No |
| Non-Pairing mode | Yes | Yes | Yes | No | No |
| API Access w/o pairing | Yes | Yes | Yes | No | No |
| On-Device configuration | Yes | Yes | Yes | No | No |
| Online configuration | Dashboard (WIP) | Dashboard (WIP) | WIP | Yes | Yes |
| **Customization** | | | | | |
| Open Build System | Yes | Yes | Yes | Partial (build tools are not public) | Yes |
| Package manager | No (no buildtools available; perhaps opkg in the future) | Yes (pacman) | Yes | Yes (limited because of read-only filesystem) | Yes |
| **Updating** | | | | | |
| Update mechanism(s) | pip (in the future: firmware updates, on-device and Over-The-Air) | pip, package manager | plugin-based update mechanism, OS updates WIP | OTA (controlled by Mycroft) | pip, package manager |
| **Voice Assistant - Functionality** | | | | | |
| STT - On device | Yes (Kaldi/Vosk-API, WhisperCPP WIP, Whisper TFlite WIP) | Yes (Kaldi/Vosk-API) | Yes (Vosk, Deepspeech) | Yes (Vosk, Coqui) | No |
| STT - On premises | Yes (OVOS STT Server, any plugin) | Yes (OVOS STT Server, any plugin) | Yes (OVOS STT Server, any plugin) | No | No |
| STT - Cloud | Yes (OVOS Server Proxy, more...?) | Yes (OVOS Server Proxy) | Yes | Yes (Selene, Google Cloud Proxy) | Yes (Selene, Google (Chromium) Proxy) |
| TTS - On device | Yes (Mimic 1, more...?) | Yes (Mimic 1, more...?) | Yes (Mimic 1, Mimic 3, Coqui) | Yes (Mimic 3) | Yes (Mimic 1) |
| TTS - On premises | Yes (?) | Yes (?) | Yes (Coqui, Mozilla, Larynx) | No | No |
| TTS - Cloud | Yes (Mimic 2, Mimic 3, more...?) | Yes (Mimic 2, Mimic 3, more...?) | Yes (Amazon Polly) | No | No |
| **Smart Speaker - Functionality** | | | | | |
| Music player connectivity (use of external applications on other devices to connect to your device) | Yes (Airplay, Spotifyd, Bluetooth, Snapcast, KDE Connect) | Unknown | Unknown | Yes (MPD, local files) | No (manual installation?) |
| Music player sync | Yes (OCP, MPRIS) | Yes (OCP, MPRIS) | Yes (OCP, MPRIS) | No | No |
| HomeAssistant integration | Unknown | Yes (HomeAssistant PHAL plugin) | WIP (Mycroft skill reported working) | Unknown | Unknown |
| Camera support | Yes | WIP | Yes | Unknown | Unknown |
The Buildroot OpenVoiceOS edition is considered to be a consumer friendly type of device, or, as Mycroft A.I. would call it, a retail version. However, as we do not target a specific hardware platform and would like to support custom made systems, we are implementing a smart way to detect and configure different types of Raspberry Pi HATs.

At boot, the system scans the I2C bus for known and supported HATs and, if one is found, configures the underlying Linux sound system. At the moment this is still very much in development, but the following HATs are or should soon be supported by this mechanism:
- ReSpeaker 2-mic HAT
- ReSpeaker 4-mic Square HAT
- ReSpeaker 4-mic linear / 6-mic HAT
- USB devices such as the PS3 EYE
- SJ-201 Dev Kits
- SJ-201 Mark2 retail device
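If you are curious what this auto-detection can see on your own device, you can probe the I2C bus manually with the standard i2c-tools utilities (bus number 1 is the usual one on a Raspberry Pi, but it may differ on other boards):

```bash
# install the userspace I2C tools (Debian/Ubuntu/Raspbian)
sudo apt install i2c-tools

# print a grid of detected device addresses on I2C bus 1;
# the WM8960 codec on a ReSpeaker 2-mic HAT, for example, shows up at 0x1a
sudo i2cdetect -y 1
```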
+TODO - write docs
Your OpenVoiceOS device is accessible over the network from your Windows computer. This is still a work in progress, but you can open a file explorer and navigate to your OpenVoiceOS device.

At the moment the following directories within the user's home directory are shared over the network:
- Documents
- Music
- Pictures

These folders are also used by the KDE Connect file transfer plugins and, for instance, the Camera skill (Hey Mycroft, take a selfie) and / or the Homescreen skill (Hey Mycroft, take a screenshot).

In the near future the above Windows network shares will also be made available over NFS for Linux clients. This is still a Work In Progress / To Do item.
At this moment development is in very early stages and focused on the Raspberry Pi 3B & 4. As soon as an initial workable version is created, other hardware might be added.
+Source code: https://github.com/OpenVoiceOS/ovos-buildroot
Only use x86_64 based architecture/hardware to build the image.

The following example build environment has been tested:
+The following system packages are required to build the image:
In addition to the usual http/https ports (tcp 80, tcp 443), a couple of other ports need to be allowed out to the internet:
- tcp 9418 (git)
- tcp 21 (ftp PASV) and random ports for the DATA channel. This can be optional, but it is better to have this allowed along with the corresponding random data channel ports (knowledge of firewalls required).
+First, get the code on your system! The simplest method is via git.
```
cd ~/
git clone --recurse-submodules https://github.com/OpenVoiceOS/OpenVoiceOS.git
cd OpenVoiceOS
```
(ONLY at the first clean checkout/clone) If this is the very first time you are going to build an image, you need to execute the following command once:

```
./scripts/br-patches.sh
```

This will patch the Buildroot packages.
Building the image(s) can be done by utilizing the provided Makefile.

To see the available commands, just run:

```
make help
```

As an example, to build the rpi4 version:

```
make clean
make rpi4_64-gui-config
make rpi4_64-gui
```
Now grab a cup of coffee, go for a walk, sleep and repeat, as the build process takes a long time: it pulls everything from source and cross-compiles everything for the device. The qtwebengine package in particular takes a LONG time.
+
(At the moment there is an outstanding issue which prevents the build from running completely to the end. The plasma-workspace package will error out, not finding libGLESv2 properly linked within Qt5Gui. When the build stops because of this error, edit the following file:

```
buildroot/output/host/aarch64-buildroot-linux-gnu/sysroot/usr/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake
```

At the bottom of the file, replace this line:

```
_qt5gui_find_extra_libs(OPENGL "GLESv2" "" "")
```

with this line:

```
_qt5gui_find_extra_libs(OPENGL "${CMAKE_SYSROOT}/usr/lib/libGLESv2.so" "" "${CMAKE_SYSROOT}/usr/include/libdrm")
```

Then you can continue the build process by re-running the "make rpi4_64-gui" command. (DO NOT run "make clean" and/or "make rpi4_64-gui-config" again, or you will start from scratch!)

When everything goes fine, the xz-compressed image will be available within the release directory.
1. Ensure all required peripherals (mic, speakers, HDMI, USB mouse etc.) are plugged in before powering on your RPI4 for the first time.

2. Skip this step if your RPI4 is using an ethernet cable. Once powered on, the screen will present the WiFi setup screen (a WiFi hotspot is created). Connect to the hotspot (SSID OVOS) from another device and follow the on-screen instructions to set up WiFi.

3. Once WiFi is set up, a choice between the Mycroft backend and the Local backend is presented. Choose the Mycroft backend for now and follow the on-screen instructions; the Local backend is not ready to use yet. After the pairing process has completed and skills have been downloaded, it's time to test and use it.
** Coming soon **
+ +The OVOS project documentation is written and maintained by users just like you!
These documents are your starting point for installing and using OpenVoiceOS software. Note that some sections may be incomplete or outdated.
+Please open Issues and Pull Requests!
+Check out our Quick Start Guide for help with installing an image, your first boot, and basic configuration.
+If this is your first experience with OpenVoiceOS, or you're not sure where to get started, +say hi in OpenVoiceOS Chat and a team member would be happy to mentor you. +Join the Discussions for questions and answers.
+The below links are in the process of being deprecated.
The GUI is a totally optional component of OVOS, but it adds a ton more functionality to the device; some skills will not work without one.

The OVOS GUI is an independent component of OVOS which uses QT5/6 to display information on your device's screen. It is touchscreen compliant [1], and has an on-screen keyboard for entering data. On a Raspberry Pi, the GUI runs in a framebuffer, so it does not need a full window manager. This saves resources on underpowered devices.
+mycroft-gui-qt5 is a fork of the original mycroft-gui
+mycroft-gui-qt6 is in the works, but not all skills support it yet.
The GUI software comes with a script which will install the needed packages for you. To get the software, we will use git and the provided dev_setup.sh script:
```
cd ~
git clone https://github.com/OpenVoiceOS/mycroft-gui-qt5
cd mycroft-gui-qt5
bash dev_setup.sh
```
NOTE The mycroft-gui is NOT a Python script, and therefore will not run in the venv created for the rest of the software stack.

That is all it takes to install the GUI for OVOS. Invoke the GUI with the command:

```
ovos-gui-app
```
You can refer to the README in the mycroft-gui-qt5 repository for more information
1: It has been my experience that while the touchscreen will work with OVOS, some screens have the touch matrix oriented opposite to what the screen displays. With one of these screens it is still possible to use the GUI, but you will need a full window manager installed instead of the GUI running in a framebuffer.
+ +OpenVoiceOS is an open source platform for smart speakers and other voice-centric devices.
ovos-core is a backwards-compatible descendant of Mycroft-core, the central component of Mycroft. It contains extensions and features not present upstream. All Mycroft Skills and Plugins should work normally with ovos-core.

ovos-core is fully modular. Furthermore, common components have been repackaged as plugins. That means it isn't just a great assistant on its own, but also a pretty small library!
There are a couple of ways to install and use the OVOS ecosystem.
+The easiest and fastest way to experience what OVOS has to offer is to use one of the prebuilt images that the OVOS team has provided.
+NOTE Images are currently only available for a RPi3b/b+/4. More may be on the way.
Images are not the only way to use OVOS. It can be installed on almost any system as a set of Python libraries. ovos-core is very modular; depending on where you are running it, you may want to run only a subset of the services. This is an advanced setup that requires access to a command shell and can take more effort to get working.
+Get started with OVOS libraries
+Docker images are also available and have been tested and working on Linux, Windows, and even Mac.
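As a rough sketch of what running OVOS in a container looks like (the image name below is purely illustrative, not an official OVOS image; consult the OVOS Docker documentation for the actual published images and the recommended compose setup):

```bash
# hypothetical image name for illustration only -- check the OVOS Docker
# docs for real image names, tags, and required volumes
docker run -d \
  --name ovos_core \
  --restart unless-stopped \
  -v ~/ovos/config:/home/ovos/.config/mycroft \
  example/ovos-core:latest
```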
The OVOS ecosystem is very modular; depending on where you are running ovos-core you may want to run only a subset of the services. By default ovos-core only installs the minimum components common to all services; for the purposes of this document we will assume you want a full install with a GUI.

NOTE The GUI requires separate packages in addition to what is required by ovos-core. The GUI installation is covered in its own section.
OVOS requires some system dependencies; how to install them will depend on your distro.
+Ubuntu/Debian based images.
+sudo apt install build-essential python3-dev python3-pip swig libssl-dev libfann-dev portaudio19-dev libpulse-dev
+
A few additional packages are not strictly necessary, but are needed to install from source and may be required by some plugins. To add these packages, run this command:
+sudo apt install git libpulse-dev cmake libncurses-dev pulseaudio-utils pulseaudio
+
NOTE: MycroftAI's dev_setup.sh does not exist in OVOS-core. See the community-provided, WIP manual_user_install for a minimal, near replacement.
We suggest you do this in a virtualenv.
+Create and activate the virtual environment.
+python -m venv .venv
+. .venv/bin/activate
+
Update pip and install wheel:
pip install -U pip wheel
ovos-core
To install a full OVOS software stack with enough skills and plugins to have a working system, the OVOS team provides a subset of packages that can be installed automatically with pip. It is recommended to use the latest alpha versions until the 0.1.0 release, as they contain all of the latest bug fixes and improvements.
latest stable
ovos-core 0.0.7 does not include the new extras [mycroft], so we use [all].

pip install ovos-core[all]
alpha version

pip install --pre ovos-core[mycroft]
This should install everything needed for a basic OVOS software stack.
There are additional extras options available other than [mycroft]; they can be found in the ovos-core setup.py file.
Each module can be installed independently to only include the parts needed or wanted for a specific system.
ovos-core

pip install --pre ovos-core

ovos-messagebus

pip install --pre ovos-messagebus

ovos-audio

pip install --pre ovos-audio

dinkum-listener

pip install --pre ovos-dinkum-listener

ovos-phal

pip install --pre ovos-phal
We will use git to clone the repositories to a local directory. While not strictly necessary, we are assuming this to be the user's HOME directory.
Install ovos-core from the GitHub source files.
git clone https://github.com/OpenVoiceOS/ovos-core
The ovos-core repository provides extra requirements files. For the complete stack, we will use the mycroft.txt file.

pip install ~/ovos-core[mycroft]
This should install everything needed to use the basic OVOS software stack.
NOTE this also installs LGPL licensed software.
Some systems may not require a full install of OVOS. Luckily, it can be installed as individual modules.
+core library
+git clone https://github.com/OpenVoiceOS/ovos-core
pip install ~/ovos-core
This is the minimal library needed as the brain of the system. There are no skills, no messagebus, and no plugins installed yet.
+ +messagebus
+git clone https://github.com/OpenVoiceOS/ovos-messagebus
pip install ~/ovos-messagebus
This is the nervous system of OVOS needed for modules to talk to each other.
+ +listener
OVOS has updated their listener to use ovos-dinkum-listener instead of ovos-listener. It is code from mycroft-dinkum adapted for use with the OVOS ecosystem. Previous listeners are still available, but not recommended.
git clone https://github.com/OpenVoiceOS/ovos-dinkum-listener
pip install ~/ovos-dinkum-listener
You now have what is needed for OVOS to use a microphone and its associated services: WakeWords, HotWords, and STT.
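To sketch how these pieces are selected (the plugin names below are real OVOS plugins, but treat the values as illustrative; for example, the precise-lite plugin additionally needs a model path, omitted here), the listener is wired up in mycroft.conf roughly like this:

```json
{
  "listener": {
    "wake_word": "hey mycroft"
  },
  "hotwords": {
    "hey mycroft": {
      "module": "ovos-ww-plugin-precise-lite",
      "listen": true
    }
  },
  "stt": {
    "module": "ovos-stt-plugin-server"
  }
}
```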
+ +PHAL
The OVOS Plugin-based Hardware Abstraction Layer (PHAL) allows the OVOS software to communicate with the underlying hardware and operating system, for example to interact with the Mycroft Mark 1 device.
The PHAL system consists of two interfaces:

ovos-phal is the basic interface that normal plugins would use.

ovos-admin-phal is used where superuser privileges are needed. Be extremely careful when installing admin-phal plugins, as they provide full control over the host system.
git clone https://github.com/OpenVoiceOS/ovos-PHAL
pip install ~/ovos-PHAL
This just installs the basic system that allows the plugins to work.
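To give a feel for what a PHAL plugin looks like (a minimal sketch, not a complete plugin: the PHALPlugin base class comes from ovos-plugin-manager, while the plugin name and bus message used here are made up for illustration):

```python
from ovos_plugin_manager.templates.phal import PHALPlugin
from ovos_bus_client.message import Message


class ExamplePlugin(PHALPlugin):
    """Minimal PHAL plugin sketch that reacts to a (made-up) bus message."""

    def __init__(self, bus=None, config=None):
        # the plugin name is arbitrary, but should match the package entrypoint
        super().__init__(bus=bus, name="phal-plugin-example", config=config)
        # subscribe to a hypothetical message emitted elsewhere on the bus
        self.bus.on("example.ping", self.handle_ping)

    def handle_ping(self, message: Message):
        # reply over the messagebus; a real plugin would touch hardware here
        self.bus.emit(message.response({"pong": True}))
```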
+ +audio
+This is the service that is used by OVOS to play all of the audio. It can be a voice response, or a stream from somewhere such as music, or a podcast.
+It also installs OVOS Common Play which can be used as a standalone media player and is required for OVOS audio playback.
+git clone https://github.com/OpenVoiceOS/ovos-audio
pip install ~/ovos-audio
This will enable the default TTS (Text To Speech) engine for voice feedback from your OVOS device. However, plenty of alternative TTS engines are available.
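Switching to one of those engines is a configuration change rather than a code change. As a hedged example (ovos-tts-plugin-mimic3 is a real plugin mentioned elsewhere in these docs; check its README for the exact options), selecting Mimic 3 in mycroft.conf looks roughly like this:

```json
{
  "tts": {
    "module": "ovos-tts-plugin-mimic3",
    "ovos-tts-plugin-mimic3": {
      "voice": "en_UK/apope_low"
    }
  }
}
```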
+ +You now should have all of the separate components needed to run a full OVOS software stack.
+ + +KDE Connect is a multi-platform application developed by KDE, which facilitates wireless communications and data transfer between devices over local networks and is installed and configured by default on the Buildroot based image.
+A couple of features of KDE Connect are:
For the sake of simplicity, the screenshots below were made using the iPhone KDE Connect client; however, as it is not yet fully feature complete and/or stable, it is recommended to use the Android and/or Linux client, especially if you would like full MPRIS control of your OpenVoiceOS device.

On your mobile device, open the KDE Connect app and it will automatically see the advertised OpenVoiceOS KDE Connect device. Click / tap on "OpenVoiceOS-*" to start the pairing process. A similar pop-up will then appear on the screen of the OpenVoiceOS device; clicking / tapping its pair button finalises the pairing process, allowing your mobile device to automatically connect with your OpenVoiceOS device and make use of all the extra functionality KDE Connect brings.
+ +We have a universal donor policy, our code should be able to be used anywhere by anyone, no ifs or conditions attached.
OVOS is predominantly Apache2 or BSD licensed. There are only a few exceptions to this, which are all licensed under other compatible open source licenses.
+Individual plugins or skills may have their own license, for example mimic3 is AGPL, so we can not change the license of our plugin.
We are committed to keeping all core components fully free; any code whose license we do not control will live in an optional plugin and be flagged as such.
+This includes avoiding LGPL code for reasons explained here.
+Our license policy has the following properties:
Our license policy does not restrict the software that may run on OVOS. Thanks to the plugin architecture, even traditionally tightly-coupled components such as drivers can be distributed separately, so maintainers are free to choose whatever license they like for their projects.
+The following repositories do not respect our universal donor policy, please ensure their licenses are compatible before you use them
+Repository | +License | +Reason | +
---|---|---|
ovos-intent-plugin-padatious | +Apache2.0 | +padatious license might not be valid, depends on libfann2 (LGPL) | +
ovos-tts-plugin-mimic3 | +AGPL | +depends on mimic3 (AGPL) | +
ovos-tts-plugin-SAM | +? | +reverse engineered abandonware | +
JarbasAI
+Daniel McKnight
+j1nx
+forslund
+ChanceNCounter
+5trongthany
+builderjer
+goldyfruit
+mikejgray
+emphasize
+dscripka
use with ovos-tts-server-plugin
+Mimic1 TTS
+Mimic3 TTS
+Piper TTS
use with ovos-tts-server-plugin
+ +use with ovos-stt-server-plugin
+ +use with ovos-tts-plugin-mimic3-server
+ +use with ovos-stt-server-plugin
+ +use with ovos-tts-plugin-mimic3-server
+ + +Feature | +Mycroft | +OVOS | +Description | +
---|---|---|---|
Wake Word (listen) | +yes | +yes | +Only transcribe speech (STT) after a certain word is spoken | +
Wake Up Word (sleep mode) | +yes | +yes | +When in sleep mode only listen for "wake up" (no STT) | +
Hotword (bus event) | +no | +yes | +Emit bus events when a hotword is detected (no STT) | +
Multiple Wake Words | +no | +yes | +Load multiple hotword engines/models simultaneously | +
Fallback STT | +no | +yes | +fallback STT if the main one fails (eg, internet outage) | +
Instant Listen | +no | +yes | +Do not pause between wake word detection and recording start | +
Hybrid Listen | +no | +WIP | +Do not require wake word for follow up questions | +
Continuous Listen | +no | +WIP | +Do not require wake word, always listen using VAD | +
Recording mode | +no | +WIP | +Save audio instead of processing speech | +
Wake Word Plugins | +yes | +yes | +Supports 3rd party integrations for hotword detection | +
STT Plugins | +yes | +yes | +Supports 3rd party integrations for STT | +
VAD plugins | +no * | +yes | +Supports 3rd party integrations for voice activity detection | +
NOTES:

* Mycroft includes some VAD logic in the mycroft.listener module, but it is not exposed as plugins

Feature | Mycroft | OVOS | Description |
---|---|---|---|
MPRIS integration | +no | +yes | +Integrate with MPRIS protocol | +
NOTES:
+Feature | +Mycroft | +OVOS | +Description | +
---|---|---|---|
Skill Plugins | +no | +yes | +skills can be packaged like standard python projects and installed via setup.py (eg. with pip or your package manager) | +
User Resources | +no | +yes | +Users can override resource files, eg. customize dialogs for installed skills | +
Skill permissions | +no | +WIP | +Users can limit converse and fallback functionality per skill and configure the order in which skills are executed | +
Intent Plugins | +no | +WIP | +Supports 3rd party integrations for Intent Matching | +
Feature | +Mycroft | +OVOS | +Description | +
---|---|---|---|
System Plugins | +no | +yes | +Support for 3rd party hardware (eg. mk2-plugin) and OS level integrations (eg. wifi-setup) | +
NOTES:
+Feature | +Mycroft | +OVOS | +Description | +
---|---|---|---|
Offline usage | +no | +yes | +Can be configured to work without internet connectivity | +
MultiLingual | +no | +WIP | +Can be configured to work in multiple languages at the same time | +
HiveMind support | +WIP | +WIP | +Supports HiveMind for a distributed/remote mycroft experience | +
XDG compliance | +WIP | +yes | +All resources respect XDG standards and support multiple users | +
Usage as a lib | +no | +yes | +Packaged as a library, supports derivative voice assistants | +
NOTES:
Mycroft Mark2 shipped with a new version of mycroft called "dinkum"; this is a total overhaul of mycroft-core and is incompatible with it.
+mycroft-core is now referred to as "Classic Core" by MycroftAI
MycroftAI now provides what they call sandbox images. To add to the confusion, those only work on the Mark 2, and "Classic Core" means the mark-ii/latest branch of mycroft-core. This is a derivative of the branch that was used in the dev kits (mark-ii/qa) and is also backwards incompatible; changes in this branch were not done via PRs and had no review or community input.
+Mark2 useful links:
You can find Mycroft's guide to porting skills to dinkum here: https://mycroft-ai.gitbook.io/mark-ii/differences-to-classic-core/porting-classic-core-skills
mark2/qa brought some changes to mycroft-core, not all of them backwards compatible and some of them contentious within the community.

- Adapt intents gained new exactly and excludes methods; excludes will be added upstream in adapt/pull/156. Any skill using these new methods will be incompatible with most core versions.
+Any skills using these new "features" will not work outside the mark2
+No, not even classic core skills run in dinkum. We have no plans to support this
+No, dinkum is designed in a very incompatible way, the mycroft
module is not always mycroft-core and the MycroftSkill
class is not always a MycroftSkill, we have no intention of transparently loading dinkum skills in ovos-core
We have a small proof of concept tool to convert a dinkum skill into an ovos/classic core compatible skill, see https://github.com/OpenVoiceOS/undinkumfier
+No, Audio plugin support has been removed, you can run OCP standalone but will be missing the compatibility layers and can't load OCP skills anyway
+It could be made to work but this is not in the roadmap, PRs will be accepted and reviewed
+It should! We don't explicitly target or test it with dinkum, but it is a fairly standalone component
STT, TTS and WW plugins should work. We don't explicitly target or test compatibility; PRs will be accepted and reviewed.
+ +** Coming soon **
A personal mycroft backend alternative to mycroft.home, written in flask.
This repo is an alternative backend meant for personal usage; it allows you to run without mycroft servers.
+:warning: there are no user accounts :warning:
This is NOT meant to provision third-party devices, but rather to run on mycroft devices directly or on a private network.
+For a full backend experience, the official mycroft backend has been open sourced, read the blog post
+NOTE: There is no pairing, devices will just activate themselves and work
+from pip
+pip install ovos-local-backend
+
There are two main intended ways to run the local backend with mycroft.

NOTE: you can not fully run mycroft-core offline (it refuses to launch without an internet connection); you can only replace the calls to use this backend instead of mycroft.home.
+We recommend you use ovos-core instead
Update your mycroft config to use this backend, delete identity2.json, and restart mycroft.
```json
{
  "server": {
    "url": "http://0.0.0.0:6712",
    "version": "v1",
    "update": true,
    "metrics": true
  },
  "listener": {
    "wake_word_upload": {
      "url": "http://0.0.0.0:6712/precise/upload"
    }
  }
}
```
start backend:

```
$ ovos-local-backend -h
usage: ovos-local-backend [-h] [--flask-port FLASK_PORT] [--flask-host FLASK_HOST]

optional arguments:
  -h, --help            show this help message and exit
  --flask-port FLASK_PORT
                        Mock backend port number
  --flask-host FLASK_HOST
                        Mock backend host
```
There is also a docker container you can use:

```
docker run -p 8086:6712 -d --restart always --name local_backend ghcr.io/openvoiceos/local-backend:dev
```
A docker-compose.yml could look like this:
```yaml
version: '3.6'
services:
  # ...
  ovosbackend:
    container_name: ovos_backend
    image: ghcr.io/openvoiceos/local-backend:dev
    # or build from local source (relative to docker-compose.yml)
    # build: ../ovos/ovos-personal-backend/.
    restart: unless-stopped
    ports:
      - "6712:6712"     # default port backend API
      - "36535:36535"   # default port backend-manager
    volumes:  # <host>:<guest>:<SELinux flag>
      - ./ovos/backend/config:/root/.config/json_database:z   # shared config directory
      - ./ovos/backend/data:/root/.local/share/ovos_backend:Z # shared data directory
      # set `data_path` to `/root/.local/share/ovos_backend`
```
+about selinux flags (omit if you don't deal with selinux)
+WIP Coming Soon
+ +You now have a flashed boot medium with OVOS installed. Now what?
+Insert your boot medium, and power on your device.
NOTE If using a Raspberry Pi 4 with a device connected via USB-A, the top-right USB 3.2 Gen 1 port is recommended.
+Each image has a different experience when you first boot your device.
Notes on internet: While OVOS can run fully offline with on-device STT and TTS, your device will lack many of the things you expect from a smart assistant. On a RPi, this includes the date and time, because the Pi lacks a Real Time Clock and therefore needs to connect to a server to set those on your device.
+When you first boot the Buildroot image, you will be greeted with an OVOS splash screen as shown below.
As this is the first time you have booted your device, it might take a bit longer than normal, as the system prepares its local filesystem and extends it over the full size of the sdcard/USB device.
Eventually the progress bar will fill up, indicating the Operating System has fully booted, after which the ovos-shell animated loading screen is shown. Again, since this is your first boot, this might take a bit longer as the ovos-core configuration is populated and skills are being set up.
+There should be no issues connecting automatically if your router accepts DHCP requests.
+If you do not have an internet connection, you will be prompted with a screen with options to connect.
+ +You can also skip this step to configure it later or never ask it again if you want your device to run fully offline. (Bear in mind you need to configure your device to use local STT and TTS engines and obvious asking your device things that require internet will not work. This includes the date and time as a Raspberry Pi does not have a Real Time Clock and therefore does not know this data without being told).
+On Device Setup
+If this option is selected the next screen will have a list of available WiFi connections.
+ +Select your desired network, and enter a password if needed.
+ +If everything went correctly, you should be connected to the internet and after a short period of time, OVOS is loading skills that require internet, you will be presented with the homescreen.
+ +Configure WiFi after skipping initial setup
+If you have skipped the Wi-Fi network setup before, or you just want to reconfigure your network. On the homescreen of your OpenVoiceOS device swipe down the top menu and click the "Wi-Fi" icon. This brings you to the same on-device configuration screen.
+ +From here you can select another network or click the configuration icon on the right of connected network for details or to remove it from the configured networks.
+ +Mobile Setup
+If this option is chosen, you will be prompted to connect your mobile device, or computer to the hotspot OVOS
.
The rest of this option coincides with the headless image WiFi setup. Continue Mobile WiFi Setup.
+If you are trying to run OVOS on a RPi3, this is the image to use. It DOES NOT provide a GUI, therefore some things that are available on the buildroot image, are not available here.
+Once again, it may take several minutes for this first boot to complete. OVOS resizes the partition on the drive, and loads the required plugins.
+There should be no issues connecting automatically if your router accepts DHCP requests.
+The Raspbian image will create a HotSpot
which is a temporary access point that allows you to configure your WiFi credentials.
On first boot, you should hear a voice prompt instructing you to connect to the OVOS
hotspot.
++"Open a browser to 'start dot openvoiceos dot com'"
+
Connect your mobile device or computer to the OVOS
HotSpot and open the webpage http://start.openvoiceos.com NOTE This is NOT the official OVOS website, rather a "HotSpot" created by OVOS and will be removed after the device is connected to WiFi.
Choose from the list of WiFi access points from the list on your mobile device.
+ +Enter your password if needed and enjoy your OVOS smart assistant.
+NOTE There is a known bug in with balena-wifi
connecting to WPA3 security. You must use on device
setup or use raspi-config
from a command prompt if your WiFi security is not supported with the mobile setup
option.
Thats it!! You should have a working OVOS device, QUICK!!
+While this is the fastest and easiest way to get OVOS, it is not the only way.
+ + + +So you just want to give OVOS a try? This quick start will help get an OVOS image installed and running on your Raspberry Pi.
+NOTE The GUI will not reliably run on a RPi3 and is therefore not recommended for that device.
+OVOS provides a couple of different images specifically for the Raspberry Pi.
+The most advanced and featureful is the Buildroot image. If you want a GUI, this is currently your only choice.
+OVOS also provides a "Headless" image that is similar to the origional picroft
software from MycroftAI
. It runs without a screen and works with a RPi3b/b+
Once you have an image downloaded, it needs to be flashed to a boot device.
+NOTE If you have a Raspberry Pi 4 we recommend to use a good USB3.1 device or better, a USB3 SSD. If you have a Raspberry Pi 3, use a proper SD card. (From fast to slow: SSD - USB3.1 - SD card - USB2)
+Some image writing methods, dd
, may require your file be decompressed. Others, BalenaEtcher for example, can use a compressed image.
+The Buildroot image is compressed in .xz
format and the raspbian image is in .zip
format.
Windows
+Use winzip
or 7-zip
to decompress the image.
Linux
+Use gunzip
to decompress .xz
compressed images and unzip
to decompress .zip
images.
The resulting file should end in .img
and is now ready to flash to a device.
Flashing your image to your SD card or USB drive is not different from flashing any other image. For the non-technical users we advise to use the flashing utility, Raspberry Pi Imager from the Raspberry Pi Foundation. It is available for Windows, Mac OS, and Linux.
+Upon completion, you should have a bootable SD card or USB drive.
+Be careful with the dd command, you can easily render your computer useless
+lsusb
command.sdxx
sudo dd if=<path-to-unzipped-image> of=<path-to-sd-card> bs=4M status=progress
No matter what method you used, upon completion, you should have a bootable SD card or USB drive.
+ +Prebuilt images come with a default set of skills installed, including, but not limited to the date/time, and weather. Give them a shot.
+Speak these commands and enjoy the spoils:
+Hey Mycroft, what time is it?
Hey Mycroft, what is today's date?
Hey Mycroft, what is the weather today?
Hey Mycroft, will it rain today?
While there are several default skills installed, there are many more available to be used. The link below will show you how to find and install more skills.
+ +But wait, there's more!!
+OVOS is highly configurable, and uses a file in either JSON
or YAML
format to provide these options. While in most cases, OVOS should just work, sometimes you either need to, or want to change some options.
OVOS ships with a default TTS (Text to Speech) engine which speaks in the original Alan-Pope
voice that Mycroft used. There are MANY more to choose from. The following link will help you choose and configure a different voice for your assistant.
Your device does not understand your voice when you speak? There are options for different STT (Speech To Text) engines also. Some work better than others, but can provide less privacy.
+ +Your OVOS assistant uses a "wake word" which lets it know it is time to start listening to your commands. By default, the phrase to wake OVOS is Hey Mycroft
. This, like most things in OVOS, is totally configurable. Follow the link to learn more.
PHAL plugins allow OVOS to interact with the underlying hardware and operating system. Several are available, and may be installed and run together.
+ +OVOS ships with default services available to the public to use. These include public TTS and STT servers, a weather API provided by OpenMeteo, access to Wolfram, and more. Since OVOS is an open and private system, you can also change these to your own preferences.
+ + +' + escapeHtml(summary) +'
' + noResultsText + '
'); + } +} + +function doSearch () { + var query = document.getElementById('mkdocs-search-query').value; + if (query.length > min_search_length) { + if (!window.Worker) { + displayResults(search(query)); + } else { + searchWorker.postMessage({query: query}); + } + } else { + // Clear results for short queries + displayResults([]); + } +} + +function initSearch () { + var search_input = document.getElementById('mkdocs-search-query'); + if (search_input) { + search_input.addEventListener("keyup", doSearch); + } + var term = getSearchTermFromLocation(); + if (term) { + search_input.value = term; + doSearch(); + } +} + +function onWorkerMessage (e) { + if (e.data.allowSearch) { + initSearch(); + } else if (e.data.results) { + var results = e.data.results; + displayResults(results); + } else if (e.data.config) { + min_search_length = e.data.config.min_search_length-1; + } +} + +if (!window.Worker) { + console.log('Web Worker API not supported'); + // load index in main thread + $.getScript(joinUrl(base_url, "search/worker.js")).done(function () { + console.log('Loaded worker'); + init(); + window.postMessage = function (msg) { + onWorkerMessage({data: msg}); + }; + }).fail(function (jqxhr, settings, exception) { + console.error('Could not load worker.js'); + }); +} else { + // Wrap search in a web worker + var searchWorker = new Worker(joinUrl(base_url, "search/worker.js")); + searchWorker.postMessage({init: true}); + searchWorker.onmessage = onWorkerMessage; +} diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 00000000..1920c31f --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"The OpenVoiceOS Project Documentation The OVOS project documentation is written and maintained by users just like you! These documents are your starting point for installing and using OpenVoiceOS software Note some sections may be incomplete or outdated Please open Issues and Pull Requests ! Just want to get started with OVOS? Check out our Quick Start Guide for help with installing an image, your first boot, and basic configuration. Getting Involved If this is your first experience with OpenVoiceOS, or you're not sure where to get started, say hi in OpenVoiceOS Chat and a team member would be happy to mentor you. Join the Discussions for questions and answers. Links Latest Release OpenVoiceOS Chat OpenVoiceOS Website OpenVoiceOS Reddit The below links are in the process of being deprecated. Mycroft Chat Mycroft Forum Mycroft Blog Mycroft Documentation Mycroft API Docs","title":"Introduction"},{"location":"#the-openvoiceos-project-documentation","text":"The OVOS project documentation is written and maintained by users just like you! These documents are your starting point for installing and using OpenVoiceOS software Note some sections may be incomplete or outdated Please open Issues and Pull Requests !","title":"The OpenVoiceOS Project Documentation"},{"location":"#just-want-to-get-started-with-ovos","text":"Check out our Quick Start Guide for help with installing an image, your first boot, and basic configuration.","title":"Just want to get started with OVOS?"},{"location":"#getting-involved","text":"If this is your first experience with OpenVoiceOS, or you're not sure where to get started, say hi in OpenVoiceOS Chat and a team member would be happy to mentor you. 
Join the Discussions for questions and answers.","title":"Getting Involved"},{"location":"#links","text":"Latest Release OpenVoiceOS Chat OpenVoiceOS Website OpenVoiceOS Reddit The below links are in the process of being deprecated. Mycroft Chat Mycroft Forum Mycroft Blog Mycroft Documentation Mycroft API Docs","title":"Links"},{"location":"about/","text":"About OpenVoiceOS Introducing OpenVoiceOS - The Free and Open-Source Personal Assistant and Smart Speaker. OpenVoiceOS is a new player in the smart speaker market, offering a powerful and flexible alternative to proprietary solutions like Amazon Echo and Google Home. With OpenVoiceOS, you have complete control over your personal data and the ability to customize and extend the functionality of your smart speaker. Built on open-source software, OpenVoiceOS is designed to provide users with a seamless and intuitive voice interface for controlling their smart home devices, playing music, setting reminders, and much more. The platform leverages cutting-edge technology, including machine learning and natural language processing, to deliver a highly responsive and accurate experience. In addition to its voice capabilities, OpenVoiceOS features a touch-screen GUI made using QT5 and the KF5 framework. The GUI provides an intuitive, user-friendly interface that allows you to access the full range of OpenVoiceOS features and functionality. Whether you prefer voice commands or a more traditional touch interface, OpenVoiceOS has you covered. One of the key advantages of OpenVoiceOS is its open-source nature, which means that anyone with the technical skills can contribute to the platform and help shape its future. Whether you're a software developer, data scientist, someone with a passion for technology, or just a casual user that would like to experience what OVOS has to offer, you can get involved and help build the next generation of personal assistants and smart speakers. With OpenVoiceOS, you have the option to run the platform fully offline, giving you complete control over your data and ensuring that your information is never shared with third parties. This makes OpenVoiceOS the perfect choice for anyone who values privacy and security. So if you're looking for a personal assistant and smart speaker that gives you the freedom and control you deserve, be sure to check out OpenVoiceOS today! Disclaimer : This post was written in collaboration with ChatGPT","title":"About"},{"location":"about/#about-openvoiceos","text":"Introducing OpenVoiceOS - The Free and Open-Source Personal Assistant and Smart Speaker. OpenVoiceOS is a new player in the smart speaker market, offering a powerful and flexible alternative to proprietary solutions like Amazon Echo and Google Home. With OpenVoiceOS, you have complete control over your personal data and the ability to customize and extend the functionality of your smart speaker. Built on open-source software, OpenVoiceOS is designed to provide users with a seamless and intuitive voice interface for controlling their smart home devices, playing music, setting reminders, and much more. The platform leverages cutting-edge technology, including machine learning and natural language processing, to deliver a highly responsive and accurate experience. In addition to its voice capabilities, OpenVoiceOS features a touch-screen GUI made using QT5 and the KF5 framework. The GUI provides an intuitive, user-friendly interface that allows you to access the full range of OpenVoiceOS features and functionality. 
Whether you prefer voice commands or a more traditional touch interface, OpenVoiceOS has you covered. One of the key advantages of OpenVoiceOS is its open-source nature, which means that anyone with the technical skills can contribute to the platform and help shape its future. Whether you're a software developer, data scientist, someone with a passion for technology, or just a casual user that would like to experience what OVOS has to offer, you can get involved and help build the next generation of personal assistants and smart speakers. With OpenVoiceOS, you have the option to run the platform fully offline, giving you complete control over your data and ensuring that your information is never shared with third parties. This makes OpenVoiceOS the perfect choice for anyone who values privacy and security. So if you're looking for a personal assistant and smart speaker that gives you the freedom and control you deserve, be sure to check out OpenVoiceOS today! Disclaimer : This post was written in collaboration with ChatGPT","title":"About OpenVoiceOS"},{"location":"architecture/","text":"OpenVoiceOS Architecture This section can be a bit technical, but is included for reference. It is not necessary to read this section for day-to-day usage of OVOS. OVOS is a collection of modular services that work together to provide a seamless, private, open source voice assistant. The suggested way to start OVOS is with systemd service files. Most of the images run these services as a normal user instead of system wide. If you get an error when using the system files, try using it as a system service. NOTE The ovos.service is just a wrapper to control the other OVOS services. It is used here as an example showing --user vs system . user service systemctl --user status ovos.service system service systemctl status ovos.service ovos-core ovos-core This service provides the main instance for OVOS and handles all of the skill loading, and intent processing. All user queries are handled by the skills service. You can think of it as OVOS's brain typical systemd command systemctl --user status ovos-skills systemctl --user restart ovos-skills Technical Docs on Skills Messagebus ovos-messagebus C++ version NOTE This is an alpha version and mostly Proof of Concept . It has been known to crash often. ovos-bus-service You can think of the bus service as OVOS's nervous system. The ovos-bus is considered an internal and private websocket, external clients should not connect directly to it. Please do not expose the messagebus to the outside world! Technical docs for messagebus typical systemd command systemctl --user start ovos-messagebus Listener ovos-dinkum-listener The listener service is used to detect your voice. It controls the WakeWord, STT (Speech To Text), and VAD (Voice Activity Detection) Plugins. You can modify microphone settings and enable additional features under the listener section of your mycroft.conf file, such as wake word / utterance recording / uploading. The ovos-dinkum-listener is the new OVOS listener that replaced the original ovos-listener and has many more options. Others still work, but are not recommended. Technical Listener Docs typical systemd command systemctl --user start ovos-dinkum-listener STT Plugins This is where speech is transcribed into text and forwarded to the skills service. Two STT plugins may be loaded at once. If the primary plugin fails, the second will be used. 
Having a lower accuracy offline model as fallback will account for internet outages, which ensures your device never becomes fully unusable. Several different STT (Speech To Text) plugins are available for use. OVOS provides a number of public services using the ovos-stt-plugin-server plugin which are hosted by OVOS trusted members (Members hosting services) . No additional configuration is required. OVOS Supported STT Plugins Changing STT Plugin Hotwords OVOS uses \"Hotwords\" to trigger any number of actions. You can load any number of hotwords in parallel and trigger different actions when they are detected. Each Hotword can do one or more of the following: trigger listening, also called a Wake word play a sound emit a bus event take ovos-core out of sleep mode, also called a wakeup_word or standup_word take ovos-core out of recording mode, also called a stop_word Setting and adding Hotwords WakeWord Plugins A Wake word is what OVOS uses to activate the device. By default Hey Mycroft is used by OVOS. Like other things in the OVOS ecosystem, this is configurable. Wake word plugins Changing the Wake word VAD Plugins VAD Plugins detect when you are actually speaking to the device, and when you quit talking. Most of the time, this will not need changed. If you are having trouble with your microphone hearing you, or stopping listening when you are done talking, you might change this and see if it helps your issue. Supported VAD Plugins Changing VAD Plugin Audio ovos-audio The audio service handles the output of all audio. It is how you hear the voice responses, music, or any other sound from your OVOS device. Configuring Audio TTS Plugins TTS (Text To Speech) is the verbal response from OVOS. There are several plugins available that support different engines. Multiple languages and voices are available to use. OVOS provides a set of public TTS servers hosted by OVOS trusted members (Members hosting services) . It uses the ovos-tts-server-plugin , and no additional configuration is needed. Supported TTS Plugins Changing TTS Plugin PHAL ovos-PHAL PHAL stands for Plugin-based Hardware Abstraction Layer. It is used to allow access of different hardware devices access to use the OVOS software stack. It completely replaces the concept of hardcoded \"enclosure\" from mycroft-core . Any number of plugins providing functionality can be loaded and validated at runtime, plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as mycroft mark 1 plugin Supported PHAL Plugins PHAL Plugins Admin PHAL Similar to regular PHAL, but is used when sudo or privlidged user is needed Be extremely careful when adding admin-phal plugins . They give OVOS administrative privileges, or root privileges to your operating system Admin PHAL GUI OVOS uses the standard mycroft-gui framework, you can find the official documentation here The GUI service provides a websocket for GUI clients to connect to, it is responsible for implementing the GUI protocol under ovos-core . You can find in depth documentation here Other OVOS services OVOS provides a number of helper scripts to allow the user to control the device at the command line. ovos-say-to This provides a way to communicate an intent to ovos. ovos-say-to \"what time is it\" ovos-listen This opens the microphone for listening, just like if you would have said the WakeWord. It is expecting a verbal command. 
Continue by speaking to your device \"what time is it\" ovos-speak This takes your command and runs it through the TTS (Text To Speech) engine and speaks what was provided. ovos-speak \"hello world\" will output \"hello world\" in the configured TTS voice ovos-config is a command line interface that allows you to view and set configuration values.","title":"OpenVoiceOS Architecture"},{"location":"architecture/#openvoiceos-architecture","text":"This section can be a bit technical, but is included for reference. It is not necessary to read this section for day-to-day usage of OVOS. OVOS is a collection of modular services that work together to provide a seamless, private, open source voice assistant. The suggested way to start OVOS is with systemd service files. Most of the images run these services as a normal user instead of system wide. If you get an error when using the system files, try using it as a system service. NOTE The ovos.service is just a wrapper to control the other OVOS services. It is used here as an example showing --user vs system . user service systemctl --user status ovos.service system service systemctl status ovos.service","title":"OpenVoiceOS Architecture"},{"location":"architecture/#ovos-core","text":"ovos-core This service provides the main instance for OVOS and handles all of the skill loading, and intent processing. All user queries are handled by the skills service. You can think of it as OVOS's brain typical systemd command systemctl --user status ovos-skills systemctl --user restart ovos-skills Technical Docs on Skills","title":"ovos-core"},{"location":"architecture/#messagebus","text":"ovos-messagebus C++ version NOTE This is an alpha version and mostly Proof of Concept . It has been known to crash often. ovos-bus-service You can think of the bus service as OVOS's nervous system. The ovos-bus is considered an internal and private websocket, external clients should not connect directly to it. Please do not expose the messagebus to the outside world! Technical docs for messagebus typical systemd command systemctl --user start ovos-messagebus","title":"Messagebus"},{"location":"architecture/#listener","text":"ovos-dinkum-listener The listener service is used to detect your voice. It controls the WakeWord, STT (Speech To Text), and VAD (Voice Activity Detection) Plugins. You can modify microphone settings and enable additional features under the listener section of your mycroft.conf file, such as wake word / utterance recording / uploading. The ovos-dinkum-listener is the new OVOS listener that replaced the original ovos-listener and has many more options. Others still work, but are not recommended. Technical Listener Docs typical systemd command systemctl --user start ovos-dinkum-listener","title":"Listener"},{"location":"architecture/#stt-plugins","text":"This is where speech is transcribed into text and forwarded to the skills service. Two STT plugins may be loaded at once. If the primary plugin fails, the second will be used. Having a lower accuracy offline model as fallback will account for internet outages, which ensures your device never becomes fully unusable. Several different STT (Speech To Text) plugins are available for use. OVOS provides a number of public services using the ovos-stt-plugin-server plugin which are hosted by OVOS trusted members (Members hosting services) . No additional configuration is required. 
OVOS Supported STT Plugins Changing STT Plugin","title":"STT Plugins"},{"location":"architecture/#hotwords","text":"OVOS uses \"Hotwords\" to trigger any number of actions. You can load any number of hotwords in parallel and trigger different actions when they are detected. Each Hotword can do one or more of the following: trigger listening, also called a Wake word play a sound emit a bus event take ovos-core out of sleep mode, also called a wakeup_word or standup_word take ovos-core out of recording mode, also called a stop_word Setting and adding Hotwords","title":"Hotwords"},{"location":"architecture/#wakeword-plugins","text":"A Wake word is what OVOS uses to activate the device. By default Hey Mycroft is used by OVOS. Like other things in the OVOS ecosystem, this is configurable. Wake word plugins Changing the Wake word","title":"WakeWord Plugins"},{"location":"architecture/#vad-plugins","text":"VAD Plugins detect when you are actually speaking to the device, and when you quit talking. Most of the time, this will not need changed. If you are having trouble with your microphone hearing you, or stopping listening when you are done talking, you might change this and see if it helps your issue. Supported VAD Plugins Changing VAD Plugin","title":"VAD Plugins"},{"location":"architecture/#audio","text":"ovos-audio The audio service handles the output of all audio. It is how you hear the voice responses, music, or any other sound from your OVOS device. Configuring Audio","title":"Audio"},{"location":"architecture/#tts-plugins","text":"TTS (Text To Speech) is the verbal response from OVOS. There are several plugins available that support different engines. Multiple languages and voices are available to use. OVOS provides a set of public TTS servers hosted by OVOS trusted members (Members hosting services) . It uses the ovos-tts-server-plugin , and no additional configuration is needed. Supported TTS Plugins Changing TTS Plugin","title":"TTS Plugins"},{"location":"architecture/#phal","text":"ovos-PHAL PHAL stands for Plugin-based Hardware Abstraction Layer. It is used to allow access of different hardware devices access to use the OVOS software stack. It completely replaces the concept of hardcoded \"enclosure\" from mycroft-core . Any number of plugins providing functionality can be loaded and validated at runtime, plugins can be system integrations to handle things like reboot and shutdown, or hardware drivers such as mycroft mark 1 plugin Supported PHAL Plugins PHAL Plugins","title":"PHAL"},{"location":"architecture/#admin-phal","text":"Similar to regular PHAL, but is used when sudo or privlidged user is needed Be extremely careful when adding admin-phal plugins . They give OVOS administrative privileges, or root privileges to your operating system Admin PHAL","title":"Admin PHAL"},{"location":"architecture/#gui","text":"OVOS uses the standard mycroft-gui framework, you can find the official documentation here The GUI service provides a websocket for GUI clients to connect to, it is responsible for implementing the GUI protocol under ovos-core . You can find in depth documentation here","title":"GUI"},{"location":"architecture/#other-ovos-services","text":"OVOS provides a number of helper scripts to allow the user to control the device at the command line. ovos-say-to This provides a way to communicate an intent to ovos. ovos-say-to \"what time is it\" ovos-listen This opens the microphone for listening, just like if you would have said the WakeWord. It is expecting a verbal command. 
Continue by speaking to your device \"what time is it\" ovos-speak This takes your command and runs it through the TTS (Text To Speech) engine and speaks what was provided. ovos-speak \"hello world\" will output \"hello world\" in the configured TTS voice ovos-config is a command line interface that allows you to view and set configuration values.","title":"Other OVOS services"},{"location":"config/","text":"OVOS Configuration When you first start OVOS, there should not be any configuration needed to have a working device. NOTE To continue with the examples, you will need access to a shell on your device. This can be achieved with SSH. Connect to your device with the command ssh ovos@The Mycroft Selene backend is deprecated but the software has been made avaliable on github.
+https://github.com/MycroftAI/selene-backend
+Coming Soon +Installing your own Selene backend
+ +Being a modular system has the advantage of being able to start with several different methods.
Being a modular system has the advantage that it can be started in several different ways.
+As of July 2023, both the Buildroot image, and the Rasbpian image, use systemd service files to start, restart, and stop each OVOS module.
+Typical command to restart the OVOS stack
+systemctl --user restart ovos.service
ovos.service
is a special systemd service file that instructs the rest of the stack to follow what it is doing. If you stop ovos.service
all of the services will stop. Same with start
and restart
. This makes it handy to restart the complete stack in one command after changes have been made.
The rest of this section will describe this method, and others in detail.
+Starting as stand alone modules
+ + + +OVOS in its simplest form is just a python module, and can be invoked as one. In fact, the systemd service method of starting OVOS uses a glorified version of this.
+ovos-core is the brains of the device. Without it, you would have some cool software that does not work together. It controls the skills
service and directs intents
to the right skill.
Open a command shell and type the following
+ovos-core
You will see a bunch of lines from the logs, and at the end, it will say WARNING - Message Bus Client will reconnect in 5.0 seconds.
This is because we have not started the messagebus service and that is the nervous system
. You cannot communicate to the other parts without it.
ovos-messagebus is the nervous system of OVOS. This is what makes everything work together.
+NOTE The messagebus is an unsecured bus to your system and should NOT be exposed to the outside world.
+ +With ovos-core running in one terminal shell, open another and type the command
+ovos-messagebus
Once again, a whole bunch of log lines will scroll by, and at the end, it will say INFO - Message bus service started!
If you look back at the terminal with ovos-core, you will notice that there are new logs that ovos-core has connected to the messagebus.
+ +OVOS, being a modular system, has several pieces that can start individually.
+The OVOS team suggests doing this in a virtual environment
. While not necessary, it can keep dependency problems in a running system from conflicting with one another.
We will assume that you are starting from your home directory. +Enter the following commands into a shell terminal.
+python -m venv .venv
+
+. .venv/bin/activate
+
+After a couple of seconds, your command prompt will change with (.venv)
being at the front.
+ nn
service files are the recommended way to start each module. Other methods exist, such as using the modules as a python library, but are advanced topics and not discussed here.
This is the preferred method to start the OVOS modules. If you have not used systemd
before, there are many references on the web with more information. It is out of scope of this document. The following is assuming the user ovos
is being used.
A systemd service
file and a systemd hook
file is required for this to work. We will create both files for the ovos-messagebus
service because this is used by all other modules. The provided system hook
files need another Python package sdnotify
to work as written.
pip install sdnotify
This is the main service file that is used to start the stack as a unit. This is not necessasary, but helpful if more than one module should start together.
+Create the service file
+nano ~/.config/systemd/user/ovos.service
This file should contain
+[Unit]
+Description=OVOS A.I. Software stack.
+
+[Service]
+Type=oneshot
+ExecStart=/bin/true
+RemainAfterExit=yes
+
+[Install]
+WantedBy=default.target
+
+There is no hook
file needed for this service.
The messagebus is the main nervous system for OVOS and is needed by all other modules to enable communication between them.
+Create the service file
+nano ~/.config/systemd/user/ovos-messagebus.service
And make it contain the following
+[Unit]
+Description=OVOS Messagebus
+PartOf=ovos.service
+After=ovos.service
+
+[Service]
+Type=notify
+ExecStart=/home/ovos/.local/bin/ovos-systemd-messagebus
+TimeoutStartSec=1m
+TimeoutStopSec=1m
+Restart=on-failure
+StartLimitInterval=5min
+StartLimitBurst=4
+#StartLimitAction=reboot-force
+#WatchdogSec=30s
+
+[Install]
+WantedBy=ovos.service
+
+Create the hook file
+nano ~/.local/bin/ovos-systemd-messagebus
This file should contain
+#!/usr/bin/env python
+##########################################################################
+# ovos-systemd_messagebus.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##########################################################################
+import sdnotify
+from mycroft.messagebus.service.__main__ import main
+
+n = sdnotify.SystemdNotifier()
+
+def notify_ready():
+    n.notify('READY=1')
+    print('Startup of Mycroft Messagebus service complete')
+
+def notify_stopping():
+    n.notify('STOPPING=1')
+    print('Stopping the Mycroft Messagebus service')
+
+main(ready_hook=notify_ready, stopping_hook=notify_stopping)
+
+Reload the systemd daemon
+systemctl --user daemon-reload
The service can now be started
+systemctl --user start ovos-messagebus.service
To start the stack on boot, enable both of the services
+systemctl --user enable ovos.service
systemctl --user enable ovos-messagebus.service
Now on every reboot, the OVOS system should start automatically.
+NOTE the systemd service and hook files are examples used in the raspbian-ovos repository.
+Each module that needs to be started should have similar service and hook files.
+For a complete system, the following need to be running (see the sketch after this list):
+- ovos-messagebus.service
+- ovos-skills.service
+- ovos-audio.service
+- ovos-dinkum-listener.service
+- ovos-phal.service
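+As a convenience, the whole set can be enabled and started in one loop, assuming the service and hook files described above have been created for each module:
+for svc in ovos-messagebus ovos-skills ovos-audio ovos-dinkum-listener ovos-phal; do
+    systemctl --user enable --now ${svc}.service
+done
+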
+The ovos-admin-phal.service needs to run as a system service or as the user root:
+- ovos-admin-phal.service
If audio isn't working for OpenVoiceOS, it's useful to verify that the operating system audio is working.
+ALSA is the kernel-level sound system; it manages your sound card directly. As a mixer it is poor (seriously), handling only a few (sometimes just one) channels. We don't generally have to deal with ALSA directly.
+ALSA can be configured to use PulseAudio as its default device; that way, ALSA applications that are not PulseAudio aware will still use PulseAudio via an indirection layer.
+PulseAudio is a software mixer running in user space. When it runs, it uses ALSA's channel and manages everything: mixing, devices, network devices, etc.
+PulseAudio always uses ALSA as a backend, and on startup it opens all ALSA devices. Since most ALSA devices can't be opened multiple times, any ALSA application that tries to use an ALSA device directly while PulseAudio is running will fail. If you have a legacy application that for some reason doesn't work, you can use pasuspender to temporarily suspend PulseAudio while running that particular application.
List hardware cards with cat /proc/asound/cards
+List Playback and capture devices visible to ALSA (note the Card Number)
aplay -l
+arecord -l
+
+This will list the cards, which can then be referenced in arecord using -D hw:
arecord -f dat -r 16000 -D hw:4,0 -c 4 -d 10 test2.wav
+
+You can then play the file back to test your speakers
+aplay -D hw:2,0 test2.wav
+
+**If PulseAudio is installed, ALSA should be configured to use PulseAudio as its default; we don't change anything in ALSA, we configure our default sources and sinks in PulseAudio.**
+Verify that pulseaudio is installed
+apt list pulseaudio
+
+Verify that ALSA is using PulseAudio as the default
+$ aplay -L | head -n9
+null
+ Discard all samples (playback) or generate zero samples (capture)
+default
+ Playback/recording through the PulseAudio sound server
+
+List sinks (speakers) and sources (microphones) visible to PulseAudio
+pactl list sinks
+pactl list sources
+
+This will list the sources that can be used to set the default source for pulseaudio below.
+pacmd set-default-source
+
+e.g.
+pacmd set-default-source alsa_input.usb-OmniVision_Technologies__Inc._USB_Camera-B4.09.24.1-01.multichannel-input
+
+You can test if OVOS is receiving mic input using the ovos-cli-client
Install the ovos-cli-client from github to ensure you have the latest version
+cd ~
+git clone https://github.com/openvoiceos/ovos-cli-client
+pip install ./ovos-cli-client
+
+Run the ovos-cli-client
+ovos-cli-client
+
+In the lower left you can observe the microphone levels; when you talk, the levels should increase. If they don't, OVOS is probably using the wrong microphone.
+Before submitting an issue or asking a question on Element, please gather the following data.
+For Microphone issues:
+arecord -l
+arecord -L | head -n9
+pactl list sources
+pacmd dump
+
+For Speaker issues:
+aplay -l
+aplay -L | head -n9
+pactl list sinks
+pacmd dump
+
+Try disabling suspend-on-idle in /etc/pulse/default.pa
+Change this:
+### Automatically suspend sinks/sources that become idle for too long
+load-module module-suspend-on-idle
+
+to this:
+### Automatically suspend sinks/sources that become idle for too long
+#load-module module-suspend-on-idle
+
+and then restart PulseAudio. There is quite a lot of variation in how people do this, but killall pulseaudio is one option (it gets started again automatically), as sketched below. If you want to be sure, you can restart the system.
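+A minimal restart sequence, assuming a per-user PulseAudio that can be started manually:
+killall pulseaudio
+pulseaudio --start
+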
+ +The personal backend has a config file. https://github.com/OpenVoiceOS/ovos-personal-backend#configuration
+Set a value for "admin_key"
+ +coming soon
+There is a known issue with balena (the wifi access point app): when it detects a WPA3 network of any sort, it fails.
+More Information
+We are not sure what is causing this, but if you reboot the Pi (ctrl-alt-del) it will come up fine on the second boot.
+ +** Coming soon **
+This error could come from the ovos-cli-client or other sources.
+To resolve it, ensure that your locale is set correctly; try running raspi-config to set it if you're on Raspberry Pi OS (Raspbian).
+Manually update /etc/default/locale if needed.
+Use locale to verify your current locale, and locale -a to verify that the locale you've set is actually available.
+Source
See Also Troubleshooting Installation
+ +** Coming Soon **
+OpenVoiceOS uses a plugin (or plugins) to recognize the wake word. In your mycroft.conf file you'll specify the plugin used and what the wakeword is.
+To verify that it is the Wake Word and not the microphone causing the issue, we will get OVOS to ask us a question that we can respond to. In the OVOS cli type in the utterance "set timer". OVOS will then ask how long of a timer you would like. Speaking now should result in your utterance being transcribed.
+If your response is successfully transcribed, it is most likely the Wake Word engine causing the problem.
+Determine which configuration files are being loaded
+grep mycroft.conf /var/log/syslog
+
+Look at your mycroft.conf file and verify how your wake word is configured. Look for the following lines (or similar):
+ "wake_word": "hey_mycroft" ## this will match a hotword listed later in the config
+ ...
+ "hotwords": {
+ "hey_mycroft": { ## matches the wake_word
+ "module": "ovos-ww-plugin-precise", ## what plugin is used
+ "version": "0.3",
+ "model": "https://github.com/MycroftAI/precise-data/raw/models-dev/hey-mycroft.tar.gz",
+ "phonemes": "HH EY . M AY K R AO F T",
+ "threshold": 1e-90,
+ "lang": "en-us",
+ "listen": true,
+ "sound": "snd/start_listening.wav"
+ },
+
+grep wake /var/log/syslog
+Look for lines similar to:
voice - ovos_plugin_manager.wakewords:load_module:85 - INFO - Loading "wake_up" wake word via ovos-ww-plugin-pocketsphinx
+voice - ovos_plugin_manager.wakewords:load_module:92 - INFO - Loaded the Wake Word plugin ovos-ww-plugin-pocketsphinx
+voice - ovos_plugin_manager.wakewords:load_module:85 - INFO - Loading "hey_mycroft" wake word via ovos-ww-plugin-precise
+voice - ovos_plugin_manager.wakewords:load_module:92 - INFO - Loaded the Wake Word plugin ovos-ww-plugin-precise
+
+If you see an error about "failed to load plugin" make sure the plugin is installed.
+Use ovos-cli-client to see if the microphone is working and the wakeword is being triggered.
+ovos-cli-client
+Look for
05:19:54.975 - voice - mycroft.listener.service:handle_wakeword:97 - INFO - Wakeword Detected: hey_mycroft
+ 05:19:55.555 - voice - mycroft.listener.service:handle_record_begin:71 - INFO - Begin Recording...
+ 05:19:57.052 - voice - mycroft.listener.service:handle_record_end:78 - INFO - End Recording...
+
+OCA is a user facing interface to configure ovos devices
+OCA provides a local Web UI similar to ovos-backend-manager, in here you can configure your device, view metrics, handle OAuth and more
+A command line interface is planned but not yet available to provide equivalent functionality to the Web UI
+from ovos_config_assistant.module_helpers import pprint_core_module_info
+pprint_core_module_info()
+"""
+## Mycroft module info
+ can import mycroft : True
+ is ovos-core : True
+ mycroft module location: /home/user/ovos-core/mycroft
+
+## Downstream ovos.conf overrides
+Module: neon_core
+ can import neon_core : False
+ neon_core module location: None
+ xdg compliance : True
+ base xdg folder : neon
+ mycroft config filename : neon.conf
+ default mycroft.conf path :
+ /home/user/NeonCore/neon_core/configuration/neon.conf
+Module: hivemind
+ can import hivemind : False
+ hivemind module location: None
+ xdg compliance : True
+ base xdg folder : hivemind
+ mycroft config filename : hivemind.conf
+ default mycroft.conf path :
+ /home/user/PycharmProjects/ovos_workspace/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf
+
+## Downstream module overrides:
+Module: neon_speech
+ uses config from : neon_core
+ can import neon_speech : False
+ neon_speech module location: None
+Module: neon_audio
+ uses config from : neon_core
+ can import neon_audio : False
+ neon_audio module location: None
+Module: neon_enclosure
+ uses config from : neon_core
+ can import neon_enclosure : False
+ neon_enclosure module location: None
+Module: hivemind_voice_satellite
+ uses config from : hivemind
+ can import hivemind_voice_satellite : True
+ hivemind_voice_satellite module location: /home/user/HiveMind-voice-sat/hivemind_voice_satellite
+"""
+
+from ovos_config_assistant.config_helpers import pprint_ovos_conf
+pprint_ovos_conf()
+"""
+## OVOS Configuration
+ ovos.conf exists : True
+ /home/user/.config/OpenVoiceOS/ovos.conf
+ xdg compliance : True
+ base xdg folder : mycroft
+ mycroft config filename : mycroft.conf
+ default mycroft.conf path :
+ /home/user/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf
+"""
+
+
+ PHAL is our Platform/Hardware Abstraction Layer, it completely replaces the +concept of hardcoded "enclosure" from mycroft-core
+Any number of plugins providing functionality can be loaded and validated at runtime, plugins can +be system integrations to handle things like reboot and +shutdown, or hardware drivers such as mycroft mark2 plugin
+PHAL plugins can perform actions such as hardware detection before loading, eg, the mark2 plugin will not load if it +does not detect the sj201 hat. This makes plugins safe to install and bundle by default in our base images
+Platform/Hardware specific integrations are loaded by PHAL, these plugins can handle all sorts of system activities
| Plugin | Description |
|---|---|
| ovos-PHAL-plugin-alsa | volume control |
| ovos-PHAL-plugin-system | reboot / shutdown / factory reset |
| ovos-PHAL-plugin-mk1 | mycroft mark1 integration |
| ovos-PHAL-plugin-mk2 | mycroft mark2 integration |
| ovos-PHAL-plugin-respeaker-2mic | respeaker 2mic hat integration |
| ovos-PHAL-plugin-respeaker-4mic | respeaker 4mic hat integration |
| ovos-PHAL-plugin-wifi-setup | wifi setup (central plugin) |
| ovos-PHAL-plugin-gui-network-client | wifi setup (GUI interface) |
| ovos-PHAL-plugin-balena-wifi | wifi setup (hotspot) |
| ovos-PHAL-plugin-network-manager | wifi setup (network manager) |
| ovos-PHAL-plugin-brightness-control-rpi | brightness control |
| ovos-PHAL-plugin-ipgeo | automatic geolocation (IP address) |
| ovos-PHAL-plugin-gpsd | automatic geolocation (GPS) |
| ovos-PHAL-plugin-dashboard | dashboard control (ovos-shell) |
| ovos-PHAL-plugin-notification-widgets | system notifications (ovos-shell) |
| ovos-PHAL-plugin-color-scheme-manager | GUI color schemes (ovos-shell) |
| ovos-PHAL-plugin-configuration-provider | UI to edit mycroft.conf (ovos-shell) |
| ovos-PHAL-plugin-analog-media-devices | video/audio capture devices (OCP) |
In mycroft-core the equivalent to PHAL plugins would usually be shipped as skills or hardcoded.
+In OVOS it may sometimes be unclear whether to develop a skill or a plugin; there isn't a one-size-fits-all answer, and in some circumstances it may make sense to create both a plugin and a companion skill.
+ +PHAL plugins do not follow a strict template, they are usually event listeners that perform certain actions and integrate with other components
+from mycroft_bus_client import Message
+from ovos_plugin_manager.phal import PHALPlugin
+
+
+class MyPHALPluginValidator:
+ @staticmethod
+ def validate(config=None):
+ """ this method is called before loading the plugin.
+ If it returns False the plugin is not loaded.
+ This allows a plugin to run platform checks"""
+ return True
+
+
+class MyPHALPlugin(PHALPlugin):
+ validator = MyPHALPluginValidator
+
+ def __init__(self, bus=None, config=None):
+ super().__init__(bus=bus, name="ovos-PHAL-plugin-NAME", config=config)
+ # register events for plugin
+ self.bus.on("my.event", self.handle_event)
+
+ def handle_event(self, message):
+ # TODO plugin stuff
+ self.bus.emit(Message("my.event.response"))
+
+ def shutdown(self):
+ # cleanly remove any event listeners and perform shutdown actions
+ self.bus.remove("my.event", self.handle_event)
+ super().shutdown()
+
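+Once loaded by the PHAL service, the plugin above can be triggered from any bus client. A usage sketch (it assumes a running messagebus and the hypothetical my.event handler from the example):
+from mycroft_bus_client import MessageBusClient, Message
+
+bus = MessageBusClient()    # connects to the local ovos-bus (default 127.0.0.1:8181)
+bus.run_in_thread()         # run the websocket connection in a background thread
+bus.connected_event.wait()  # wait until the connection is up
+
+# trigger the handler registered by MyPHALPlugin; it answers with "my.event.response"
+bus.emit(Message("my.event"))
+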
+You can find plugin packaging documentation here
+AdminPHAL performs the exact same function as PHAL, but plugins it loads will have root
privileges.
This service is intended for handling any OS-level interactions requiring escalation of privileges. Be very careful when installing Admin plugins and scrutinize them closely
+NOTE: Because this service runs as root, plugins it loads are responsible for not writing +configuration changes which would result in breaking config file permissions.
+To use AdminPHAL, create a launcher /usr/libexec/mycroft-systemd-admin-phal
import sdnotify
+from ovos_PHAL.admin import main
+
+n = sdnotify.SystemdNotifier()
+
+def notify_ready():
+ n.notify('READY=1')
+ print('Startup of Admin service complete')
+
+def notify_stopping():
+ n.notify('STOPPING=1')
+ print('Stopping Admin service')
+
+main(ready_hook=notify_ready, stopping_hook=notify_stopping)
+
+and system service /usr/lib/systemd/user/mycroft-admin-phal.service
[Unit]
+Description=Admin PHAL
+PartOf=mycroft.service
+After=mycroft-messagebus.service
+
+[Service]
+Type=notify
+TimeoutStopSec=30
+Restart=always
+User=root
+ExecStart=/usr/libexec/mycroft-systemd-admin-phal
+
+[Install]
+WantedBy=mycroft.service
+
+AdminPlugins are just like regular PHAL plugins, except that they run with root privileges.
+A plugin needs to identify itself as an admin plugin via its entry point; PHAL will not load Admin plugins and AdminPHAL will not load regular plugins.
+Admin plugins will only load if their configuration contains "enabled": true. All admin plugins need to be explicitly enabled.
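+A sketch of what enabling an admin plugin could look like in mycroft.conf (the "PHAL" > "admin" section layout is an assumption, and the plugin name is just an example):
+"PHAL": {
+  "admin": {
+    "ovos-PHAL-plugin-system": {"enabled": true}
+  }
+},
+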
You can find plugin packaging documentation here
+By default, your OpenVoiceOS device advertises itself as an AirPlay (v1, currently) device on your network. This can be used from the iOS AirPlay selection screen if you play some local files, as shown below.
+Tap / click the bottom-middle AirPlay icon on your music player, which opens the AirPlay devices menu. It should pick up your OpenVoiceOS device automatically from the network.
+Select the OpenVoiceOS device to re-route your sound output to it.
+The AirPlay selection menu is also available within other music clients such as the Spotify app.
+And if that client also supports metadata over MPRIS, your OpenVoiceOS device will show it on its screen as well.
+There are several APIs used by OVOS, including ones for weather inquiries, Wolfram Alpha and others.
+OVOS provides default keys for these services, so no further configuration is needed.
+You can always provide your own API keys for each service, and even add more APIs for OVOS to use.
+TODO fix link to api config +API Configuration
+A backend is a service that provides your device with additional tools to function; these could range from managing your skill settings to configuring certain aspects of your device. The backend is optional and considered advanced usage in OVOS. By default, OVOS does not use a backend, and one is entirely unnecessary for basic usage. A local backend becomes useful if you have several devices around the house and would like a central place to configure them.
+A backend can provide:
+Available backends:
+TODO fix link to personal backend
+ + +OVOS devices with displays provide skill developers the opportunity to create skills that can be empowered by both voice and screen interaction.
+The display interaction technology is based on the QML user interface markup language that gives you complete freedom to create in-depth innovative interactions without boundaries or provide you with simple templates within the Mycroft GUI framework that allow minimalistic display of text and images based on your skill development specifics and preferences.
+Mycroft-GUI is an open source visual and display framework for Mycroft, running on top of KDE Plasma technology and built using Kirigami, a lightweight user interface framework for convergent applications powered by Qt.
+The QML user interface markup language is a declarative language built on top of Qt's existing strengths, designed to describe the user interface of a program: both what it looks like, and how it behaves. QML provides modules that consist of a sophisticated set of graphical and behavioral building elements.
+A collection of resources to familiarize you with QML and Kirigami Framework.
+ +OVOS Core supports a GUI Extension framework which allows the GUI service to incorporate additional behaviour for a +specific platform. GUI Extensions currently supported:
+This extension is responsible for managing the smartspeaker GUI interface behaviour; it supports homescreens and homescreen management. The smartspeaker GUI extension is enabled in mycroft.conf (see the sketch below).
+This extension is responsible for managing the Plasma Bigscreen GUI interface behaviour; it supports window management and window behaviour control on specific window managers like KWin. The Bigscreen GUI extension is enabled the same way.
+This extension is responsible for managing the mobile GUI interface behaviour; it supports homescreens and additionally adds support for global page back navigation. The Mobile GUI extension is enabled the same way.
+This extension provides a generic GUI interface and does not add any additional behaviour; it optionally supports homescreens if the platform or user manually enables it. This extension is enabled by default when no other extension is specified.
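+A hedged sketch of selecting an extension in the "gui" section of mycroft.conf (the key layout is an assumption based on ovos-gui defaults; swap "smartspeaker" for "bigscreen", "mobile" or "generic" as needed):
+"gui": {
+  "extension": "smartspeaker"
+},
+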
+ +** Editors Note ** +Lots of cleanup coming here, just placeholder information for now.
+PHAL is our Platform/Hardware Abstraction Layer, it completely replaces the +concept of hardcoded "enclosure" from mycroft-core
+Any number of plugins providing functionality can be loaded and validated at runtime, plugins can +be system integrations to handle things like reboot and +shutdown, or hardware drivers such as mycroft mark2 plugin
+PHAL plugins can perform actions such as hardware detection before loading, eg, the mark2 plugin will not load if it +does not detect the sj201 hat. This makes plugins safe to install and bundle by default in our base images
+Platform/Hardware specific integrations are loaded by PHAL, these plugins can handle all sorts of system activities
| Plugin | Description |
|---|---|
| ovos-PHAL-plugin-alsa | volume control |
| ovos-PHAL-plugin-system | reboot / shutdown / factory reset |
| ovos-PHAL-plugin-mk1 | mycroft mark1 integration |
| ovos-PHAL-plugin-mk2 | mycroft mark2 integration |
| ovos-PHAL-plugin-respeaker-2mic | respeaker 2mic hat integration |
| ovos-PHAL-plugin-respeaker-4mic | respeaker 4mic hat integration |
| ovos-PHAL-plugin-wifi-setup | wifi setup (central plugin) |
| ovos-PHAL-plugin-gui-network-client | wifi setup (GUI interface) |
| ovos-PHAL-plugin-balena-wifi | wifi setup (hotspot) |
| ovos-PHAL-plugin-network-manager | wifi setup (network manager) |
| ovos-PHAL-plugin-brightness-control-rpi | brightness control |
| ovos-PHAL-plugin-ipgeo | automatic geolocation (IP address) |
| ovos-PHAL-plugin-gpsd | automatic geolocation (GPS) |
| ovos-PHAL-plugin-dashboard | dashboard control (ovos-shell) |
| ovos-PHAL-plugin-notification-widgets | system notifications (ovos-shell) |
| ovos-PHAL-plugin-color-scheme-manager | GUI color schemes (ovos-shell) |
| ovos-PHAL-plugin-configuration-provider | UI to edit mycroft.conf (ovos-shell) |
| ovos-PHAL-plugin-analog-media-devices | video/audio capture devices (OCP) |
AdminPHAL performs the exact same function as PHAL, but plugins it loads will have root
privileges.
This service is intended for handling any OS-level interactions requiring escalation of privileges. Be very careful when installing Admin plugins and scrutinize them closely
+NOTE: Because this service runs as root, plugins it loads are responsible for not writing +configuration changes which would result in breaking config file permissions.
+AdminPlugins are just like regular PHAL plugins, except that they run with root privileges.
+Admin plugins will only load if their configuration contains "enabled": true. All admin plugins need to be explicitly enabled.
You can find plugin packaging documentation here
+ +** Coming Soon **
+ +Editors Note Major revisions coming here, mostly placeholder information
+The skills service is responsible for loading skills and intent parsers
+All user queries are handled by the skills service, you can think of it as OVOS's brain +More Information
+The speech client is responsible for loading STT, VAD and Wake Word plugins
+Speech is transcribed into text and forwarded to the skills service
+OVOS allows you to load any number of hot words in parallel and trigger different actions when they are detected
+each hotword can do one or more of the following:
+To add a new hotword add its configuration under "hotwords" section.
+By default, all hotwords are disabled unless you set "listen": true
.
+Under the "listener"
setting a main wake word and stand up word are defined, those will be automatically enabled unless you set "active": false
.
+This is usually not desired unless you are looking to completely disabled wake word usage
Two STT plugins may be loaded at once, if the primary plugin fails for some reason the second plugin will be used.
+This allows you to have a lower accuracy offline model as fallback to account for internet outages, this ensures your device never becomes fully unusable
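+A hedged sketch of an STT fallback configuration in mycroft.conf (the plugin names are examples; any installed STT plugins can be used):
+"stt": {
+  "module": "ovos-stt-plugin-server",
+  "fallback_module": "ovos-stt-plugin-vosk"
+},
+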
+You can modify microphone settings and enable additional features under the listener section such as wake word / utterance recording / uploading
+Voice Activity Detection is used by the speech client to determine when a user has stopped speaking; this indicates the voice command is ready to be executed.
+Several VAD strategies are supported, configured under the listener section.
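+A hedged VAD sketch for the listener section (the "VAD" key and the silero plugin name are assumptions based on common OVOS listener setups):
+"listener": {
+  "VAD": {
+    "module": "ovos-vad-plugin-silero"
+  }
+},
+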
+ +Skills give OVOS the ability to perform a variety of functions. They can be installed or removed by the user, and can be easily updated to expand functionality. To get a good idea of what skills to build, let’s talk about the best use cases for a voice assistant, and what types of things OVOS can do.
+OVOS can run on a variety of platforms from the Linux Desktop to SBCs like the raspberry pi. +Different devices will have slightly different use cases. Devices in the home are generally located in the living room or kitchen and are ideal for listening to the news, playing music, general information, using timers while cooking, checking the weather, and other similar activities that are easily accomplished hands free.
+We cover a lot of the basics with our Default Skills, things like Timers, Alarms, Weather, Time and Date, and more.
+We also call this General Question and Answer, and it covers all of those factual questions someone might think to ask a voice assistant. Questions like "who was the 32nd President of the United States?", or "how tall is the Eiffel Tower?" Although the Default Skills cover a great deal of questions, there is room for more. There are many topics that could use a specific skill, such as Science, Academics, Movie Info, TV Info, Music Info, etc.
+One of the biggest use cases for Smart Speakers is playing media. The reason media playback is so popular is that it makes playing a song so easy, all you have to do is say “Hey Mycroft play the Beatles,” and you can be enjoying music without having to reach for a phone or remote. In addition to listening to music, there are +skills that handle videos as well.
+Much like listening to music, getting the latest news with a simple voice interaction is extremely convenient. OVOS +supports multiple news feeds, and has the ability to support multiple news skills.
+Another popular use case for Voice Assistants is to control Smart Home and IoT products. Within the mycroft ecosystem there are skills for Home Assistant, Wink IoT, Lifx and more, but there are many products that we do not have skills for yet. The open source community has been enthusiastically expanding OVOS's ability to voice control all kinds of smart home products.
+Voice games are becoming more and more popular, especially those that allow multiple users to play together. Trivia games are some of the most popular types of games to develop for voice assistants. There are several games already available for OVOS. There are native voice adventure games, ports of the popular text adventure games from infocom, a Crystal Ball game, a Number Guessing game and much more!
+Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills however more about that at a later stage.
+You can ask your device what time or date it is just in case you lost your watch.
+++ +Hey Mycroft, what time is it?
+
++ +Hey Mycroft, what is the date?
+
Having your OpenVoiceOS device know and show the time is great, but it is even better to be woken up in the morning by your device.
+++ +Hey Mycroft, set an alarm for 8 AM.
+
Sometimes you are just busy but want to be alerted after a certain time. For that you can use timers.
+++ +Hey Mycroft, set a timer for 5 minutes.
+
You can always set more timers and even name them, so you know which timers is for what.
+++ +Hey, Mycroft, set another timer called rice cooking for 7 minutes.
+
You can ask your device what the weather is or would be at any given time or place.
+++ +Hey Mycroft, what is the weather like today?
+
The weather skill actually uses multiple pages indicated by the small dots at the bottom of the screen.
+ +The file browser allows you to browse the filesystem in your device and any connected media, you can view images and play music and videos.
+KDEConnect integration allows you to share files with your mobile devices
+ +Mycroft-GUI is an open source visual and display framework for Mycroft running on top of KDE Plasma Technology and built using Kirigami a lightweight user interface framework for convergent applications which are empowered by Qt.
+OVOS uses the standard mycroft-gui framework, you can find the official documentation here
+ +Audio plugins are responsible for handling playback of media, like music and podcasts
+If mycroft-gui is available these plugins will rarely be used unless ovos is explicitly configured to do so
| Plugin | Description |
|---|---|
| ovos-ocp-audio-plugin | framework + compatibility layer |
| ovos-audio-plugin-simple | sox / aplay / paplay / mpg123 |
| ovos-vlc-plugin | vlc audio backend |
| ovos-mplayer-plugin | mplayer audio backend |
The audio service is responsible for loading TTS and Audio plugins
+All audio playback is handled by this service
+Usually playback is triggered by some originating bus message, eg "recognizer_loop:utterance"
, this message contains metadata that is used to determine if playback should happen.
message.context
may contain a source and destination, playback is only triggered if a message destination is a native_source or if missing (considered a broadcast).
This separation of native sources allows remote clients such as an android app to interact with OVOS without the actual device where ovos-core is running repeating all TTS and music playback out loud
+You can learn more about message targeting here
+By default, only utterances originating from the speech client and ovos cli are considered native
+for legacy reasons the names for ovos cli and speech client are "debug_cli"
and "audio"
respectively
Two TTS plugins may be loaded at once, if the primary plugin fails for some reason the second plugin will be used.
+This allows you to have a lower quality offline voice as fallback to account for internet outages, this ensures your device can always give you feedback
+"tts": {
+ "pulse_duck": false,
+ "module": "ovos-tts-plugin-mimic2",
+ "fallback_module": "ovos-tts-plugin-mimic"
+},
+
+You can enable additional Audio plugins and define the native sources described above under the "Audio"
section of mycroft.conf
ovos-core uses OCP natively for media playback, you can learn more about OCP here
+OCP will decide when to call the Audio service and what plugin to use, the main use case is for headless setups without a GUI
+NOTE: mycroft-core has a "default-backend"
config option, in ovos-core this option has been deprecated and is always OCP.
"Audio": {
+ "native_sources": ["debug_cli", "audio"],
+
+ "backends": {
+ "OCP": {
+ "type": "ovos_common_play",
+ "active": true
+ },
+ "simple": {
+ "type": "ovos_audio_simple",
+ "active": true
+ },
+ "vlc": {
+ "type": "ovos_vlc",
+ "active": true
+ }
+ }
+},
+
+
+ ovos-core supports multiple backends under a single unified interface
+Developers do not need to worry about backend details in their applications and skills
+A unique uuid and pairing information generated by registering with Home is stored in:
+~/.config/mycroft/identity/identity2.json
<-- DO NOT SHARE THIS WITH OTHERS!
This file uniquely identifies your device and should be kept safe
+a companion stt plugin is available to use a backend as remote STT provider
+edit your configuration to use ovos-stt-plugin-selene
+{
+ "stt": {
+ "module": "ovos-stt-plugin-selene"
+ }
+}
+
+
+OVOS by default runs without a backend, in this case you will need to configure api keys manually
+This can be done with OCA or by editing mycroft.conf
+edit your configuration to use the offline backend
+{
+ "server": {
+ "backend_type": "offline"
+ }
+}
+
+The official mycroft home backend is called selene, users need to create an account and pair devices with the mycroft +servers.
+This backend is not considered optional by MycroftAI but is not used by OVOS unless explicitly enabled
+Selene is AGPL licensed: +- backend source code +- frontend source code
+edit your configuration to use the selene backend
+{
+ "server": {
+ "backend_type": "selene",
+ "url": "https://api.mycroft.ai",
+ "version": "v1",
+ "update": true,
+ "metrics": true,
+ "sync_skill_settings": true
+ }
+}
+
+Personal backend is a reverse engineered alternative to selene that predates it
+It provides the same functionality for devices and packs some extra options
+It is not intended to serve different users or thousands of devices, there are no user accounts!
+This is currently the only way to run a vanilla mycroft-core device offline
+edit your configuration to use your own personal backend instance
+{
+ "server": {
+ "backend_type": "personal",
+ "url": "http://0.0.0.0:6712",
+ "version": "v1",
+ "update": true,
+ "metrics": true,
+ "sync_skill_settings": true
+ }
+}
+
+
+OVOS Api Service is not a full backend, it is a set of free proxy services hosted by the OVOS Team for usage in default +skills
+device management functionality and user accounts do not exist, offline mode will be used for these apis
+edit your configuration to use the OVOS backend
+{
+ "server": {
+ "backend_type": "ovos",
+ "url": "https://api.openvoiceos.com"
+ }
+}
+
+
+
+ Python client library for interaction with several supported backends under a single unified interface
| API | Offline | Personal | Selene | OVOS |
|---|---|---|---|---|
| Admin | yes [1] | yes | no | no |
| Device | yes [2] | yes | yes | yes [4] |
| Metrics | yes [2] | yes | yes | yes [4] |
| Dataset | yes [2] | yes | yes | yes [4] |
| OAuth | yes [2] | yes | yes | yes [4] |
| Wolfram | yes [3] | yes | yes | yes |
| Geolocate | yes | yes | yes | yes |
| STT | yes [3] | yes | yes | yes |
| Weather | yes [3] | yes | yes | yes |
| Email | yes [3] | yes | yes | yes |
[1] will update user level mycroft.conf
+[2] shared json database with personal backend for UI compat
+[3] needs additional configuration (eg. credentials)
+[4] uses offline_backend functionality
+
+from ovos_backend_client.api import GeolocationApi
+
+geo = GeolocationApi()
+data = geo.get_geolocation("Lisbon Portugal")
+# {'city': 'Lisboa',
+# 'country': 'Portugal',
+# 'latitude': 38.7077507,
+# 'longitude': -9.1365919,
+# 'timezone': 'Europe/Lisbon'}
+
+To interact with skill settings on selene
+from ovos_backend_client.settings import RemoteSkillSettings
+
+# in ovos-core skill_id is deterministic and safe
+s = RemoteSkillSettings("skill.author")
+# in mycroft-core please ensure a valid remote_id
+# in MycroftSkill class you can use
+# remote_id = self.settings_meta.skill_gid
+# s = RemoteSkillSettings("skill.author", remote_id="@|whatever_msm_decided")
+s.download()
+
+s.settings["existing_value"] = True
+s.settings["new_value"] = "will NOT show up in UI"
+s.upload()
+
+# auto generate new settings meta for all new values before uploading
+s.settings["new_value"] = "will show up in UI"
+s.generate_meta() # now "new_value" is in meta
+s.upload()
+
+
+
+By hijacking skill settings we allow storing arbitrary data in selene and using it across devices and skills
+from ovos_backend_client.cloud import SeleneCloud
+
+cloud = SeleneCloud()
+cloud.add_entry("test", {"secret": "NOT ENCRYPTED MAN"})
+data = cloud.get_entry("test")
+
+an encrypted version is also supported if you don't trust selene!
+from ovos_backend_client.cloud import SecretSeleneCloud
+
+k = "D8fmXEP5VqzVw2HE" # you need this to read back the data
+cloud = SecretSeleneCloud(k)
+cloud.add_entry("test", {"secret": "secret data, selene cant read this"})
+data = cloud.get_entry("test")
+
+
+Retrieving the tokens in a skill does not depend on the selected backend, the mechanism to register a token is backend +specific
+First you need to authorize the application, this can be done +with ovos-config-assistant if running offline +or ovos-backend-manager if using personal backend
+If using selene there is no automated process to add a +token, you need to contact support@mycroft.ai
+from ovos_backend_client.api import OAuthApi, BackendType
+
+# api = OAuthApi() # determine oauth backend from mycroft.conf
+api = OAuthApi(backend_type=BackendType.OFFLINE) # explicitly use ovos-backend-manager oauth
+token_json = api.get_oauth_token("spotify")
+
+from ovos_backend_client.api import OpenWeatherMapApi
+
+owm = OpenWeatherMapApi()
+data = owm.get_weather()
+# dict - see api docs from owm onecall api
+
+from ovos_backend_client.api import WolframAlphaApi
+
+wolf = WolframAlphaApi()
+answer = wolf.spoken("what is the speed of light")
+# The speed of light has a value of about 300 million meters per second
+
+data = wolf.full_results("2+2")
+# dict - see api docs from wolfram
+
+a companion stt plugin is available - ovos-stt-plugin-selene
+Since the local backend does not provide a web UI, an admin api can be used to manage your devices
+from ovos_backend_client.api import AdminApi
+
+admin = AdminApi("secret_admin_key")
+uuid = "..." # check identity2.json in the device you want to manage
+
+# manually pair a device
+identity_json = admin.pair(uuid)
+
+# set device info
+info = {"opt_in": True,
+ "name": "my_device",
+ "device_location": "kitchen",
+ "email": "notifications@me.com",
+ "isolated_skills": False,
+ "lang": "en-us"}
+admin.set_device_info(uuid, info)
+
+# set device preferences
+prefs = {"time_format": "full",
+ "date_format": "DMY",
+ "system_unit": "metric",
+ "lang": "en-us",
+ "wake_word": "hey_mycroft",
+ "ww_config": {"phonemes": "HH EY . M AY K R AO F T",
+ "module": "ovos-ww-plugin-pocketsphinx",
+ "threshold": 1e-90},
+ "tts_module": "ovos-tts-plugin-mimic",
+ "tts_config": {"voice": "ap"}}
+admin.set_device_prefs(uuid, prefs)
+
+# set location data
+loc = {
+ "city": {
+ "code": "Lawrence",
+ "name": "Lawrence",
+ "state": {
+ "code": "KS",
+ "name": "Kansas",
+ "country": {
+ "code": "US",
+ "name": "United States"
+ }
+ }
+ },
+ "coordinate": {
+ "latitude": 38.971669,
+ "longitude": -95.23525
+ },
+ "timezone": {
+ "code": "America/Chicago",
+ "name": "Central Standard Time",
+ "dstOffset": 3600000,
+ "offset": -21600000
+ }
+}
+admin.set_device_location(uuid, loc)
+
+
OpenVoiceOS GUI supports various Skills and PHAL plugins that share a voice application interface with Plasma Bigscreen.
+In order to enable key navigation on Plasma Bigscreen and Media Centers, the user needs to export the environment variable QT_FILE_SELECTORS=mediacenter. This can be done by executing the following command in the terminal:
export QT_FILE_SELECTORS=mediacenter
+
+This environment variable is enabled by default and added to the Plasma Bigscreen environment. To create your own media center environment, store the variable in /etc/environment or /etc/profile.d/99-ovos-media-center.sh
+Exporting the environment variable QT_FILE_SELECTORS=mediacenter
is a necessary step to enable key navigation on Plasma Bigscreen and Media Centers for the Open Voice OS project GUI. With this in place, the user can enjoy seamless key navigation while using the Skills and PHAL plugins on their Plasma Bigscreen and Media Centers.
The buildroot edition of OpenVoiceOS by default also acts as a bluetooth speaker. You can find it from any (mobile) device as discoverable within the bluetooth pairing menu. + +You can pair with it and use your OpenVoiceOS as any other Bluetooth speaker you might own. +(NOTE: At the moment, pairing is broken but will be fixed as soon as we get to it)
+The buildroot OpenVoiceOS edition is considered to be a consumer-friendly type of device, or, as Mycroft A.I. would call it, a retail version. However, as we do not target a specific hardware platform and would like to support custom-made systems, we are implementing a smart way to detect and configure different types of Raspberry Pi HATs.
+At boot, the system scans the I2C bus for known and supported HATs and, if one is found, configures the underlying Linux sound system. At the moment this is still very much in development; however, the HATs below are or should soon be supported by this system:
+- ReSpeaker 2-mic HAT
+- ReSpeaker 4-mic Square HAT
+- ReSpeaker 4-mic linear / 6-mic HAT
+- USB devices such as the PS EYE-2
+- SJ-201 Dev Kits
+- SJ-201 Mark2 retail device
+TODO - write docs
+Your OpenVoiceOS device is accessible over the network from your Windows computer. This is still a work in progress, but you can open a file explorer and navigate to your OpenVoiceOS device.
+At the moment, the following directories within the user's home directory are shared over the network:
+- Documents
+- Music
+- Pictures
+These folders are also used by the KDE Connect file transfer plugins and, for instance, the Camera skill (Hey Mycroft, take a selfie) and / or the Homescreen skill (Hey Mycroft, take a screenshot).
+In the near future the above Windows network shares will also be made available over NFS for Linux clients. This is still a Work In Progress / To Do item.
+At this moment development is in very early stages and focused on the Raspberry Pi 3B & 4. As soon as an initial workable version is created, other hardware might be added.
+Source code: https://github.com/OpenVoiceOS/ovos-buildroot
+Only use x86_64-based architecture / hardware to build the image.
+The following example Build environment has been tested :
+The following system packages are required to build the image:
+In addition to the usual http/https ports (tcp 80, tcp 443) a couple of other ports need to be allowed to the internet : +- tcp 9418 (git). +- tcp 21 (ftp PASV) and random ports for DATA channel. This can be optional but better to have this allowed along with the corresponding random data channel ports. (knowledge of firewalls required)
+First, get the code on your system! The simplest method is via git.
+
+- cd ~/
+- git clone --recurse-submodules https://github.com/OpenVoiceOS/OpenVoiceOS.git
+- cd OpenVoiceOS
(ONLY at the first clean checkout/clone) If this is the very first time you are going to build an image, you need to execute the following command once;
+
+- ./scripts/br-patches.sh
+
+This will patch the Buildroot packages.
Building the image(s) can be done by utilizing a proper Makefile;
+
+To see the available commands, just run: 'make help'
+
+As example to build the rpi4 version;
+- make clean
+- make rpi4_64-gui-config
+- make rpi4_64-gui
Now grab a cup of coffee, go for a walk, sleep and repeat as the build process takes up a long time pulling everything from source and cross compiling everything for the device. Especially the qtwebengine package is taking a LONG time.
+
+(At the moment there is an outstanding issue which prevents the build from running completely to the end. The plasma-workspace package will error out, not finding libGLESv2 properly linked within Qt5Gui. When the build stops because of this error, edit the following file;
+
+buildroot/output/host/aarch64-buildroot-linux-gnu/sysroot/usr/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake
+
+at the bottom of the file replace this line;
+
+_qt5gui_find_extra_libs(OPENGL "GLESv2" "" "")
+
And replace it with this line;
+_qt5gui_find_extra_libs(OPENGL "${CMAKE_SYSROOT}/usr/lib/libGLESv2.so" "" "${CMAKE_SYSROOT}/usr/include/libdrm")
+
+Then you can continue the build process by re-running the "make rpi4_64-gui" command. (DO NOT run "make clean" and/or "make rpi4_64-gui-config" again, or you will start from scratch again!)
+
+When everything goes fine the xz compressed image will be available within the release directory.
1. Ensure all required peripherals (mic, speakers, HDMI, USB mouse, etc.) are plugged in before powering on your RPI4 for the first time.
+
+2. Skip this step if the RPI4 is using an ethernet cable. Once powered on, the screen will present the Wifi setup screen (a Wifi HotSpot is created). Connect to the Wifi HotSpot (ssid OVOS) from another device and follow the on-screen instructions to set up Wifi.
+
+3. Once Wifi is set up, a choice of Mycroft backend and Local backend is presented. Choose the Mycroft backend for now and follow the on-screen instructions; the Local backend is not ready to use yet. After the pairing process has completed and skills have downloaded, it's time to test / use it.
The bus service provides a websocket where all internal events travel
+You can think of the bus service as OVOS's nervous system
+The mycroft-bus is considered an internal and private websocket, external clients should not connect directly to it.
+Please do not expose the messagebus to the outside world!
+Anything connected to the bus can fully control OVOS, and OVOS usually has full control over the whole system!
+You can read more about the security issues over at Nhoya/MycroftAI-RCE
+If you need to connect to the bus from the outside world please check the companion project HiveMind
+Lots of guides for mycroft tell you to expose the websocket, please (re)read the info and links above, think 10 times before following steps blindly
+A Message consists of a json payload; it contains a type, some data and a context.
+The context is considered to be metadata and might be changed at any time in transit; the context can contain anything depending on where the message came from, and is often completely empty.
+You can think of the message context as a sort of session data for an individual interaction. In general, messages down the chain keep the context from the original message; most listeners (eg, skills) will only care about type and data.
+ovos-core uses the message context to add metadata about the messages themselves: where they come from and what they are intended for.
+The Message object provides the following methods:
+- message.forward, which keeps the previous context.destination
+- message.reply, which swaps "source" with "destination"
+The context destination parameter in the original message can be set to a list with any number of intended targets:
bus.emit(Message('recognizer_loop:utterance', data,
+         context={'destination': ['audio', 'kde'],
+                  'source': "remote_service"}))
+
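+A short sketch of the reply/forward semantics described above (the event names are made up):
+from mycroft_bus_client import Message
+
+msg = Message("my.query", data={"q": "time"},
+              context={"source": "remote_service", "destination": ["audio"]})
+
+reply = msg.reply("my.query.response", data={"answer": "10:00"})
+print(reply.context)  # "source" and "destination" are swapped
+
+fwd = msg.forward("my.query.continued")
+print(fwd.context)    # the previous destination is kept
+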
+ovos-core injects the context when it emits an utterance; this can be either typed in the ovos-cli-client or spoken via the STT service.
+- STT will identify itself as audio
+- ovos-cli-client will identify itself as debug_cli
+mycroft.conf contains a native_sources section you can configure to change how the audio service processes external requests.
Output capable services are the cli and the TTS
+The command line is a debug tool; it will ignore the destination.
+TTS checks the message context to see if it is the intended target for the message, and will only speak in the following conditions:
+- destination is "audio"
+- destination is set to None
+- destination is missing completely
+TTS will be executed when "audio"
or "debug_cli"
are the destination
A missing destination
or if the destination
is set to None
is interpreted as a multicast and should trigger all output capable processes (be it the audio service, a web-interface, the KDE plasmoid or maybe the android app)
+Within ovos-core the message context is preserved as follows:
+- .reply to the original utterance message
+- .forward (from previous intent service .reply)
+- converse / get_response support is limited; the context may be lost (warning)
+The messagebus has a dedicated section in mycroft.conf:
"websocket": {
+ "host": "0.0.0.0",
+ "port": 8181,
+ "route": "/core",
+ "shared_connection": true
+}
+
+in mycroft-core all skills share a bus connection, this allows malicious skills to manipulate it and affect other skills
+you can see a demonstration of this problem with BusBrickerSkill
+"shared_connection": false
ensures each skill gets its own websocket connection and avoids this problem
Additionally, it is recommended you change "host": "127.0.0.1"; this will ensure no outside-world connections are allowed, as in the sketch below.
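+Putting both recommendations together (same keys as the default section above):
+"websocket": {
+    "host": "127.0.0.1",
+    "port": 8181,
+    "route": "/core",
+    "shared_connection": false
+}
+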
A simple UI for ovos-personal-backend; a utility to manage all your devices
+If you are running ovos-core without a backend OCA provides a similar local interface
+ +pip install ovos-backend-manager
or from source
+pip install git+https://github.com/OpenVoiceOS/ovos-backend-manager
It needs to run on the same machine as the personal backend, it directly interacts with the databases and configuration files
+ovos-backend-manager
will be available in the command line after installing
ovos-core supports multiple backends under a single unified interface
+Developers do not need to worry about backend details in their applications and skills
+A unique uuid and pairing information generated by registering with Home is stored in:
+~/.config/mycroft/identity/identity2.json
<-- DO NOT SHARE THIS WITH OTHERS!
This file uniquely identifies your device and should be kept safe
+a companion stt plugin is available to use a backend as remote STT provider
+edit your configuration to use ovos-stt-plugin-selene
+{
+ "stt": {
+ "module": "ovos-stt-plugin-selene"
+ }
+}
+
+
+OVOS by default runs without a backend, in this case you will need to configure api keys manually
+This can be done with OCA or by editing mycroft.conf
+edit your configuration to use the offline backend
+{
+ "server": {
+ "backend_type": "offline"
+ }
+}
+
+The official mycroft home backend is called selene, users need to create an account and pair devices with the mycroft +servers.
+This backend is not considered optional by MycroftAI but is not used by OVOS unless explicitly enabled
+Selene is AGPL licensed: +- backend source code +- frontend source code
+edit your configuration to use the selene backend
+{
+ "server": {
+ "backend_type": "selene",
+ "url": "https://api.mycroft.ai",
+ "version": "v1",
+ "update": true,
+ "metrics": true,
+ "sync_skill_settings": true
+ }
+}
+
+Personal backend is a reverse engineered alternative to selene that predates it
+It provides the same functionality for devices and packs some extra options
+It is not intended to serve different users or thousands of devices, there are no user accounts!
+This is currently the only way to run a vanilla mycroft-core device offline
+edit your configuration to use your own personal backend instance
+{
+ "server": {
+ "backend_type": "personal",
+ "url": "http://0.0.0.0:6712",
+ "version": "v1",
+ "update": true,
+ "metrics": true,
+ "sync_skill_settings": true
+ }
+}
+
+
+OVOS Api Service is not a full backend, it is a set of free proxy services hosted by the OVOS Team for usage in default +skills
+device management functionality and user accounts do not exist, offline mode will be used for these apis
+edit your configuration to use the OVOS backend
+{
+ "server": {
+ "backend_type": "ovos",
+ "url": "https://api.openvoiceos.com"
+ }
+}
+
+
+
+ Work in Progress
+ +The home screen is the central place for all your tasks. It is the first thing you will see after completing the onboarding process. It supports a variety of pre-defined widgets which provide you with a quick overview of information you need to know like the current date, time and weather. The home screen contains various features and integrations which you can learn more about in the following sections.
+ +The Night Mode feature lets you quickly switch your home screen into a dark standby clock, reducing the amount of light emitted by your device. This is especially useful if you are using your device in a dark room or at night. You can enable the night mode feature by tapping on the left edge pill button on the home screen.
+ +The Quick Actions Dashboard provides you with a card-based interface to quickly access and add your most used action. The Quick Actions dashboard comes with a variety of pre-defined actions like the ability to quickly add a new alarm, start a new timer or add a new note. You can also add your own custom actions to the dashboard by tapping on the plus button in the top right corner of the dashboard. The Quick Actions dashboard is accessible by tapping on the right edge pill button on the home screen.
+ +OpenVoiceOS comes with support for dedicated voice applications. Voice Applications can be dedicated skills or PHAL plugins, providing their own dedicated user interface. The application launcher will show you a list of all available voice applications. You can access the application launcher by tapping on the center pill button on the bottom of the home screen.
+ +The home screen supports custom wallpapers and comes with a bunch of wallpapers to choose from. You can easily change your custom wallpaper by swiping from right to left on the home screen.
+ +The notifications widget provides you with a quick overview of all your notifications. The notifications bell icon will be displayed in the top left corner of the home screen. You can access the notifications overview by tapping on the bell icon when it is displayed.
+The timer widget is displayed in top left corner after the notifications bell icon. It will show up when you have an active timer running. Clicking on the timer widget will open the timers overview.
+The alarm widget is displayed in top left corner after the timer widget. It will show up when you have an active alarm set. Clicking on the alarm widget will open the alarms overview.
+The media player widget is displayed in the bottom of the home screen, It replaces the examples widget when a media player is active. The media player widget will show you the currently playing media and provide you with a quick way to pause, resume or skip the current media. You can also quickly access the media player by tapping the quick display media player button on the right side of the media player widget.
+ +The homescreen has several customizations available. This is sample settings.json
file with all of the options explained
{
+ "__mycroft_skill_firstrun": false,
+ "weather_skill": "skill-weather.openvoiceos",
+ "datetime_skill": "skill-date-time.mycroftai",
+ "examples_skill": "ovos-skills-info.openvoiceos",
+ "wallpaper": "default.jpg",
+ "persistent_menu_hint": false,
+ "examples_enabled": true,
+ "randomize_examples": true,
+ "examples_prefix": true
+}
+
+Note: skill-ovos-date-time.openvoiceos can be used as an alternative datetime_skill, and the homescreen examples are retrieved via the ovos_skills_manager.utils.get_skills_example() function.
OCA is a user facing interface to configure ovos devices
+OCA provides a local Web UI similar to ovos-backend-manager, in here you can configure your device, view metrics, handle OAuth and more
+A command line interface is available using ovos-config
+ +from ovos_config_assistant.module_helpers import pprint_core_module_info
+pprint_core_module_info()
+"""
+## Mycroft module info
+ can import mycroft : True
+ is ovos-core : True
+ mycroft module location: /home/user/ovos-core/mycroft
+
+## Downstream ovos.conf overrides
+Module: neon_core
+ can import neon_core : False
+ neon_core module location: None
+ xdg compliance : True
+ base xdg folder : neon
+ mycroft config filename : neon.conf
+ default mycroft.conf path :
+ /home/user/NeonCore/neon_core/configuration/neon.conf
+Module: hivemind
+ can import hivemind : False
+ hivemind module location: None
+ xdg compliance : True
+ base xdg folder : hivemind
+ mycroft config filename : hivemind.conf
+ default mycroft.conf path :
+ /home/user/PycharmProjects/ovos_workspace/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf
+
+## Downstream module overrides:
+Module: neon_speech
+ uses config from : neon_core
+ can import neon_speech : False
+ neon_speech module location: None
+Module: neon_audio
+ uses config from : neon_core
+ can import neon_audio : False
+ neon_audio module location: None
+Module: neon_enclosure
+ uses config from : neon_core
+ can import neon_enclosure : False
+ neon_enclosure module location: None
+Module: hivemind_voice_satellite
+ uses config from : hivemind
+ can import hivemind_voice_satellite : True
+ hivemind_voice_satellite module location: /home/user/HiveMind-voice-sat/hivemind_voice_satellite
+"""
+
+from ovos_config_assistant.config_helpers import pprint_ovos_conf
+pprint_ovos_conf()
+"""
+## OVOS Configuration
+ ovos.conf exists : True
+ /home/user/.config/OpenVoiceOS/ovos.conf
+ xdg compliance : True
+ base xdg folder : mycroft
+ mycroft config filename : mycroft.conf
+ default mycroft.conf path :
+ /home/user/ovos-core/.venv/lib/python3.9/site-packages/mycroft/configuration/mycroft.conf
+"""
+
+
+ The audio service is responsible for loading TTS and Audio plugins.
+All audio playback is handled by this service.
+Usually playback is triggered by an originating bus message, e.g. "recognizer_loop:utterance"; this message contains metadata that is used to determine if playback should happen.
+message.context may contain a source and destination; playback is only triggered if the message destination is a native source or is missing (considered a broadcast).
+This separation of native sources allows remote clients, such as an Android app, to interact with OVOS without the device where ovos-core is running repeating all TTS and music playback out loud.
+You can learn more about message targeting here
+By default, only utterances originating from the speech client and the OVOS CLI are considered native.
+For legacy reasons the names for the OVOS CLI and the speech client are "debug_cli" and "audio" respectively.
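+As an illustration, a bus message targeted at a native source could be built like this (a sketch using ovos-bus-client; on older setups the import may be mycroft_bus_client instead):
+from ovos_bus_client.message import Message
+
+# "source"/"destination" in message.context drive the targeting rules described above
+msg = Message("recognizer_loop:utterance",
+              {"utterances": ["what time is it"]},
+              {"source": "audio", "destination": "skills"})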
Two TTS plugins may be loaded at once; if the primary plugin fails for some reason, the second plugin will be used.
+This allows you to have a lower-quality offline voice as a fallback to account for internet outages, ensuring your device can always give you feedback.
+"tts": {
+ "pulse_duck": false,
+ "module": "ovos-tts-plugin-mimic2",
+ "fallback_module": "ovos-tts-plugin-mimic"
+},
+
+You can enable additional Audio plugins and define the native sources described above under the "Audio"
section of mycroft.conf
ovos-core uses OCP natively for media playback; you can learn more about OCP here.
+OCP decides when to call the Audio service and which plugin to use; the main use case is headless setups without a GUI.
+NOTE: mycroft-core has a "default-backend"
config option, in ovos-core this option has been deprecated and is always OCP.
"Audio": {
+ "native_sources": ["debug_cli", "audio"],
+
+ "backends": {
+ "OCP": {
+ "type": "ovos_common_play",
+ "active": true
+ },
+ "simple": {
+ "type": "ovos_audio_simple",
+ "active": true
+ },
+ "vlc": {
+ "type": "ovos_vlc",
+ "active": true
+ }
+ }
+},
+
+ovos_config.config.Configuration
is a singleton that loads a single config
+object. The configuration files loaded are determined by ovos.conf
as described below and can be in either json or
+yaml format.
If Configuration() is called, the following configs are loaded in this order:
+1. {core-path}/configuration/mycroft.conf
+2. os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/mycroft/mycroft.conf
+3. os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH/mycroft/web_cache.json
+4. XDG_CONFIG_DIRS + /mycroft/mycroft.conf
+5. XDG_CONFIG_HOME (default ~/.config) + /mycroft/mycroft.conf
+When the configuration loader starts, it looks in these locations in this order and loads ALL configurations. Keys that exist in multiple configuration files will be overridden by the last file to contain the value. This process results in a minimal amount being written for a specific device and user, without modifying default distribution files.
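+For example, a minimal sketch of reading the merged config with the Configuration class described above ("lang" is a standard mycroft.conf key; the printed value depends on your stack):
+from ovos_config.config import Configuration
+
+config = Configuration()  # dict-like merged view of the whole config stack
+print(config.get("lang"))  # comes from the last file in the stack that set "lang"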
+There are a couple of special configuration keys that change the way the configuration stack loads.
+- Default config refers to the config specified at default_config_path in ovos.conf (#1, {core-path}/configuration/mycroft.conf, in the stack above).
+- System config refers to the config at /etc/{base_folder}/{config_filename} (#2, /etc/mycroft/mycroft.conf, in the stack above).
+A "protected_keys" configuration section may be added to a Default or System config file (default /etc/mycroft/mycroft.conf). This configuration section specifies other configuration keys that may not be specified in remote or user configurations. Keys may specify nested parameters with . to exclude specific keys within nested dictionaries.
+An example config could be:
{
+ "protected_keys": {
+ "remote": [
+ "gui_websocket.host",
+ "websocket.host"
+ ],
+ "user": [
+ "gui_websocket.host"
+ ]
+ }
+}
+
+This example specifies that config['gui_websocket']['host']
may be specified in user configuration, but not remote.
+config['websocket']['host']
may not be specified in user or remote config, so it will only consider default
+and system configurations.
If the disable_user_config parameter is set to True in the Default or System configuration,
+no user configurations will be loaded (no XDG configuration paths).
If the disable_remote_config parameter is set to True in the Default or System configuration,
+the remote configuration (web_cache.json) will not be loaded.
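+A sketch of how these flags would look in a System config such as /etc/mycroft/mycroft.conf:
+{
+  "disable_user_config": true,
+  "disable_remote_config": true
+}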
While mycroft.conf configures the voice assistant, ovos.conf configures the library.
+What this means is that ovos.conf decides what files are loaded by the Configuration class described above; as an end user or skill developer you should never have to worry about this.
+All XDG paths across OpenVoiceOS packages build their paths taking ovos.conf into consideration.
+This feature is what allows downstream voice assistants such as neon-core to change their config files to neon.yaml.
Using the above example, if Configuration() is called from neon-core, the following configs are loaded in this order:
+1. {core-path}/configuration/neon.yaml
+2. os.environ.get('MYCROFT_SYSTEM_CONFIG') or /etc/neon/neon.yaml
+3. os.environ.get('MYCROFT_WEB_CACHE') or XDG_CONFIG_PATH/neon/web_cache.json
+4. XDG_CONFIG_DIRS + /neon/neon.yaml
+5. XDG_CONFIG_HOME (default ~/.config) + /neon/neon.yaml
+The ovos_config
package determines which config files to load based on ovos.conf
.
+get_ovos_config
will return default values that load mycroft.conf
unless otherwise configured.
ovos.conf files are loaded in the following order, with later files taking priority over earlier ones in the list:
+1. XDG_CONFIG_DIRS + /OpenVoiceOS/ovos.conf
+2. XDG_CONFIG_HOME (default ~/.config) + /OpenVoiceOS/ovos.conf
should have a structure like:
{
+ "base_folder": "mycroft",
+ "config_filename": "mycroft.conf",
+ "default_config_path": "<Absolute Path to Installed Core>/configuration/mycroft.conf",
+ "module_overrides": {},
+ "submodule_mappings": {}
+}
+
+++Note:
+default_config_path
should always be an absolute path. This is generally +detected automatically, but any manual override must specify an absolute path to a json or yaml config file.
Non-Mycroft modules may specify alternate config paths. A call to get_ovos_config
from
+neon_core
or neon_messagebus
will return a configuration like:
{
+ "base_folder": "neon",
+ "config_filename": "neon.yaml",
+ "default_config_path": "/etc/example/config/neon.yaml",
+ "module_overrides": {
+ "neon_core": {
+ "base_folder": "neon",
+ "config_filename": "neon.yaml",
+ "default_config_path": "/etc/example/config/neon.yaml"
+ }
+ },
+ "submodule_mappings": {
+ "neon_messagebus": "neon_core",
+ "neon_speech": "neon_core",
+ "neon_audio": "neon_core",
+ "neon_gui": "neon_core"
+ }
+}
+
+If get_ovos_config
was called from mycroft
with the same configuration file as the last example,
+the returned configuration would be:
{
+ "base_folder": "mycroft",
+ "config_filename": "mycroft.conf",
+ "default_config_path": "<Path to Installed Core>/configuration/mycroft.conf",
+ "module_overrides": {
+ "neon_core": {
+ "base_folder": "neon",
+ "config_filename": "neon.yaml",
+ "default_config_path": "/etc/example/config/neon.yaml"
+ }
+ },
+ "submodule_mappings": {
+ "neon_messagebus": "neon_core",
+ "neon_speech": "neon_core",
+ "neon_audio": "neon_core",
+ "neon_gui": "neon_core"
+ }
+}
+
+
+ ovos-core reads from several config files and is able to combine them into one main configuration to be used by all of the OVOS modules.
+The default configuration is located at <python install directory>/site-packages/ovos-config/mycroft/mycroft.conf.
+In this file you can see all of the available configuration values and an explanation of their use.
+The images include a file at /etc/mycroft/mycroft.conf; values set there override the default values.
+DO NOT EDIT THESE FILES. They contain default values and will be overwritten on an update.
+Next, OVOS checks for a file at ~/.config/mycroft/web_cache.json. This file contains values retrieved from a remote server and will overwrite the previous two sets of values. It should also NOT be edited, as it too will be overwritten.
+The user configuration file is located at ~/.config/mycroft/mycroft.conf. This is the file you should use to change default values to custom ones. When this document says "Add this to config", this is the file that should be modified.
+This file needs to be valid json or yaml; OVOS knows how to handle both.
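+For example, a minimal user configuration that only overrides a couple of values could look like this (both keys are standard mycroft.conf entries):
+{
+  "lang": "en-us",
+  "tts": {
+    "module": "ovos-tts-plugin-mimic"
+  }
+}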
+
+
+Most of our guides have you create a user called ovos with a password of ovos. While this makes installation easy, it's VERY insecure. As soon as possible, you should secure SSH using a key and disable password authentication.
+Create a keyfile (you can change ovos to whatever you want)
+ssh-keygen -t ed25519 -f ~/.ssh/ovos
+
+Copy to host (use the same filename as above, specify the user and hostname you are using)
+ssh-copy-id -i ~/.ssh/ovos ovos@mycroft
+
+On your desktop, edit ~/.ssh/config and add the following lines (use the hostname from the previous step):
+Host mycroft
+    User ovos
+    IdentityFile ~/.ssh/ovos
+
+On your ovos system, edit /etc/ssh/sshd_config and add or uncomment the following line:
+PasswordAuthentication no
+
+Restart sshd or reboot:
+sudo systemctl restart sshd
+
+Anything connected to the bus can fully control OVOS, and OVOS usually has full control over the whole system!
+You can read more about the security issues over at Nhoya/MycroftAI-RCE
+In mycroft-core all skills share a bus connection; this allows malicious skills to manipulate the bus and affect other skills.
+You can see a demonstration of this problem with BusBrickerSkill.
+"shared_connection": false ensures each skill gets its own websocket connection and avoids this problem.
+Additionally, it is recommended you set "host": "127.0.0.1"; this ensures no outside-world connections are allowed.
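+Putting both recommendations together, the websocket section of your mycroft.conf would look something like this (the port and route shown are the defaults):
+"websocket": {
+  "host": "127.0.0.1",
+  "port": 8181,
+  "route": "/core",
+  "shared_connection": false
+}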
Each skill will have its own config file usually located at ~/.local/share/mycroft/skills/<skill_id>
Skill settings provide the ability for users to configure a Skill using the command line or a web-based interface.
+This is often used to change default behaviors (such as the sound used for notifications), authenticate with external services, or enter longer data as text rather than by voice.
+Skill settings are completely optional.
+Refer to each skill repository for valid configuration values.
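+As an illustration only, a hypothetical skill's settings.json could look like this (the keys are invented for the example; each skill defines its own):
+{
+  "show_notifications": true,
+  "api_key": "xxx"
+}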
+ +WakeWord plugins classify audio and report if a certain word or sound is present or not.
+These plugins usually correspond to the name of the voice assistant, "hey mycroft", but can also be used for other purposes.
+Unlike the original Mycroft assistant, OVOS supports multiple wakewords in any combination of engines.
Plugin | Type
---|---
ovos-ww-plugin-pocketsphinx | phonemes
ovos-ww-plugin-vosk | text samples
ovos-ww-plugin-snowboy | model
ovos-ww-plugin-precise | model
ovos-ww-plugin-precise-lite | model
ovos-ww-plugin-nyumaya | model
ovos-ww-plugin-nyumaya-legacy | model
neon_ww_plugin_efficientwordnet | model
mycroft-porcupine-plugin | model
ovos-ww-plugin-hotkeys | keyboard
The default wake words for OVOS generally use one of two different plugins, Precise-lite (referred to here as Precise) or Vosk. Precise is typically the more accurate of the two because it is trained on recordings and uses an ML model. Vosk translates sounds to phonemes and will generally be more sensitive and prone to error.
+The primary benefit of Vosk wakewords is that they require no training or downloaded models. You can simply configure the wakeword and it will work. The downside is that you will usually get many false wakes, especially with short and common phonemes. Something like "Hey Neon" will trigger almost every time the "ee" sound is pronounced in English, while "Hey Ziggy" is much less likely to trigger because the phonemes are less common.
+Note that Vosk wakewords consume a large amount of memory. Configuring multiple Vosk wakewords on a device with limited memory, like the Mycroft Mark 2, can cause performance issues.
+To create a Vosk wakeword on your OVOS device, open the user configuration (defaults to ~/.config/mycroft/mycroft.conf
) in your text editor of choice and add the following lines. This will enable wakewords for both "Hey Neon" and "Hey Ziggy."
"hotwords": {
+ "hey_neon": {
+ "module": "ovos-ww-plugin-vosk",
+ "active": true,
+ "listen": true,
+ "sound": "snd/start_listening.wav",
+ "debug": false,
+ "rule": "fuzzy",
+ "lang": "en",
+ "samples": ["hey neon"]
+ },
+ "hey_ziggy": {
+ "module": "ovos-ww-plugin-vosk",
+ "listen": true,
+ "active": true,
+ "sound": "snd/start_listening.wav",
+ "debug": false,
+ "rule": "fuzzy",
+ "lang": "en",
+ "samples": ["hey ziggy", "hay ziggy"]
+ }
+}
+
+If you already have a hotwords
section in your user configuration, the first and last lines are not necessary. Also, the most important section is "active": true
, which tells the assistant to use the wakeword. If you want to disable a wakeword, you can set this to false
. If enabling a wakeword, be sure to also set "listen": true
.
Another important combination is "debug": true
, which will print the phonemes to the logs when the wakeword is triggered. This can be useful for debugging issues. It can also tell you what combinations the speech-to-text engine is picking up when you try to activate it so you can add them to the samples
array.
Those are two common default wakewords. You can also create a completely custom wakeword as follows:
+"hotwords": {
+ "k9": {
+ "module": "ovos-ww-plugin-vosk",
+ "active": true,
+ "listen": true,
+ "sound": "snd/start_listening.wav",
+ "debug": true,
+ "rule": "fuzzy",
+ "lang": "en",
+ "samples": ["k9", "k 9", "kay nine", "k nine", "kay nein", "k nein"]
+ }
+}
+
+OVOS community members have used Vosk for very creative wakewords. Please feel free to share your custom wakewords in the OVOS Matrix chat!
+NOTE: The original Precise engine is not actively maintained and is not recommended for new installations. Precise-Lite is a fork of Precise that is actively maintained. Please use that instead.
+Precise-Lite wakewords require a pre-trained .tflite
model to operate. OVOS maintains several pre-trained models of commonly requested wakewords. To use them, try this configuration:
"hotwords": {
+ "computer": {
+ "module": "ovos-ww-plugin-precise-lite",
+ "model": "https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/computer.tflite",
+ "active": true,
+ "listen": true,
+ "sound": "snd/start_listening.wav",
+ "expected_duration": 3,
+ "trigger_level": 3,
+ "sensitivity": 0.5
+ }
+}
+
+Your OVOS device will automatically download the model if it isn't already on the device.
+OVOS maintains a number of pre-trained models in the precise-lite-models repository referenced in the config above.
+To use them, replace the model
section with the link to the model you want to use. Then replace the key with the name of the model, e.g. instead of computer
use android
or marvin
or whichever model you chose.
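+For instance, assuming the marvin model follows the same repository path pattern as the computer example above, the config would become:
+"hotwords": {
+  "marvin": {
+    "module": "ovos-ww-plugin-precise-lite",
+    "model": "https://github.com/OpenVoiceOS/precise-lite-models/raw/master/wakewords/en/marvin.tflite",
+    "active": true,
+    "listen": true,
+    "sound": "snd/start_listening.wav",
+    "expected_duration": 3,
+    "trigger_level": 3,
+    "sensitivity": 0.5
+  }
+}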
In addition to the Precise wakeword models that OVOS maintains, the community has created many more models, and additional model requests are welcome! If you have a model you would like to see created, please open an issue with the name of the wakeword. The OVOS team will generate some synthetic samples and add it to the list of models to be created.
+These synthetic models perform fairly well out of the box, but always work better with community-contributed recordings. Please see the README on the repo above for instructions on how to contribute recordings, and consider contributing to as many as you can!
+ +NOTE: Conversational context is currently only available with the Adapt Intent Parser, and is not yet available for Padatious
+++How tall is John Cleese?
+
"John Cleese is 196 centimeters"
++Where's he from?
+
"He's from England"
Context is added manually by the Skill creator using either the self.set_context()
method or the @adds_context()
decorator.
Consider the following intent handlers:
+ @intent_handler(IntentBuilder().require('PythonPerson').require('Length'))
+ def handle_length(self, message):
+ python = message.data.get('PythonPerson')
+ self.speak('{} is {} cm tall'.format(python, length_dict[python]))
+
+ @intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
+ def handle_from(self, message):
+ python = message.data.get('PythonPerson')
+ self.speak('{} is from {}'.format(python, from_dict[python]))
+
+To interact with the above handlers the user would need to say
+User: How tall is John Cleese?
+Mycroft: John Cleese is 196 centimeters
+User: Where is John Cleese from?
+Mycroft: He's from England
+
+To get a more natural response the functions can be changed to let OVOS know which PythonPerson
we're talking about by using the self.set_context()
method to give context:
@intent_handler(IntentBuilder().require('PythonPerson').require('Length'))
+ def handle_length(self, message):
+ # PythonPerson can be any of the Monty Python members
+ python = message.data.get('PythonPerson')
+ self.speak('{} is {} cm tall'.format(python, length_dict[python]))
+ self.set_context('PythonPerson', python)
+
+ @intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
+ def handle_from(self, message):
+ # PythonPerson can be any of the Monty Python members
+ python = message.data.get('PythonPerson')
+ self.speak('He is from {}'.format(from_dict[python]))
+ self.set_context('PythonPerson', python)
+
+When either of the methods is called, the PythonPerson keyword is added to OVOS's context. This means that if there is a match with Length but PythonPerson is missing, OVOS will assume the last mention of that keyword. The interaction can now become the one described at the top of the page.
++User: How tall is John Cleese?
+
OVOS detects the Length
keyword and the PythonPerson
keyword
++OVOS: 196 centimeters
+
John Cleese is added to the current context
+++User: Where's he from?
+
OVOS detects the WhereFrom
keyword but not any PythonPerson
keyword. The Context Manager is activated and returns the latest entry of PythonPerson
which is John Cleese
++OVOS: He's from England
+
The context isn't limited by the keywords provided by the current Skill. For example
+    @intent_handler(IntentBuilder().require('PythonPerson').require('WhereFrom'))
+ def handle_from(self, message):
+ # PythonPerson can be any of the Monty Python members
+ python = message.data.get('PythonPerson')
+ self.speak('He is from {}'.format(from_dict[python]))
+ self.set_context('PythonPerson', python)
+ self.set_context('Location', from_dict[python])
+
+This enables conversations with other Skills as well.
+User: Where is John Cleese from?
+Mycroft: He's from England
+User: What's the weather like over there?
+Mycroft: Raining and 14 degrees...
+
+To make sure certain Intents can't be triggered unless some previous stage in a conversation has occurred. Context can be used to create "bubbles" of available intent handlers.
+User: Hey Mycroft, bring me some Tea
+Mycroft: Of course, would you like Milk with that?
+User: No
+Mycroft: How about some Honey?
+User: All right then
+Mycroft: Here you go, here's your Tea with Honey
+
+from mycroft.skills.context import adds_context, removes_context
+
+class TeaSkill(MycroftSkill):
+ @intent_handler(IntentBuilder('TeaIntent').require("TeaKeyword"))
+ @adds_context('MilkContext')
+ def handle_tea_intent(self, message):
+ self.milk = False
+ self.speak('Of course, would you like Milk with that?',
+ expect_response=True)
+
+ @intent_handler(IntentBuilder('NoMilkIntent').require("NoKeyword").
+ require('MilkContext').build())
+ @removes_context('MilkContext')
+ @adds_context('HoneyContext')
+ def handle_no_milk_intent(self, message):
+ self.speak('all right, any Honey?', expect_response=True)
+
+ @intent_handler(IntentBuilder('YesMilkIntent').require("YesKeyword").
+ require('MilkContext').build())
+ @removes_context('MilkContext')
+ @adds_context('HoneyContext')
+ def handle_yes_milk_intent(self, message):
+ self.milk = True
+ self.speak('What about Honey?', expect_response=True)
+
+ @intent_handler(IntentBuilder('NoHoneyIntent').require("NoKeyword").
+ require('HoneyContext').build())
+ @removes_context('HoneyContext')
+ def handle_no_honey_intent(self, message):
+        if self.milk:
+            self.speak("Here's your Tea with a dash of Milk")
+        else:
+            self.speak("Here's your Tea, straight up")
+
+ @intent_handler(IntentBuilder('YesHoneyIntent').require("YesKeyword").
+ require('HoneyContext').build())
+ @removes_context('HoneyContext')
+ def handle_yes_honey_intent(self, message):
+        if self.milk:
+            self.speak("Here's your Tea with Milk and Honey")
+        else:
+            self.speak("Here's your Tea with Honey")
+
+When starting up, only the TeaIntent will be available. When that has been triggered and MilkContext is added, the YesMilkIntent and NoMilkIntent become available since the MilkContext is set. When a yes or no is received, the MilkContext is removed and can't be accessed. In its place the HoneyContext is added, making the YesHoneyIntent and NoHoneyIntent available.
You can find an example Tea Skill using conversational context on Github.
+As you can see, Conversational Context lends itself well to implementing a dialog tree or conversation tree.
+ +If this is your first PR, or you're not sure where to get started, +say hi in OpenVoiceOS Chat and a team member would be happy to mentor you. +Join the Discussions for questions and answers.
+Each Skill may define a converse()
method. This method will be called anytime the Skill has been recently active and a new utterance is processed.
The converse method expects a single argument which is a standard Mycroft Message object. This is the same object an intent handler receives.
+Converse methods must return a Boolean value. True if an utterance was handled, otherwise False.
+Let's use a version of the Ice Cream Skill we've been building up and add a converse method to catch any brief statements of thanks that might directly follow an order.
+from mycroft import MycroftSkill, intent_handler
+
+
+class IceCreamSkill(MycroftSkill):
+ def __init__(self):
+ MycroftSkill.__init__(self)
+ self.flavors = ['vanilla', 'chocolate', 'mint']
+
+ @intent_handler('request.icecream.intent')
+ def handle_request_icecream(self):
+ self.speak_dialog('welcome')
+ selection = self.ask_selection(self.flavors, 'what.flavor')
+ self.speak_dialog('coming-right-up', {'flavor': selection})
+
+    def converse(self, message):
+        if self.voc_match(message.data['utterances'][0], 'Thankyou'):
+            self.speak_dialog("you-are-welcome")
+            return True
+        return False
+
+
+def create_skill():
+ return IceCreamSkill()
+
+In this example:
+handle_request_icecream()
Thankyou.voc
file in the Skill and speak the contents of the you-are-welcome.dialog
file. The method would return True
and the utterance would be consumed meaning the intent parsing service would never be triggered.A Skill is considered active if it has been called in the last 5 minutes.
+Skills are called in order of when they were last active. For example, if a user spoke the following commands:
+++Hey Mycroft, set a timer for 10 minutes
+Hey Mycroft, what's the weather
+
Then the utterance "what's the weather" would first be sent to the Timer Skill's converse()
method, then to the intent service for normal handling where the Weather Skill would be called.
As the Weather Skill was called it has now been added to the front of the Active Skills List. Hence, the next utterance received will be directed to:
+WeatherSkill.converse()
TimerSkill.converse()
There are occasions where a Skill has not been triggered by the User, but it should still be considered "Active".
+In the case of our Ice Cream Skill - we might have a function that will execute when the customers order is ready. At this point, we also want to be responsive to the customers thanks, so we call self.make_active()
to manually add our Skill to the front of the Active Skills List.
OpenVoiceOS is an open source platform for smart speakers and other voice-centric devices.
+OVOS-core is a backwards-compatible descendant of Mycroft-core, the central component of Mycroft. It contains extensions and features not present upstream.
+All Mycroft Skills and Plugins should work normally with OVOS-core.
+OVOS-core is fully modular. Furthermore, common components have been repackaged as plugins. That means it isn't just a great assistant on its own, but also a pretty small library!
+ovos-core is very modular; depending on where you are running it, you may want to run only a subset of the services.
+By default ovos-core only installs the minimum components common to all services; for the purposes of this document we will assume you want a full install.
+If you want to fine-tune the components, replace [all] in the commands below with the subset of desired extras, e.g. [skills,bus].
+ovos-core can be installed from PyPI or from source.
+If the install fails you may need to install some system dependencies; how to do this will depend on your distro:
+sudo apt install build-essential python3-dev swig libssl-dev libfann-dev portaudio19-dev libpulse-dev
+
+Note: MycroftAI's dev_setup.sh
does not exist in OVOS-core.
We suggest you do this in a virtualenv:
+pip install git+https://github.com/OpenVoiceOS/ovos-core[all]
pip install ovos-core[all]
start-mycroft.sh
is available to perform common tasks.
Assuming you installed ovos-core in your home directory, run:
+cd ~/ovos-core
./start-mycroft.sh debug
The "debug" command will start the background services (microphone listener, skill, messagebus, and audio subsystems) as
+well as bringing up a text-based Command Line Interface (CLI) you can use to interact with Mycroft and see the contents
+of the various logs. Alternatively you can run ./start-mycroft.sh all
to begin the services without the command line
+interface. Later you can bring up the CLI using ./start-mycroft.sh cli
.
The background services can be stopped as a group with:
+./stop-mycroft.sh
We recommend you create system services to manage ovos instead of depending on the launcher script above
+A good explanation can be found here https://github.com/j1nx/mycroft-systemd
+A reference implementation can be found in ovos-buildroot
+ +from ovos_utils.gui import GUITracker
+from ovos_workshop.skills import OVOSSkill
+from mycroft import intent_handler
+
+
+class MyGUIEventTracker(GUITracker):
+ # GUI event handlers
+ # skill can/should subclass this
+
+ def on_idle(self, namespace):
+ print("IDLE", namespace)
+ timestamp = self.idle_ts
+
+ def on_active(self, namespace):
+ # NOTE: page has not been loaded yet
+ # event will fire right after this one
+ print("ACTIVE", namespace)
+ # check namespace values, they should all be set before this event
+ values = self.gui_values[namespace]
+
+ def on_new_page(self, page, namespace, index):
+        print("NEW PAGE", namespace, index, page)
+ # check all loaded pages
+ for n in self.gui_pages: # list of named tuples
+ nspace = n.name # namespace / skill_id
+ pages = n.pages # ordered list of page uris
+
+ def on_gui_value(self, namespace, key, value):
+ # WARNING this will pollute logs quite a lot, and you will get
+ # duplicates, better to check values on a different event,
+ # demonstrated in on_active
+ print("VALUE", namespace, key, value)
+
+
+class MySkill(OVOSSkill):
+ def initialize(self):
+ self.tracker = MyGUIEventTracker(bus=self.bus)
+
+ @intent_handler("gui.status.intent")
+ def handle_status_intent(self, message):
+ print("device has screen:", self.tracker.can_display())
+ print("mycroft-gui installed:", self.tracker.is_gui_installed())
+ print("gui connected:", self.tracker.is_gui_connected())
+ # TODO - speak or something
+
+ @intent_handler("list.idle.screens.intent")
+ def handle_idle_screens_intent(self, message):
+ # check registered idle screens
+ print("Registered idle screens:")
+ for name in self.tracker.idle_screens:
+ skill_id = self.tracker.idle_screens[name]
+ print(" - ", name, ":", skill_id)
+ # TODO - speak or something
+
+Sometimes you want to abort a running intent immediately; the stop method may not be enough in some circumstances.
+We provide a killable_intent decorator in ovos_workshop that can be used to abort a running intent immediately.
+A common use case is GUI interfaces where the same action may be performed by voice or by clicking buttons; in this case you may need to abort a running get_response loop.
from ovos_workshop.skills import OVOSSkill
+from ovos_workshop.decorators import killable_intent
+from mycroft import intent_handler
+from time import sleep
+
+
+class Test(OVOSSkill):
+ """
+ send "mycroft.skills.abort_question" and confirm only get_response is aborted
+ send "mycroft.skills.abort_execution" and confirm the full intent is aborted, except intent3
+ send "my.own.abort.msg" and confirm intent3 is aborted
+ say "stop" and confirm all intents are aborted
+ """
+ def __init__(self):
+ super(Test, self).__init__("KillableSkill")
+ self.my_special_var = "default"
+
+ def handle_intent_aborted(self):
+ self.speak("I am dead")
+ # handle any cleanup the skill might need, since intent was killed
+ # at an arbitrary place of code execution some variables etc. might
+ # end up in unexpected states
+ self.my_special_var = "default"
+
+ @killable_intent(callback=handle_intent_aborted)
+ @intent_handler("test.intent")
+ def handle_test_abort_intent(self, message):
+ self.my_special_var = "changed"
+ while True:
+ sleep(1)
+ self.speak("still here")
+
+ @intent_handler("test2.intent")
+ @killable_intent(callback=handle_intent_aborted)
+ def handle_test_get_response_intent(self, message):
+ self.my_special_var = "CHANGED"
+ ans = self.get_response("question", num_retries=99999)
+ self.log.debug("get_response returned: " + str(ans))
+ if ans is None:
+ self.speak("question aborted")
+
+ @killable_intent(msg="my.own.abort.msg", callback=handle_intent_aborted)
+ @intent_handler("test3.intent")
+ def handle_test_msg_intent(self, message):
+ if self.my_special_var != "default":
+ self.speak("someone forgot to cleanup")
+ while True:
+ sleep(1)
+ self.speak("you can't abort me")
+
+Sometimes you may want to send files or binary data over the messagebus; ovos_utils provides some tools to make this easy.
Sending a file
+from ovos_utils.messagebus import send_binary_file_message, decode_binary_message
+from ovos_workshop.skills import OVOSSkill
+
+
+class MySkill(OVOSSkill):
+ def initialize(self):
+ self.add_event("mycroft.binary.file", self.receive_file)
+
+ def receive_file(self, message):
+ print("Receiving file")
+ path = message.data["path"] # file path, extract filename if needed
+ binary_data = decode_binary_message(message)
+ # TODO process data somehow
+
+ def send_file(self, my_file_path):
+ send_binary_file_message(my_file_path)
+
+Sending binary data directly
+from ovos_utils.messagebus import send_binary_data_message, decode_binary_message
+from ovos_workshop.skills import OVOSSkill
+
+
+class MySkill(OVOSSkill):
+ def initialize(self):
+ self.add_event("mycroft.binary.data", self.receive_binary)
+
+ def send_data(self, binary_data):
+ send_binary_data_message(binary_data)
+
+ def receive_binary(self, message):
+ print("Receiving binary data")
+ binary_data = decode_binary_message(message)
+ # TODO process data somehow
+
+To interact with skill settings via DeviceApi
+from ovos_backend_client.settings import RemoteSkillSettings
+
+# in ovos-core skill_id is deterministic and safe
+s = RemoteSkillSettings("skill.author")
+# in mycroft-core please ensure a valid remote_id
+# in MycroftSkill class you can use
+# remote_id = self.settings_meta.skill_gid
+# s = RemoteSkillSettings("skill.author", remote_id="@|whatever_msm_decided")
+s.download()
+
+s.settings["existing_value"] = True
+s.settings["new_value"] = "will NOT show up in UI"
+s.upload()
+
+# auto generate new settings meta for all new values before uploading
+s.settings["new_value"] = "will show up in UI"
+s.generate_meta() # now "new_value" is in meta
+s.upload()
+
+
+
+By hijacking skill settings we allow storing arbitrary data via DeviceApi and using it across devices and skills.
+from ovos_backend_client.cloud import SeleneCloud
+
+cloud = SeleneCloud()
+cloud.add_entry("test", {"secret": "NOT ENCRYPTED MAN"})
+data = cloud.get_entry("test")
+
+An encrypted version is also supported if you don't trust the backend!
+from ovos_backend_client.cloud import SecretSeleneCloud
+
+k = "D8fmXEP5VqzVw2HE" # you need this to read back the data
+cloud = SecretSeleneCloud(k)
+cloud.add_entry("test", {"secret": "secret data, selene cant read this"})
+data = cloud.get_entry("test")
+
+
+from ovos_backend_client.api import GeolocationApi
+
+geo = GeolocationApi()
+data = geo.get_geolocation("Lisbon Portugal")
+
+
+ OVOS Common Play (OCP) is a full-fledged media player, compatible with the MPRIS standard. Developing a skill for OCP is similar to writing any other OVOS-compatible skill except basic intents and playing media are handled for the developer. This documentation is a quick start guide for developers hoping to write an OCP skill.
+self.voc_match(phrase, "skill_name")
to handle specific requests for your skillself.remove_voc(phrase, "skill_name")
to remove matched phrases from the search requestocp_search
decorator, as many as you want (they run in parallel)result_dict
(track or playlist)self.extend_timeout()
to not let OCP call for a Generic search too soonocp_featured_media
, return a playlist for the OCP menu if selected from GUIrequirements.txt
file with third-party package requirementsskills.json
file for skill metadataThe general interface that OCP expects to receive looks something like the following:
+class OVOSAudioTrack(TypedDict):
+ uri: str # URL/URI of media, OCP will handle formatting and file handling
+ title: str
+ media_type: ovos_plugin_common_play.MediaType
+ playback: ovos_plugin_common_play.PlaybackType
+ match_confidence: int # 0-100
+ album: str | None # Parsed even for movies and TV shows
+ artist: str | None # Parsed even for movies and TV shows
+ length: int | str | None # in milliseconds, if present
+ image: str | None
+ bg_image: str | None
+ skill_icon: str | None # Optional filename for skill icon
+ skill_id: str | None # Optional ID of skill to distinguish where results came from
+
+from os.path import join, dirname
+
+from ovos_plugin_common_play.ocp import MediaType, PlaybackType
+from ovos_utils.parse import fuzzy_match
+from ovos_workshop.skills.common_play import OVOSCommonPlaybackSkill, \
+ ocp_search
+
+
+class MySkill(OVOSCommonPlaybackSkill):
+    def __init__(self, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+ self.supported_media = [MediaType.GENERIC,
+ MediaType.MUSIC] # <- these are the only media_types that will be sent to your skill
+ self.skill_icon = join(dirname(__file__), "ui", "pandora.jpeg")
+
+ # score
+ @staticmethod
+ def calc_score(phrase, match, base_score=0, exact=False):
+        # implement your own logic here, assign a value from 0 - 100 per result
+ if exact:
+ # this requires that the result is related
+ if phrase.lower() in match["title"].lower():
+ match["match_confidence"] = max(match["match_confidence"], 80)
+ elif phrase.lower() in match["artist"].lower():
+ match["match_confidence"] = max(match["match_confidence"], 85)
+ elif phrase.lower() == match["station"].lower():
+ match["match_confidence"] = max(match["match_confidence"], 70)
+ else:
+ return 0
+
+        score = base_score
+        title_score = 100 * fuzzy_match(phrase.lower(),
+                                        match["title"].lower())
+        artist_score = 100 * fuzzy_match(phrase.lower(),
+                                         match["artist"].lower())
+ if artist_score > 85:
+ score += artist_score * 0.85 + title_score * 0.15
+ elif artist_score > 70:
+ score += artist_score * 0.6 + title_score * 0.4
+ elif artist_score > 50:
+ score += title_score * 0.5 + artist_score * 0.5
+ else:
+ score += title_score * 0.8 + artist_score * 0.2
+ score = min((100, score))
+ return score
+
+ @ocp_search()
+ def search_my_skill(self, phrase, media_type=MediaType.GENERIC):
+ # match the request media_type
+ base_score = 0
+ if media_type == MediaType.MUSIC:
+ base_score += 10
+ else:
+ base_score -= 15 # some penalty for proof of concept
+
+ explicit_request = False
+ if self.voc_match(phrase, "mySkillNameVoc"):
+ # explicitly requested our skill
+ base_score += 50
+ phrase = self.remove_voc(phrase, "mySkillNameVoc") # clean up search str
+ explicit_request = True
+ self.extend_timeout(1) # we know our skill is slow, ask OCP for more time
+
+ for r in self.search_my_results(phrase):
+ yield {
+ "match_confidence": self.calc_score(phrase, r, base_score,
+ exact=not explicit_request),
+ "media_type": MediaType.MUSIC,
+ "length": r["duration"] * 1000, # seconds to milliseconds
+ "uri": r["uri"],
+ "playback": PlaybackType.AUDIO,
+ "image": r["image"],
+ "bg_image": r["bg_image"],
+ "skill_icon": self.skill_icon,
+ "title": r["title"],
+ "artist": r["artist"],
+ "album": r["album"],
+ "skill_id": self.skill_id
+ }
+
+
+{
+ "title": "Plex OCP Skill",
+ "url": "https://github.com/d-mcknight/skill-plex",
+ "summary": "[OCP](https://github.com/OpenVoiceOS/ovos-ocp-audio-plugin) skill to play media from [Plex](https://plex.tv).",
+ "short_description": "[OCP](https://github.com/OpenVoiceOS/ovos-ocp-audio-plugin) skill to play media from [Plex](https://plex.tv).",
+ "description": "",
+ "examples": [
+ "Play Charles Mingus",
+ "Play Jamie Cullum on Plex",
+ "Play the movie Ghostbusters",
+ "Play the movie Ghostbusters on Plex",
+ "Play Star Trek the Next Generation on Plex",
+ "Play the tv show Star Trek the Next Generation on Plex"
+ ],
+ "desktopFile": false,
+ "warning": "",
+ "systemDeps": false,
+ "requirements": {
+ "python": ["plexapi~=4.13", "ovos-workshop~=0.0.11"],
+ "system": {},
+ "skill": []
+ },
+ "incompatible_skills": [],
+ "platforms": ["i386", "x86_64", "ia64", "arm64", "arm"],
+ "branch": "master",
+ "license": "BSD-3-Clause",
+ "icon": "https://freemusicarchive.org/legacy/fma-smaller.jpg",
+ "category": "Music",
+ "categories": ["Music", "Daily"],
+ "tags": ["music", "NeonAI", "NeonGecko Original", "OCP", "Common Play"],
+ "credits": ["NeonGeckoCom", "NeonDaniel"],
+ "skillname": "skill-plex",
+ "authorname": "d-mcknight",
+ "foldername": null
+}
+
+OCP Skills are installed like any other OVOS skill. The preferred pattern is to release a pip package for your OCP skill and install it directly, but skills may also be installed directly from any pip-supported source such as git+https://github.com/OpenVoiceOS/skill-ovos-youtube-music
.
Once a skill has been installed, a restart of the mycroft-skills, ovos-skills, or neon-skills service will be required.
Say hi in OpenVoiceOS Chat and a team member would be happy to assist you.
+Mycroft Mark2 shipped with a new version of mycroft called "dinkum"; this is a total overhaul of mycroft-core and is incompatible with it.
+mycroft-core is now referred to as "Classic Core" by MycroftAI.
+MycroftAI now provides what they call sandbox images. To add to the confusion, those only work on the Mark 2, and "Classic Core" means the mark-ii/latest branch of mycroft-core; this is a derivative of the branch that was used in the dev kits (mark-ii/qa) and is also backwards incompatible. Changes in this branch were not done via PRs and had no review or community input.
+Mark2 useful links:
+you can find mycroft's guide to porting skills to dinkum here https://mycroft-ai.gitbook.io/mark-ii/differences-to-classic-core/porting-classic-core-skills
+mark2/qa brought some changes to mycroft-core, not all of them backwards compatible and some that were contentious +within the community.
+exactly
and excludes
methods. excludes
will be added upstream in adapt/pull/156. Any skill using these new methods will be incompatible with most core versionsdinkum contains all changes above and also brought further changes to the table
+Any skills using these new "features" will not work outside the mark2
+No, not even classic core skills run in dinkum. We have no plans to support this
+No, dinkum is designed in a very incompatible way, the mycroft
module is not always mycroft-core and the MycroftSkill
class is not always a MycroftSkill, we have no intention of transparently loading dinkum skills in ovos-core
We have a small proof of concept tool to convert a dinkum skill into an ovos/classic core compatible skill, see https://github.com/OpenVoiceOS/undinkumfier
+No, Audio plugin support has been removed, you can run OCP standalone but will be missing the compatibility layers and can't load OCP skills anyway
+It could be made to work but this is not in the roadmap, PRs will be accepted and reviewed
+It should! We don't explicitly target or test it with dinkum, but it is a fairly standalone component
+STT , TTS and WW plugins should work, We don't explicitly target or test compatibility, PRs will be accepted and reviewed
+ +Depending on which image you downloaded you will first see the boot splash which indicates the Operating System is booting. For the buildroot edition the below boot splash will be shown.
+ +If this is the very first time you boot your device, booting might take a bit longer as normal as the system is preparing its local filesystem and extending it over the full size of the sdcard/USB device. +Eventually the progress bar will be filled up indicating the Operating System has been fully booted, after which the ovos-shell animated loading screen will be shown.
+ +Again, if this is the first time you boot your device this might take a bit longer as the ovos-core configuration is populated and skills are being setup.
+ +Grapheme to Phoneme is the process of converting text into a set of "sound units" called phonemes
+These plugins are used to auto generate mouth movements / visemes in the TTS stage, they can also be used to help +configuring wake words or to facilitate training of TTS systems
+These plugins can provide phonemes either in ARPA or IPA alphabets, an automatic conversion will happen behind the scenes when needed
+Mouth movements are generated via a mapping of ARPA to VISEMES,
+Visemes are predefined mouth positions, timing per phonemes will default to 0.4 seconds if the plugin does not report a duration
+ +Mapping based on Jeffers phoneme to viseme map, seen in table 1, partially based on the "12 mouth shapes visuals seen here
+Plugin | +Type | +
---|---|
neon-g2p-cmudict-plugin | +ARPA | +
neon-g2p-phoneme-guesser-plugin | +ARPA | +
neon-g2p-mimic-plugin | +ARPA | +
neon-g2p-mimic2-plugin | +ARPA | +
neon-g2p-espeak-plugin | +IPA | +
neon-g2p-gruut-plugin | +IPA | +
All G2P plugins can be used as follows
+
+utterance = "hello world"
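+# G2pPlugin stands in for any concrete plugin class from the table above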
+word = "hello"
+lang="en-us"
+
+plug = G2pPlugin()
+
+# convert a word into a list of phonemes
+phones = plug.get_ipa(word, lang)
+assert phones == ['h', 'ʌ', 'l', 'oʊ']
+
+phones = plug.get_arpa(word, lang)
+assert phones == ['HH', 'AH', 'L', 'OW']
+
+# convert a utterance into a list of phonemes
+phones = plug.utterance2arpa(utterance, lang)
+assert phones == ['HH', 'AH', 'L', 'OW', '.', 'W', 'ER', 'L', 'D']
+
+phones = plug.utterance2ipa(utterance, lang)
+assert phones == ['h', 'ʌ', 'l', 'oʊ', '.', 'w', 'ɝ', 'l', 'd']
+
+# convert a utterance into a list of viseme, duration pairs
+visemes = plug.utterance2visemes(utterance, lang)
+assert visemes == [('0', 0.0775), ('0', 0.155), ('3', 0.2325), ('2', 0.31), ('2', 0.434), ('2', 0.558), ('3', 0.682), ('3', 0.806)]
+
+from ovos_plugin_manager.templates.g2p import Grapheme2PhonemePlugin
+from ovos_utils.lang.visimes import VISIMES
+
+# base plugin class
+class MyARPAG2PPlugin(Grapheme2PhonemePlugin):
+ def __init__(self, config=None):
+ self.config = config or {}
+
+ def get_arpa(self, word, lang, ignore_oov=False):
+ phones = [] # TODO implement
+ return phones
+
+ def get_durations(self, utterance, lang="en", default_dur=0.4):
+ words = utterance.split()
+ phones = [self.get_arpa(w, lang) for w in utterance.split()]
+ dur = default_dur # TODO this is plugin specific
+ return [(pho, dur) for pho in phones]
+
+ def utterance2visemes(self, utterance, lang="en", default_dur=0.4):
+ phonemes = self.get_durations(utterance, lang, default_dur)
+ return [(VISIMES.get(pho[0].lower(), '4'), float(pho[1]))
+ for pho in phonemes]
+
+
+If your plugin uses IPA instead of ARPA simply replace get_arpa
with get_ipa
from ovos_plugin_manager.templates.g2p import Grapheme2PhonemePlugin
+from ovos_utils.lang.visimes import VISIMES
+
+# base plugin class
+class MyIPAG2PPlugin(Grapheme2PhonemePlugin):
+ def __init__(self, config=None):
+ self.config = config or {}
+
+ def get_ipa(self, word, lang, ignore_oov=False):
+ phones = [] # TODO implement
+ return phones
+
+ def get_durations(self, utterance, lang="en", default_dur=0.4):
+ # auto converted to arpa if ipa is implemented
+ phones = [self.get_arpa(w, lang) for w in utterance.split()]
+ dur = default_dur # TODO this is plugin specific
+ return [(pho, dur) for pho in phones]
+
+ def utterance2visemes(self, utterance, lang="en", default_dur=0.4):
+ phonemes = self.get_durations(utterance, lang, default_dur)
+ return [(VISIMES.get(pho[0].lower(), '4'), float(pho[1]))
+ for pho in phonemes]
+
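+For completeness, a sketch of how such a plugin could be packaged; the entry-point group name "ovos.plugin.g2p" is an assumption to verify against the ovos-plugin-manager docs:
+from setuptools import setup
+
+setup(
+    name="my-g2p-plugin",
+    version="0.1.0",
+    py_modules=["my_g2p_plugin"],  # module containing MyARPAG2PPlugin
+    entry_points={
+        "ovos.plugin.g2p": [
+            "my-arpa-g2p = my_g2p_plugin:MyARPAG2PPlugin"
+        ]
+    }
+)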
+
+
Each image has its own first boot process.
+When you first boot the Buildroot image, you will be greeted with an OVOS splash screen as shown below.
+ +As this is the first time you have booted your device, it might take a bit longer than normal as the system is preparing its local filesystem and extending it over the full size of the sdcard/USB device. +Eventually the progress bar will be filled up indicating the Operating System has been fully booted, after which the ovos-shell animated loading screen will be shown.
+ +Again, since this is your first boot, this might take a bit longer as the ovos-core configuration is populated and skills are being set up.
+The Raspbian image is headless, and therefore you will not see these images. You can still monitor the boot process by attaching a screen and following the boot process from the command line.
+The buildroot image supports setting up the network via two options.
+ +You can also skip this step to configure it later or never ask it again if you want your device to run fully offline. (Bear in mind you need to configure your device to use local STT and TTS engines and obvious asking your device things that require internet will not work. This includes the date and time as a Raspberry Pi does not have a Real Time Clock and therefore does not know this data without being told)
+This is the defult option for the headless images
+Choosing this option will create a temporarily open network - hotspot called "OVOS" to which you can connect from your mobile device. The Raspbian image will give a voice prompt to connect to the hotspot and direct you to a webpage that will allow you to connect your device to WiFi.
+ +On your mobile device go into Settings -> Wi-Fi Settings and the "OVOS" open network will appear in its list.
+ +Connect your device with the OVOS network and a webpage will open to configure the Wi-Fi network on your OpenVoiceOS device. (NOTE: On newer mobile operating systems the captive portal capture has changed and the website will not automatically be opened. If this is the case you can open a browser manually and go to http://start.OpenVoiceOS.com) +The following webpage will be shown;
+ +Select your Wi-Fi network from the list, insert your password and press the "Connect" button.
+ +If everything went fine, you will soon see the green "connected" screen, Buildroot only, on your OpenVoiceOS device. The Raspbian image does NOT have a completed prompt
+Not avaliable on headless images
+Choosing this option allows you to set up your Wi-Fi network straight on your OpenVoiceOS device. If selected a screen with the available networks will be shown on your OpenVoiceOS device.
+ +Select your network from the list and tap / click on it to allow you to insert your password. If you have a touch screen an on-screen keyboard will appear when you tap the password field. If not use a keyboard.
+ +When you have inserted your password, click / tap the connect button and after a short connecting animation, if all went fine you will see the green "connected" screen on your OpenVoiceOS device.
+If you have skipped the Wi-Fi network setup before, or you just want to reconfigure your network. On the homescreen of your OpenVoiceOS device swipe down the top menu and click the "Wi-Fi" icon. This brings you to the same on-device configuration screen.
+ +From here you can select another network or click the configuration icon on the right of connected network for details or to remove it from the configured networks.
+ +This is the suggested method and is default with the headless images
+Only the Buildroot image will have these options no further action is required for the headless images
+Select A Text To Speech (TTS) Engine: A text-to-speech (TTS) system converts normal language text into speech, select an engine from the list
+ +Select A Speech To Text (STT) Engine: A speech-to-text (STT) system converts human speech to text, select an engine from the list
+ +Personal backend is a reverse engineered alternative to selene and requires the backend to be hosted locally.
+OVOS is in early stages, we publish our Raspberry Pi images for download but expect new bugs and new fixes on every release, we are not yet stable!
+These images are development images in alpha stage, bugs and incomplete features are guaranteed.
+You can install OVOS either as an image, container, or manually.
+There are currently two image choices for OVOS. Buildroot, and Raspbian. You can also build images from scratch for both options. (Details in the works)
+Docker containers are avaliable Windows, Macs, and Linux
+In most cases images provide for the easiest install if your hardware is supported. Check the Quick Start Guide for help getting runnin quickly.
+Building your own image can provide for a complete custom build in a package, but is more of an involved process. If you're familiar with Docker, then that option can provide a quick install.
+Guides on building images is located in our technical docs
+ +OVOS provides a couple of different images specificaly for the Raspberry Pi. The most advanced and featureful is the Buildroot image. If you want a GUI, this is currently your best choice.
+Flashing your image to your sdcard or USB drive is not different from flashing any other image. For the non-technical users we advise to use the flashing utility from the Raspberry Pi Foundation which you can find here. +Under "CHOOSE OS" select custom at the very bottom of the list and browse to the downloaded image file. It is not required to unzip / unpack the image as the Raspberry Pi imager software can do that for you on the fly. +Under "CHOOSE STORAGE" select your sdcard or USB device.
+Specific instructions for each image can be found on thier respective Github pages
+If you have a Raspberry Pi 4 we recommend to use a good USB3.1 device. If you have a Raspberry Pi 3, use a proper sdcard. (From fast to slow: USB3.1 - sdcard - USB2)
+ +Prebuilt images come with a default set of skills installed, including, but not limited to the date/time, and weather. Give them a shot.
+Speak these commands and enjoy the spoils
+Hey Mycroft, what time is it?
Hey Mycroft, what is today's date?
Hey Mycroft, what is the weather today?
Hey Mycroft, will it rain today?
While there are several default skills installed, there are many more avaliabe to be used. The link below will show you how to find and install more skills.
+ +But wait, there's more!!
+OVOS ships with a default TTS (Text to Speech) engine which speaks in the origional Alan-Pope
voice that Mycroft used. There are MANY more to choose from. The following link will help you choose and configure a different voice for your assistant.
Your OVOS assistant uses a "wake word" which lets it know it is time to start listening to your commands. By default, the phrase to wake OVOS is Hey Mycroft
. This, like most things in OVOS is totally configurable. Follow the link to learn more.
OVOS ships with default services avaliabe to the public to use. These include public TTS and STT servers, a weather API provided by link to weather provider, access to Wolfram, and more. Part of being an open and private system, you can also change these to your own prefrences.
+OVOS Core supports a GUI Extension framework which allows the GUI service to incorporate additional behaviour for a specific platform. GUI Extensions currently supported:
+This extension is responsible for managing the smartspeaker GUI interface behaviour; it supports homescreens and homescreen management. Enabling the smartspeaker GUI extension:
+"gui": {
+ "extension": "smartspeaker",
+ "idle_display_skill": "skill-ovos-homescreen.openvoiceos"
+}
+
+This extension is responsible for managing the Plasma Bigscreen GUI interface behaviour; it supports window management and window behaviour control on specific window managers like KWin. Enabling the Bigscreen GUI extension:
+"gui": {
+ "extension": "bigscreen"
+}
+
+This extension is responsible for managing the mobile GUI interface behaviour; it supports homescreens and additionally adds support for global page back navigation. Enabling the Mobile GUI extension:
+"gui": {
+ "extension": "mobile",
+ "idle_display_skill": "skill-android-homescreen.openvoiceos",
+}
+
+This extension provides a generic GUI interface and does not add any additional behaviour; it optionally supports homescreens if the platform or user manually enables it. This extension is enabled by default when no other extension is specified.
+"gui": {
+ "idle_display_skill": "skill-ovos-homescreen.openvoiceos",
+ "extension": "generic",
+ "generic": {
+ "homescreen_supported": false
+ }
+}
+
+
+ OVOS devices with displays provide skill developers the opportunity to create skills that can be empowered by both voice and screen interaction.
+The display interaction technology is based on the QML user interface markup language, which gives you complete freedom to create in-depth, innovative interactions without boundaries, or provides you with simple templates within the Mycroft GUI framework that allow a minimalistic display of text and images based on your skill development specifics and preferences.
+Mycroft-GUI is an open source visual and display framework for Mycroft, running on top of KDE Plasma Technology and built using Kirigami, a lightweight user interface framework for convergent applications, empowered by Qt.
+QML, the Qt user interface markup language, is a declarative language built on top of Qt's existing strengths, designed to describe the user interface of a program: both what it looks like, and how it behaves. QML provides modules that consist of a sophisticated set of graphical and behavioral building elements.
+A collection of resources to familiarize you with QML and Kirigami Framework.
+ +The gui service in ovos-core will expose a websocket to the GUI clients following the protocol outlined here.
+The transport protocol works between the gui service and the gui clients; mycroft does not use the protocol directly, but instead communicates with the gui service via the standard mycroft bus.
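+For orientation, here is a minimal sketch of how a process on the device itself (for example a skill or system service) talks over the mycroft bus, assuming the mycroft-messagebus-client package; the gui service watches this same bus and relays relevant state to GUI clients over its own websocket.
+from mycroft_bus_client import MessageBusClient, Message
+
+# Connect to the local mycroft bus (internal and private; never expose it)
+bus = MessageBusClient()
+bus.run_in_thread()
+
+# Anything emitted here can be seen by the gui service, which translates
+# relevant state into the gui protocol for connected display clients
+bus.emit(Message('speak', {'utterance': 'Hello from the bus'}))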
+OVOS images are powered by ovos-shell, the client-side implementation of the gui protocol.
+The GUI library which implements the protocol lives in the mycroft-gui repository.
+ +OVOS uses the standard mycroft-gui framework; you can find the official documentation here.
+The GUI service provides a websocket for gui clients to connect to; it is responsible for implementing the gui protocol under ovos-core.
+You can find in-depth documentation in the dedicated GUI section of these docs.
+The gui service has a few sections in mycroft.conf
"gui": {
+ "idle_display_skill": "skill-ovos-homescreen.openvoiceos",
+ "extension": "generic",
+ "generic": {
+ "homescreen_supported": false
+ }
+},
+
+"gui_websocket": {
+ "host": "0.0.0.0",
+ "base_port": 18181,
+ "route": "/gui",
+ "ssl": false
+},
+
+
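+As a rough illustration of how these values are consumed, a Python process in a mycroft-core environment can read the merged configuration like this (a small sketch, assuming the standard Configuration loader):
+from mycroft.configuration import Configuration
+
+config = Configuration.get()  # merged view of all mycroft.conf files
+gui_ws = config.get("gui_websocket", {})
+
+# These match the defaults shown above
+print(gui_ws.get("base_port", 18181))
+print(gui_ws.get("route", "/gui"))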
+ Through these guidelines you will learn how to use principles of Voice User Interface Design to build more effective +skills. These tools will help define and validate the features of the skill before diving deep into development.
+This guide will cover some methods that can help you plan, prototype, and test your skill during the early design stages.
+The full original guide can be found over at the mycroft documentation
+Let's start with an example. A user in Melbourne, Australia might want to know about the weather. To ask for this +information, they might say:
+++"Hey Mycroft, what's today's weather like?"
+"Hey Mycroft, what's the weather like in Melbourne?"
+"Hey Mycroft, weather"
+
Even though these are three different expressions, for most of us they probably have roughly the same meaning. In each +case we would assume the user expects OVOS to respond with today's weather for their current location.
+It is up to us as Skill creators to teach OVOS the variety of ways that a user might express the same intent. This is a key part of the design process. It is the key difference between a Skill that kind of works if you know what to say, and a Skill that feels intuitive and natural to talk to.
+This is handled by an intent parser whose job it is to learn from your Skill what intents it can handle, and to extract from the user's speech any key information that might be useful for your Skill. In this case it might include the specified date and location.
+You can think of Prompts as questions and Statements as providing information to the user that does not need a +follow-up response. For example a weather forecast like this would be considered a statement:
+++Today’s forecast is sunny with a high of 60 and a low of 45.
+
For a lot of skills the conversation might end with a simple statement from OVOS, and no further action is necessary. Try to imagine what the user is trying to accomplish; if a simple statement gets the job done, there is no reason to keep the conversation rolling, and in fact a follow-up might annoy the user with unnecessary interaction.
+It may be tempting to always give users specific instructions like traditional automated phone systems (Interactive Voice Response). Many phone systems are notorious for being too verbose and difficult to follow.
+With OVOS we’re trying to break that mold and make the interaction natural. If you follow the phone system method you +may be giving the user the exact phrasing to say, but you’re also taking up valuable time and training them to think the +system is very narrow in capability. In the event that the user does give a response that your skill can not handle, +create follow-up prompts that steer the user back on track.
+Remember, there are no errors in a cooperative conversation.
+Avoid
+Speaker | ++ |
---|---|
Mycroft | +How many people are playing? For example, you can say 2 players. | +
Better
+Speaker | ++ |
---|---|
Mycroft | +How many players? | +
User | +My Brother and Myself | +
Mycroft | +I’m sorry, what was the number of players? | +
User | +Two | +
+In the first example the user is told explicitly what they can say, but the prompt is unnatural; we don't typically suggest responses to a conversation partner in real life. These long-winded prompts can become tiresome if they are used repeatedly throughout the skill. Remember the phone system example: typically, poorly designed automated phone systems inundate the user with many options and additional instructions at every step of the interaction.
+In the second example we see a better prompt. Although the user gives a response that is easy for a human to understand, it is more difficult to design a skill to understand it. Instead, the skill designer can apply a re-prompt strategy that steers the user back on track and doesn't require them to start the interaction over. In this case the re-prompt changes the wording slightly to make it clear that the user needs to say a number. The next time the user interacts with this Skill, they will likely say a number in their first interaction.
+Determining whether to respond to the user with a statement or a prompt can be a bit tricky. When the user is +somewhat vague it is tempting to assume the user’s intent in order to speed along the interaction. Sometimes this is the +right approach and sometimes it is not. If you are very confident in the user’s input then it is alright to assume, and +respond with a statement. For example in the Wikipedia Skill the Wikipedia API is used to confidently select the best response.
+Wikipedia Skill
+Speaker | ++ |
---|---|
User | +Tell me about Abraham Lincoln | +
Mycroft | +Abraham Lincoln was an American statesman and lawyer who served as the 16th president of the UnitedStates from 1861 to 1865. Lincoln led the nation through its greatest moral, constitutional, and political crisis in the American Civil War. | +
+In contrast, let's look at an example of where a follow-up prompt is a better approach. You're working on a skill that gives important information about birds, such as wingspan, laden and unladen airspeed, etc.
+Avoid
+Speaker | +**** | +
---|---|
User | +what is the airspeed velocity of an unladen swallow? | +
Mycroft | +The unladen airspeed of the european swallow is 20.1 miles per hour. | +
Better
+Speaker | ++ |
---|---|
User | +What is the airspeed velocity of an unladen Swallow? | +
Mycroft | +What do you mean, African or European Swallow? | +
User | +European Swallow. | +
Mycroft | +The unladen airspeed of the European Swallow is 20.1 miles per hour. | +
+In the first example Mycroft assumes the user is referring to the European Swallow; however, there is nothing to give confidence to that assumption. In this case it's dangerous to assume the User meant the European Swallow, since it is just as likely they meant the African Swallow. When there is no confidence in one answer versus another, OVOS should follow up with a prompt, as demonstrated in the second example.
+Another approach to solving this problem is to offer the most likely answer and inform the user that there are other +answers available with a statement.
+Confirmation approaches can also be defined by Statements or Prompts, but when we talk about them in the context of confirmations we call them Implicit and Explicit.
+This type of confirmation is also a statement. The idea is to parrot the information back to the user to confirm that it +was correct, but not require additional input from the user. The implicit confirmation can be used in a majority of +situations. For example Mycroft’s Timer Skill uses implicit confirmation when creating new timers.
+Timer Skill
+Speaker | ++ |
---|---|
User | +Hey Mycroft set a timer for 5 minutes | +
Mycroft | +Alright, I’m starting a 5 minute timer | +
Avoid
+Speaker | ++ |
---|---|
User | +Hey Mycroft, add bananas to my shopping list | +
Mycroft | +done | +
Better
+Speaker | ++ |
---|---|
User | +Hey Mycroft, add bananas to my shopping list | +
Mycroft | +Adding bananas | +
This type of confirmation requires an input from the user to verify everything is correct. Only use explicit +confirmation when needed as it can be tedious for the user to always confirm their input. This strategy is best when +there are multiple steps, and it would be difficult to fix a misstep without starting over, or when deleting or +modifying an existing multistep input.
+Avoid
+Speaker | ++ |
---|---|
User | +Hey Mycroft, set an alarm for 9:00 am tomorrow | +
Mycroft | +Ok, I’ve set an alarm for 9:00 am tomorrow, is that correct? | +
User | +Yes | +
Mycroft | +9:00 am alarm set | +
Better
+Speaker | ++ |
---|---|
User | +Hey Mycroft, set an alarm for 9:00 am tomorrow | +
Mycroft | +I've set an alarm for tomorrow at 9:00 am, 16 hours from now | +
The first example requires the user to confirm the alarm. Since the interaction is quite short it isn’t necessary to use +explicit confirmation.
+Avoid
+Speaker | ++ |
---|---|
User | +Hey Mycroft, set a calendar event for a meeting with Chris Stevens | +
Mycroft | +When is the event? | +
User | +Tomorrow at 11:00 am | +
Mycroft | +should I send an invite to Chris Stevens? | +
User | +Yes | +
Mycroft | +Alright, I've created a meeting for Tomorrow | +
Mycroft | +Alright, I’ve created a meeting for Tomorrow at 11:00 am with Chris Stevens | +
User | +Cancel my 11:00 am meeting tomorrow | +
Mycroft | +Done | +
Better
+Speaker | ++ |
---|---|
User | +Hey Mycroft, set a calendar event for a meeting with Chris Stevens | +
Mycroft | +When is the event? | +
User | +Tomorrow at 11:00 am | +
Mycroft | +should I send an invite to Chris Stevens? | +
User | +Yes | +
Mycroft | +Alright, I’ve created a meeting for Tomorrow at 11:00 am with Chris Stevens | +
User | +Cancel my 11:00 am meeting tomorrow | +
Mycroft | +Are you sure you want to delete your meeting tomorrow at 11:00 am? | +
User | +Yes | +
Both examples use explicit and implicit confirmation correctly until the end of the interaction when the user wants to +cancel the event. Unlike setting a simple timer or an alarm creating this calendar event requires multiple steps and +would require reasonable effort to recreate. There may also be significant consequences if the request was +misinterpreted and the meeting was cancelled unintentionally.
+Any time the user needs to input a lot of information or the user needs to sort through a variety of options a conversation will be needed. +Users may be used to systems that require them to separate input into different chunks. +For example, many personal navigation devices (GPS units) require users to input the street number, street name, city, and state separately, like this:
+Speaker | ++ |
---|---|
Navigation Device | +What is the street number | +
User | +123 | +
Navigation Device | +What is the street name | +
User | +Main Street | +
Navigation Device | +What is the city | +
User | +Springfield | +
Navigation Device | +Oregon | +
+This was a limitation of the technology; neither the STT nor the intent parser was capable of handling an entire address. With OVOS you do not need to separate the address into multiple inputs. It's also easy to think like a traditional GUI and translate what are typically separate fields into prompts for the user, but that will only slow down the interaction.
+The best solution is to allow the user to input information both ways, all at once, or in pieces. +For example the Alarm skill allows the user to set a complicated alarm all in one go, or in steps.
+In Steps:
+Speaker | ++ |
---|---|
User | +Hey Mycroft, set a recurring alarm | +
Mycroft | +For what day? | +
User | +weekdays | +
Mycroft | +For what time? | +
User | +7 am | +
Mycroft | +I've set a recurring Alarm for 7 am on weekdays | +
All at once:
+Speaker | ++ |
---|---|
User | +Hey Mycroft, set a recurring alarm for weekdays at 7 am | +
Mycroft | +I've set a recurring Alarm for 7 am on weekdays | +
+Conversational context is a given when speaking to another person, but not always when interacting with a computer-based system. Many skills are designed with only a quick back-and-forth interaction in mind, and can operate well without conversational context, but conversational context can be a powerful addition to a skill and can make the interaction much more lifelike. Here are some examples of how to use context in conversation.
+Speaker | ++ |
---|---|
User | +Who is Shaquille O'Neal | +
Mycroft | +Shaquille O'Neal is an American former professional basketball player who is a sports analyst on the television program Inside the NBA on TNT. He is considered one of the greatest players in National Basketball Association history | +
User | +How tall is he? | +
Mycroft | +seven foot 1 inches tall | +
User | +when did he retire | +
Mycroft | +2011 | +
+Inevitably, the user will say something that your skill can't handle. It's best not to think of these as errors on the part of the user; remember, there aren't really errors in conversations. Conversations are naturally cooperative, with both parties seeking to continue to a successful outcome. Keeping that in mind, you should do your best to avoid Error Handling that is a dead-end.
+Avoid
+Speaker | ++ |
---|---|
User | +Timer | +
Mycroft | +I'm sorry I can't help you with that | +
Better
+Speaker | ++ |
---|---|
User | +Timer | +
Mycroft | +A timer for how long? | +
User | +5 minutes | +
In the first example OVOS does not give the user any options to finish the job they set out to do. Try to avoid situations where the user has to start the conversation over.
+Most of our guides have you create a user called ovos with a password of ovos. While this makes the install easy, it's VERY insecure. As soon as possible, you should secure SSH using a key and disable password authentication.
+Create a keyfile (you can change ovos to whatever you want)
+ssh-keygen -t ed25519 -f ~/.ssh/ovos
+
+Copy to host (use the same filename as above, specify the user and hostname you are using)
+ssh-copy-id -i ~/.ssh/ovos ovos@mycroft
+
+On your desktop, edit ~/.ssh/config and add the following lines
+Host mycroft
+    User ovos
+    IdentityFile ~/.ssh/ovos
+
+On your ovos system, edit /etc/ssh/sshd_config and add or uncomment the following line:
+PasswordAuthentication no
+
+Restart sshd or reboot
+sudo systemctl restart sshd
+
+
+ NOTE This repo has been archived and will no longer be updated
+Make a Manjaro-based OpenVoiceOS image
+source code: https://github.com/OpenVoiceOS/ovos-image-arch-recipe/
+The included Dockerfile can be used to build a default image in a Docker environment.
+The following dependencies must be installed on the build system before running the +container:
+ +First, create the Docker container:
+docker build . -t ovos-image-builder
+
+Then, run the container to create an OVOS image. Set CORE_REF to the branch of ovos-core that you want to build and RECIPE_REF to the branch of ovos-image-recipe you want to use. Set MAKE_THREADS to the number of threads to use for make processes.
docker run \
+-v /home/${USER}/output:/output:rw \
+-v /run/systemd/resolve:/run/systemd/resolve \
+-e CORE_REF=${CORE_REF:-dev} \
+-e RECIPE_REF=${RECIPE_REF:-master} \
+-e MAKE_THREADS=${MAKE_THREADS:-4} \
+--privileged \
+--network=host \
+--name=ovos-image-builder \
+ovos-image-builder
+
+The entire build process will generally take several hours; it takes 1-2 hours +on a build server with 2x Xeon Gold 5118 CPUs (48T Total).
+The scripts in the automation directory are available to help automate building a default image.
+For building an image interactively:
bash automation/prepare.sh
+bash /tmp/run_scripts.sh
+
+The below documentation describes how to manually build an image using the individual scripts in this repository.
+The scripts and overlay files in this repository are designed to be applied to a base image as the root user. It is recommended to apply these scripts to a clean base image.
+Instructions are available at opensource.com.
++Note: The GUI shell is not installable under some base images
+
For each step except boot_overlay, the directory corresponding to the step should be copied to the mounted image and the script run from a terminal chroot-ed into the image. If running scripts from a booted image, they should be run as root.
From the host system where this repository is cloned, running prepare.sh <base_image> will copy boot overlay files, mount the image, mount DNS resolver config from the host system, copy all other image overlay files to /tmp, and chroot into the image. From here, you can run any/all of the following scripts to prepare the image before cleaning up.
Configures user accounts and base functionality for RPi. The ovos user is created with proper permissions here.
At this stage, a booted image should resize its file system to fill the drive it is flashed to. Local login and ssh connections should use ovos/ovos to authenticate, and will be prompted to change the password on login.
Adds Balena wifi-connect to enable a portal for connecting the Pi device to a Wi-Fi network.
+A booted image will now be ready to connect to a network via the SSID OVOS.
+For SJ201 board support, the included script will build/install drivers, add required overlays, install required system packages, and add a systemd service to flash the SJ201 chip on boot. This will modify pulseaudio and potentially overwrite any previous settings.
+Note: Running this script grants GPIO permissions to the gpio group. Any user that interfaces with the SJ201 board should be a member of the gpio group. Group permissions are not modified by this script.
Audio devices should now show up with pactl list.
+Audio devices can be tested in the image by recording a short audio clip and playing it back.
parecord test.wav
+paplay test.wav
+
+Installs ovos-shell and mycroft-gui-app. Adds and enables ovos-gui.service to start the shell +on system boot.
+The image should now boot to the GUI shell.
+Installs ovos-core and dependencies. Configures services for core modules.
At this stage, the image is complete and when booted should start OVOS.
+Installs the OVOS Dashboard and a service to start the dashboard from the GUI.
+From the GUI Settings -> Developer Settings menu, Enable Dashboard will now start the dashboard for remote access to device diagnostics.
Installs libcamera and other dependencies for using a CSI camera. The default camera skill can be used to take a photo; libcamera-apps are also installed for testing via CLI.
Enables a custom splash screen and disables on-device TTY at boot.
+On boot, a static image should be shown until the GUI Shell starts.
cleanup.sh removes any temporary files from the mounted image before unmounting it. After running cleanup.sh, the image is ready to burn to a drive and boot.
Work in Progress
+ +OVOS provides a couple of prebuilt images to use with a Raspberry Pi
+This is the most advanced image that OVOS provides and is intended to be used as a complete system with a GUI. This image has auto-detection of a large amount of hardware, including but not limited to respeaker microphones, and the sj201 sound board that is used by the Mark2.
+Buildroot images are available for download here. Decompress this file and continue to the next section, Burning the image to an SD card or USB drive.
+This is a new image created for headless devices. It is optimized to run on less powerful hardware, such as the RPi3, which does not have the power to run a GUI. This image is still in heavy development, so expect some bugs while using it.
+Raspbian-ovos images are available for download here. Unzip this file and continue to the next section, Burning the image to an SD card or USB drive.
+There are a few ways to burn your image to a drive to use in a Raspberry Pi. Both methods described below will work for either OVOS image that you would like to try.
+This method can be used with a Linux or Windows host machine.
+The people at Raspberry Pi provide a great little program made for burning an image to a device. You can get it here. The team at OVOS has tried to make the setup as simple as possible, and if you follow the steps below, you should have no problems on your first boot.
+From Linux, we have had success starting the imager with the command sudo raspi-imager.
+Be careful with the dd command; you can easily render your computer useless if the command is entered wrong.
+Find the name of your drive with the lsblk command. It will be something like sdb. If you are unsure which drive it is, remove it and run the command again; the one that is missing is the drive you want to use. Then write the image with the dd command.
+sudo dd if=<path_to_unzipped_image> of=<path_to_boot_medium> bs=4M status=progress
sudo sync
With either method used, you should now have a bootable disk to use with your Raspberry PI
+ +This guide describes two ways to create a headless OVOS system suitable for running on a Raspberry Pi 3 or 4. You can either download and burn a prebuilt image to an installation medium like an SD card, or you can use your own installation of the Raspberry PI OS and run an OVOS install script.
+The RPi3 does not have the processing power to reliably run ovos-shell, the GUI system for OVOS, but has plenty to run the rest of the stack.
+By the end of the guide, you should have a running OVOS stack (messagebus, phal, skills, listener, and audio) along with a "lite" version of Raspberry Pi OS, which means you also have a package manager (apt) available to you.
+OVOS source files used by this guide can be found at raspbian-ovos. Any issues or pull requests should be made in this repository.
+Raspberry Pi Imager is available here. There have been issues when using Raspberry Pi Imager to burn pre-built images. From Linux, we have had success starting the imager with the command sudo raspi-imager.
.
Download a pre-built OVOS/PI image from our raspbian-ovos download site.
+Here are two methods to install your OVOS/PI image file onto your SD card.
+Upon completion, you should have a bootable SD card or USB drive.
+Be careful with the dd command, you can easily render your computer useless
+unzip <path-to-zipped-image>
Find the location of your drive with the lsusb command. The drive will be something like sdxx.
sudo dd if=<path-to-unzipped-image> of=<path-to-sd-card> bs=4M status=progress
Upon completion, you should have a bootable SD card or USB drive.
+Insert the SD card, hook up your audio, and turn on your OVOS Pi.
+This image comes with a predefined user ovos with password ovos. It is recommended that you change your password on first login.
sudo passwd ovos
Enter your new password twice.
+On first boot, you will be voice-prompted to connect to SSID OVOS
and go to the website start.openvoiceos.com
. This is not the official OVOS website but a local hotspot that the image has created on your Raspberry Pi.
Then from a computer that supports wireless networking, connect to the SSID 'OVOS' and go to the website 'start.openvoiceos.com'. There you can enter the credentials of your WiFi network. If your sound isn't working, no worries, you can keep scanning your computer's list of nearby SSIDs until you see OVOS, and then connect to the network without hearing the verbal prompt.
+This image on a RPi3B takes several minutes to boot before you hear the audio prompt, and several more minutes to finish booting. If you don't think it's working, please wait up to 3 minutes each time before thinking something went wrong. You can also follow progress in the OVOS log files found in ~/.local/state/mycroft/*.log.
+If for some reason this method does not work, sudo raspi-config and nmtui are also available.
There are lots of guides, but this one is the official guide
+Our experience with Linux is to invoke the raspi-imager with sudo raspi-imager.
From here you must choose one of two methods:
+Here we use the Raspberry Pi Imager to write your media without selecting any Advanced Options.
+Create a user ovos with a password of your choosing.
to enter the Pi setup utility.
We will be running the OVOS environment as a regular user, and we want OVOS to start at boot time, so we want to autologin.
+Enter the System Options
page.
Enter the Boot / Autologin
page.
+- Use the second option in the menu, Console Autologin
.
+ - This enables OVOS to start up automatically at boot time.
+Now we will enable a few interface options. This will allow us to access our device from an ssh shell and prep the Pi for other devices that may be used. Some microphone hats require SPI or I2C (Respeaker, AIY-Voicebonnet, etc.).
+Go back to the main menu and enter the Interface Options page.
+- Enable SSH, SPI, and I2C.
+- After SSH is enabled, the rest of the guide can be done from a remote computer.
+Go back to the main menu and enter the Localisation Options page.
+- Configure Locale, Timezone, and WLAN Country.
You will need an internet connection to complete the rest of the guide
+** Optional: Set up WiFi **
+- Enter System Options again.
+- Enter the Wireless LAN section and follow the prompts.
+- Below Wireless Lan is Hostname. Choose a name for your OVOS device and enter it there.
+- When you are done, exit the raspi-config tool and find your IP address. The command is the same whether you used the WiFi setup or have a LAN connected: ip addr.
+In the output, if things were configured correctly, there will be one or more relevant lines. Find the device that you used to connect: WiFi will start with something like wlan, and a LAN connection should begin with eth or enp or something similar. In the device section there is an inet entry; the number located there is your local IP address. It should be in the format 192.168.x.xxx or something similar. Write this down, or remember it. You will be using it to log in with an SSH shell.
+sudo reboot now
Here we use the Raspberry Pi Imager to write your media and let the Imager handle your network and SSH setup.
+If your network cannot locate computers by their hostnames, this method will not easily work for you. In other words, if you cannot ping a network connection with a host name, and you need to use an IP address to ping other network computers, use Method 1 described above. If you are comfortable looking up the OVOS computer's IP address using your router or other network software, Method 2 will still work.
+Instead of selecting "Write", click on the cog in the lower right of the Imager panel to open Raspberry Pi Imager advanced options.
+In this new panel, check the boxes for:
+Click "Save", then click "Write". Once writing is complete, move the SD card to your OVOS device and boot up.
+After logging in as ovos, run the command sudo raspi-config
to enter the Pi setup utility.
We will be running the OVOS environment as a regular user, and we want OVOS to start at boot time, so we want to autologin.
+Enter the System Options
page.
Enter the Boot / Autologin
page.
Console Autologin
.Now we will enable a few interface options. This will allow us to access our device from a ssh shell
and prep the PI for other devices that may be used. Some microphone hats require SPI, or I2C (Respeaker, AIY-Voicebonnet, etc).
Go back to the main menu and enter the Interface Options
page.
Go back to the main menu and enter the Localisation Options
page.
Now the device setup is done. Exit raspi-config and reboot.
+sudo reboot now
*** From this point on, you should be able to access your device from any SSH terminal. ***
+For guide for how to do this, see raspberrypi documentation remote-access
+From a linux machine, open a terminal and enter the command ssh ovos@<your-remembered-IP-address>
or ssh ovos@<your-hostname>
. There will be a warning making sure you want to connect to this device. Enter yes, and when asked, enter the password for ovos that you made earlier in the setup.
+ovos
As a final configuration step, make sure your system is up to date.
+sudo apt -y update && sudo apt -y upgrade
We should be done with the basic setup now. You should have a running RaspberryPiOS device with the user ovos
There are some recommendations to use a venv for OVOS. This guide DOES NOT do that. The OVOS headless stack on a RPi3 is about all it can handle. It is assumed that this is a dedicated OVOS device, therefore no venv is required.
+We will be cloning code from a git repository, so before starting we need to install git.
+sudo apt install git
We will also be installing everything to the user environment instead of system wide. As ovos is the only user, this should be fine.
+Although not strictly necessary, we assume that we're starting in the ovos home directory.
+cd ~
Clone the repository
+git clone https://github.com/OpenVoiceOS/raspbian-ovos.git
cd raspbian-ovos
Run the install script and follow the prompts. It's fine to say yes "Y" to everything.
+./manual_user_install.sh
You should now have a running OVOS device!!
+Check your installation with
+systemctl --user status ovos-*
+The full OVOS stack can take a few minutes to load (especially on a Pi 3), but the processes should all eventually say active (running), except for ovos.service, which should say active (exited).
You can also track progress by watching the files in ~/.local/state/mycroft/*.log. Once things slow down you can try saying "Hey Mycroft". In a few seconds (the first time is slow) you should hear a 'ding' from the system. Then say "What day is it". After a delay you should hear information about today's date.
+Often the audio can take some tuning, which in general is not covered here. Pulseaudio should be running; check with systemctl --user status pulseaudio. Each piece of hardware is different to set up, and there is likely a guide somewhere for your hardware. One thing to mention: this is a full Raspbian install, so installing drivers should work also.
Once the OVOS processes are running, if you don't hear a 'ding' after two or three times saying "Hey Mycroft", start up alsamixer and make sure your microphone is recognized and the volume is turned up. At least one USB microphone (mine) defaults to "Auto Gain Control" which needs to be turned off and replaced by turning up the microphone volume. You may also need to turn up the speaker volume.
+This installation of ovos-core only has a few default skills shipped with it. Check this page for more information on skills.
+Please enter suggestions and support requests on our raspbian-ovos github page. Thank you!
+ +There are a few ways to install skills in OVOS. The preferred way is with pip and a setup.py file.
+Most skills are found throughout GitHub. The official skills can be found with a simple search of the OVOS GitHub page. There are a few other places they can be found: Neon AI has several skills, and a search through GitHub will surely find more.
+The preferred method is with pip. If a skill has a setup.py file, it can be installed this way. The syntax is pip install git+<github/repository.git>.
ex. pip install git+https://github.com/OpenVoiceOS/skill-ovos-date-time.git should install the ovos-date-time skill.
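+For reference, the setup.py of a pip-installable skill generally looks something like the sketch below; every name in it is an illustrative placeholder, and the entry point for a real skill should be copied from its own repository.
+from setuptools import setup
+
+setup(
+    name='skill-my-example',
+    version='0.1.0',
+    packages=['skill_my_example'],
+    include_package_data=True,
+    # The entry point is how the skills service discovers installed skills;
+    # 'ovos.plugin.skill' is the group OVOS scans for skill plugins.
+    entry_points={
+        'ovos.plugin.skill':
+            'skill-my-example.myauthor=skill_my_example:MyExampleSkill'
+    }
+)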
They can be installed locally also.
+Clone the repository
+git clone https://github.com/OpenVoiceOS/skill-ovos-date-time
pip install ./skill-ovos-date-time
+After installing skills this way, the ovos-skills service needs to be restarted
+systemctl --user restart ovos-skills
+This is NOT the preferred method and is here for backward compatibility with the original mycroft-core skills.
Skills can also be directly cloned to the skill directory, usually located at ~/.local/share/mycroft/skills/
Enter the skill directory
+cd ~/.local/share/mycroft/skills
and clone the found skill here with git
+git clone <github/repository.git>
ex. git clone https://github.com/OpenVoiceOS/skill-ovos-date-time.git
will install the ovos-date-time skill.
A restart of the ovos-skills service is not required when installing this way.
+ +The OVOS skills manager is in need of some love, and when official skill stores are created, this will be updated to use the new methods. Until then, this method is NOT recommended and NOT supported. The following is included just as reference.
+Install skills from any appstore!
+A mycroft-skills-manager alternative that is not vendor-locked; this means you must use it responsibly!
+Do not install random skills; different appstores have different policies!
+Keep in mind that any skill you install can modify mycroft-core at runtime, and very likely has root access if you are running on a Raspberry Pi.
+pip install ovos-skills-manager
+
+Enable a skill store
+osm enable --appstore [ovos|mycroft|pling|andlo|all]
+
+Search for a skill and install it
+osm install --search
+
+See more osm commands
+osm --help
+osm install --help
+
+
+
+ A user can accomplish the same task by expressing their intent in multiple ways. The role of the intent parser is to +extract from the user's speech key data elements that specify their intent in more detail. This data can then be passed +to other services, such as Skills to help the user accomplish their intended task.
+Example: Julie wants to know about today's weather in her current location, which is Melbourne, Australia.
+++"hey mycroft, what's today's weather like?"
+"hey mycroft, what's the weather like in Melbourne?"
+"hey mycroft, weather"
+
Even though these are three different expressions, for most of us they probably have roughly the same meaning. In each +case we would assume the user expects OVOS to respond with today's weather for their current location. The role of an +intent parser is to determine what this intent is.
+In the example above, we might extract data elements like:
+OVOS has two separate Intent parsing engines, each with their own strengths. Each of these can be used in most situations; however, they will process the utterance in different ways.
+Example-based intents are trained on whole phrases. These intents are generally more accurate; however, they require you to include sample phrases that cover the breadth of ways that a User may ask about something.
+**Keyword / Rule based** intents look for specific required keywords. They are more flexible, but since they are essentially rule-based, this can result in a lot of false matches; a badly designed intent may totally throw the intent parser off guard. The main advantage of keyword-based intents is the integration with conversational context; they facilitate continuous dialogs.
+OVOS is moving towards a plugin system for intent engines; currently only the default MycroftAI intent parsers are supported.
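+As a rough sketch, this is what the two styles look like inside a skill; the intent and vocabulary file names are hypothetical. The first handler uses an example-based (Padatious) intent file, the second a keyword-based (Adapt) IntentBuilder rule.
+from adapt.intent import IntentBuilder
+from mycroft import MycroftSkill, intent_handler
+
+
+class WeatherExampleSkill(MycroftSkill):
+    # Example based: trained on the sample phrases listed in
+    # 'what.is.the.weather.intent'
+    @intent_handler('what.is.the.weather.intent')
+    def handle_weather_phrase(self, message):
+        self.speak_dialog('current.weather')
+
+    # Keyword / rule based: matches any utterance containing the
+    # required vocab, optionally capturing a location entity
+    @intent_handler(IntentBuilder('WeatherKeywordIntent')
+                    .require('WeatherKeyword').optionally('Location'))
+    def handle_weather_keyword(self, message):
+        self.speak_dialog('current.weather',
+                          {'location': message.data.get('Location')})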
+ + +These plugins can be used to detect the language of text and to translate it
+They are not used internally by ovos-core but are integrated with external tools
+neon-core also makes heavy use of OPM language plugins
Plugin | Detect | Translate | Offline | Type |
---|---|---|---|---|
neon-lang-plugin-cld2 | +yes | +no | +yes | +FOSS | +
neon-lang-plugin-cld3 | +yes | +no | +yes | +FOSS | +
neon-lang-plugin-langdetect | +yes | +no | +yes | +FOSS | +
neon-lang-plugin-fastlang | +yes | +no | +yes | +FOSS | +
neon-lang-plugin-lingua_podre | +yes | +no | +yes | +FOSS | +
neon-lang-plugin-libretranslate | +yes | +yes | +no | +API (self hosted) | +
neon-lang-plugin-apertium | +no | +yes | +no | +API (self hosted) | +
neon-lang-plugin-amazon_translate | +yes | +yes | +no | +API (key) | +
neon-lang-plugin-google_translate | +yes | +yes | +no | +API (key) | +
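+A rough usage sketch, assuming the plugin factories exposed by ovos-plugin-manager (OPM); the module names come from the table above, and the chosen plugins must be installed for this to run.
+from ovos_plugin_manager.language import (OVOSLangDetectionFactory,
+                                          OVOSLangTranslationFactory)
+
+config = {"language": {
+    "detection_module": "neon-lang-plugin-cld3",
+    "translation_module": "neon-lang-plugin-libretranslate"
+}}
+
+# Load whichever detection/translation plugins the config names
+detector = OVOSLangDetectionFactory.create(config)
+translator = OVOSLangTranslationFactory.create(config)
+
+print(detector.detect("bonjour tout le monde"))            # e.g. "fr"
+print(translator.translate("bonjour tout le monde", "en"))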
Open Linguistika is a tool to allow Mycroft Skill developers working on GUI’s to easily translate their GUI’s to other languages.
+Mycroft's GUI, the UI interface for which you can find QML files under the UI directory of skills, is based on Qt. Mycroft GUI uses Qt's translation mechanism to translate GUI's to other languages.
+Getting your skill's GUI translated and ready for other languages involves several manual steps: running Qt tools like lupdate against each QML UI file for each translatable language, then running Qt's lrelease tool for specific language targets to compile a language for the Qt environment to understand. To make your developer experience smarter and easier, the OpenVoiceOS team is introducing an all-in-one toolkit for GUI language translations.
+The Open Linguistika toolkit allows developers to use auto-translate from various supported translator providers, and additionally support more languages, with the possibility for manual translations without having to go through the different Qt tools and command chain required to manually support a skill GUI for a different language.
+As a GUI skill Developer, the only know-how you need is to add the translation calls to your skill QML files. Developers can get more information about how to add them here: Internationalization and Localization with Qt Quick | Qt 6.3.
+The “TLDR” version is that for every hard-coded string in your QML UI skill file you need to decorate your strings with the qsTr() decorator and your model list elements with the QT_TR_NOOP() decorator.
+Open Linguistika, when installed on your distribution of choice, currently supports 6 European languages and 2 auto-translation providers.
+The tool provides extensibility through its JSON configuration interface to add more language support, where using a simple JSON language addition mechanism you can extend the tool to support a number of additional languages you would like to support for your skills UI. You can read more about adding additional languages on the tool’s GitHub repository.
For playing music (and video, as discussed in the next chapter), OpenVoiceOS uses OCP (OpenVoiceOS Common Play), which is basically a full-fledged multimedia player on its own, designed around open standards like MPRIS and with the vision of being fully integrated within the OpenVoiceOS software stack.
+Skills designed for OCP provide search results for OCP (think of them as media providers/catalogs/scrapers); OCP will play the best search result for you. OpenVoiceOS comes with a few OCP skills pre-installed; however, more can be installed just like any other OVOS skill.
+You can find more OCP skills in the awesome-ocp-skills list
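+To give a feel for what such a skill looks like, here is a rough sketch of an OCP media provider, assuming the OVOSCommonPlaybackSkill base class and ocp_search decorator from ovos-workshop; the station name and stream URI are placeholders.
+from ovos_plugin_common_play.ocp import MediaType, PlaybackType
+from ovos_workshop.skills.common_play import OVOSCommonPlaybackSkill, \
+    ocp_search
+
+
+class MyRadioSkill(OVOSCommonPlaybackSkill):
+    @ocp_search()
+    def search_my_radio(self, phrase, media_type):
+        # Yield candidate results; OCP compares the confidence of results
+        # returned by all installed OCP skills and plays the best one
+        if 'my radio' in phrase.lower():
+            yield {
+                'media_type': MediaType.RADIO,
+                'playback': PlaybackType.AUDIO,
+                'uri': 'http://example.com/stream.mp3',
+                'title': 'My Radio Station',
+                'match_confidence': 90
+            }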
+A voice assistant with smart speaker functionality should be able to play music straight out of the box. For that reason, the Buildroot edition of OpenVoiceOS comes with the YouTube Music OCP Skill pre-installed. Just asking it to play something will start playback from YouTube, assuming the asked song is present on YouTube, of course.
+++Hey Mycroft, play disturbed sound of silence
+
This should just start playing utilizing OCP as shown below. More information about the full functionality of OCP can be found at its own chapter. +
+There is nothing more relaxing, after you wake up and cancel the alarm set on your OpenVoiceOS device, than listening to your favorite news station while drinking some coffee (no, OpenVoiceOS can not make you that coffee yet).
+++ +Hey Mycroft, play the BBC news
+
The whole OCP framework has some benefits and features that are not skill specific, such as "Playlists" and a view of the search results. You can access those by swiping to the right when something is playing.
+ +The homescreen skill that comes pre-installed with OpenVoiceOS also comes with a widget for the OCP framework.
+Although the screen used on your OpenVoiceOS device might be small, the whole OCP media-playing framework does support video playback.
+You can find video OCP skills in the same awesome-ocp-skills list. The fourth column, "playback type", shows which type of player is used for each specific skill.
+If you use a skill that utilizes the "video player", the below will be shown on your OpenVoiceOS device's screen during playback.
+ + +Here we look at how to implement the most common types of prompts. For more information on conversation design see the Voice User Interface Design Guidelines.
+Any Skill can request a response from the user - making a statement or asking a question before the microphone is activated to record the User's response.
+The base implementation of this is the get_response()
method.
To see it in action, let's create a simple Skill that asks the User what their favorite flavor of ice cream is.
+from mycroft import MycroftSkill, intent_handler
+
+
+class IceCreamSkill(MycroftSkill):
+ @intent_handler('set.favorite.intent')
+ def handle_set_favorite(self):
+ favorite_flavor = self.get_response('what.is.your.favorite.flavor')
+ self.speak_dialog('confirm.favorite.flavor', {'flavor': favorite_flavor})
+
+
+def create_skill():
+ return IceCreamSkill()
+
+In this Skill we have used get_response()
and passed it the name of our dialog file 'what.is.your.favorite.flavor.dialog'
. This is the simplest form of this method. It will speak dialog from the given file, then activate the microphone for 3-10 seconds allowing the User to respond. The transcript of their response will then be assigned to our variable favorite_flavor
. To confirm that we have heard the User correctly we then speak a confirmation dialog passing the value of favorite_flavor
to be spoken as part of that dialog.
The get_response()
method also takes the following optional arguments:
data
(dict) - used to populate the dialog file, just like speak_dialog()
validator
(function) - returns a boolean to define whether the response meets some criteria for successon_fail
(function) - returns a string that will be spoken if the validator returns Falsenum_retries
(int) - number of times the system should repeat the question to get a successful resultask_yesno()
checks if the response contains "yes" or "no" like phrases.
The vocab for this check is sourced from the Skills yes.voc
and no.voc
files (if they exist), as well as mycroft-cores defaults (contained within mycroft-core/res/text/en-us/yes.voc
). A longer phrase containing the required vocab is considered successful e.g. both "yes" and "yeah that would be great thanks" would be considered a successful "yes".
If "yes" or "no" responses are detected, then the method will return the string "yes" or "no". If the response does not contain "yes" or "no" vocabulary then the entire utterance will be returned. If no speech was detected indicating the User did not respond, then the method will return None
.
Let's add a new intent to our IceCreamSkill
to see how this works.
from mycroft import MycroftSkill, intent_handler
+
+
+class IceCreamSkill(MycroftSkill):
+ @intent_handler('do.you.like.intent')
+ def handle_do_you_like(self):
+ likes_ice_cream = self.ask_yesno('do.you.like.ice.cream')
+ if likes_ice_cream == 'yes':
+ self.speak_dialog('does.like')
+ elif likes_ice_cream == 'no':
+ self.speak_dialog('does.not.like')
+ else:
+ self.speak_dialog('could.not.understand')
+
+
+def create_skill():
+ return IceCreamSkill()
+
+In this example we have asked the User if they like ice cream. We then speak different dialog whether they respond yes or no. We also speak some error dialog if neither yes nor no are returned.
+ask_selection()
provides a list of options to the User for them to select from. The User can respond with either the name of one of these options or select with a numbered ordinal eg "the third".
This method automatically manages fuzzy matching the users response against the list of options provided.
+Let's jump back into our IceCreamSkill
to give the User a list of options to choose from.
from mycroft import MycroftSkill, intent_handler
+
+
+class IceCreamSkill(MycroftSkill):
+ def __init__(self):
+ MycroftSkill.__init__(self)
+ self.flavors = ['vanilla', 'chocolate', 'mint']
+
+ @intent_handler('request.icecream.intent')
+ def handle_request_icecream(self):
+ self.speak_dialog('welcome')
+ selection = self.ask_selection(self.flavors, 'what.flavor')
+ self.speak_dialog('coming.right.up', {'flavor': selection})
+
+
+def create_skill():
+ return IceCreamSkill()
+
+In this example we first speak some welcome.dialog
. The list of flavors is then spoken, followed by the what.flavor.dialog
. Finally, we confirm the Users selection by speaking coming.right.up.dialog
There are two optional arguments for this method.
+min_conf
(float) defines the minimum confidence level for fuzzy matching the Users response against the list of options. numeric
(bool) if set to True will speak the options as a numbered list eg "One, vanilla. Two, chocolate. Or three, mint"
So far we have looked at ways to prompt the User, and return their response directly to our Skill. It is also possible to speak some dialog, and activate the listener, directing the response back to the standard intent parsing engine. We may do this to let the user trigger another Skill, or because we want to make use of our own intents to handle the response.
+To do this, we use the expect_response
parameter of the speak_dialog()
method.
from mycroft import MycroftSkill, intent_handler
+
+
+class IceCreamSkill(MycroftSkill):
+ def __init__(self):
+ MycroftSkill.__init__(self)
+ self.flavors = ['vanilla', 'chocolate', 'mint']
+
+ @intent_handler('request.icecream.intent')
+ def handle_request_icecream(self):
+ self.speak_dialog('welcome')
+ selection = self.ask_selection(self.flavors, 'what.flavor')
+ self.speak_dialog('coming.right.up', {'flavor': selection})
+ self.speak_dialog('now.what', expect_response=True)
+
+
+def create_skill():
+ return IceCreamSkill()
+
+Here we have added a new dialog after confirming the User's selection. We may use it to tell the User other things they can do with their OVOS device while they enjoy their delicious ice cream.
+ +An introduction to QML and additional documentation are available here
+Mycroft-GUI framework provides you with some base delegates you should use when designing your QML GUI. The base delegates provide you with a basic presentation layer for your skill, with some property assignments that can help you set up background images and background dim, giving you the control you need for rendering an experience.
+Before we dive deeper into the Design Guidelines, let's look at some concepts that a GUI developer should learn about:
+Mycroft.Units.GridUnit is the fundamental unit of space that should be used for all sizing inside the QML UI, expressed in pixels. Each GridUnit is predefined as 16 pixels.
+// Usage in QML Components example
+width: Mycroft.Units.gridUnit * 2 // 32px Wide
+height: Mycroft.Units.gridUnit // 16px Tall
+
+OVOS Shell uses a custom Kirigami Platform Theme plugin to provide global theming to all our skills and user interfaces, which also allows our GUIs to be fully compatible with the system themes on platforms that are not running the OVOS Shell.
+The Kirigami Theme and Color Scheme guide is extensive and can be found here
+OVOS GUIs developed to follow the color scheme depend on only a subset of the available colors, mainly:
+Kirigami.Theme.backgroundColor = Primary Color (Background Color: this will always be a dark or light palette, depending on the chosen color scheme)
+Kirigami.Theme.highlightColor = Secondary Color (Accent Color: this will always be a standout palette that defines the theme's dominant color, and can be used for buttons, cards, borders, highlighted text, etc.)
+Kirigami.Theme.textColor = Text Color (this will always be a contrasting palette to the selected primary color)
+Let's look at the image and QML example below; this is a representation of the Mycroft Delegate: +
+When designing your first QML file, it is important to note the red triangles in the above image. These triangles represent the margin from the screen edge that the GUI needs to be designed within; these margins ensure your GUI content does not overlap with features like edge lighting and menus on the platforms that support them, like OVOS-Shell.
+The content items and components all utilize the selected color scheme, where black is the primary background color, red is our accent color, and white is our contrasting text color.
+Let's look at this in QML:
+import ...
+import Mycroft 1.0 as Mycroft
+
+Mycroft.Delegate {
+ skillBackgroundSource: sessionData.exampleImage
+ leftPadding: 0
+ rightPadding: 0
+ topPadding: 0
+ bottomPadding: 0
+
+ Rectangle {
+ anchors.fill: parent
+ // Setting margins that need to be left for the screen edges
+ anchors.margins: Mycroft.Units.gridUnit * 2
+
+ //Setting a background dim using our primary theme / background color on top of our skillBackgroundSource image for better readability and contrast
+ color: Qt.rgba(Kirigami.Theme.backgroundColor.r, Kirigami.Theme.backgroundColor.g, Kirigami.Theme.backgroundColor.b, 0.3)
+
+ Kirigami.Heading {
+ level: 2
+ text: "An Example Pie Chart"
+ anchors.top: parent.top
+ anchors.left: parent.left
+ anchors.right: parent.right
+ height: Mycroft.Units.gridUnit * 3
+ // Setting the text color to always follow the color scheme for this item displayed on the screen
+ color: Kirigami.Theme.textColor
+ }
+
+ PieChart {
+ anchors.centerIn: parent
+ pieColorMinor: Kirigami.Theme.backgroundColor // As in the image above the minor area of the pie chart uses our primary color
+ pieColorMid: Kirigami.Theme.highlightColor // As in the image above the middle area is assigned the highlight or our accent color
+ pieColorMajor: Kirigami.Theme.textColor // As in the image above the major area is assigned the text color
+ }
+ }
+}
+
+OVOS Skill GUIs are designed to be multi-platform and screen friendly; to support this we always try to support both horizontal and vertical displays. Let's look at an example and a general approach to writing multi-resolution friendly UIs.
+Let's look at these images below that represent a Delegate as seen in a Horizontal screen: +
+Let's look at these images below that represent a Delegate as seen in a Vertical screen: +
+Let's look at this in QML:
+import ...
+import Mycroft 1.0 as Mycroft
+
+Mycroft.Delegate {
+ id: root
+ skillBackgroundSource: sessionData.exampleImage
+ leftPadding: 0
+ rightPadding: 0
+ topPadding: 0
+ bottomPadding: 0
+ property bool horizontalMode: width >= height // true when the delegate is wider than it is tall, i.e. in horizontal mode
+
+ Rectangle {
+ anchors.fill: parent
+ // Setting margins that need to be left for the screen edges
+ anchors.margins: Mycroft.Units.gridUnit * 2
+
+ //Setting a background dim using our primary theme / background color on top of our skillBackgroundSource image for better readability and contrast
+ color: Qt.rgba(Kirigami.Theme.backgroundColor.r, Kirigami.Theme.backgroundColor.g, Kirigami.Theme.backgroundColor.b, 0.3)
+
+ Kirigami.Heading {
+ level: 2
+ text: "An Example Pie Chart"
+ // Setting the text color to always follow the color scheme
+ color: Kirigami.Theme.textColor
+ }
+
+ GridLayout {
+ id: examplesGridView
+ // In horizontal mode display two columns, as in the image above; in vertical mode display a single column
+ columns: root.horizontalMode ? 2 : 1
+
+ Repeater {
+ model: examplesModel
+ delegate: ExamplesDelegate {
+ ...
+ }
+ }
+ }
+ }
+}
+
+
+ Manjaro SSH Details for Mark-2/DevKit Image: Username: ovos | password: ovos
+From Backend
+Since the local backend does not provide a web UI, an admin API +can be used to manage your devices.
+A backend is a service that provides your device with additional tools to function; these could range from managing your skill settings to configuring certain aspects of your device. OpenVoiceOS is all about choice, and we currently support 3 backend types.
+The mycroft backend connects your device to mycroft servers and allows you to use their web interface to manage your device. This requires pairing, and all your speech-to-text queries are processed via this backend.
+The personal backend is a choice for users who would like to self-host their own backend on the device or in their personal home network. This backend requires additional setup, but also provides a web interface to configure your device and manage your settings.
+Open Voice OS by default comes with no backend; we do not really believe you need a backend to do anything. This is the best choice for +your device if you wish to run completely locally. We provide you with a whole list of online and offline speech-to-text and text-to-speech options to choose from, and all communication to the outside world happens from your own device, without data sharing.
+At the first run of your OpenVoiceOS device, a setup wizard is started that guides you through the process of setting up your device.
+ + +This is considered an advanced function and is unnecessary for normal usage.
+The default for ovos-core is no backend.
+You can go without a backend entirely: run fully offline, or use our free proxy for API services with no account required.
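+As a minimal sketch, an offline setup in mycroft.conf only needs local plugin choices (the plugin names are illustrative; pick any STT/TTS plugins you have installed):
+{
+  // pick local plugins so queries never leave the device
+  "stt": {"module": "ovos-stt-plugin-vosk"},
+  "tts": {"module": "ovos-tts-plugin-mimic"}
+}
+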
+This setup requires there to be a running personal-backend. Refer to this Github page for details.
+If your installation has the skill-ovos-setup installed, you will have a GUI available to set your device up to use the personal backend that you configured.
+NOTE it is NOT advised to install this skill manually, as it can cause issues if OVOS was not configured to use it. Skip to the Manual Configuration section for headless devices or if this skill was not pre-installed.
+On first boot, you will be presented with a screen to choose a backend option.
+NOTE The Selene backend shown in the image is no longer available as an option
+ +Select Personal Backend
from the options. The next screen will allow you to enter the IP address of your personal backend server.
Enter the IP address and Port number of your personal backend
+e.g. 192.168.1.xxx:6712
+If everything is entered correctly, and your backend is running, you should see a screen showing that your connection was successful. You should now be able to configure your device with your backend.
+This section requires shell access to the device, either with direct connection, or SSH.
+The local file ~/.config/mycroft/mycroft.conf
contains local settings that the user has specified. This file may not exist, and will have to be created to continue.
Open the file to edit it
+nano ~/.config/mycroft/mycroft.conf
Add this section to your file. This file must be in valid json or yaml format.
+{
+ "server": {
+ "url": "http://<your_server_IP_address>:<port_number>",
+ "version": "v1",
+ "update": true,
+ "metrics": true
+ }
+}
+
+You will also have to make sure there is not an identity file
already configured:
rm ~/.config/mycroft/identity/identity2.json
Restart your device, and you should be connected to your backend.
+Depending on which image you downloaded, you will be greeted by the network configuration screen with either one or two options. The buildroot image supports setting up the network via two options.
+You can also skip this step to configure it later, or never be asked again if you want your device to run fully offline. (Bear in mind you need to configure your device to use local STT and TTS engines, and obviously asking your device things that require internet will not work.)
+Choosing this option will create a temporary open network (hotspot) called "OVOS", to which you can connect from your mobile device.
+ +On your mobile device go into Settings -> Wi-Fi Settings and the "OVOS" open network will appear in its list.
+Connect your device to the OVOS network and a webpage will open to configure the Wi-Fi network on your OpenVoiceOS device. (NOTE: On newer mobile operating systems the captive portal detection has changed and the website will not automatically be opened. If this is the case you can open a browser manually and go to http://172.16.127.1 ) +The following webpage will be shown:
+Select your Wi-Fi network from the list, enter your password and press the "Connect" button.
+ +If everything went fine, you will soon see the green "connected" screen on your OpenVoiceOS device.
+Choosing this option allows you to set up your Wi-Fi network straight on your OpenVoiceOS device. If selected a screen with the available networks will be shown on your OpenVoiceOS device.
+Select your network from the list and tap / click on it to enter your password. If you have a touch screen, an on-screen keyboard will appear when you tap the password field. If not, use a physical keyboard.
+When you have entered your password, click / tap the connect button. After a short connecting animation, if all went fine, you will see the green "connected" screen on your OpenVoiceOS device.
+If you have skipped the Wi-Fi network setup before, or you just want to reconfigure your network, swipe down the top menu on the homescreen of your OpenVoiceOS device and click the "Wi-Fi" icon. This brings you to the same on-device configuration screen.
+From here you can select another network, or click the configuration icon on the right of the connected network for details or to remove it from the configured networks.
+ + +At the first run of your OpenVoiceOS device, a setup wizard is started that guides you through the process of setting up your device.
+The Pairing Process
+The GUI will now show you a Pairing Code. This pairing code needs to be entered on the mycroft backend, which you can find online at https://account.mycroft.ai
+Select A Text To Speech (TTS) Engine: A text-to-speech (TTS) system converts normal language text into speech; select an engine from the list.
+ +Select A Speech To Text (STT) Engine: A speech-to-text (STT) system converts human speech to text; select an engine from the list.
+The personal backend is a reverse-engineered alternative to Selene, and requires the backend to be hosted locally.
+OVOS-shell is the OpenVoiceOS client implementation of the mycroft-gui library used in our embedded device images
+OVOS-shell is tightly coupled to PHAL; the following companion plugins should be installed if you are using ovos-shell:
+Other distributions may offer alternative implementations such as:
+The Shell can be configured in a few ways.
+Display settings
+ +Color Theme editor
+ +~/.config/OpenvoiceOS/OvosShell.conf
can be edited to change shell options that
+may also be changed via UI. An example config would look like:
[General]
+fakeBrightness=1
+menuLabels=true
+
+Shell themes can be included in /usr/share/OVOS/ColorSchemes/
or
+~/.local/share/OVOS/ColorSchemes/
in json format. Note that colors should include
+an alpha value (usually FF
).
{
+ "name": "Neon Green",
+ "primaryColor": "#FF072103",
+ "secondaryColor": "#FF2C7909",
+ "textColor": "#FFF1F1F1"
+}
+
+
+ The speech client is responsible for loading STT, VAD and Wake Word plugins
+Speech is transcribed into text and forwarded to the skills service
+OVOS allows you to load any number of hot words in parallel and trigger different actions when they are +detected
+each hotword can do one or more of the following (via the configuration keys shown in the example below):
+trigger listening, via the "listen" setting
+play a sound, via the "sound" setting
+take the device out of sleep mode, via the "wakeup" setting
+To add a new hotword, add its configuration under the "hotwords" section.
+By default, all hotwords are disabled unless you set "active": true
.
+Under the "listener"
setting, a main wake word and stand up word are defined; those will be automatically enabled unless you set "active": false
.
+This is usually not desired, unless you are looking to completely disable wake word usage.
"listener": {
+ // Default wake_word and stand_up_word will be automatically set to active
+ // unless explicitly disabled under "hotwords" section
+ "wake_word": "hey mycroft",
+ "stand_up_word": "wake up"
+},
+
+"hotwords": {
+ "hey mycroft": {
+ "module": "ovos-ww-plugin-precise",
+ "version": "0.3",
+ "model": "https://github.com/MycroftAI/precise-data/raw/models-dev/hey-mycroft.tar.gz",
+ "phonemes": "HH EY . M AY K R AO F T",
+ "threshold": 1e-90,
+ "lang": "en-us",
+ "listen": true,
+ "sound": "snd/start_listening.wav"
+ },
+ "wake up": {
+ "module": "ovos-ww-plugin-pocketsphinx",
+ "phonemes": "W EY K . AH P",
+ "threshold": 1e-20,
+ "lang": "en-us",
+ "wakeup": true
+ }
+},
+
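+As a sketch, a third hotword could be added alongside the defaults; the plugin and its settings here are illustrative, any installed wake word plugin works:
+"hotwords": {
+    "computer": {
+        "module": "ovos-ww-plugin-vosk",
+        "samples": ["computer"],
+        "active": true,
+        "listen": true,
+        "sound": "snd/start_listening.wav"
+    }
+},
+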
+Two STT plugins may be loaded at once; if the primary plugin fails for some reason, the second plugin will be used.
+This allows you to have a lower accuracy offline model as a fallback to account for internet outages, ensuring your device never becomes fully unusable.
+"stt": {
+ "module": "ovos-stt-plugin-server",
+ "fallback_module": "ovos-stt-plugin-vosk",
+ "ovos-stt-plugin-server": {"url": "https://stt.openvoiceos.com/stt"}
+},
+
+You can modify microphone settings and enable additional features under the listener section such as wake word / utterance recording / uploading
+"listener": {
+ "sample_rate": 16000,
+
+ // if enabled the noise level is saved to an ipc file, useful for
+ // debugging if the microphone is working, but writes a lot to disk;
+ // recommended that you set "ipc_path" to a tmpfs
+ "mic_meter_ipc": true,
+
+ // Set 'save_path' to configure the location of files stored if
+ // 'record_wake_words' and/or 'save_utterances' are set to 'true'.
+ // WARNING: Make sure that user 'mycroft' has write-access on the
+ // directory!
+ // "save_path": "/tmp",
+ // Set 'record_wake_words' to save a copy of wake word triggers
+ // as .wav files under: /'save_path'/mycroft_wake_words
+ "record_wake_words": false,
+ // Set 'save_utterances' to save each sentence sent to STT -- by default
+ // they are only kept briefly in-memory. This can be useful for
+ // debugging or other custom purposes. Recordings are saved
+ // under: /'save_path'/mycroft_utterances/<TIMESTAMP>.wav
+ "save_utterances": false,
+ "wake_word_upload": {
+ "disable": false,
+ "url": "https://training.mycroft.ai/precise/upload"
+ },
+
+ // Override as SYSTEM or USER to select a specific microphone input instead of
+ // the PortAudio default input.
+ // "device_name": "somename", // can be regex pattern or substring
+ // or
+ // "device_index": 12,
+
+ // Stop listening to the microphone during playback to prevent accidental triggering
+ // This is enabled by default, but instances with good microphone noise cancellation
+ // can disable this to listen all the time, allowing 'barge in' functionality.
+ "mute_during_output" : true,
+
+ // How much (if at all) to 'duck' the speaker output during listening. A
+ // setting of 0.0 will not duck at all. A 1.0 will completely mute output
+ // while in a listening state. Values in between will lower the volume
+ // partially (this is optional behavior, depending on the enclosure).
+ "duck_while_listening" : 0.3,
+
+ // In milliseconds
+ "phoneme_duration": 120,
+ "multiplier": 1.0,
+ "energy_ratio": 1.5,
+
+ // Settings used by microphone to set recording timeout
+ "recording_timeout": 10.0,
+ "recording_timeout_with_silence": 3.0,
+
+ // instant listen is an experimental setting, it removes the need for
+ // the pause between "hey mycroft" and starting to speak the utterance,
+ // however it might slightly downgrade STT accuracy depending on the engine used
+ "instant_listen": false
+},
+
+Voice Activity Detection is used by the speech client to determine when a user has stopped speaking; this indicates the voice command is ready to be executed. +Several VAD strategies are supported.
+"listener": {
+
+ // Voice Activity Detection is used to determine when speech ended
+ "VAD": {
+ // silence_method defines the main vad strategy
+ // valid values:
+ // VAD_ONLY - Only use vad
+ // RATIO_ONLY - Only use max/current energy ratio threshold
+ // CURRENT_ONLY - Only use current energy threshold
+ // VAD_AND_RATIO - Use vad and max/current energy ratio threshold
+ // VAD_AND_CURRENT - Use vad and current energy threshold
+ // ALL - Use vad, max/current energy ratio, and current energy threshold
+ // NOTE: if a vad plugin is not available method will fallback to RATIO_ONLY
+ "silence_method": "vad_and_ratio",
+ // Seconds of speech before voice command has begun
+ "speech_seconds": 0.1,
+ // Seconds of silence before a voice command has finished
+ "silence_seconds": 0.5,
+ // Seconds of audio to keep before voice command has begun
+ "before_seconds": 0.5,
+ // Minimum length of voice command (seconds)
+ // NOTE: max_seconds uses recording_timeout listener setting
+ "min_seconds": 1,
+ // Ratio of max/current energy below which audio is considered speech
+ "max_current_ratio_threshold": 2,
+ // Energy threshold above which audio is considered speech
+ // NOTE: this is dynamic, only defining start value
+ "initial_energy_threshold": 1000.0,
+ // vad module can be any plugin, by default it is not used
+ // recommended plugin: "ovos-vad-plugin-silero"
+ "module": "",
+ "ovos-vad-plugin-silero": {"threshold": 0.2},
+ "ovos-vad-plugin-webrtcvad": {"vad_mode": 3}
+ }
+},
+
+
+Spotifyd is able to advertise itself on the network without credentials, using zeroconf authentication from Spotify Connect on your mobile device. This is the default configuration shipped with the buildroot image. If for whatever reason zeroconf is not working properly on your network, or you want spotifyd to log in by itself, you can configure your username and password combination in its configuration file, by uncommenting and configuring the username and password variables within ~/.config/spotifyd/spotifyd.conf
and rebooting the device or running systemctl --user restart spotifyd
.
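+As a minimal sketch, the relevant lines in spotifyd.conf would look something like this (the credentials and device name are placeholders):
+[global]
+# uncomment and fill in to have spotifyd log in by itself
+username = "your_spotify_username"
+password = "your_spotify_password"
+device_name = "OpenVoiceOS"
+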
Open Spotify on your mobile device and go to the Devices menu within the Settings, or tap the devices menu icon on the bottom left of the now playing screen. An OpenVoiceOS "speaker" device will be present, which you can select as the output device. +
+When you play something on Spotify, the music will come from your OpenVoiceOS device, as indicated by the "OPENVOICEOS" label on the device menu icon at the bottom of the now playing screen on your mobile device. +
+As Spotifyd has full MPRIS support including audio player controls, the full OCP now playing screen will be shown on your OpenVoiceOS device as shown below, just like playing something from YouTube as shown above. +
+ +Your OpenVoiceOS device comes with certain skills pre-installed for basic functionality out of the box. You can also install new skills, but more about that at a later stage.
+You can ask your device what time or date it is just in case you lost your watch.
+++ +Hey Mycroft, what time is it?
+
++ +Hey Mycroft, what is the date?
+
Having your OpenVoiceOS device knowing and showing the time is great, but it is even better to be woken up in the morning by your device.
+++ +Hey Mycroft, set an alarm for 8 AM.
+
Sometimes you are just busy but want to be alerted after a certain time. For that you can use timers.
+++ +Hey Mycroft, set a timer for 5 minutes.
+
+You can always set more timers and even name them, so you know which timer is for what.
+++ +Hey, Mycroft, set another timer called rice cooking for 7 minutes.
+
You can ask your device what the weather is or would be at any given time or place.
+++ +Hey Mycroft, what is the weather like today?
+
The weather skill actually uses multiple pages indicated by the small dots at the bottom of the screen.
+ +The file browser allows you to browse the filesystem of your device and any connected media; you can view images and play music and videos.
+KDEConnect integration allows you to share files with your mobile devices
+ + +Editors Note This will probably move
+One of OVOS's most important core capabilities is to convert text to speech, that is, to speak a statement.
+Within a Skill's Intent handler, you may pass a string of text to OVOS and OVOS will speak it. For example: self.speak('this is my statement')
That's cool and fun to experiment with, but passing strings of text to Mycroft doesn't help to make Mycroft a multilingual product. Rather than hard-coded strings of text, OVOS has a design pattern for multilingualism.
To support multilingualism, the text that OVOS speaks must come from a file. That file is called a dialog file. The dialog file contains statements (lines of text) that a listener in a particular language would consider to be equivalent. For instance, in USA English, the statements "I am okay" and "I am fine" are equivalent, and both of these statements might appear in a dialog file used for responding to the USA English question: "How are you?".
+By convention, the dialog filename is formed by dot connected words and must end with ".dialog". The dialog filename should be descriptive of the contents as a whole. Sometimes, the filename describes the question being answered, and other times, the filename describes the answer itself. For the example above, the dialog filename might be: how.are.you.dialog or i.am.fine.dialog.
+Multilingualism is accomplished by translating the dialog files into other languages, and storing them in their own directory named for the country and language. The filenames remain the same. Using the same filenames in separate language-dependent directories allows the Skills to be language agnostic; no hard-coded text strings. Adjust the language setting for your Device and OVOS uses the corresponding set of dialog files. If the desired file does not exist in the directory for that language, Mycroft will use the file from the USA English directory.
+As an example of the concept, the contents of how.are.you.dialog in the directory for the French language in France (fr-fr) might include the statement: "Je vais bien".
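+As a sketch, the layout for such a Skill might look like this (the file contents noted in the comments are illustrative):
+locale/
+├── en-us/
+│   └── how.are.you.dialog   # "I am fine" / "I am okay"
+└── fr-fr/
+    └── how.are.you.dialog   # "Je vais bien"
+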
+To demonstrate the multilingualism design pattern, we examine the usage of the speak_dialog()
method in the Tomato Skill.
The Tomato Skill has two Intents: one demonstrates simple, straightforward statements, and the other demonstrates the use of variables within a statement.
+The first Intent within the Tomato Skill, what.is.a.tomato.intent, handles inquiries about tomatoes, and the dialog file, tomato.description.dialog, provides the statements for OVOS to speak in reply to that inquiry.
+Sample contents of the Intent and dialog files:
+what.is.a.tomato.intent
what is a tomato
+what would you say a tomato is
+describe a tomato
+what defines a tomato
+
+tomato.description.dialog
The tomato is a fruit of the nightshade family
+A tomato is an edible berry of the plant Solanum lycopersicum
+A tomato is a fruit but nutritionists consider it a vegetable
+
+Observe the statements in the tomato.description.dialog file. They are all acceptable answers to the question: "What is a tomato?" Providing more than one statement in a dialog file is one way to make OVOS seem less robotic, more natural. +OVOS will randomly select one of the statements.
+The Tomato Skill code snippet:
+@intent_handler('what.is.a.tomato.intent')
+def handle_what_is(self, message):
+ """Speaks a statement from the dialog file."""
+ self.speak_dialog('tomato.description')
+
+With the Tomato Skill installed, if the User utters "Hey Mycroft, what is a tomato?", the Intent handler method handle_what_is()
will be called.
Inside handle_what_is()
, we find: self.speak_dialog('tomato.description')
As you can probably guess, the parameter 'tomato.description'
is the dialog filename without the ".dialog" extension. Calling this method opens the dialog file, selects one of the statements, and converts that text to speech. OVOS will speak a statement from the dialog file. In this example, OVOS might say "The tomato is a fruit of the nightshade family".
Remember, OVOS has a language setting that determines from which directory to find the dialog file.
+The Skill Structure section describes where to place the Intent file and dialog file. Basically, there are two choices:
+put both files in locale/en-us
+put the dialog file in dialog/en-us
, and put the Intent file in vocab/en-us
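+As a sketch, using the first option the Tomato Skill files from this section would be laid out like this (the skill directory name is illustrative):
+skill-tomato/
+├── __init__.py
+└── locale/
+    └── en-us/
+        ├── what.is.a.tomato.intent
+        ├── do.you.like.intent
+        ├── tomato.description.dialog
+        ├── like.tomato.type.dialog
+        └── like.tomato.generic.dialog
+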
The second Padatious Intent, do.you.like.intent, demonstrates the use of variables in the Intent file and in one of the dialog files:
+do.you.like.intent
do you like tomatoes
+do you like {type} tomatoes
+
+like.tomato.type.dialog
I do like {type} tomatoes
+{type} tomatoes are my favorite
+
+like.tomato.generic.dialog
I do like tomatoes
+tomatoes are my favorite
+
+Compare these two dialog files. The like.tomato.generic.dialog file contains only simple statements. The statements in the like.tomato.type.dialog file include a variable named type
. The variable is a placeholder in the statement specifying where text may be inserted. The speak_dialog()
method accepts a dictionary as an optional parameter. If that dictionary contains an entry for a variable named in the statement, then the value from the dictionary will be inserted at the placeholder's location.
Dialog file variables are formed by surrounding the variable's name with curly braces. +In OVOS parlance, curly braces are known as a mustache.
+For multi-line dialog files, be sure to include the same variable on all lines.
+The Tomato Skill code snippet:
+ @intent_handler('do.you.like.intent')
+ def handle_do_you_like(self, message):
+ tomato_type = message.data.get('type')
+ if tomato_type is not None:
+ self.speak_dialog('like.tomato.type',
+ {'type': tomato_type})
+ else:
+ self.speak_dialog('like.tomato.generic')
+
+When the User utters "Hey Mycroft, do you like RED tomatoes?", the second of the two Intent lines "do you like {type} tomatoes" is recognized by Mycroft, and the value 'RED' is returned in the message dictionary assigned to the 'type' entry when handle_do_you_like()
is called.
The line tomato_type = message.data.get('type')
extracts the value from the dictionary for the entry 'type'. In this case, the variable tomato_type
will receive the value 'RED', and speak_dialog()
will be called with the 'like.tomato.type' dialog file, and a dictionary with 'RED' assigned to 'type'. The statement "I do like {type} tomatoes" might be randomly selected, and after insertion of the value 'RED' for the placeholder variable {type}, OVOS would say: "I do like RED tomatoes".
Should the User utter "Hey Mycroft, do you like tomatoes?", the first line in the Intent file "do you like tomatoes" is recognized. There is no variable in this line, and when handle_do_you_like()
is called, the dictionary in the message is empty. This means tomato_type
is None
,speak_dialog('like.tomato.generic')
would be called, and Mycroft might reply with "Yes, I do like tomatoes".
By default, the speak_dialog()
method is non-blocking. That is any code following the call to speak_dialog()
will execute whilst OVOS is talking. This is useful to allow your Skill to perform actions while it is speaking.
Rather than telling the User that we are fetching some data, then going out to fetch it, we can do the two things simultaneously providing a better experience.
+However, there are times when we need to wait until the statement has been spoken before doing something else. We have two options for this.
+We can pass a wait=True
parameter to our speak_dialog()
method. This makes the method blocking and no other code will execute until the statement has been spoken.
@intent_handler('what.is.a.tomato.intent')
+def handle_what_is(self, message):
+ """Speaks a statement from the dialog file.
+ Waits (i.e. blocks) within speak_dialog() until
+ the speaking has completed. """
+ self.speak_dialog('tomato.description', wait=True)
+ self.log.info("I waited for you")
+
+The mycroft.audio.wait_while_speaking()
method allows us to execute some code, then wait for OVOS to finish speaking.
from mycroft.audio import wait_while_speaking
+
+@intent_handler('what.is.a.tomato.intent')
+def handle_what_is(self, message):
+ """Speaks a statement from the dialog file.
+ Returns from speak_dialog() before the
+ speaking has completed, and logs some info.
+ Then it, waits for the speech to complete. """
+ self.speak_dialog('tomato.description')
+ self.log.info("I am executed immediately")
+ wait_while_speaking()
+ self.log.info("But I waited for you")
+
+Here we have executed one line of code immediately. Our Skill will then wait for the statement from tomato.description.dialog
to be spoken before executing the final line of code.
There may be a situation where the dialog file and the speak_dialog()
method do not give the Skill enough flexibility. For instance, there may be a need to manipulate the statement from the dialog file before having it spoken by OVOS.
The MycroftSkill class provides four multilingual methods to address these needs. Each method uses a file, and multilingualism is accomplished using the country/language directory system.
+The translate()
method returns a random string from a ".dialog" file (modified by a data dictionary).
The translate_list()
method returns a list of strings from a ".list" file (each modified by the data dictionary). It is the same as translate_template(), just with a different file extension.
The translate_namedvalue()
method returns a dictionary formed from CSV entries in a ".value" file.
The translate_template()
method returns a list of strings from a ".template" file (each modified by the data dictionary). It is the same as translate_list(), just with a different file extension.
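+As a minimal sketch of these methods inside a Skill handler (the file names here are illustrative):
+# picks one random line from locale/<lang>/fresh.fruit.dialog, filling {fruit}
+line = self.translate('fresh.fruit', {'fruit': 'tomato'})
+# returns every line of locale/<lang>/flavors.list as a list of strings
+flavors = self.translate_list('flavors')
+# returns a dict built from the "name,value" CSV rows in locale/<lang>/sizes.value
+sizes = self.translate_namedvalue('sizes')
+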
You can run a local NeMo instance using ovos-stt-server
+ +STT plugins are responsible for converting spoken audio into text
Plugin | Offline | Type
---|---|---
ovos-stt-plugin-vosk | yes | FOSS
ovos-stt-plugin-chromium | no | API (free)
neon-stt-plugin-google_cloud_streaming | no | API (key)
neon-stt-plugin-scribosermo | yes | FOSS
neon-stt-plugin-silero | yes | FOSS
neon-stt-plugin-polyglot | yes | FOSS
neon-stt-plugin-deepspeech_stream_local | yes | FOSS
ovos-stt-plugin-selene | no | API (free)
ovos-stt-plugin-http-server | no | API (self hosted)
ovos-stt-plugin-pocketsphinx | yes | FOSS
STT plugins can be used in your own projects as follows
+from speech_recognition import Recognizer, AudioFile
+
+plug = STTPlug()  # instantiate the class of whichever STT plugin you installed
+
+# verify lang is supported
+lang = "en-us"
+assert lang in plug.available_languages
+
+# read file
+with AudioFile("test.wav") as source:
+ audio = Recognizer().record(source)
+
+# transcribe AudioData object
+transcript = plug.execute(audio, lang)
+
+from ovos_plugin_manager.templates.stt import STT
+
+
+# base plugin class
+class MySTTPlugin(STT):
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ # read config settings for your plugin
+ lm = self.config.get("language-model")
+ hmm = self.config.get("acoustic-model")
+
+ def execute(self, audio, language=None):
+ # TODO - convert audio into text and return string
+ transcript = "You said this"
+ return transcript
+
+ @property
+ def available_languages(self):
+ """Return languages supported by this STT implementation in this state
+ This property should be overridden by the derived class to advertise
+ what languages that engine supports.
+ Returns:
+ set: supported languages
+ """
+ # TODO - what langs can this STT handle?
+ return {"en-us", "es-es"}
+
+
+# sample valid configurations per language
+# "display_name" and "offline" provide metadata for UI
+# "priority" is used to calculate position in selection dropdown
+# 0 - top, 100-bottom
+# all other keys represent an example valid config for the plugin
+MySTTConfig = {
+ lang: [{"lang": lang,
+ "display_name": f"MySTT ({lang})",
+ "priority": 70,
+ "offline": True}]
+ for lang in ["en-us", "es-es"]
+}
+
+
+ Turn any OVOS STT plugin into a microservice!
+pip install ovos-stt-http-server
ovos-stt-server --help
+usage: ovos-stt-server [-h] [--engine ENGINE] [--port PORT] [--host HOST]
+
+options:
+ -h, --help show this help message and exit
+ --engine ENGINE stt plugin to be used
+ --port PORT port number
+ --host HOST host
+
+Use with OpenVoiceOS companion plugin
+you can easily create a Dockerfile to serve any plugin
+FROM python:3.7
+
+RUN pip3 install ovos-stt-http-server==0.0.1
+
+RUN pip3 install {PLUGIN_HERE}
+
+ENTRYPOINT ovos-stt-http-server --engine {PLUGIN_HERE}
+
+build it
+docker build . -t my_ovos_stt_plugin
+
+run it
+docker run -p 8080:9666 my_ovos_stt_plugin
+
+Each plugin can provide its own Dockerfile in its repository using ovos-stt-http-server
+ +Turn any OVOS Language plugin into a microservice!
+Use with OpenVoiceOS companion plugin
+pip install ovos-translate-server
ovos-translate-server --help
+usage: ovos-translate-server [-h] [--tx-engine TX_ENGINE]
+ [--detect-engine DETECT_ENGINE] [--port PORT] [--host HOST]
+
+optional arguments:
+ -h, --help show this help message and exit
+ --tx-engine TX_ENGINE
+ translate plugin to be used
+ --detect-engine DETECT_ENGINE
+ lang detection plugin to be used
+ --port PORT port number
+ --host HOST host
+
+
+e.g., to use the Google Translate plugin: ovos-translate-server --tx-engine googletranslate_plug --detect-engine googletranslate_detection_plug
then you can do GET requests
+http://0.0.0.0:9686/translate/en/o meu nome é Casimiro
(auto detect source lang)http://0.0.0.0:9686/translate/pt/en/o meu nome é Casimiro
(specify source lang)http://0.0.0.0:9686/detect/o meu nome é Casimiro
you can easily create a Dockerfile to serve any plugin
+FROM python:3.7
+
+RUN pip3 install ovos-utils==0.0.15
+RUN pip3 install ovos-plugin-manager==0.0.4
+RUN pip3 install ovos-translate-server==0.0.1
+
+RUN pip3 install {PLUGIN_HERE}
+
+ENTRYPOINT ovos-translate-server --tx-engine {PLUGIN_HERE} --detect-engine {PLUGIN_HERE}
+
+build it
+docker build . -t my_ovos_translate_plugin
+
+run it
+docker run -p 8080:9686 my_ovos_translate_plugin
+
+Each plugin can provide its own Dockerfile in its repository using ovos-translate-server
+ +Turn any OVOS TTS plugin into a microservice!
+pip install ovos-tts-server
ovos-tts-server --help
+usage: ovos-tts-server [-h] [--engine ENGINE] [--port PORT] [--host HOST] [--cache]
+
+options:
+ -h, --help show this help message and exit
+ --engine ENGINE tts plugin to be used
+ --port PORT port number
+ --host HOST host
+ --cache save every synth to disk
+
+e.g., to use the GladosTTS plugin: ovos-tts-server --engine neon-tts-plugin-glados --cache
then do a GET request: http://192.168.1.112:9666/synthesize/hello
Use with OpenVoiceOS companion plugin
+you can easily create a Dockerfile to serve any plugin
+FROM python:3.7
+
+RUN pip3 install ovos-utils==0.0.15
+RUN pip3 install ovos-plugin-manager==0.0.4
+RUN pip3 install ovos-tts-server==0.0.1
+
+RUN pip3 install {PLUGIN_HERE}
+
+ENTRYPOINT ovos-tts-server --engine {PLUGIN_HERE} --cache
+
+build it
+docker build . -t my_ovos_tts_plugin
+
+run it
+docker run -p 8080:9666 my_ovos_tts_plugin
+
+use it http://localhost:8080/synthesize/hello
Each plugin can provide its own Dockerfile in its repository using ovos-tts-server
+ +TTS plugins are responsible for converting text into audio for playback
+from ovos_plugin_manager.templates.tts import TTS
+
+
+# base plugin class
+class MyTTSPlugin(TTS):
+ def __init__(self, *args, **kwargs):
+ # in here you should specify if your plugin return wav or mp3 files
+ # you should also specify any valid ssml tags
+ ssml_tags = ["speak", "s", "w", "voice", "prosody",
+ "say-as", "break", "sub", "phoneme"]
+ super().__init__(*args, **kwargs, audio_ext="wav", ssml_tags=ssml_tags)
+ # read config settings for your plugin if any
+ self.pitch = self.config.get("pitch", 0.5)
+
+ def get_tts(self, sentence, wav_file):
+ # TODO - create TTS audio @ wav_file (path)
+ return wav_file, None
+
+ @property
+ def available_languages(self):
+ """Return languages supported by this TTS implementation in this state
+ This property should be overridden by the derived class to advertise
+ what languages that engine supports.
+ Returns:
+ set: supported languages
+ """
+ # TODO - what langs can this TTS handle?
+ return {"en-us", "es-es"}
+
+
+
+# sample valid configurations per language
+# "display_name" and "offline" provide metadata for UI
+# "priority" is used to calculate position in selection dropdown
+# 0 - top, 100-bottom
+# all other keys represent an example valid config for the plugin
+MyTTSConfig = {
+ lang: [{"lang": lang,
+ "display_name": f"MyTTS ({lang})",
+ "priority": 70,
+ "offline": True}]
+ for lang in ["en-us", "es-es"]
+}
+
+
+ Voice Activity Detection is the process of determining when speech starts and ends in a piece of audio
+VAD plugins classify audio and report if it contains speech or not
Plugin | Type
---|---
ovos-vad-plugin-silero | model
ovos-vad-plugin-webrtcvad | model
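+As a sketch, a VAD plugin can also be used standalone in your own code; this assumes the silero plugin is installed and that its class is named SileroVAD (check the plugin repository for the exact import):
+from ovos_vad_plugin_silero import SileroVAD
+
+vad = SileroVAD({"threshold": 0.2})
+# feed 16 kHz, 16-bit mono PCM chunks; is_silence() reports whether speech is absent
+with open("audio.raw", "rb") as f:
+    chunk = f.read(4096)
+    print("silence" if vad.is_silence(chunk) else "speech")
+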
+ +WakeWord plugins classify audio and report if a certain word or sound is present or not
+These plugins usually correspond to the name of the voice assistant, "hey mycroft", but can also be used for other purposes
Plugin | Type
---|---
ovos-ww-plugin-pocketsphinx | phonemes
ovos-ww-plugin-vosk | text samples
ovos-ww-plugin-snowboy | model
ovos-ww-plugin-precise | model
ovos-ww-plugin-precise-lite | model
ovos-ww-plugin-nyumaya | model
ovos-ww-plugin-nyumaya-legacy | model
neon_ww_plugin_efficientwordnet | model
mycroft-porcupine-plugin | model
ovos-ww-plugin-hotkeys | keyboard
First, let's get some boilerplate out of the way for the microphone handling logic.
+import pyaudio
+
+# helper class
+class CyclicAudioBuffer:
+ def __init__(self, duration=0.98, initial_data=None,
+ sample_rate=16000, sample_width=2):
+ self.size = self.duration_to_bytes(duration, sample_rate, sample_width)
+ initial_data = initial_data or self.get_silence(self.size)
+ # Get at most size bytes from the end of the initial data
+ self._buffer = initial_data[-self.size:]
+
+ @staticmethod
+ def duration_to_bytes(duration, sample_rate=16000, sample_width=2):
+ return int(duration * sample_rate) * sample_width
+
+ @staticmethod
+ def get_silence(num_bytes):
+ return b'\0' * num_bytes
+
+ def append(self, data):
+ """Add new data to the buffer, and slide out data if the buffer is full
+ Arguments:
+ data (bytes): binary data to append to the buffer. If buffer size
+ is exceeded the oldest data will be dropped.
+ """
+ buff = self._buffer + data
+ if len(buff) > self.size:
+ buff = buff[-self.size:]
+ self._buffer = buff
+
+ def get(self):
+ """Get the binary data."""
+ return self._buffer
+
+# pyaudio params
+FORMAT = pyaudio.paInt16
+CHANNELS = 1
+RATE = 16000
+CHUNK = 1024
+MAX_RECORD_SECONDS = 20
+SAMPLE_WIDTH = pyaudio.get_sample_size(FORMAT)
+audio = pyaudio.PyAudio()
+
+# start Recording
+stream = audio.open(channels=CHANNELS, format=FORMAT,
+ rate=RATE, frames_per_buffer=CHUNK, input=True)
+
+
+def load_plugin():
+ # Wake word initialization
+ config = {"model": "path/to/hey_computer.model"}
+ return MyHotWord("hey computer", config=config)
+
+
+def listen_for_ww(plug):
+ # TODO - see examples below
+ return False
+
+plug = load_plugin()
+
+print(f"Waiting for wake word {MAX_RECORD_SECONDS} seconds")
+found = listen_for_ww(plug)
+
+if found:
+ print("Found wake word!")
+else:
+ print("No wake word found")
+
+# stop everything
+plug.stop()
+stream.stop_stream()
+stream.close()
+audio.terminate()
+
+new style plugins
+New style plugins expect to receive live audio; they may keep their own cyclic buffers internally.
+
+def listen_for_ww(plug):
+ for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)):
+ data = stream.read(CHUNK)
+ # feed data directly to streaming prediction engines
+ plug.update(data)
+ # streaming engines return result here
+ found = plug.found_wake_word(data)
+ if found:
+ return True
+
+old style plugins (DEPRECATED)
+Old style plugins expect to receive ~3 seconds of audio data at once
+def listen_for_ww(plug):
+ # used for old style non-streaming wakeword (deprecated)
+ audio_buffer = CyclicAudioBuffer(plug.expected_duration,
+ sample_rate=RATE, sample_width=SAMPLE_WIDTH)
+ for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)):
+ data = stream.read(CHUNK)
+ # add data to rolling buffer, used by non-streaming engines
+ audio_buffer.append(data)
+ # non-streaming engines check the byte_data in audio_buffer
+ audio_data = audio_buffer.get()
+ found = plug.found_wake_word(audio_data)
+ if found:
+ return True
+
+new + old style plugins (backwards compatibility)
+If you are unsure what kind of plugin you will be using, you can be compatible with both approaches, like ovos-core is.
+def listen_for_ww(plug):
+ # used for old style non-streaming wakeword (deprecated)
+ audio_buffer = CyclicAudioBuffer(plug.expected_duration,
+ sample_rate=RATE, sample_width=SAMPLE_WIDTH)
+ for i in range(0, int(RATE / CHUNK * MAX_RECORD_SECONDS)):
+ data = stream.read(CHUNK)
+ # old style engines will ignore the update
+ plug.update(data)
+ # streaming engines will ignore the byte_data
+ audio_buffer.append(data)
+ audio_data = audio_buffer.get()
+ found = plug.found_wake_word(audio_data)
+ if found:
+ return True
+
+from ovos_plugin_manager.templates.hotwords import HotWordEngine
+from threading import Event
+
+
+class MyWWPlugin(HotWordEngine):
+ def __init__(self, key_phrase="hey mycroft", config=None, lang="en-us"):
+ super().__init__(key_phrase, config, lang)
+ self.detection = Event()
+ # read config settings for your plugin
+ self.sensitivity = self.config.get("sensitivity", 0.5)
+ # TODO - plugin stuff
+ # how does your plugin work? phonemes? text? models?
+ self.engine = MyWW(key_phrase)
+
+ def found_wake_word(self, frame_data):
+ """Check if wake word has been found.
+
+ Checks if the wake word has been found. Should reset any internal
+ tracking of the wake word state.
+
+ Arguments:
+ frame_data (binary data): Deprecated. Audio data for large chunk
+ of audio to be processed. This should not
+ be used for detection; instead,
+ use update() to incrementally feed audio
+ Returns:
+ bool: True if a wake word was detected, else False
+ """
+ detected = self.detection.is_set()
+ if detected:
+ self.detection.clear()
+ return detected
+
+ def update(self, chunk):
+ """Updates the hotword engine with new audio data.
+
+ The engine should process the data and update internal trigger state.
+
+ Arguments:
+ chunk (bytes): Chunk of audio data to process
+ """
+ if self.engine.found_it(chunk): # TODO - check for wake word
+ self.detection.set()
+
+ def stop(self):
+ """Perform any actions needed to shut down the wake word engine.
+
+ This may include things such as unloading data or shutdown
+ external processes.
+ """
+ self.engine.bye() # TODO - plugin specific shutdown
+
+
+