Ready for launch of v 1.3.1 #8

Merged (2 commits) on Jan 21, 2025

4 changes: 4 additions & 0 deletions assist_microphone/CHANGELOG.md
@@ -1,5 +1,9 @@
# Changelog

## 1.3.1

- Added configuration options so that the text-to-speech response can be sent to a Home Assistant webhook, allowing sound to be played on any media player

## 1.3.0

- Update to wyoming-satellite 1.3.0 to get support for timers
45 changes: 45 additions & 0 deletions assist_microphone/DOCS.md
@@ -56,6 +56,51 @@ Enables or disables output audio.

Multiply sound output volume by fixed value (1.0 = no change, 2.0 = twice as loud). 1.0 is the default.

### Option: `synthesize_using_webhook`

Send the text-to-speech text to a Home Assistant webhook for further processing. You can handle it, for example, by using the webhook platform as a trigger inside an automation.

<details>
<summary>Example Automation</summary>

```yaml
alias: Satellite response
description: ""
trigger:
- platform: webhook
allowed_methods:
- POST
- PUT
local_only: true
webhook_id: "synthesize-assist-microphone-response" # This must match the webhook_id in the add-on configuration
condition: []
action:
- service: telegram_bot.send_message
metadata: {}
data:
message: "{{ trigger.json.response }}" # This is how you catch whatever the add-on sent
title: Mycroft said
- service: tts.cloud_say
data:
entity_id: media_player.name # Don't forget to change this to your own media player
cache: false
message: "{{ trigger.json.response }}" # This is how you catch whatever the add-on sent
mode: single
```
</details>

Read the HA webhook automation trigger [documentation](https://www.home-assistant.io/docs/automation/trigger/#webhook-trigger) for more information.
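
You can test the receiving automation independently of the add-on by posting a payload to the webhook yourself. The body below mirrors what the bundled synthesize script sends (`{"response": "<text>"}`); the host `homeassistant.local` is an assumption, so adjust it (and the webhook ID, if you changed the default) to your setup:

```bash
# Send a test payload to the webhook, mimicking the add-on's synthesize script.
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"response": "Hello from a manual test"}' \
  "http://homeassistant.local:8123/api/webhook/synthesize-assist-microphone-response"
```

Because the example automation above sets `local_only: true`, run this from a machine on the same network as Home Assistant.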

If you're using this feature, you must also set `sound_enabled` to _true_, or nothing will happen.
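
Putting it together, a minimal sketch of the relevant add-on configuration with the webhook feature enabled looks like this. The webhook options are described below; the `webhook_id` and `synthesize_script` values shown are the add-on defaults:

```yaml
sound_enabled: true
synthesize_using_webhook: true
webhook_id: "synthesize-assist-microphone-response"
synthesize_script: "/usr/src/scripts/synthesize.sh"
```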

### Option: `webhook_id`

The ID of the webhook to use. This is only relevant if `synthesize_using_webhook` is _true_.

### Option: `synthesize_script`

The script that does the heavy lifting of sending the text you want to synthesize to the Home Assistant webhook. This is only relevant if `synthesize_using_webhook` is _true_.
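
The add-on ships with a default script (`/usr/src/scripts/synthesize.sh`) that resolves the Home Assistant address via the Supervisor API, builds the webhook URL, and posts the text as `{"response": "<text>"}`. If you want to point `synthesize_script` at your own script, a minimal sketch could look like the following; the webhook URL is an assumption and must match your setup:

```bash
#!/usr/bin/env bash
# Minimal sketch of a custom synthesize script (hypothetical example).
# wyoming-satellite pipes the text-to-speech text to this script on stdin.
set -e

# JSON-encode the text read from stdin.
text="$(jq -R -s '.')"

# Post it to the Home Assistant webhook (adjust host and webhook ID to your setup).
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d "{\"response\": ${text}}" \
  "http://homeassistant.local:8123/api/webhook/synthesize-assist-microphone-response"
```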

### Option: `debug_logging`

Enable debug logging.
1 change: 1 addition & 0 deletions assist_microphone/Dockerfile
@@ -27,6 +27,7 @@ RUN \
&& rm -rf /var/lib/apt/lists/*

COPY sounds/ ./sounds/
COPY scripts/ ./scripts/

WORKDIR /
COPY rootfs /
8 changes: 7 additions & 1 deletion assist_microphone/config.yaml
@@ -1,9 +1,9 @@
---
version: 1.3.0
version: 1.3.1
slug: assist_microphone_ajk
name: Assist Microphone - AlfredJKwack
description: Use Assist with local microphone
url: https://github.com/AlfredJKwack/ha-core-addons/blob/master/assist_microphone

(CI annotation, GitHub Actions / YAMLLint: check failure on line 6 of assist_microphone/config.yaml, line too long)
hassio_api: true
arch:
- amd64
@@ -24,6 +24,9 @@
auto_gain: 0
mic_volume_multiplier: 1.0
sound_volume_multiplier: 1.0
synthesize_using_webhook: false
webhook_id: "synthesize-assist-microphone-response"
synthesize_script: "/usr/src/scripts/synthesize.sh"
debug_logging: false
schema:
awake_wav: str
@@ -36,6 +39,9 @@
auto_gain: int
mic_volume_multiplier: float
sound_volume_multiplier: float
synthesize_using_webhook: bool
webhook_id: str
synthesize_script: str
debug_logging: bool
audio: true
homeassistant: 2023.12.1
@@ -13,6 +13,10 @@ if bashio::config.true 'sound_enabled'; then
extra_args+=('--snd-command' 'aplay -r 16000 -c 1 -f S16_LE -t raw')
fi

if bashio::config.true 'synthesize_using_webhook'; then
extra_args+=("--synthesize-command" "$(bashio::config 'synthesize_script')")
fi

exec python3 -m wyoming_satellite \
--name 'assist microphone' \
--uri 'tcp://0.0.0.0:10700' \
77 changes: 77 additions & 0 deletions assist_microphone/scripts/synthesize.sh
@@ -0,0 +1,77 @@
#!/command/with-contenv bashio
# vim: ft=bash
# shellcheck shell=bash
# ==============================================================================

###
# This script is used to send text to a Home Assistant webhook.
#
# It is intended to be used within the context of a wyoming-satellite
# --synthesize-command when text-to-speech text is returned on stdin.
#
# Author: https://github.com/AlfredJKwack
###

set -e

# Take text on stdin and JSON-encode it
text="$(cat | jq -R -s '.')"

# Set the default webhook name if not set in the configuration
if bashio::var.has_value "$(bashio::config 'webhook_id')"; then
webhook_id="$(bashio::config 'webhook_id')"
else
bashio::log.warning "webhook_id is not set. Will set to default"
webhook_id="synthesize-assist-microphone-response"
fi

# Check if SUPERVISOR_TOKEN is set
if [ -z "$SUPERVISOR_TOKEN" ]; then
bashio::log.error "SUPERVISOR_TOKEN is not set. Exiting."
exit 1
fi

# Get the IPv4 address from the first Home Assistant interface
ha_ip=$(curl -s -X GET \
-H "Authorization: Bearer $SUPERVISOR_TOKEN" \
http://supervisor/network/info \
| jq -r '.data.interfaces[0].ipv4.address[0]' \
| cut -d'/' -f1)
if [ -z "$ha_ip" ]; then
bashio::log.error "Failed to get Home Assistant IPv4 address."
exit 1
fi

# Determine if the HA host has SSL enabled.
ssl_enabled=$(curl -s -X GET \
-H "Authorization: Bearer $SUPERVISOR_TOKEN" \
http://supervisor/homeassistant/info \
| jq -r '.data.ssl')
if [ -z "$ssl_enabled" ]; then
bashio::log.error "Failed to determine if SSL is enabled."
exit 1
fi

# Construct webhook URL based on SSL state, IP and webhook
if [[ "$ssl_enabled" == "true" ]]; then
webhookurl="https://${ha_ip}:8123/api/webhook/${webhook_id}"
else
webhookurl="http://${ha_ip}:8123/api/webhook/${webhook_id}"
fi
bashio::log.info "Webhookurl set to : $webhookurl"

# Send the text to the Home Assistant Webhook.
json_payload="{\"response\": ${text}}"
if bashio::config.true 'debug_logging'; then
# Only log the payload in debug mode to avoid leaking potentially sensitive content.
bashio::log.info "Payload for webhook: ${json_payload}"
fi
response=$(curl -s -o /dev/null -w "%{http_code}" -k -X POST \
-H "Content-Type: application/json" \
-d "$json_payload" \
"${webhookurl}")
if [ "$response" -ne 200 ]; then
bashio::log.error "Failed to send text to webhook. HTTP status code: $response"
exit 1
fi
bashio::log.info "Successfully sent text to webhook."
13 changes: 13 additions & 0 deletions assist_microphone/translations/en.yaml
@@ -44,6 +44,19 @@ configuration:
description: >-
Multiply sound output volume by fixed value (1.0 = no change, 2.0 = twice
as loud). 1.0 is the default.
synthesize_using_webhook:
name: Use webhook
description: >-
When text-to-speech text is returned, send it to a webhook.
webhook_id:
name: Webhook ID
description: >-
The ID of the webhook to use.
synthesize_script:
name: Synthesize script
description: >-
Path to the script that does the heavy lifting of sending the text
to the webhook for further automation.
debug_logging:
name: Debug logging
description: >-