This is a dataset for the Home Assistant LLM API (blog post). See the `home-assistant-datasets assist` command for more details on how to run evaluations.
This section contains information about the dataset. You can use the existing definitions or define your own.
The evaluation sets up a synthetic home inventory file, which is used to put the devices in the expected state and to test the conversation agent against them. You can either use existing synthetic homes from the common datasets or create your own by hand. You may want to choose based on the device types you want to exercise.
The inventory should be placed in `_fixtures.yaml` in the dataset subdirectory.
A synthetic home may include a broader set of devices in the house than what is specifically being tested, in order to make it more realistic. This also adds useful distractors for evaluating the model, such as additional devices not under test.
See synthetic_home for more about an inventory definition or home-assistant-synthetic-home for the custom component.
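For illustration only, a hand-written inventory might look something like the sketch below. The field names, areas, and device types here are assumptions rather than the exact schema; see synthetic_home above for the authoritative format.

```yaml
# _fixtures.yaml -- illustrative sketch only; the field names and device
# types below are assumptions, see synthetic_home for the actual schema.
name: Small Apartment
devices:
  Kitchen:
    - name: Kitchen Light
      device_type: light-dimmable
  Living Room:
    - name: Smart Speaker   # distractor device not under test
      device_type: smart-speaker
```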
See `home_assistant_datasets/tool/assist/data_model.py` for the definition of the dataclasses that hold the evaluation data. This includes information about the words to say to the conversation agent, and the relevant devices and their state used when testing.
Property | Description |
---|---|
`category` | A category used for slicing performance statistics |
`tests.sentences` | A list of conversation inputs e.g. _Please turn on the light_ |
`tests.setup` | Initial inventory entity state to set up in addition to the device fixture |
`tests.expect_changes` | Differences in inventory entity states that are expected to change during the test |
`tests.ignore_changes` | Entity attributes (or the keyword `state`) to ignore when computing changes |
These dataclasses are populated based on yaml files in the dataset subdirectories.
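As a rough sketch of what one of these yaml files might contain (the entity ids, states, and exact nesting below are illustrative assumptions; `data_model.py` is the authoritative reference):

```yaml
# Illustrative sketch only: entity ids, states, and nesting are assumptions.
category: light
tests:
  - sentences:
      - Please turn on the kitchen light
    setup:
      state:
        light.kitchen: "off"
    expect_changes:
      state:
        light.kitchen:
          state: "on"
```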
Now that the data collection tasks for the eval are defined, we want to decide which parameters to adjust. This includes things like the model to use, the prompt, etc.
Models are configured in the `models.yaml` file in the root of this repository. These are configuration entries for Home Assistant integrations. Here is an example `models.yaml`:
```yaml
models:
  - model_id: assistant
    domain: homeassistant

  - model_id: gemini-1.5-flash
    domain: google_generative_ai_conversation
    config_entry_data:
      api_key: XXXXXXXXXXXX
    config_entry_options:
      chat_model: models/gemini-1.5-flash-latest
      llm_hass_api: assist

  - model_id: gpt-4o
    domain: openai_conversation
    config_entry_data:
      api_key: sk-XXXXXXXXXXXX
    config_entry_options:
      chat_model: gpt-4o
      llm_hass_api: assist

  # Update when ollama supports tool calling
  - model_id: llama3
    domain: ollama
    config_entry_data:
      url: http://ollama.ollama:11434/
      model: llama3:latest

  - model_id: mistral-7b-instruct
    domain: vicuna_conversation
    config_entry_data:
      api_key: sk-0000000000000000000
      base_url: http://llama-cublas.llama:8000/v1
    config_entry_options:
      llm_hass_api: assist
```
Create a virtual environment:
```bash
$ python3 -m venv venv
$ source venv/bin/activate
$ pip3 install -r requirements_dev.txt
$ pip3 install -r requirements_eval.txt
```
By default, the above will use a somewhat recent version of Home Assistant, but if you want to use one from a local environment you can install it instead:
```bash
$ pip3 install -e /workspaces/core
```
You will also need the synthetic-home custom component. You can either point to it in a separate directory like this:
```bash
$ export PYTHONPATH="${PYTHONPATH}:/workspaces/home-assistant-synthetic-home/"
```
Or use a `custom_components` directory in the current directory if you have multiple custom components you want to evaluate:
```bash
$ export PYTHONPATH="${PYTHONPATH}:${PWD}"
```
You can collect data from the API using the `home-assistant-datasets assist collect` command, which uses pytest as the underlying framework.
```bash
$ DATASET="datasets/assist/"
$ OUTPUT_DIR="reports/assist/2024.8.0b"  # Output based on home assistant version used
$ MODEL=llama3.1
$ home-assistant-datasets assist collect --models=${MODEL} --dataset=${DATASET} --model_output_dir=${OUTPUT_DIR}
```
See `home-assistant-datasets assist collect --help` for options to control pytest.
Once you have collected data from the model, you can perform a manual or offline evaluation of the results in the model output directory.
```bash
$ home-assistant-datasets assist eval --model_output_dir=${OUTPUT_DIR} --output_type=report
```
You can export the results into a spreadsheet or perform other analysis. In this example, the assistant pipeline successfully handled around 51% of queries.
```bash
$ home-assistant-datasets assist eval --model_output_dir=${OUTPUT_DIR} --output_type=csv > ${OUTPUT_DIR}/report.csv
$ wc -l report.csv
113 report.csv
$ grep Good report.csv | wc -l
58
$ python -c "print(58/113)"
0.5132743362831859
```
See Annotations for details on how to systematically run human annotations of the output. You can review the outputs manually if the tasks do not support exhaustive scoring.