From cd481896bc68f3691a1acf90fd3c9517a7c2e110 Mon Sep 17 00:00:00 2001
From: Ian Roberts
Date: Tue, 7 Sep 2021 18:45:27 +0100
Subject: [PATCH] Be more explicit about how to install the models

---
 README.md        | 2 +-
 docker/README.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 2493daf..a5e56a0 100644
--- a/README.md
+++ b/README.md
@@ -29,7 +29,7 @@ pip
 
 ## Models
 
 1. Download models from [the latest release of this repository](https://github.com/GateNLP/ToxicClassifier/releases/latest) (currently available `kaggle.tar.gz`, `olid.tar.gz`)
-2. Decompress file inside `models/en/`
+2. Decompress file inside `models/en/` (which will create `models/en/kaggle` or `models/en/olid` respectively)
 
 ## Basic Usage
diff --git a/docker/README.md b/docker/README.md
index eb5a0dc..f19a771 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -4,7 +4,7 @@ The toxic and offensive classifiers are deployed on GATE Cloud via a two step pr
 
 ## Building the Python classifier images
 
-The Python-based classifiers for toxic and offensive language can be built using the `./build.sh` script in this directory. The images are pushed to the GitHub container registry:
+The Python-based classifiers for toxic (kaggle dataset) and offensive (olid dataset) language can be built using the `./build.sh` script in this directory. The relevant model files must be downloaded and unpacked in `../models` as described in [the main README](../README.md). The images are pushed to the GitHub container registry:
 
 ```
 TAG=ghcr.io/gatenlp/toxicclassifier/toxic-classifier:latest ./build.sh en/kaggle Toxic
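
For reference, a minimal sketch of the install-and-build flow the updated READMEs describe, run from the root of a ToxicClassifier checkout. The release asset URLs are an assumption based on GitHub's standard "latest release" download pattern; only the final `./build.sh` invocation is taken verbatim from docker/README.md.

```
# Sketch only: asset URLs assume GitHub's latest-release download pattern.
# Fetch and unpack the models into models/en/ (creates models/en/kaggle and models/en/olid).
for model in kaggle olid; do
  curl -L -o "${model}.tar.gz" \
    "https://github.com/GateNLP/ToxicClassifier/releases/latest/download/${model}.tar.gz"
  tar -xzf "${model}.tar.gz" -C models/en/
done

# Build and push the toxic-classifier image (command as given in docker/README.md).
cd docker
TAG=ghcr.io/gatenlp/toxicclassifier/toxic-classifier:latest ./build.sh en/kaggle Toxic
```

An analogous `./build.sh en/olid ...` invocation would presumably build the offensive-language image; the exact second argument for that image is not shown in this patch.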