NLG Server is an example of a simple, Rasa-compatible NLG server that exposes static responses loaded from a JSON file through a REST API.
Because it does not perform Natural Language Generation itself, its purpose is currently limited to separating the responses from your Rasa model, so that responses can be changed without retraining and uploading a new model.
The exposed API follows the format defined by Rasa. See Rasa NLG Servers.
The NLG server loads responses from a given JSON file or from a directory containing multiple response JSON files. Nested directories are also supported.
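The loading behavior can be sketched in Python (a sketch only; the actual server is not written in Python, and merging all files into one flat response map is an assumption about its behavior):

```python
import json
from pathlib import Path

def load_responses(path):
    """Merge all JSON response files under `path` (a single file or a
    directory, including nested subdirectories) into one flat dict."""
    path = Path(path)
    files = [path] if path.is_file() else sorted(path.rglob("*.json"))
    responses = {}
    for f in files:
        # Later files override earlier ones on duplicate response names
        responses.update(json.loads(f.read_text()))
    return responses
```

Note that with this merge strategy, duplicate response names across files silently override each other; how the real server handles duplicates is not specified here.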
- Load static response JSON files (single files or from a directory)
- Use templates in your responses in the form `{slotName}` or `$slotName` (see the examples section)
- Randomly choose a response from a set of given response variations
- Can be used with Rasa (Last tested with Rasa 3.2.1)
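The two template forms correspond to Python's built-in formatting syntaxes, which makes them easy to illustrate (a sketch of the substitution behavior only; the server implements this in its own code):

```python
from string import Template

slots = {"slotName": "42"}

# {slotName} form: brace placeholders, as in str.format
filled_braces = "Value: {slotName}".format(**slots)

# $slotName form: dollar placeholders, as in string.Template
filled_dollar = Template("Value: $slotName").substitute(slots)
```

Both forms produce the same filled-in text for the same slot values.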
```
responses=<path to your JSON response file or to a directory of multiple response files>
```
The responses have to be defined as JSON in the following format:
```json
{
  "response_name": "This is an example response.",
  "example_response": "Test 1 2 3.",
  "response_with_variations": [
    "first_variation",
    "second_variation"
  ]
}
```
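The variation selection described above can be sketched as follows (an assumption about the behavior: a plain string is returned as-is, while a list triggers a random choice):

```python
import random

def pick(response):
    """Return the response text; choose randomly among variations."""
    if isinstance(response, list):
        return random.choice(response)
    return response
```

For example, `pick(responses["response_with_variations"])` would return either `"first_variation"` or `"second_variation"`.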
The endpoint is exposed at:

```
address:port/nlg
```

The default port is 8080.
A request to the server has the following format:

```json
{
  "response": "example2",
  "arguments": {},
  "tracker": {}
}
```
The server answers with the resolved response text:

```json
{
  "text": "Some text"
}
```
For example, given the following response file:

```json
{
  "example": "This is an example response.",
  "test": "Test 1 2 3.",
  "utter_template": "Your number is {number}."
}
```
the request

```json
{
  "response": "utter_template",
  "tracker": {
    "sender_id": "user_42",
    "slots": {
      "number": "80"
    }
  }
}
```
yields the response

```json
{
  "text": "Your number is 80."
}
```
To build the NLG server as a Docker container, you can use the provided Dockerfile as an example. Replace the responses folder with your own folder name:
```shell
docker build -t nlg-server .
```
The built container can be run (exposed on port 8082) with the following command, or you can add it to a docker-compose file:
```shell
docker run -p 8082:8080 nlg-server
```
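When adding it to a docker-compose file instead, the service entry might look like this (service and image names are assumptions and should match your build):

```yaml
services:
  nlg-server:
    image: nlg-server
    ports:
      - "8082:8080"
```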
The POJOs for the Rasa requests and responses have been generated with https://www.jsonschema2pojo.org/.