diff --git a/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc b/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc
index 9a26f0518..437584aef 100644
--- a/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc
+++ b/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc
@@ -208,7 +208,7 @@
 increases the speed of {infer} requests. The value of this setting must not
 exceed the number of available allocated processors per node.
 You can view the allocation status in {kib} or by using the
-{ref}/get-trained-models-stats.html[get trained model stats API]. If you to
+{ref}/get-trained-models-stats.html[get trained model stats API]. If you want to
 change the number of allocations, you can use the
 {ref}/update-trained-model-deployment.html[update trained model stats API] after
 the allocation status is `started`.
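
For reference (not part of the patch above), a minimal sketch of the two calls the corrected paragraph describes, assuming a hypothetical deployment ID `my-model` and a target of 4 allocations:

[source,console]
----
GET _ml/trained_models/my-model/_stats
----

[source,console]
----
POST _ml/trained_models/my-model/deployment/_update
{
  "number_of_allocations": 4
}
----

The first request reports the allocation status of the deployment; once it is `started`, the second request changes the number of allocations.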