diff --git a/docs/en/stack/ml/nlp/images/ml-nlp-test-ner.png b/docs/en/stack/ml/nlp/images/ml-nlp-test-ner.png
index 36541436f..e0f187e68 100644
Binary files a/docs/en/stack/ml/nlp/images/ml-nlp-test-ner.png and b/docs/en/stack/ml/nlp/images/ml-nlp-test-ner.png differ
diff --git a/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc b/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc
index 8f7adf10e..06c063f1b 100644
--- a/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc
+++ b/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc
@@ -173,7 +173,7 @@ When you have dedicated deployments for different purposes, you ensure that the
 Having separate deployments for search and ingest mitigates performance issues
 resulting from interactions between the two, which can be hard to diagnose.

 [role="screenshot"]
-image::images/ml-nlp-deployment-id-elser-v2.png["Model deployment on the Trained Models UI."]
+image::images/ml-nlp-deployment-id-elser-v2.png["Model deployment on the Trained Models UI.",width=640]

 Each deployment will be fine-tuned automatically based on its specific purpose you choose.
@@ -231,7 +231,7 @@ The simplest method to test your model against new data is to use the
 field of an existing index in your cluster to test the model:

 [role="screenshot"]
-image::images/ml-nlp-test-ner.png[Testing a sentence with two named entities against a NER trained model in the *{ml}* app]
+image::images/ml-nlp-test-ner.png["Testing a sentence with two named entities against a NER trained model in the *{ml}* app"]

 Alternatively, you can use the {ref}/infer-trained-model.html[infer trained model API].
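
For context on the dedicated-deployments passage in the first hunk, here is a minimal sketch of creating separate ingest and search deployments of ELSER v2 with the {ref}/start-trained-model-deployment.html[start trained model deployment API]. The `elser_ingest` and `elser_search` deployment IDs are illustrative placeholders, not values from this change:

[source,console]
----
// Start one deployment of ELSER v2 dedicated to ingest workloads
POST _ml/trained_models/.elser_model_2/deployment/_start?deployment_id=elser_ingest

// Start a second deployment of the same model dedicated to search workloads
POST _ml/trained_models/.elser_model_2/deployment/_start?deployment_id=elser_search
----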
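Similarly, the infer trained model API mentioned in the last hunk can be exercised with a request like the following sketch; the `my_ner_model` deployment ID and the example sentence (containing two named entities, matching the screenshot caption) are assumptions for illustration:

[source,console]
----
// Run inference against a deployed NER model; the response lists
// the entities recognized in the supplied text
POST _ml/trained_models/my_ner_model/_infer
{
  "docs": [
    {
      "text_field": "Sarah drove to Berlin."
    }
  ]
}
----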