Paper: Caution for the Environment: Multimodal Agents are Susceptible to Environmental Distractions [PDF].
Sept 2024
This paper investigates the faithfulness of multimodal large language model (MLLM) agents in graphical user interface (GUI) environments, addressing the research question of whether multimodal GUI agents can be distracted by environmental context. A general setting is proposed in which both the user and the agent are benign, and the environment, while not malicious, contains unrelated content. A wide range of MLLMs are evaluated as GUI agents on our simulated dataset, following three working patterns with different levels of perception. Experimental results reveal that even the most powerful models, whether generalist agents or specialist GUI agents, are susceptible to distractions. While recent studies predominantly focus on the helpfulness (i.e., action accuracy) of multimodal agents, our findings indicate that these agents are prone to environmental distractions, resulting in unfaithful behavior. Furthermore, we switch to the adversarial perspective and implement environment injection, demonstrating that such unfaithfulness can be exploited, leading to unexpected risks.
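As a minimal illustration of the environment-injection idea described above, one could insert an unrelated distractor element into a page's HTML before it is rendered for the agent. This is a hypothetical sketch for intuition only; the function name, page content, and injection strategy are illustrative assumptions, not the paper's actual implementation.

```python
def inject_distractor(html: str, distractor: str) -> str:
    """Insert a distractor snippet just before </body>, if present.

    Hypothetical helper: the real pipeline may rewrite the page
    templates directly (see web_data/phone_website/index_changed.html).
    """
    marker = "</body>"
    if marker in html:
        # Inject only at the first closing-body tag.
        return html.replace(marker, distractor + marker, 1)
    return html + distractor  # fall back to appending at the end

# Illustrative page and distractor (not taken from the dataset).
page = "<html><body><button id='checkout'>Checkout</button></body></html>"
popup = "<div class='popup'>Limited-time offer! Click here!</div>"
injected = inject_distractor(page, popup)
```

A benign task ("click Checkout") is then issued against the injected page, and the agent's action is checked for whether it clicks the task-relevant button or the injected distractor.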
Many thanks to phone_website, restaurant_website, Serper, amazon-reviews.
April 2024
- Cases: cases_images; HTML code of the cases: web_data/phone_website/index_changed.html
- Baseline for annotation: annotation.py
- Output directory for annotated samples: web_data/output_data
- Output directory for experiment results on the annotated samples: web_data/expr_results
- HTML template examples (used for rewriting)