A curated list of awesome publications and researchers on AI testing, updated and maintained by the Intelligent System Security (IS2) Lab.
👏 What is AI Testing? AI Testing refers to the process of evaluating and verifying the performance of artificial intelligence (AI) systems. It involves testing AI models, algorithms, and systems to ensure that they function correctly, produce accurate results, and are reliable in their decision-making processes.
👏 How does AI Testing work? AI Testing can be carried out in several ways, such as functional testing, performance testing, security testing, usability testing, and more. It also involves creating test cases and data sets, evaluating the accuracy of training data, validating models against real-world scenarios, and monitoring the performance of AI systems in production; a minimal sketch of such checks follows this overview.
👏 What is the goal of AI Testing? The goal of AI testing is to identify and fix any errors, biases, or vulnerabilities in AI systems, ensuring that they meet the required standards and perform optimally in different environments. This is crucial for applications such as autonomous vehicles, medical diagnosis, and financial forecasting, where accuracy and reliability are essential.
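For illustration, the snippet below sketches two of the checks mentioned above: a functional accuracy test on held-out data and a simple robustness check under small input perturbations, using a scikit-learn classifier as a stand-in for the system under test. The dataset, thresholds, and function names are assumptions made for this example, not part of any listed publication.

```python
# A minimal, illustrative sketch of two common AI-testing checks, assuming a
# scikit-learn classifier; the thresholds and helper names are hypothetical
# and would be tuned per project.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def test_functional_accuracy(model, X_test, y_test, min_accuracy=0.9):
    # Functional testing: the model must meet a required accuracy standard
    # on held-out data it never saw during training.
    accuracy = accuracy_score(y_test, model.predict(X_test))
    assert accuracy >= min_accuracy, f"accuracy {accuracy:.2f} < {min_accuracy}"


def test_perturbation_robustness(model, X_test, noise_scale=0.01, seed=0):
    # Robustness check: predictions should not flip under tiny input noise,
    # a simple stand-in for validating behaviour on real-world variation.
    rng = np.random.default_rng(seed)
    noisy = X_test + rng.normal(0.0, noise_scale, size=X_test.shape)
    flipped = np.mean(model.predict(X_test) != model.predict(noisy))
    assert flipped < 0.05, f"{flipped:.1%} of predictions changed under noise"


if __name__ == "__main__":
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    test_functional_accuracy(model, X_test, y_test)
    test_perturbation_robustness(model, X_test)
    print("all checks passed")
```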
- Information Security
CCS | S&P | USENIX | NDSS
- Software Engineering
ICSE | ASE | ISSTA | FSE
- [IEEE Access 2023] Toward Deep-Learning-Based Methods in Image Forgery Detection: A Survey
- [ACM Computing Surveys 2023] A Comprehensive Survey on Poisoning Attacks and Countermeasures in Machine Learning
- [Neurocomputing 2023] Adversarial examples based on object detection tasks: A survey
- [ICSE 2021] What Are We Really Testing in Mutation Testing for Machine Learning? A Critical Reflection
- [ICSE 2022] CARE: Causality-based Neural Network Repair
- [Information and Software Technology 2023] A probabilistic framework for mutation testing in deep neural networks
- [Journal of Systems and Software 2023] The language mutation problem: Leveraging language product lines for mutation testing of interpreters
- ISSTA '22: ACM SIGSOFT International Symposium on Software Testing and Analysis
- ICSE '22: International Conference on Software Engineering
- S&P '22: IEEE Symposium on Security and Privacy
- USENIX '22: USENIX Security Symposium
- CCS '22: ACM SIGSAC Conference on Computer and Communications Security
- NDSS '22: Annual Network and Distributed System Security Symposium
- ASE '22: IEEE/ACM International Conference on Automated Software Engineering
This section collects links to notable recent AI systems and models for researchers to use and study.
ChatGPT (OpenAI, November 30, 2022)
ChatGPT is a model that interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
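As a rough illustration of the dialogue format described above, the sketch below passes earlier turns back to the model so a follow-up question is answered in context. It assumes the official `openai` Python client with an API key in the environment; the model name and prompts are placeholders, not part of the original announcement.

```python
# A minimal sketch of multi-turn use of a chat model via the official
# openai Python client (assumes OPENAI_API_KEY is set and the `openai`
# package is installed; the model name below is a placeholder).
from openai import OpenAI

client = OpenAI()

# The dialogue format: prior turns are sent back with each request so the
# model can answer follow-up questions in context.
messages = [{"role": "user", "content": "What is mutation testing?"}]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
answer = reply.choices[0].message.content
print(answer)

# A follow-up question that only makes sense given the earlier turn.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "How would that apply to a neural network?"},
]
follow_up = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(follow_up.choices[0].message.content)
```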