EU AI Act Engineering

This repository provides a reference list for the emerging term "AI Act Engineering".

The EU AI Act regulates the development, deployment, and use of AI systems within the European Union. It aims to promote trustworthy AI by mitigating risks and ensuring that fundamental rights are protected.

We define the term "AI Act Engineering" as the set of engineering practices, processes, and methodologies needed to develop and deploy AI systems that comply with the requirements of the European Union (EU) AI Act.

Engineering for EU AI Act compliance involves the following aspects:

  • Risk Assessment: Classifying AI systems based on their risk level (unacceptable, high, limited, low) as defined by the EU AI Act (see the risk-tier sketch after this list).
  • Mitigating High Risks: For high-risk systems, implementing safeguards such as robust data management, human oversight mechanisms, and explainable AI techniques. High-risk systems also require thorough testing and validation to ensure they meet safety, accuracy, and reliability standards, including documented risk assessments, mitigation strategies, and quality control measures (see the quality-gate sketch below).
  • Documentation and Transparency: Documenting the AI system's development process and ensuring a level of transparency appropriate to its risk level. Developers need to maintain detailed records of training data, algorithms, and processes to meet transparency requirements; this documentation is essential for audits and for explaining AI system decisions when necessary (see the model-card sketch below).
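
As a minimal illustration of the first bullet, the risk tiers can be encoded directly in code. The `RiskTier` enum, `AISystemProfile`, and `classify_system` below are hypothetical sketches, not official tooling; a real classification must follow the criteria of the Act (e.g. the Annex III use cases) and methods such as the one listed under "Risk Assessment" below.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Risk tiers described in the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # Annex III use cases, strict obligations
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    LOW = "low"                    # minimal obligations


@dataclass
class AISystemProfile:
    """Illustrative description of an AI system under assessment."""
    name: str
    intended_purpose: str
    is_prohibited_practice: bool   # e.g. social scoring
    is_annex_iii_use_case: bool    # e.g. employment, credit scoring
    interacts_with_humans: bool


def classify_system(profile: AISystemProfile) -> RiskTier:
    """Toy rule-of-thumb classification; real assessments need legal review."""
    if profile.is_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if profile.is_annex_iii_use_case:
        return RiskTier.HIGH
    if profile.interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.LOW


if __name__ == "__main__":
    cv_screener = AISystemProfile(
        name="cv-screening-model",
        intended_purpose="rank job applicants",
        is_prohibited_practice=False,
        is_annex_iii_use_case=True,   # employment is an Annex III area
        interacts_with_humans=False,
    )
    print(classify_system(cv_screener))  # RiskTier.HIGH
```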
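
The testing-and-validation point in the second bullet can be treated as a release gate: the model ships only if documented quality thresholds are met. The metric names and threshold values below are placeholders invented for this sketch; the Act requires providers to define and document appropriate levels themselves.

```python
# Hypothetical pre-deployment quality gate for a high-risk system.
# Thresholds are placeholders, not values prescribed by the EU AI Act.

REQUIRED_METRICS = {
    "accuracy": 0.90,
    "robustness_under_noise": 0.85,
    "subgroup_accuracy_gap_max": 0.05,  # fairness-style check, upper bound
}


def passes_quality_gate(measured: dict[str, float]) -> bool:
    """Return True only if every documented threshold is satisfied."""
    checks = [
        measured["accuracy"] >= REQUIRED_METRICS["accuracy"],
        measured["robustness_under_noise"] >= REQUIRED_METRICS["robustness_under_noise"],
        measured["subgroup_accuracy_gap_max"] <= REQUIRED_METRICS["subgroup_accuracy_gap_max"],
    ]
    return all(checks)


if __name__ == "__main__":
    evaluation = {
        "accuracy": 0.93,
        "robustness_under_noise": 0.88,
        "subgroup_accuracy_gap_max": 0.03,
    }
    assert passes_quality_gate(evaluation), "Block deployment and document the gap"
    print("Quality gate passed; record results in the technical documentation.")
```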
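
For the documentation bullet, a lightweight record in the spirit of the "Model Cards for Model Reporting" paper listed below can be kept and versioned alongside the model. This dataclass is a hypothetical sketch of such a record, not the API of the Model Card Toolkit.

```python
import json
from dataclasses import dataclass, field, asdict


@dataclass
class ModelCard:
    """Minimal model-card-style record for audits and transparency requests."""
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list[str] = field(default_factory=list)
    evaluation_results: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)
    human_oversight_measures: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


card = ModelCard(
    model_name="cv-screening-model",
    version="1.2.0",
    intended_use="Rank job applications for human review, not automated rejection.",
    training_data_sources=["internal-hr-dataset-2023 (anonymized)"],
    evaluation_results={"accuracy": 0.93, "subgroup_accuracy_gap": 0.03},
    known_limitations=["Not validated for non-EU labour markets."],
    human_oversight_measures=["Recruiter reviews every ranked shortlist."],
)
print(card.to_json())
```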

Reading List

  1. The AI Engineer's Guide to Surviving the EU AI Act, by Larysa Visengeriyeva
  2. A Machine Learning Engineer’s Guide To The AI Act

Risk Assessment

  1. Method for (AI System) Risk Classification

Mitigating High Risks

  1. AI safety
  2. AI Trust Lab: Engineering for Trustworthy AI (CMU)
  3. ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)
  4. Guidelines for secure AI system development

Documentation and Transparency

  1. Model Cards for Model Reporting (paper)
  2. Model Card Toolkit

AI Act Engineering Tooling Landscape

  1. The LF AI & Data Foundation
  2. GenAI Infra Stack
  3. GenAI App Stack
  4. AI Infrastructure Stack
  5. The 2024 MAD (Machine Learning, AI and Data) Landscape
  6. The State of Data Engineering
  7. ML Testing Landscape

AI Engineering

  1. LLM engineer handbook