© 2021
Well, so much disagreement, but some overlap:
- “Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems.” 2019.
- “Ethics guidelines for trustworthy artificial intelligence.” 2018. Available: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
- “Microsoft AI principles,” 2019. [Online]. Available: http://tiny.cc/Microsof
From the above, the common concerns are for AI that is accountable, transparent, inclusive, can integrate with human agency, and allows human oversight.
We can optimize for fairness, just like anything else.
You know that learners adjust their internal parameters, e.g.:
- neural nets (gradient descent)
- symbolic rule learning
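To make "learners adjust their internal parameters" concrete, here is a minimal sketch of gradient descent fitting a single weight; the data, learning rate, and iteration count are toy values chosen for illustration.

```python
# Toy learner: fit one internal parameter w so that w*x approximates y.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # underlying relation: y = 2x

w = 0.0      # internal parameter, adjusted by the learner itself
lr = 0.01    # learning rate: a control (hyper)parameter, set from outside
for _ in range(500):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # converges toward 2.0
```

Note the split: `w` is adjusted internally by the learner, while `lr` is a control parameter set from outside; the hyperparameter-optimization work cited below tunes that second kind.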
Tantithamthavorn, et al. “Automated parameter optimization of classification techniques for defect prediction models.” ICSE’16
Cruz, A. F., Saleiro, P., Belém, C., Soares, C., & Bizarro, P. (2021). Promoting Fairness through Hyperparameter Optimization. arXiv:2103.12715.
"Fairness" = Effects of different learner control parameters (fairness = ratio of true positive rates across projected attributes)
Aaarg! so many blue dots.
- Q: how to explore them all?
- A: epsilon domination
The output space of these learners actually "grids" into a small number of chunks.
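One way to see that gridding effect, sketched here with made-up result tuples: if outputs that differ by less than some epsilon are treated as the same (the heart of epsilon domination), many near-identical results collapse into one grid cell.

```python
def cell(point, eps=0.2):
    # map a (metric1, metric2) result to its epsilon grid cell
    return tuple(int(v / eps) for v in point)

# illustrative (recall, false alarm) results from many parameter settings
results = [(0.81, 0.30), (0.83, 0.31), (0.85, 0.35),  # near-identical trio
           (0.55, 0.10), (0.57, 0.12),                # near-identical pair
           (0.95, 0.60)]

cells = {cell(r) for r in results}
print(len(results), "results ->", len(cells), "distinct cells")  # 6 -> 3
```

So rather than exploring every blue dot, it suffices to explore one representative per cell.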
How to explore that grid:
- A1: a funky tabu search. Agrawal, A., Fu, W., Chen, D., Shen, X., & Menzies, T. (2019). How to "DODGE" Complex Software Analytics. IEEE TSE
- A2: something much simpler
An accidental hyperparameter optimizer (with succinct rules)
Does it work for fairness? Let's find out.
Algorithm:
- Generate 16 trees
- Test trees on training data
- Pick the "best" one
- Apply that to test data
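The steps above can be sketched as follows. The slides don't pin down what kind of tree or what "best" means, so this sketch assumes a one-rule threshold "tree" (a decision stump) and scores "best" by training accuracy; any tree learner and scoring function (e.g. the fairness ratio above) could be swapped in.

```python
import random

def stump(threshold):
    """A one-rule 'tree': predict 1 when the feature exceeds the threshold."""
    return lambda x: 1 if x > threshold else 0

def accuracy(model, xs, ys):
    return sum(model(x) == y for x, y in zip(xs, ys)) / len(ys)

# step 1: generate 16 trees, each with a randomly chosen setting
random.seed(1)
trees = [stump(random.uniform(0, 10)) for _ in range(16)]

train_x = [1, 2, 3, 6, 7, 8]    # toy training data
train_y = [0, 0, 0, 1, 1, 1]

# steps 2-3: test the trees on the training data, pick the "best" one
best = max(trees, key=lambda t: accuracy(t, train_x, train_y))

# step 4: apply that tree to the test data
test_x, test_y = [2.5, 7.5], [0, 1]
print(accuracy(best, test_x, test_y))
```

Randomly varying each tree's settings is what makes this an "accidental" hyperparameter optimizer: picking the best of 16 random configurations is itself a crude random search over the parameter space.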