[community] The unique issue of AI equity and fairness
Jutta Treviranus
jtreviranus at ocadu.ca
Wed Apr 14 13:58:39 UTC 2021
As many of you may be aware, in our We Count project we have been concerned with how AI and automated decisions treat people who are highly unique and are therefore outliers or tiny minorities from a data perspective.
As automated decision systems are deployed in more and more applications, AI ethics is receiving considerable attention, but the current measures for detecting bias only address data gaps and the reflection of human bias in algorithms. They do not address bias against small minorities and outliers, and most people with disabilities fall into these categories. From a data perspective, the only common characteristic of disability is being far enough from the norm or average that things are not designed for you. Even with full proportional representation in the data, and even with all human bias removed, machine learning and decisions based on statistical probability will still be biased against small minorities and outliers, because a model that optimizes aggregate accuracy is rewarded for fitting the majority pattern.
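To make this concrete, here is a minimal sketch with synthetic data (numpy and scikit-learn are assumed, and all names and numbers are illustrative, not from our research). A small group is fully, proportionally represented, yet a model minimizing average error still fails it:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 95% "typical" cases, 5% outlier cases: full proportional representation.
n_major, n_minor = 9500, 500
X_major = rng.normal(size=(n_major, 1))
y_major = (X_major[:, 0] > 0).astype(int)     # majority pattern: label 1 when x > 0
X_minor = rng.normal(size=(n_minor, 1))
y_minor = (X_minor[:, 0] <= 0).astype(int)    # outlier pattern: the reverse

X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])

model = LogisticRegression().fit(X, y)        # minimizes average loss

print("overall accuracy: ", model.score(X, y))              # ~0.95: looks fine in aggregate
print("majority accuracy:", model.score(X_major, y_major))  # ~1.00
print("outlier accuracy: ", model.score(X_minor, y_minor))  # ~0.00

An aggregate check of this model reports roughly 95% accuracy with no data gap and no human bias in the pipeline, yet the outlier group is almost always misclassified.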
Bias against outliers becomes a greater issue as auditing systems are used to certify automated decision systems as bias free. An industry of AI bias auditing tools is emerging that claims to detect bias and will certify systems as bias free. This is misleading: the systems will not be bias free as far as people with disabilities are concerned.
We will be collaborating with Julia Stoyanovich in an effort to raise awareness of the problem. She is doing similar research and has created great comic books and courses on the topic:
https://dataresponsibly.github.io/comics/vol2/fairness_en.pdf
https://p2pu.github.io/we-are-ai/
We hope to alert people to this complex issue. There is no easy fix for AI bias against small minorities and outliers.
Jutta
Director & Professor
Inclusive Design Research Centre
OCAD University