Great op-ed on how AI ethics has a real impact on society.
https://www.nature.com/articles/s42256-020-0195-0
Why does this matter?
How machine learning is used is usually only understood when something terrible (and very public) happens. That doesn't necessarily mean misuse. Sometimes it just means a lack of understanding of the implications, long- or short-term, including side effects.
For example, police body cameras are now being pushed, yet those same cameras were often turned off during protests. Why? To protect the identities of the protesters, so that law enforcement couldn't track them down or compile a list of 'undesirables', as you often find in states run by secret police.
But now, people, including protesters, want the cameras turned on during protests, as a way of showing who is at fault when incidents occur (often clashes with police, or accusations of police brutality or inaction).
That is the heart of any ethical dilemma: how might a technology be used or misused? Sometimes use is not misuse, and the definition of misuse shifts with different circumstances and different objectives.
Now add machine learning (AI/ML) into this: automatic recognition of faces in videos, cataloging people, looking for outliers (like criminal behavior), and making generalizations based on that data. The hope is protecting people and property, or tagging criminals (murderers, rapists, arsonists, burglars), but at the cost of privacy and of potential generalizations based on factors like economic status, protest attendance, or crowd behavior.
Can you see the potential issues here? It's not clear cut, is it?
That's why thinking through the possible issues and misuses, in other words ethics, matters.
Read the article; it's worth thinking about.
Cheers!