Responsible Artificial Intelligence #127
AndreaGriffiths11 started this conversation in Talks in English
Speakers: Luis Beltran & Carla Mamani
"When we talk about AI, we usually mean a machine learning model that is used within a system to automate something. For example, an autonomous car can take images from sensors. An ML model can use these images to make predictions (example: the object in front of us is a tree). The car uses these predictions to make decisions (example: turn left to avoid the tree). We refer to this whole system as Artificial Intelligence.
This is just one example. AI can be used for anything from underwriting insurance to cancer screening. The defining characteristic is that there is little or no direct human participation in the decisions the system makes. This can lead to many potential problems, so companies must define a clear approach to the use of AI. Responsible AI is a governance framework meant to do just that.
Responsible AI can include details about what data can be collected and used, how models should be evaluated, and how best to deploy and monitor models. It can also define who is accountable for the negative outcomes of AI. Some frameworks define specific approaches; others are left more open to interpretation. They all seek to achieve the same thing: create AI systems that are interpretable, fair, secure, and respectful of user privacy.
The objective of this session is to discuss the responsible use of Artificial Intelligence in building fair, equitable, and explainable machine learning models."
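
To make the predict-then-decide pipeline from the abstract concrete, here is a minimal sketch of that structure. The `Detection` type, the labels, and the steering logic are hypothetical stand-ins for illustration, not code from the talk or from any real driving stack:

```python
# A minimal sketch of the predict -> decide pipeline described above.
# All names here are hypothetical stand-ins for illustration.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # e.g. "tree", "pedestrian", "clear"
    confidence: float  # model's score in [0, 1]


def predict(sensor_image) -> Detection:
    """Stand-in for an ML model: maps a sensor image to a prediction."""
    # A real system would run a trained vision model here.
    return Detection(label="tree", confidence=0.97)


def decide(detection: Detection) -> str:
    """Stand-in for the decision layer: maps predictions to actions."""
    if detection.label == "tree" and detection.confidence > 0.9:
        return "turn_left"  # steer around the obstacle
    return "continue"


action = decide(predict(sensor_image=None))
print(action)  # "turn_left"
```

The point of separating `predict` from `decide` is the same one the abstract makes: the ML model only produces predictions, and it is the surrounding system acting on them that makes the whole thing "AI".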
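The abstract also mentions evaluating models for fairness. One common check (an assumption here; the speakers do not name a specific metric) is demographic parity, which compares positive-prediction rates across groups. A minimal sketch with toy data:

```python
# Demographic parity: compare the rate of positive predictions per group.
# The metric choice and the example data are assumptions for illustration.
from collections import defaultdict


def demographic_parity(predictions, groups):
    """Return the positive-prediction rate for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}


# Toy example: a model approving applications for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity(preds, groups))
# {'A': 0.75, 'B': 0.25}: a large gap flags possible bias to investigate
```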