Objectivity can be challenging. Machines are not shielded from this by any means.
As humans, we leverage technology more and more in our day-to-day existence, making our lives easier until it becomes an extension of ourselves. Yet in trying to simplify things, complexity creeps in (ironic, I know!), and tools created for good are often turned against the very principles they were built to serve.
Photo by Pavel Danilyuk from Pexels
Therefore, even though the creators of this technology usually have the best intentions, it is understandable that model predictions become susceptible to bias.
Those in charge of building the models need to be aware of the common human biases that will find their way into the data used, allowing them to take proactive mitigation steps.
To help remove those biases, we need structure: a framework that allows us to build AI systems ethically, in a way that benefits our communities. AI ethics is that “set of guidelines that advise on the design and outcomes of Artificial Intelligence (AI)”.
AI and bias
Algorithm bias “describes systematic and repeatable errors in a computer system that create unfair outcomes” - Wikipedia.
Paying close attention to the algorithms, and the data, selected for the implementation of solution “X” allows us to identify, prevent and/or mitigate real concerns, such as systematic and unfair discrimination.
AI bias can arise from cognitive biases: unconscious errors in thinking that lead us to misinterpret information from the world around us, affecting the rationality and accuracy of decisions and judgements.
The teams in charge of training the models can ingest those biases through the data they collect.
Note: training a model on incomplete data is another potential source of AI bias.
AI bias types in ML
Here is a group of bias types that can be found in data (see the Google Developers link in the References section for more details and examples):
- Reporting bias
- Automation bias
- Selection bias
- Group Attribution bias
- Implicit bias
Reporting bias
When the frequency of events, properties, and/or outcomes captured in a data set does not accurately reflect their real-world frequency.
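As a minimal sketch of what this looks like in practice, the snippet below compares the rating frequencies in a small, hypothetical set of scraped reviews against an assumed real-world distribution (all numbers here are made up for illustration). People tend to report extreme experiences far more often than unremarkable ones, so the scraped data over-represents 1s and 5s:

```python
from collections import Counter

# Hypothetical star ratings scraped from online reviews.
scraped_ratings = [1, 1, 1, 5, 5, 5, 5, 5, 3, 5, 1, 5]

# Assumed real-world distribution of customer experiences (illustrative only).
real_world = {1: 0.10, 2: 0.15, 3: 0.40, 4: 0.20, 5: 0.15}

counts = Counter(scraped_ratings)
n = len(scraped_ratings)
for rating in sorted(real_world):
    observed = counts[rating] / n
    expected = real_world[rating]
    # Flag ratings whose observed share strays far from the expected share.
    flag = "  <-- misrepresented" if abs(observed - expected) > 0.15 else ""
    print(f"rating {rating}: observed {observed:.2f} vs expected {expected:.2f}{flag}")
```

A check like this, run against whatever ground-truth estimate is available, is one simple way to spot reporting bias before the data reaches a model.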
Automation bias
A tendency to favour results generated by automated systems over those generated by non-automated systems.
Selection bias
Occurs if a data set's examples are chosen in a way that is not reflective of their real-world distribution.
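One lightweight guard against this is to compare the group shares in a collected training set with the shares in the target population. The sketch below uses hypothetical group names and an arbitrary 10% threshold purely for illustration:

```python
from collections import Counter

def distribution(labels):
    """Return each label's share of the total."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# Hypothetical population shares vs a collected training sample.
population = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
training_sample = ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5

sample_dist = distribution(training_sample)
for group, expected in population.items():
    observed = sample_dist.get(group, 0.0)
    if abs(observed - expected) > 0.10:  # simple, arbitrary threshold
        print(f"{group}: sample {observed:.2f} vs population {expected:.2f} -> check sampling")
```

Here the sample over-represents group_a and nearly drops group_c, which is exactly the kind of mismatch that produces models performing poorly on under-sampled groups.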
Group Attribution bias
A tendency to generalize what is true of individuals to an entire group to which they belong.
Implicit bias
Occurs when assumptions are made based on one's own mental models and personal experiences that do not necessarily apply more generally.
References
- Google Developers > Machine learning
- Wikipedia > Algorithmic bias
- IBM > AI ethics
- Simply Psychology
- AI Multiple > Example AI bias