
AI bias


Objectivity can be challenging, and machines are by no means shielded from this.

As humans, we leverage technology to make our lives easier, and it has become an extension of ourselves in our day-to-day existence. Ironically, in trying to simplify things we let complexity creep in, and something created for good can often be turned against the very principles it was built upon in the first place.

 

Photo by Pavel Danilyuk from Pexels

Therefore, even when we, the creators of this technology, have the best intentions, it is understandable that model predictions become susceptible to bias.

Those in charge of building the models need to be aware of the common human biases that can find their way into the data used, so they can take proactive mitigation steps.

To help remove those biases, we need structure: a framework that allows us to build AI systems in an ethical manner that benefits our communities. AI ethics is that “set of guidelines that advise on the design and outcomes of Artificial Intelligence (AI)”.

AI and bias

Algorithmic bias “describes systematic and repeatable errors in a computer system that create unfair outcomes” (Wikipedia).

Paying close attention to the algorithms, and the data, selected for the implementation of a given solution allows us to identify, prevent and/or mitigate real concerns, such as systematic and unfair discrimination.

AI bias can arise from cognitive biases: unconscious errors in thinking that lead you to misinterpret information from the world around you, affecting the rationality and accuracy of decisions and judgement.

The teams in charge of training the models can ingest those biases through the data collected.
Note: using incomplete data for model training is another potential way to end up with AI bias as your outcome.

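As a minimal sketch of what that means in practice, the snippet below checks training data for completeness before modelling. The column names and the 20% missingness threshold are illustrative assumptions, not a prescribed standard:

```python
# Minimal sketch: flag incomplete training data before modelling.
# Column names and the 20% missingness threshold are illustrative assumptions.
def missing_ratios(rows, columns):
    """Return the fraction of missing (None) values per column."""
    counts = {c: 0 for c in columns}
    for row in rows:
        for c in columns:
            if row.get(c) is None:
                counts[c] += 1
    return {c: counts[c] / len(rows) for c in columns}

rows = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 48000},
    {"age": 29, "income": None},
    {"age": None, "income": 61000},
]
ratios = missing_ratios(rows, ["age", "income"])
flagged = [c for c, r in ratios.items() if r > 0.2]
print(ratios)   # {'age': 0.5, 'income': 0.25}
print(flagged)  # ['age', 'income']
```

Columns flagged this way deserve a closer look before training: the missing values may not be random, and imputing or dropping them blindly can bake bias into the model.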


AI bias types in ML 

Here is a group of bias types that can be found in data (go to the Google Developers link in our references section for more details and examples): Reporting bias, Automation bias, Selection bias, Group Attribution bias, and Implicit bias.


Reporting bias

When the frequency of events, properties, and/or outcomes captured in a data set does not accurately reflect their real-world frequency.

Automation bias

A tendency to favour results generated by automated systems over those generated by non-automated systems.

Selection bias

Occurs if a data set's examples are chosen in a way that is not reflective of their real-world distribution.

Group Attribution bias

A tendency to generalize what is true of individuals to an entire group to which they belong.

Implicit bias

Occurs when assumptions are made based on one's own mental models and personal experiences that do not necessarily apply more generally.
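Selection and reporting bias, in particular, lend themselves to a simple sanity check: compare the group frequencies in your data set against a known real-world distribution. The groups, counts, and 10-point tolerance below are illustrative assumptions:

```python
# Minimal sketch: surface possible selection/reporting bias by comparing
# group frequencies in a data set against a known real-world distribution.
# The groups, counts, and 10-point tolerance are illustrative assumptions.
def distribution_gap(sample_counts, real_world_share):
    """Return, per group, (share in sample) - (share in the real world)."""
    total = sum(sample_counts.values())
    return {g: sample_counts[g] / total - real_world_share[g]
            for g in sample_counts}

sample_counts = {"group_a": 800, "group_b": 200}     # what was collected
real_world_share = {"group_a": 0.5, "group_b": 0.5}  # known population split
gaps = distribution_gap(sample_counts, real_world_share)
biased = {g: round(gap, 2) for g, gap in gaps.items() if abs(gap) > 0.10}
print(biased)  # {'group_a': 0.3, 'group_b': -0.3}
```

A gap like this does not prove the model will be unfair, but it tells the team the data set over-represents one group, which is exactly the kind of concern worth addressing before training.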


References
