
AI bias


Objectivity can be challenging. Machines are not shielded from this by any means.

As humans, we leverage technology to make our lives easier, to the point where it becomes an extension of ourselves in our day-to-day existence. Ironically, in trying to simplify things, complexity creeps in, and the tools we create for good are often turned against the very principles for which they were created in the first place.

 

Photo by Pavel Danilyuk from Pexels

Therefore, even when the creators of this technology act with the best intentions, it is understandable that model predictions become susceptible to bias.

Those in charge of building the models need to be aware of the common human biases that will find their way into the data used, allowing them to take proactive mitigation steps.

To help remove those biases, we need structure: a framework that allows us to build AI systems in an ethical manner, for the benefit of our communities. AI ethics is that “set of guidelines that advise on the design and outcomes of Artificial Intelligence (AI)”.

AI and bias

Algorithmic bias "describes systematic and repeatable errors in a computer system that create unfair outcomes" (Wikipedia).

Paying close attention to the algorithms, and to the data, selected for the implementation of solution "X" allows us to identify, prevent and/or mitigate real concerns, such as systematic and unfair discrimination.
AI bias can arise from cognitive biases: unconscious errors in thinking that lead you to misinterpret information from the world around you, affecting the rationality and accuracy of decisions and judgements.

The teams in charge of training the models can ingest those biases through the data they collect.
Note: Using incomplete data for model training is another potential way to end up with AI bias.

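As a quick illustration, below is a minimal sketch of how a team might audit a training set for gaps before modelling. The file name, the column names and the 5% threshold are hypothetical placeholders, not tied to any real pipeline; the point is simply to surface missing values and under-represented groups early.

```python
# Minimal sketch: audit a training set for gaps before modelling.
# File name, column names and threshold are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical training set

# 1) Share of missing values per column.
missing = df.isna().mean().sort_values(ascending=False)
print("Missing-value share per column:\n", missing)

# 2) Representation of each group for (hypothetical) sensitive attributes.
sensitive_columns = ["gender", "age_group"]
for col in sensitive_columns:
    shares = df[col].value_counts(normalize=True)
    print(f"\nRepresentation by {col}:\n", shares)

    # 3) Flag groups that fall below an arbitrary 5% representation threshold.
    under_represented = shares[shares < 0.05]
    if not under_represented.empty:
        print(f"Potentially under-represented {col} groups:\n", under_represented)
```

A review like this does not remove bias by itself, but it makes gaps visible early enough for the team to rethink how the data is collected or weighted.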


AI bias types in ML 

Here is a group of bias types that can be found in data (go to the Google Developers link in the references section for more details and examples):

Reporting bias, Automation bias, Selection bias, Group Attribution bias, Implicit bias


Reporting bias

Occurs when the frequency of events, properties, and/or outcomes captured in a data set does not accurately reflect their real-world frequency.
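
People tend to document the unusual rather than the ordinary, so a corpus of reviews or incident reports can easily over-represent extreme outcomes. A rough check, sketched below with entirely hypothetical numbers, is to compare the outcome frequencies in the data set against an external real-world estimate.

```python
# Minimal sketch: compare outcome frequencies in a data set against an
# external real-world baseline. All numbers are hypothetical.
dataset_freq = {"positive": 0.45, "negative": 0.40, "neutral": 0.15}
real_world_freq = {"positive": 0.30, "negative": 0.10, "neutral": 0.60}

for outcome, observed in dataset_freq.items():
    expected = real_world_freq[outcome]
    gap = observed - expected
    flag = "  <-- possible reporting bias" if abs(gap) > 0.10 else ""
    print(f"{outcome:>8}: dataset={observed:.2f}  baseline={expected:.2f}  "
          f"gap={gap:+.2f}{flag}")
```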

Automation bias

A tendency to favour results generated by automated systems over those generated by non-automated systems.

Selection bias

Occurs if a data set's examples are chosen in a way that is not reflective of their real-world distribution.
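
When the attribute the sample should be balanced on is known, one partial mitigation is to subsample in a stratified way so the subset keeps the distribution of the source data. The sketch below assumes scikit-learn is available and uses a hypothetical "region" column.

```python
# Minimal sketch: a stratified split preserves the group distribution of
# the source data when subsampling. The DataFrame and the "region" column
# are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "feature": range(1000),
    "region": ["urban"] * 800 + ["rural"] * 200,  # imbalanced source data
})

naive_sample, _ = train_test_split(df, train_size=0.1, random_state=0)
stratified_sample, _ = train_test_split(
    df, train_size=0.1, stratify=df["region"], random_state=0
)

print("Source data:\n", df["region"].value_counts(normalize=True))
print("\nNaive 10% sample:\n", naive_sample["region"].value_counts(normalize=True))
print("\nStratified 10% sample:\n",
      stratified_sample["region"].value_counts(normalize=True))
```

Note that stratification only preserves whatever distribution the source data already has; it cannot correct a collection process that excluded certain groups in the first place.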

Group Attribution bias

A tendency to generalize what is true of individuals to an entire group to which they belong.

Implicit bias

Occurs when assumptions are made based on one's own mental models and personal experiences that do not necessarily apply more generally.


References

