
AI bias


Objectivity can be challenging. Machines are not shielded from this by any means.

As humans we leverage technology in a progressive manner, making our lives easier until it becomes an extension of ourselves in our day-to-day existence. In trying to simplify things, complexity creeps in (ironic, I know!), and as we try to create something for good, it can very often be used against the very principles it was created for in the first place.

 

Photo by Pavel Danilyuk from Pexels

Therefore, even as creators of this technology, usually with the best intentions, it is understandable that model predictions become susceptible to bias.

Those in charge of building the models need to be aware of the common human biases that will find their way into the data they use, so that they can take proactive mitigation steps.

To help remove those biases we need structure: a framework that allows us to build AI systems in an ethical manner that benefits our communities. AI ethics is that “set of guidelines that advise on the design and outcomes of Artificial Intelligence (AI)”.

AI and bias

Algorithmic bias “describes systematic and repeatable errors in a computer system that create unfair outcomes” - Wikipedia.

Paying close attention to the algorithms, and the data, selected for the implementation of solution “X” allows us to identify, prevent and/or mitigate real concerns, such as systematic and unfair discrimination.
AI bias can happen due to cognitive biases: unconscious errors in thinking that lead you to misinterpret information from the world around you, affecting the rationality and accuracy of your decisions and judgement.

The teams in charge of training the models can ingest those biases through the data they collect.

Note: the use of incomplete data for model training is another potential way to end up with AI bias as your outcome.

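As a minimal sketch of that point (the pandas calls are real, but the data set and column names here are hypothetical, not from this article), a quick audit of missing values and group representation can surface incomplete or skewed training data before any model is trained:

```python
import pandas as pd

# Hypothetical training data; the column names are illustrative only.
df = pd.DataFrame({
    "age": [34, 29, None, 41, 52, 38],
    "income": [48_000, 52_000, 39_000, None, 61_000, 45_000],
    "group": ["A", "A", "A", "B", "A", "A"],
    "label": [1, 0, 1, 0, 1, 1],
})

# 1. Incomplete data: share of missing values per column.
print("Missing-value share per column:")
print(df.isna().mean().sort_values(ascending=False))

# 2. Representation: how each group is distributed in the data.
print("\nGroup representation in the training set:")
print(df["group"].value_counts(normalize=True))

# 3. Label balance per group, a rough first signal of skew.
print("\nPositive-label rate per group:")
print(df.groupby("group")["label"].mean())
```

Unusually high missing-value shares or a heavily dominant group are early warning signs that the collected data may carry bias into the model.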


AI bias types in ML 

Here is a group of bias types that can be found in data (see the Google Developers link in our references section for more details and examples):

Reporting bias
Automation bias
Selection bias
Group Attribution bias
Implicit bias


Reporting bias

Occurs when the frequency of events, properties, and/or outcomes captured in a data set does not accurately reflect their real-world frequency.

Automation bias

A tendency to favour results generated by automated systems over those generated by non-automated systems.

Selection bias

Occurs if a data set's examples are chosen in a way that is not reflective of their real-world distribution (see the sketch after these definitions).

Group Attribution bias

A tendency to generalize what is true of individuals to an entire group to which they belong.

Implicit bias

Occurs when assumptions are made based on one's own mental models and personal experiences that do not necessarily apply more generally.
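To make the reporting and selection bias definitions above concrete, here is a minimal sketch that compares the category frequencies observed in a data set against a known (or estimated) real-world distribution. The numbers are made up for illustration, and the chi-square goodness-of-fit test is just one reasonable choice for the comparison:

```python
from collections import Counter

from scipy.stats import chisquare

# Hypothetical real-world distribution of an attribute (e.g., region).
real_world_share = {"north": 0.25, "south": 0.25, "east": 0.25, "west": 0.25}

# Attribute values observed in the collected data set.
sampled = ["north"] * 70 + ["south"] * 15 + ["east"] * 10 + ["west"] * 5
observed_counts = Counter(sampled)

n = len(sampled)
categories = list(real_world_share)
observed = [observed_counts.get(c, 0) for c in categories]
expected = [real_world_share[c] * n for c in categories]

# A chi-square goodness-of-fit test flags distributions that drift far
# from the real-world reference, a hint of selection/reporting bias.
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p-value = {p_value:.4f}")
for c, o, e in zip(categories, observed, expected):
    print(f"{c:>5}: observed {o:>3}, expected {e:>5.1f}")
```

A very small p-value suggests the sample's distribution drifts far from the real-world reference, which is a hint, not proof, of selection or reporting bias worth investigating.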


References
