
Key takeaways from landmark EU AI Act


Recently, the European Parliament voted to approve the landmark EU AI Act. It is the first comprehensive AI law of its kind and sets a benchmark for future AI regulations worldwide.

The EU AI Act lays the foundation for AI governance. Organizations building or deploying AI systems should comply with the legislation, build robust and secure AI systems, and avoid fines for non-compliance.

Photo by Karolina Grabowska via Pexels

My three key takeaways from the legislation are as follows:

  • The Act introduces the definition of an AI system:
    • "An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments"
  • The Act introduces the classification of AI systems based on risk to society. The Act outlines four risk levels:
    • Unacceptable risk: AI systems that pose an unacceptable risk are prohibited. Examples include social scoring and real-time facial recognition in publicly accessible spaces (subject to narrow law-enforcement exceptions). 
    • High risk: AI systems that pose a high risk to society must comply with a range of requirements, including testing, training-data governance, and cybersecurity, to ensure they comply with governing EU laws. Examples include AI-driven automated insurance claim processing. 
    • Transparency risk: Limited-risk systems such as chatbots, deepfakes, and AI-generated content must comply with transparency requirements, for example disclosing that a user is interacting with AI or that content is AI-generated. 
    • Minimal risk: These include common AI systems like spam filters and recommendation engines, which pose minimal risk to society and must follow currently applicable legislation, including GDPR.
Pyramid representing risk to society - 4 risk levels
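
The tiers above pair each risk level with an obligation bucket. A minimal sketch of that mapping (the example systems and obligation labels are my own illustrative simplifications, not wording from the Act):

```python
from enum import Enum

class RiskLevel(Enum):
    """Simplified obligation buckets for the Act's four risk tiers."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements (testing, data governance, cybersecurity)"
    LIMITED = "transparency obligations"
    MINIMAL = "existing legislation applies (e.g. GDPR)"

# Hypothetical mapping of the example systems discussed above to tiers.
EXAMPLES = {
    "public facial recognition": RiskLevel.UNACCEPTABLE,
    "AI-driven insurance claim processing": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

def obligations(system: str) -> str:
    """Return the simplified obligation bucket for a known example system."""
    return EXAMPLES[system].value

print(obligations("spam filter"))  # existing legislation applies (e.g. GDPR)
```

In practice, classification depends on the system's intended purpose and context of use, so a real compliance assessment is far more involved than a lookup table.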

  • Fines for non-compliance: The severity of the infringement determines the fine. For the most serious violations, it can reach EUR 35 million or 7% of total worldwide annual turnover, whichever is higher; lower infringement tiers carry smaller caps and percentages. 
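The "whichever is higher" rule is simple arithmetic: the ceiling is the maximum of a fixed cap and a percentage of worldwide annual turnover. A minimal sketch, with the cap and percentage as parameters (the EUR 35 million / 7% defaults reflect the ceiling for the most severe tier; this is an illustration, not legal guidance):

```python
def max_fine(turnover_eur: float,
             fixed_cap_eur: float = 35_000_000,
             turnover_pct: float = 0.07) -> float:
    """Upper bound of the fine: the higher of a fixed cap or a share of
    total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# A company with EUR 1 billion turnover: 7% is EUR 70M, above the EUR 35M cap.
print(max_fine(1_000_000_000))  # 70000000.0

# A company with EUR 100M turnover: 7% is EUR 7M, so the fixed cap applies.
print(max_fine(100_000_000))  # 35000000.0
```

For large companies the turnover percentage dominates, which is precisely why the Act expresses fines this way.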

The EU AI Act lays the foundation for AI governance

Photo by Kindel Media from Pexels modified with Adobe Firefly



