
The EU AI Act. How does Europe regulate the use of artificial intelligence? | AI in business #68

What is the AI Act, and why is it important for SMEs (small and medium-sized enterprises)?

The AI Act is the first regulation of its kind in the world: the first to comprehensively address artificial intelligence from a human-centric perspective. It aims to ensure that AI systems used in Europe are safe and stimulate innovation, while respecting fundamental rights, with a special focus on:

  • healthcare – for example, respecting the privacy of patient data,
  • education – adhering to values promoted by the European Union and avoiding discrimination,
  • border protection – ensuring safety without violating citizens’ rights,
  • public services – following best practices in data protection, the right to information, and clear communication.

For SMEs, the AI Act will primarily bring greater legal certainty as it clearly defines the framework for innovation: the principles of designing, developing, and applying AI systems. It will make it easier for companies to invest in AI-based solutions, reducing legal risks. Additionally, regulations applicable across the entire Union will prevent market fragmentation.

European regulations on artificial intelligence were agreed upon by the European Parliament and the Council of the European Union on December 9, 2023. Now, they must be formally adopted by both institutions to come into effect.

Key aspects of the AI Act for business

The EU AI Act introduces a set of requirements for AI systems, depending on the level of risk. These requirements include, among others:

  • the obligation of transparency and notifying users in the case of interactions with chatbots, biometric systems, or emotion recognition technologies,
  • prohibition of using sensitive attributes for biometric categorization,
  • mandatory compliance assessment before market entry for high-risk systems,
  • mandatory registration in the EU database — after the AI Act comes into effect, AI systems used in key sectors such as education, employment, healthcare, and law enforcement will be required to be registered.

Manufacturers and companies using AI systems will also be obligated to monitor risks after introducing them to the market. This will directly impact companies designing and implementing AI systems.

Risk levels of AI systems according to the EU AI Act

The AI Act classifies AI systems into four categories based on their level of risk:

  1. Unacceptable risk
  2. High risk
  3. Limited risk
  4. Minimal risk

Let’s take a closer look at how each group will be defined, along with examples of systems and their classifications.

Unacceptable risk

The European Union imposes a complete ban on the use of technologies listed in the AI Act as systems of unacceptable risk. These are mainly systems designed to exploit individuals' susceptibility to suggestion for manipulation or deception, systems that hand over decisions in crucial matters entirely to artificial intelligence, and those that could lead to an abuse of power. Examples of AI of unacceptable risk include:

  • autonomous weapons operating without human supervision,
  • credibility assessment systems used by law enforcement agencies,
  • automatic identification systems of individuals in public spaces, such as those based on surveillance camera footage,
  • artificial intelligence systems that may be harmful to individuals with physical or mental disabilities,
  • AI employed for law enforcement purposes, with limited exceptions,
  • artificial intelligence systems utilizing harmful “subliminal” and manipulative techniques.

For businesses, the last category is particularly important. This is why transparency in the operation of systems providing suggestions to users and customers will be crucial for compliance with the new legislation of the European Union.

High risk

Artificial intelligence classified as a high-risk AI solution will have to meet rigorous requirements before entering the market. This involves compliance assessments and strict testing before approval for use. The category covers eight areas, including:

  • autonomous vehicles,
  • medical diagnostic systems,
  • predictive algorithms supporting the justice system,
  • migration and asylum management, border control, as well as
  • employment and workforce management.

Limited risk

Fortunately, legislators have paid much less attention to limited-risk systems, which are the ones most commonly used in business. This category includes:

  • AI chatbots – used for customer service or responding to frequently asked questions in the form of free-flowing conversation,
  • emotion recognition systems – used, for example, to gather data on customer opinions about a company,
  • biometric categorization systems – such as assessing the gender or age of customers in a physical store,
  • image, audio, or video generation – despite the threat posed by deepfakes, this area will be subject to only a limited set of obligations.

Low or minimal risk

Low-risk AI solutions are not subject to specific legal obligations. The AI Act merely encourages creators and users of such solutions to voluntarily adopt usage policies. This pertains to solutions such as:

  • content recommendation systems in streaming services, or
  • chatbots on websites responding to typical customer questions.

What requirements does the AI Act impose on solutions used by my company?

To check if the artificial intelligence used by a company complies with the AI Act, you should:

  • classify it into one of the four risk categories described above,
  • if it’s AI of high risk, conduct a compliance assessment,
  • use good practices regarding low-risk AI.
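The classification step above can be sketched as a simple lookup. The mapping below is illustrative only, based on the examples in this article, not an authoritative reading of the Act; any real classification would need legal review.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of example use cases to AI Act risk tiers,
# following the categories described in this article.
USE_CASE_RISK = {
    "medical_diagnostics": RiskLevel.HIGH,
    "recruitment_screening": RiskLevel.HIGH,
    "customer_service_chatbot": RiskLevel.LIMITED,
    "emotion_recognition": RiskLevel.LIMITED,
    "content_recommendation": RiskLevel.MINIMAL,
}

def required_actions(use_case: str) -> list[str]:
    """Return a rough checklist of obligations for a given use case."""
    level = USE_CASE_RISK.get(use_case, RiskLevel.MINIMAL)
    if level is RiskLevel.UNACCEPTABLE:
        return ["do not deploy - prohibited under the AI Act"]
    if level is RiskLevel.HIGH:
        return [
            "conformity assessment before market entry",
            "registration in the EU database",
            "human oversight and incident reporting",
        ]
    if level is RiskLevel.LIMITED:
        return [
            "inform users they are interacting with AI",
            "meet transparency obligations",
        ]
    return ["follow voluntary codes of conduct"]

print(required_actions("medical_diagnostics"))
```

Unknown use cases default to the minimal tier here purely to keep the sketch short; in practice, an unclassified system should be treated as unresolved, not minimal-risk.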

Let’s look at examples of AI solutions frequently used by companies. What requirements will they have to meet?

A customer service chatbot providing basic information about products or answering typical customer questions will likely be classified as a limited-risk system, subject to transparency obligations. It will need to:

  • inform users that they are interacting with AI,
  • provide the option to be redirected to a human consultant,
  • adhere to general requirements regarding transparency, non-discrimination, etc.
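The first two obligations above can be illustrated with a minimal sketch. The message handling is a stub (a real bot would generate answers); the disclosure text and the "human" keyword trigger are assumptions for illustration.

```python
AI_DISCLOSURE = "You are chatting with an automated assistant."

def handle_message(message: str) -> str:
    """Answer a customer message while meeting two transparency duties:
    disclose the AI, and always offer a route to a human consultant."""
    # Escape hatch: let the user reach a person at any time.
    if "human" in message.lower():
        return "Connecting you to a human consultant..."
    # Stub answer; a real bot would generate a product-specific reply.
    return f"{AI_DISCLOSURE} How can I help you with our products?"

print(handle_message("What are your opening hours?"))
```

The point of the sketch is that disclosure and hand-off are built into the message loop itself, rather than buried in terms of service.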

A product recommendation system in e-commerce will likely be considered a low-risk system. It will be necessary to inform customers that they are receiving personalized recommendations and provide the option to disable them.

On the other hand, a system for automatic medical diagnostics will be classified as a high-risk system. It will need to undergo a rigorous assessment before market entry and be supervised by a human. Additionally, monitoring its operation and reporting incidents will be necessary.

An urban crime prevention system will also be considered high-risk. It will have to comply with regulations on the protection of privacy and other fundamental rights, and its operation will have to be under constant human supervision.

It is unclear into which category AI-based decision-making systems will fall. It is likely that an AI-based recruitment system that makes hiring decisions on its own will be considered an AI solution of unacceptable risk. On the other hand, a recruitment support system that helps people do their jobs would be considered a high-risk solution.

For the sake of users’ welfare, as well as possible changes in classifications, it is very important to approach the construction and use of AI systems in an ethical and responsible manner from the outset.

What will be the consequences of not complying with the AI Act?

Not following the AI Act could lead to significant fines: up to €35 million or 7% of global annual turnover for the most serious violations, down to €7.5 million or 1.5% for lesser breaches, with the lower of the two caps applying to SMEs and startups. Unlawful AI systems might also be withdrawn from the market, and their use could be restricted.

How to get ready for the AI Act taking effect?

So how do you prepare a company that uses artificial intelligence for the AI Act coming into effect in 2025? Here are some tips on how SMEs and startups can prepare for this moment:

  • keep abreast of the progress of the work and the timeline for implementation of the regulations,
  • evaluate AI systems already in use and adapt them to the new requirements,
  • pay particular attention to ethical aspects in the design of AI.

Summary

The introduction of the AI Act is a big change for the AI ecosystem in Europe. However, with clear and consistent rules, it promises to ensure the safe and ethical development of this technology, which should benefit SMEs and start-ups in particular.

The AI Act, which will come into effect in 2025, will bring significant changes in how small and medium-sized enterprises (SMEs) can leverage artificial intelligence. For SMEs, this primarily means the necessity of careful consideration and analysis of the AI solutions they use, both in terms of compliance with regulations and potential impact on customers and the community.

For small business owners and managers, it’s essential to grasp how their AI systems are categorized in terms of risk and what actions are needed to align them with the upcoming regulations. Take, for example, AI systems used in customer management or marketing, which were previously used quite freely. Now, they’ll require a thorough analysis for compliance with the AI Act. This could create new opportunities for firms specializing in tech legal advice, providing SMEs with support in adapting to these new requirements.

If you like our content, join our busy bees community on Facebook, Twitter, LinkedIn, Instagram, YouTube, Pinterest, TikTok.

Author: Robert Whitney

JavaScript expert and instructor who coaches IT departments. His main goal is to up-level team productivity by teaching others how to effectively cooperate while coding.

