Widespread AI Regulation is Coming – How can Businesses Prepare?

Artificial Intelligence (AI) is no longer just science fiction. At various scales, AI systems and generative capabilities have become a mainstay of many companies’ operations. Improving customer service, optimising logistics, performing advanced data analytics – all these and more can be enabled by AI technologies, which offer unprecedented opportunities to boost business growth and efficiency.

But, as is often the case in life, greater opportunities come with greater risks, which in turn mean greater responsibilities. Recognising that such risks need to be identified and addressed early, the European Union (EU) has adopted the world’s first comprehensive AI regulation – the AI Act – which is gradually coming into force.

Depending on the risk level, the Act’s provisions will apply in full in 2026 and 2027, while obligations to disclose the extent to which AI is used to make decisions or generate content take effect as early as this summer. The Act will apply to all companies automatically; no opt-in is required.


Risks of using AI

The AI Act is based on categorising AI technologies into four risk levels.

Unacceptable risk level

AI systems at the unacceptable risk level pose a serious threat to the security, rights or dignity of individuals. Systems of this kind covertly manipulate the behaviour of individuals and vulnerable groups, or apply various forms of “social rating” that undermine people’s ability to participate in society.

Under the AI Act, these types of systems are categorically prohibited in order to protect the rights and values of individuals at EU level.

High risk level

AI systems that fall into this category have a significant impact on important sectors in the country, such as healthcare, security services, transport, employment, and education. The use of this type of technology is permissible, but only under strict conditions.

High-risk AI systems must be transparent, with specific risk management plans, mandatory human oversight and frequent data quality assessments to prevent inaccuracies as well as biases in decision-making.

This category can also include AI used in products such as toys, cars, lifts and medical devices. High-risk AI products will be assessed before they are placed on the EU market and reassessed periodically thereafter.

Limited risk level

The limited risk level mostly includes chatbots or virtual assistants that interact directly with a company’s customers or partners but have no practical impact on personal freedom or fundamental rights.

Such systems are subject to less oversight, but transparency and openness must still be respected. Users should always know when they are interacting with AI.

Minimal risk level

The lowest, minimal risk level covers spam filters, productivity tools and everyday software. Such technologies are subject to minimal regulation because they have little or no impact on human rights or security.

This clear categorisation allows companies to determine quickly which risk level their AI systems fall into and what their responsibilities are. Businesses will need not only to declare the applicable risk level but also to ensure that the relevant requirements are met.
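The four-tier scheme described above can be sketched as a simple lookup. The example systems and one-line obligation summaries below are illustrative labels drawn from this article, not legal definitions:

```python
from enum import Enum

class RiskLevel(Enum):
    """The AI Act's four risk tiers, each with a rough obligation summary."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted only under strict conditions (oversight, risk management)"
    LIMITED = "permitted with transparency duties (users must know it is AI)"
    MINIMAL = "subject to minimal regulation"

# Illustrative mapping of example systems mentioned in the article
EXAMPLES = {
    "social rating system": RiskLevel.UNACCEPTABLE,
    "AI in a medical device": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

for system, level in EXAMPLES.items():
    print(f"{system}: {level.name} – {level.value}")
```

In practice, of course, classification depends on the legal text and the system’s actual use, not on a lookup table; the sketch only shows how the tiers relate to obligations.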

Inaccurate risk categorisation or even non-compliance with the Act can lead not only to significant fines (specific figures are given below) but also to reputational damage.

Key conditions for compliance with the Act

Companies that use AI in any way – especially in high-risk sectors – must develop guidelines for their daily work that comply with the Act.

  • Thorough risk assessment. Companies must carry out a thorough assessment to understand the potential risks of the AI systems or applications they use or create.
  • Human oversight. A dedicated mechanism for human oversight is required as a prerequisite for fair decision-making by AI systems.
  • Transparency and documentation. Companies should create and maintain detailed documentation on their AI systems from the planning to the operational stages.
  • Data quality, avoiding bias. It must be ensured that training data included in AI systems is of high quality and free from harmful bias to avoid discrimination.

Transparency with users and customers

To a large extent, regardless of the risk level, the AI Act is permeated by one basic principle: transparency, or openness. Companies must clearly inform customers when AI is used, so that it is clear how their data is handled – especially when data provided by users is processed by AI. Similarly, if users are shown content created with AI – images, video or text – it must be made clear that it was created or enhanced with AI. This also applies to AI-generated articles (e.g. blog posts) and raw machine translations.

Equally important is openness towards cooperation partners. To ensure that the company’s own data, as well as that of its partners (and customers), is not used to train AI models, it is advisable to choose closed-loop AI systems tailored to the specific company.

Often, even in paid versions of various AI content creation software, the data entered can be used to improve and train the specific system. Accordingly, confidential data may end up at the disposal of other companies.

The GDPR and the AI Act work hand in hand

Companies already comply with a similar level of regulation – the General Data Protection Regulation (GDPR) – which determines how and for how long different types of customer data can be used. The AI Act complements the GDPR by providing data protection for situations where artificial intelligence is involved.

The key principles of the GDPR are privacy, consent and protection, while the AI Act provides an additional layer where the ethical and transparent use of AI is ensured – in particular in automated decision-making.

Science-fiction fans also like to model futuristic scenarios of AI taking over the world and ruling humans, but for AI to be able to “enslave” humans, such decisions would require human-like cognitive capacity or consciousness. Despite huge scientific progress, consciousness and the potential of the human brain remain largely unexplored, so how could a computer program understand them better than we do? AI is just a tool for solving complex problems based on human-defined methods, algorithms and similar building blocks.

Penalties and application

Compliance with the AI Act is not voluntary. The Act sets strict penalties to ensure compliance, and companies should expect increased scrutiny in the form of assessments and investigations from an early stage. Failure to comply can result in severe fines of up to 7% of a company’s worldwide turnover or EUR 35 million, whichever is higher.

On the other hand, if not only the AI Act but also the General Data Protection Regulation is breached, additional fines of up to 4% of a company’s worldwide turnover or EUR 20 million can also be imposed.
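Both fine ceilings follow the same “whichever is higher” rule, which a small calculation can make concrete. The function names are ours and the turnover figures are invented for illustration:

```python
def ai_act_fine_cap(turnover_eur: float) -> float:
    """Maximum AI Act fine: 7% of worldwide annual turnover
    or EUR 35 million, whichever is higher."""
    return max(0.07 * turnover_eur, 35_000_000)

def gdpr_fine_cap(turnover_eur: float) -> float:
    """Maximum additional GDPR fine: 4% of worldwide annual turnover
    or EUR 20 million, whichever is higher."""
    return max(0.04 * turnover_eur, 20_000_000)

# For EUR 1 billion turnover, 7% (EUR 70 m) exceeds the EUR 35 m floor
print(ai_act_fine_cap(1_000_000_000))  # 70000000.0
# For EUR 100 million turnover, 7% (EUR 7 m) is below the floor, so EUR 35 m applies
print(ai_act_fine_cap(100_000_000))    # 35000000
```

The point for smaller companies is that the fixed floors (EUR 35 million and EUR 20 million) can far exceed the percentage-based amounts.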

Given the potential penalties, it is important to clearly define the level of liability. Who should be liable if an AI system causes damage? The developer, the user or the company? This question needs to be answered now.

The AI Act is an important step at global level to not only harness the power of AI, but also to take responsibility for its deliberate or unintentional use that may have a negative impact on customers or business partners.

Businesses that prepare for the full scope of the AI Act in advance will not only meet new challenges more smoothly but are also more likely to earn recognition from customers and partners.

How can Skrivanek Baltic help?

Clear, accurate and appropriately localised internal and external documentation is particularly important when preparing for the new AI Act, whether it is risk assessments, procedure descriptions, customer information material or cooperation agreements. Skrivanek Baltic, one of the leading language service providers in Latvia and Europe, offers: