AI Law – What companies need to know now
Artificial intelligence is no longer a futuristic topic; it has already become a central part of our everyday lives. Whether in online shopping, text creation, image recognition, or production control – AI systems support companies and consumers in countless situations. With this rapid development, however, comes a growing call for clear regulations. This is precisely where the European Union's AI Act comes in.
Many still remember the introduction of the GDPR in 2018. Back then, there was great excitement: companies suddenly had to document their data flows, adapt privacy policies, and restructure processes. The AI Act will be similar – only this time it concerns the use of algorithms and learning systems.
But why is such a law even necessary? The answer is simple: AI can bring enormous benefits, but it also carries risks. From discriminatory decision-making systems to opaque "black boxes," there are numerous examples that show how harmful the uncontrolled use of AI can be. At the same time, politicians and businesses want to strengthen people's trust in AI – because only those who have trust will actually use the technology.
For companies, this means that anyone who already uses AI or plans to use it must address the AI Act. And this doesn't just apply to tech giants like Google or Microsoft. Medium-sized retailers, manufacturers, and service providers are also faced with the task of reviewing their systems. AI is now embedded in many applications, often without being visible at first glance: in chatbots, in recommendation systems for online shops, in fraud detection, or in automatic translation.
The AI Act is not an isolated issue. It is part of a whole wave of EU digital laws: GDPR, Digital Services Act, Data Act – the list is constantly growing. This can quickly become confusing for companies. But those who take the requirements seriously can gain a real competitive advantage. Because customers appreciate transparency and the responsible use of new technologies.
This article therefore aims to provide a clear overview:
- What is behind the AI Act?
- What obligations do companies face?
- How does this specifically affect e-commerce and ERP environments like Microsoft Dynamics 365?
- And what can you do today to be prepared?
What does the AI law regulate in detail?
The Artificial Intelligence Act (AI Act) is the world's first comprehensive regulation for artificial intelligence. It was passed by the European Union with the aim of creating clear rules for the development and use of AI systems. The basic idea: AI should be used safely, transparently, and in accordance with European values – without unnecessarily slowing down innovation.
One could say: Just as the GDPR shaped data protection worldwide, the AI Act will set the standard for dealing with artificial intelligence.
Why the EU needs an AI law and why it makes sense
Artificial intelligence is increasingly being used in schools, public authorities, healthcare, industry, and commerce, and has become indispensable in these areas. However, with each new use case, the risk of incorrect decisions, discrimination, or misuse also grows.
Examples frequently cited in the public debate:
- Facial recognition in public spaces that violates personal rights
- Chatbots that do not clearly inform users that they are speaking to a machine
- Algorithms that reject applications or credit decisions based on inaccurate data
- Generative AI that creates deceptively real fake images or videos
The EU therefore sees it as its task to create trust in AI while simultaneously protecting fundamental rights such as data protection, equal treatment, and security.
The central goals of the AI Act
The AI Act pursues several overarching goals. The focus is on protecting fundamental rights to prevent discriminatory decisions or unlawful surveillance by AI systems. Transparency is equally important: Applications should remain comprehensible and explainable, rather than disappearing into opaque "black boxes." Another core concern is security – systems must operate reliably, must not be easily manipulated, and must not cause serious errors.
At the same time, the EU aims to promote innovation: Clear guidelines should provide guidance for companies without hampering progress and creativity. Finally, harmonization ensures a uniform legal framework in all member states, preventing a patchwork of individual regulations.
Which technologies are covered by the AI Act?
The law applies to all applications related to AI – and this is broader than many people think.
This includes, among others:
- Classic machine learning systems (e.g., recommendation systems, fraud detection)
- Speech and text AI (e.g., chatbots, translation tools, generative AI such as ChatGPT)
- Image and video analysis (e.g., facial recognition, object recognition in logistics)
- Prediction and scoring systems (e.g., credit checks, applicant selection)
Important: This is not just about "AI products," but also about applications within software solutions, ERP systems, or online shops. Many companies are already using AI indirectly without even realizing it at first glance – for example, through integrated features in Microsoft Dynamics 365 or in marketing tools.
Why the AI Act is globally relevant
Even though the law is a European project, it will have a global impact. The GDPR has already shown that anyone doing business in Europe must comply with European regulations – regardless of whether the company is based in Munich, New York, or Shanghai.
This means:
- International providers must adapt their AI products to EU requirements.
- Companies outside the EU often voluntarily comply with them in order to maintain access to the European market.
- The AI Act is thus becoming a de facto global standard.
This presents an opportunity for European companies: They can advertise their compliance with the regulations and build customer trust – an advantage that many US and Asian competitors still need to catch up on.
From chatbots to facial recognition – which AI is allowed?
The core of the AI Act is the classification of applications into risk classes. These determine the obligations a company must fulfill when using AI systems. The EU pursues a risk-based approach: the higher the risk to society and fundamental rights, the stricter the rules.
This makes the AI Act more flexible than a blanket ban – while still being practical for companies.
Overview of the four risk levels
- Unacceptable risk – certain applications are prohibited.
- High risk – strict requirements and audits.
- Limited risk – transparency obligations.
- Minimal risk – few requirements, free use.
1. Unacceptable risk – absolute prohibitions
Certain AI applications are considered so dangerous to fundamental rights that they are fundamentally prohibited in the EU.
Examples:
- Social scoring, in which people are "rated" based on their behavior or data.
- Manipulative AI: Systems that deliberately exploit people's weaknesses (e.g., gaming addiction in children).
- Real-time biometric surveillance in public spaces (with a few exceptions, e.g., counterterrorism).
For companies in trade or industry, these scenarios are rather theoretical – but they clearly demonstrate: The EU is drawing a clear red line.
2. High risk – strict regulations
This is where things get exciting for many companies. High-risk systems may be used, but they must adhere to strict rules.
This includes applications in areas such as:
- Education: AI that grades exams or sorts students
- Healthcare: Diagnostic tools that determine treatment options
- Workplace: AI in application processes or personnel decisions
- Critical infrastructure: e.g., energy, water, transport
- Law & justice: Systems that influence judgments or sentences
The AI Act prescribes clear obligations for companies with high-risk applications. This includes strict documentation with comprehensive evidence so that it is always clear how a system works and what data it is based on. Equally important is a transparent decision-making logic that ensures that results remain explainable and don't disappear into a black box. In addition, companies must establish consistent risk and quality management to identify and avoid sources of error early on. Finally, registration in an EU database is also required to ensure that regulators and the public have transparency regarding the use of high-risk AI.
3. Limited Risk – Transparency First
This category is already much more common in e-commerce practice.
Examples:
- Chatbots: Customers must clearly recognize that they are speaking to a machine.
- Emotion recognition in marketing or HR systems.
- Generative AI: Content may need to be labeled.
This is primarily about transparency obligations. Companies must inform their users that they are interacting with AI – no more, but also no less.
For online retailers, this means, for example: A chatbot in a store may not pretend to be a human.
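One way to meet this obligation in practice is to send the disclosure before any bot reply, so the customer always knows they are talking to a machine. The following is a minimal sketch of that idea; the function name, wording of the notice, and message format are illustrative assumptions, not prescribed by the AI Act:

```python
# Illustrative transparency notice; the exact wording is an assumption.
BOT_DISCLOSURE = (
    "Note: You are chatting with an automated assistant, not a human agent."
)

def open_chat_session(first_reply: str) -> list[str]:
    """Start a chat session with the transparency notice shown first.

    The disclosure is emitted before the bot's first reply, so the
    customer can never mistake the bot for a human – the core of the
    limited-risk transparency obligation sketched above.
    """
    return [BOT_DISCLOSURE, first_reply]
```

The design point is simply that the disclosure is part of the session-opening logic itself, not an optional UI element that can be forgotten.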
4. Minimal risk – free use
Most AI applications fall into this category. These include:
- Spam filters in email programs
- Product recommendations in online shops
- Automatic translation tools
There are no additional legal obligations here. Companies can use such tools freely – and have often been doing so for years without even perceiving them as "AI."
Practical classification for e-commerce & ERP
To ensure that the risk categories do not remain abstract, here are a few typical examples:
- Product recommendation in an online store (Dynamics 365 + PIM) → minimal risk
- Chatbot for customer service → limited risk (transparency notice required)
- AI-supported applicant selection → high risk (documentation, quality management)
- Facial recognition for customer analysis → unacceptable risk (practically prohibited)
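The four examples above can be captured in a simple lookup, useful as a starting point for an internal AI inventory. This is an illustrative sketch: the use-case names and the mapping are assumptions, and a real classification always requires a case-by-case legal assessment.

```python
from enum import Enum

class RiskClass(Enum):
    MINIMAL = "minimal risk"
    LIMITED = "limited risk (transparency notice required)"
    HIGH = "high risk (documentation, quality management)"
    UNACCEPTABLE = "unacceptable risk (prohibited)"

# Illustrative mapping of the e-commerce examples above; not a
# substitute for a legal assessment of the concrete system.
USE_CASE_RISK = {
    "product_recommendation": RiskClass.MINIMAL,
    "customer_service_chatbot": RiskClass.LIMITED,
    "applicant_screening": RiskClass.HIGH,
    "facial_recognition_analytics": RiskClass.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskClass:
    """Look up the illustrative risk class for a known use case."""
    return USE_CASE_RISK[use_case]
```

Such a table forces a company to name each AI use case explicitly – which is itself the first step of an AI Act inventory.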
AI law in practice: obligations for companies
The AI Act stipulates different obligations depending on the risk class. While systems with minimal risk can be used almost freely, high-risk applications are subject to strict requirements. The key point: those who familiarize themselves with the requirements early on avoid unpleasant surprises and potentially high costs.
Transparency Obligations
Transparency is the foundation of the AI Act. Users should always be able to understand whether they are interacting with an AI and how decisions are made.
This means:
- Labeling requirements: Chatbots or generative AI content must be clearly identifiable as such.
- Explainability: AI decisions must not disappear into a "black box." Companies must explain the criteria used to generate results.
- Disclosure of data sources: For sensitive applications, the training data used must be identified to disclose any bias or discrimination.
E-commerce example: A chatbot in customer service must be clearly identified as a bot. A rating system should explain which factors are used in the calculation.
Documentation & Evidence Requirements
Especially for high-risk applications, proper documentation is essential for legally compliant operations. Companies must:
- maintain a technical dossier on the structure and functionality of the AI,
- conduct and regularly update a risk assessment,
- store logs on AI deployment,
- submit a declaration of conformity before using a high-risk system.
These obligations are reminiscent of CE marking or ISO certifications – complex, but ultimately provide security for companies and customers.
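The logging obligation in particular lends itself to a structured record per AI decision. The sketch below shows one possible shape for such an audit-log entry; the field names and layout are assumptions chosen to mirror the obligations listed above, not a format prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDeploymentLogEntry:
    """One audit-log record for a high-risk AI system.

    Fields mirror the obligations sketched above: a reference to the
    technical dossier, the risk-assessment version in force at the
    time, and the human reviewer who signed off (human oversight).
    """
    system_id: str                 # internal ID of the AI system
    dossier_ref: str               # reference to the technical dossier
    risk_assessment_version: str   # which risk assessment applied
    decision_summary: str          # what the system decided / output
    human_reviewer: Optional[str]  # who signed off, if anyone
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

Keeping such records per decision makes it far easier to answer a regulator's question about how a specific output came about.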
Technical Requirements
The AI Act also sets clear minimum technical standards:
- Robustness & Accuracy: Systems must operate reliably and be fault-tolerant.
- Cybersecurity: Protection against manipulation and hacker attacks is mandatory.
- Data Quality: Training and deployment data must be up-to-date, representative, and non-discriminatory.
- Human Oversight: Decisions must never be made entirely by machines without human oversight.
For e-commerce, this means: Anyone using AI-supported price forecasts or demand predictions must regularly review and validate their results.
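Such a review can be as simple as tracking forecast error against actual demand and flagging the model for human review when it drifts. A minimal sketch, assuming a MAPE metric and an arbitrary 15% threshold (both are illustrative choices, not AI Act requirements):

```python
def mean_absolute_percentage_error(actuals, forecasts):
    """MAPE (in percent) between observed demand and the AI forecast."""
    return sum(
        abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)
    ) / len(actuals) * 100

def needs_review(actuals, forecasts, threshold_pct=15.0):
    """Flag the forecast model for human review when its error drifts
    above the threshold – a simple form of the 'human oversight' and
    regular-validation idea described above."""
    return mean_absolute_percentage_error(actuals, forecasts) > threshold_pct
```

Running such a check on every forecast cycle gives the "regularly review and validate" obligation a concrete, automatable form.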
Supervisory Authorities & Sanctions
In Germany, the final decision on which authority will assume oversight has not yet been made – likely a combination of the Federal Network Agency and the Data Protection Authority. However, one thing is clear: violations will be expensive.
- Up to €35 million or 7% of global revenue for prohibited applications.
- Up to €15 million or 3% for violations of high-risk obligations.
- Up to €7.5 million or 1.5% for incomplete or false information.
Transitional Periods & Timeline
The law will take effect gradually:
- From 2024/2025: The bans on prohibited applications take effect.
- From 2025: Rules for high-risk systems take effect.
- From 2026: All obligations apply in full.
AI law as an opportunity for e-commerce
For online retail companies, this means: Even if the strictest regulations don't immediately affect everyone, the issue should by no means be postponed. AI has long been a part of many shop systems, ERP solutions, and marketing tools – often without it being apparent at first glance. It is especially important to inform customers transparently when AI is being used, for example in chatbots. Product recommendations and AI-supported reviews should also be made transparent to build trust. Another focus is on the AI features in ERP systems such as Microsoft Dynamics 365, which are continuously evolving.
Those who ensure transparency early on and adapt internal processes can even use the AI law as an opportunity: Responsible AI becomes a real competitive advantage – and trust is often more important than price in e-commerce.
(Note: This article contains AI-generated content.)
Published: 2025-09-17 · Author: Steffi Greuel