Many companies are facing critical decisions about the implementation of AI technologies, and this is where legal professionals play a crucial role.
It is a familiar scenario for many legal professionals: A business unit has spent months developing an exciting solution, the team is enthusiastic, and implementation is almost ready. Only then are the legal advisors brought in, with the expectation that getting the legal green light will be a mere formality.
However, when it comes to AI, entirely new rules come into play, and the traditional division of labour between legal, business and IT often becomes an obstacle to business initiatives.
With the EU's upcoming AI Act and the extensive regulatory requirements it entails, legal advice has become a critical success factor for companies looking to harness the potential of AI.
Early legal involvement in AI projects is not just about risk minimisation ‒ it is a strategic necessity for ensuring sustainable value creation. With the right legal framework, companies can confidently explore AI's opportunities while complying with current and future legislation.
The use case must take centre stage
The classic mindset, "law hinders innovation," likely stems from processes where business or technology is the focal point, and only afterwards does one ask: "Are we allowed to do this in our business?" or: "Are we allowed to use this technology?"
Legal professionals, by contrast, have learned to start with the purpose behind a business initiative or the use of a technology.
Example: "Are we allowed to use a tape recorder?" Here, the legal professional will ask: "For what purpose?"
The same question arises when the business asks: "Are we allowed to use ChatGPT?" If the business then responds: "To communicate with our customers," the legal professional will again ask: "About what?"
By focusing on the use case, many potential misunderstandings between legal and business perspectives can be avoided. This approach enables the business to more easily articulate what they need the legal team to assess.
The above example, with the use case at the centre, would look like this:
"Are we allowed to record all customer conversations and transcribe them using AI technology to document agreements with customers and improve service levels?"
Not only does the legal professional receive a clear use case to evaluate, but the business is also compelled to articulate what they intend to do and the specific value they expect to create.
Screening: Is it legal?
The broad application possibilities of AI and its expected role as a competitive game-changer have made AI a strategic priority. The late-stage legal review described at the outset must therefore be replaced by a far better process ‒ one that involves legal professionals early on and focuses on the use case rather than the technology itself.
By integrating legal screening as the first step in any AI use case, businesses can:
- Save valuable development and implementation time
- Avoid costly adjustments later in the process
- Ensure documentation from the start
- Build trust between the business and the legal department
- Design solutions that are compliant from the outset.
Practical approach to early screening (a short illustrative sketch follows the list):
- Establish a quick and efficient process for the initial legal assessment of AI use cases.
- Develop a checklist with key questions on:
  - Data foundation and GDPR compliance
  - AI system's risk classification under the AI Act
  - Transparency requirements and documentation needs
  - Liability implications.
- Foster a structured dialogue between business and legal through:
  - Fixed touchpoints in the early phases of the project
  - Clear templates for describing use cases
  - Defined escalation processes.
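To make the checklist concrete, here is a minimal sketch in Python of how a use-case intake record and a simple triage rule might look. All names, fields and risk tiers are illustrative assumptions for this article, not a legal taxonomy or a prescribed tool.

```python
from dataclasses import dataclass
from enum import Enum


class AIActRiskClass(Enum):
    """Tiers loosely mirroring the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AIUseCase:
    """Intake record mirroring the screening checklist above."""
    description: str              # the use case, stated purpose-first
    data_categories: list[str]    # e.g. ["customer audio", "transcripts"]
    involves_personal_data: bool  # triggers the GDPR questions
    risk_class: AIActRiskClass    # provisional AI Act classification
    transparency_notes: str = ""  # what users and data subjects are told
    liability_notes: str = ""     # known contractual or tort exposure


def needs_full_legal_review(case: AIUseCase) -> bool:
    """Triage rule: escalate personal-data and high-risk cases to legal."""
    return case.involves_personal_data or case.risk_class in (
        AIActRiskClass.UNACCEPTABLE,
        AIActRiskClass.HIGH,
    )


# The transcription example from earlier in the article:
case = AIUseCase(
    description="Record and transcribe customer calls with AI to document "
                "agreements and improve service levels",
    data_categories=["customer audio", "transcripts"],
    involves_personal_data=True,
    risk_class=AIActRiskClass.LIMITED,
)
assert needs_full_legal_review(case)  # personal data, so escalate to legal
```

Even a lightweight structure like this forces the business to answer the key questions before legal review begins ‒ which is precisely the point of early screening.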
The legal complications
Many companies overlook fundamental legal requirements in their eagerness to implement AI solutions. Two key examples from the GDPR illustrate why early legal screening is crucial.
- Automated Decisions (GDPR, Article 22): Individuals have the right not to be subject to decisions based solely on automated processing where those decisions have legal or similarly significant effects. AI systems therefore cannot make such decisions on their own ‒ meaningful human involvement is required, especially in areas such as HR, credit assessments, pricing and customer segmentation. Businesses should build human judgment into the process, document it and ensure transparency in their systems (a minimal sketch of such a review gate follows these examples).
- Compatibility Assessment (GDPR, Article 6(4)): When data is used for new purposes, an assessment must be made to determine whether it aligns with the original purpose. AI projects therefore require documentation, transparency and often additional legal bases. Historical data cannot simply be reused, and the sensitivity of the data and its impact on individuals must be carefully evaluated.
These examples highlight that legal screening should be incorporated from the outset to ensure that design, data and processes comply with the requirements. Businesses should document decisions, involve their advisors early and consider the need for a Data Protection Impact Assessment (DPIA) to establish compliance from the start.
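To make the human-involvement requirement tangible, here is a minimal, hypothetical Python sketch of a decision gate that refuses to finalise an automated decision with significant effects until a human has reviewed it, and that logs the review for transparency. The names are invented for illustration; a production system would need far richer review and audit logic.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                    # e.g. "approve" or "decline"
    significant_effect: bool        # legal or similarly significant impact
    reviewer: Optional[str] = None  # set once a human has meaningfully reviewed


def finalise(decision: AutomatedDecision, audit_log: list[str]) -> str:
    """Block purely automated decisions with significant effects until a
    human has reviewed them (GDPR, Article 22), and log the review."""
    if decision.significant_effect and decision.reviewer is None:
        raise PermissionError(
            f"Decision for {decision.subject_id} requires human review"
        )
    audit_log.append(
        f"{decision.subject_id}: {decision.outcome} (reviewer: {decision.reviewer})"
    )
    return decision.outcome
```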
Compliance as a strategic asset
When the legal screening gives the green light to a use case, a new phase of the journey begins. Many companies view compliance tasks as an administrative burden that takes resources away from core activities. This is a dangerous misunderstanding in the AI era.
With the new EU AI Act and existing regulations like GDPR, ongoing compliance tasks are becoming an unavoidable part of running a modern, data-driven business. These are not just legal requirements ‒ they form the foundation for responsible and sustainable value creation with AI.
Here are some practical recommendations you can implement:
- Budget realistically for compliance tasks from the outset
- Integrate compliance activities into project plans and resource allocations
- Automate compliance processes where possible (see the sketch after this list)
- Build internal competencies for managing ongoing compliance
- Establish clear responsibilities and reporting lines.
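As a small illustration of the automation point above, the sketch below tracks recurring compliance tasks and flags the ones that are overdue. The task names and cadences are invented for this example; the real register would come from your own compliance programme.

```python
import datetime as dt

# Hypothetical register of recurring compliance tasks and their cadence in days.
COMPLIANCE_CADENCE_DAYS = {
    "Review DPIA for AI transcription of customer calls": 180,
    "Re-assess AI Act risk classification": 90,
    "Spot-check human-review audit logs": 30,
}


def overdue_tasks(last_done: dict[str, dt.date], today: dt.date) -> list[str]:
    """Return tasks whose cadence has lapsed, ready to be routed to the
    responsible owner under the reporting lines established above."""
    return [
        task
        for task, cadence in COMPLIANCE_CADENCE_DAYS.items()
        if (today - last_done.get(task, dt.date.min)).days >= cadence
    ]
```

Flagging lapsed tasks automatically keeps the budgeted compliance work visible in day-to-day operations instead of leaving it to memory.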
Legal professionals' qualifications must keep pace
AI can effectively identify patterns, process large volumes of data, suggest standard solutions and flag potential issues, but the technology requires an active role from legal professionals to ensure responsible use. The legal professional must validate AI’s conclusions, assess context and nuance, and make ethically sound decisions, especially in complex situations.
To succeed in balancing automation with legal expertise, it is essential for legal professionals to understand AI’s functionalities, identify biases and develop systematic methods for validation. At the same time, they must document assessments and decisions to create transparency and accountability, ensuring the optimal synergy between technology and legal judgment.
AI is not just another IT project or legal challenge ‒ it is a transformative force that demands new ways of collaboration. The companies that succeed in breaking down silo thinking and fostering true interdisciplinary cooperation will be the strongest players in the market of the future.
Do you need a sparring partner?
Do you want to ensure responsible and compliant AI implementation?
Contact us to discuss how legal screening and strategic compliance can help you leverage AI safely and effectively.