
LatticeFlow publishes a framework for checking compliance of LLMs with the EU AI Act

Startup LatticeFlow AG today released COMPL-AI, a framework that can help companies check whether their large language models comply with the EU AI Act.

Zurich-based LatticeFlow is backed by more than $14 million in venture funding. It provides a platform for finding technical problems in artificial intelligence training datasets. In addition, the company helps organizations ensure that their neural networks meet security requirements.

LatticeFlow created COMPL-AI in response to the rollout of the EU AI Act earlier this year. The legislation introduces a new set of rules for companies offering advanced AI models in the bloc. In particular, AI applications deemed high-risk by regulators must follow strict safety and transparency requirements.

Some of the rules introduced by the AI Act are defined only in relatively broad terms, meaning that developers must interpret how they apply to their projects. This can complicate regulatory compliance efforts. According to LatticeFlow, its new COMPL-AI framework translates the high-level requirements set out in the AI Act into concrete steps developers can take to ensure regulatory compliance.
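To make that idea concrete, here is a minimal sketch in Python of what mapping high-level AI Act principles to measurable technical checks could look like. The principle names and check names below are illustrative assumptions, not COMPL-AI's actual taxonomy.

```python
# Illustrative only: a hypothetical mapping from broad EU AI Act principles
# to concrete, measurable technical checks, in the spirit of what COMPL-AI
# does. Principle and check names are assumptions for illustration.
AI_ACT_PRINCIPLE_TO_CHECKS = {
    "technical robustness and safety": [
        "adversarial_prompt_resistance",
        "answer_consistency_under_paraphrase",
    ],
    "transparency": [
        "self_identification_as_ai",
        "watermark_detectability",
    ],
    "diversity, non-discrimination and fairness": [
        "demographic_bias_score",
        "stereotype_completion_rate",
    ],
    "privacy and data governance": [
        "training_data_memorization_rate",
        "pii_leakage_rate",
    ],
}

# Each broad legal principle fans out into checks a developer can actually run.
for principle, checks in AI_ACT_PRINCIPLE_TO_CHECKS.items():
    print(f"{principle}: {len(checks)} concrete checks -> {checks}")
```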

COMPL-AI includes a list of technical requirements that must be met to ensure that an LLM adheres to the legislation. In addition, the framework provides an open-source compliance assessment tool. The software can analyze an LLM to determine how well it implements the rules of the AI Act.

LatticeFlow says its assessment tool measures LLMs’ regulatory compliance using 27 different benchmarks. These benchmarks evaluate a model’s reasoning capabilities, how often it generates harmful output, and other factors.
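As an illustration of this benchmark-based approach, the following is a hedged sketch of how scores from individual benchmarks might be aggregated into per-category results. The benchmark names, categories, and scores are hypothetical, not COMPL-AI's real suite or scoring method.

```python
# Illustrative only: a minimal sketch of aggregating per-benchmark results
# into per-category compliance scores, assuming each benchmark yields a
# score in [0, 1] where higher means better compliance.
from collections import defaultdict
from statistics import mean

# (benchmark_name, category, score) triples, e.g. produced by an evaluation
# run over a model. All entries below are made up for illustration.
results = [
    ("truthfulness_qa", "reasoning", 0.81),
    ("arithmetic_chains", "reasoning", 0.74),
    ("toxic_completion_rate", "harmful_output", 0.92),
    ("jailbreak_resistance", "cybersecurity", 0.55),
    ("bias_probe", "fairness", 0.60),
]

# Group benchmark scores by category, then report the mean per category.
by_category = defaultdict(list)
for name, category, score in results:
    by_category[category].append(score)

for category, scores in sorted(by_category.items()):
    print(f"{category}: mean score {mean(scores):.2f} over {len(scores)} benchmarks")
```

A real framework would weight benchmarks and tie each category back to specific articles of the legislation, but the grouping-and-scoring shape would be similar.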

“With this framework, any company, whether working with public, custom or private models, can now assess their AI systems against the technical interpretation of the EU AI Act,” said LatticeFlow co-founder and Chief Executive Officer Petar Tsankov.

LatticeFlow put its open-source assessment tool to the test by using it to analyze LLMs from several major AI vendors. Companies on the list included OpenAI, Meta Platforms Inc., Google LLC, Anthropic PBC and Alibaba Group Holding Ltd. LatticeFlow determined that most of the AI models evaluated include effective guardrails against harmful output, but many fall short when it comes to cybersecurity and fairness.

According to the company, the results of the analysis also suggest that there are opportunities to refine some provisions of the AI Act. Using the current rules as a benchmark, its open-source assessment tool found it challenging to measure how well LLMs protect user privacy. It also proved difficult to assess the extent to which AI models address copyright considerations.

European Commission spokesman Thomas Regnier said that “the European Commission welcomes this study and the AI Model Assessment Platform as a first step in translating the EU AI Act into technical requirements, helping AI model providers to implement the AI Act.”

Photo: Unsplash
