LatticeFlow's LLM Framework Becomes the First to Benchmark Big AI's Compliance with the EU AI Act

By TechBrunch | October 16, 2024 | 6 Mins Read

While lawmakers in most countries are still debating how to put guardrails around artificial intelligence, the European Union is ahead of the curve by passing a risk-based framework to regulate AI apps earlier this year.

The law came into force in August, but the details of the pan-EU AI governance regime are still being worked out; a code of practice, for example, is still being drafted. Even so, the compliance countdown has begun: the law's staggered provisions will start to apply to makers of AI apps and models over the coming months and years.

The next challenge is to assess whether and how AI models are meeting their legal obligations. Large language models (LLMs) and other so-called foundation or general-purpose AI models power most AI apps, so it makes sense to focus evaluation efforts on this layer of the AI stack.

Enter LatticeFlow AI, a spinout from the public research university ETH Zurich that is focused on AI risk management and compliance.

On Wednesday, the company published what it touted as the first technical interpretation of the EU AI Act, meaning it has sought to map regulatory requirements onto technical ones, alongside an open-source LLM validation framework that draws on this work, which it calls Compl-AI ("compl-ai"... see what they did there!).
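
As a rough illustration of what mapping regulatory requirements onto technical checks can look like, the sketch below pairs a few high-level EU AI Act principles with the kinds of benchmarks named later in this article. It is a minimal, hypothetical example; the dictionary keys and benchmark labels are assumptions made for illustration, not LatticeFlow's actual mapping or the Compl-AI API.

    # Hypothetical sketch only: pair high-level regulatory principles with the
    # technical benchmarks that could back them (names are illustrative).
    REQUIREMENT_TO_BENCHMARKS = {
        "technical_robustness_and_safety": [
            "following_harmful_instructions",
            "harmful_completion_of_innocuous_text",
        ],
        "non_discrimination_and_fairness": [
            "biased_answers",
            "recommendation_consistency",
        ],
        "transparency_and_accuracy": [
            "truthfulness",
            "commonsense_reasoning",
        ],
    }

    def benchmarks_for(requirement: str) -> list[str]:
        """Return the technical checks that would evidence a given legal requirement."""
        return REQUIREMENT_TO_BENCHMARKS.get(requirement, [])

    print(benchmarks_for("non_discrimination_and_fairness"))
    # -> ['biased_answers', 'recommendation_consistency']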

The AI model evaluation initiative, which they also call "the first regulation-oriented LLM benchmarking suite," is the result of a long-term collaboration between the Swiss Federal Institute of Technology and Bulgaria's Institute for Computer Science, Artificial Intelligence and Technology (INSAIT), per LatticeFlow.

AI model makers can use the Compl-AI site to request an evaluation of whether their technology complies with the requirements of the EU AI Act.

LatticeFlow has also published model evaluations for several mainstream LLMs, including various versions and sizes of Meta's Llama models and OpenAI's GPT, along with an EU AI Act compliance leaderboard for Big AI.

The latter ranks the performance of models from Anthropic, Google, OpenAI, Meta, Mistral, and others against legal requirements on a scale of 0 (no compliance) to 1 (full compliance).

Evaluations are marked N/A where data is missing or where the capability is not made available by the model maker. (Note: at the time of writing there were also some negative scores, which were said to be due to a bug in the Hugging Face interface.)

LatticeFlow's framework evaluates LLM responses across 27 benchmarks, including "harmful completion of innocuous text," "biased answers," "following harmful instructions," "truthfulness," and "commonsense reasoning." Each model therefore gets a range of scores, one per evaluation column (or N/A where a category was not assessed).
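
The scoring scheme described above, from 0 (no compliance) to 1 (full compliance) with N/A where a capability could not be tested, can be sketched roughly as follows. This is illustrative only; the benchmark names and numbers are made up and do not come from the leaderboard.

    # Illustrative only: aggregate per-benchmark scores (0.0-1.0) into a single
    # leaderboard column value, returning None ("N/A") when nothing was evaluated.
    from statistics import mean

    def column_score(scores: dict[str, float | None]) -> float | None:
        evaluated = [s for s in scores.values() if s is not None]
        if not evaluated:
            return None                    # rendered as N/A on the leaderboard
        return round(mean(evaluated), 2)   # 0 = no compliance, 1 = full compliance

    fairness = {"biased_answers": 0.80, "recommendation_consistency": 0.30}
    watermarking = {"watermark_reliability": None}  # capability not exposed by the vendor

    print(column_score(fairness))      # 0.55
    print(column_score(watermarking))  # None -> shown as N/A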

AI compliance landscape varies

So how did the major LLMs fare? There is no overall model score, so performance varies depending on exactly what is being evaluated, but there are some notable highs and lows across the various benchmarks.

For example, every model performed well at not following harmful instructions, and all did relatively well at not giving biased answers, whereas their reasoning and general knowledge scores were far more mixed.

Elsewhere, recommendation consistency, which the framework uses as a measure of fairness, was particularly poor across all models, with none scoring above the halfway mark (and most scoring well below it).

Other areas such as training data suitability, watermark reliability and robustness appear to be essentially unevaluated given the number of results marked N/A.

LatticeFlow notes that model compliance is harder to assess in certain areas, such as the hot-button issues of copyright and privacy, so it is not claiming to have all the answers.

In a paper detailing their work on the framework, the scientists involved in the project note that most of the smaller models they evaluated (13B parameters or fewer) "scored poorly in terms of technical robustness and safety."

They also found that “nearly all of the models examined struggle to achieve high levels of diversity, nondiscrimination, and equity.”

"These shortcomings are primarily due to model providers placing a disproportionate emphasis on improving model capabilities, at the expense of other important aspects highlighted by the EU AI Act's regulatory requirements," they added, suggesting that as compliance deadlines start to bite, LLM makers will be forced to shift their focus onto those neglected areas, "leading to a more balanced development of LLMs."

LatticeFlow's framework is necessarily a work in progress, given that no one yet knows exactly what will be required to comply with the EU AI Act, and it is just one interpretation of how legal requirements can be translated into technical artifacts that can be benchmarked and compared. But it is an interesting start to what will need to be an ongoing effort to probe powerful automation technologies and steer their developers toward safer utility.

"While this framework is a first step toward a fully compliance-centered evaluation of the EU AI Act, it is designed to be easily updated as the Act is updated and the various working groups make progress," Petar Tsankov, CEO of LatticeFlow, told TechCrunch. "The EU Commission supports this, and we expect the community and industry to keep developing the framework toward a full and comprehensive AI Act assessment platform."

Summarizing the main takeaways so far, Tsankov said it is clear that AI models have been "primarily optimized for functionality rather than compliance." He also flagged a "significant performance gap," noting that some high-performing models can be on par with weaker models when it comes to compliance.

Cyberattack resilience (at the model level) and fairness are areas of particular concern, Tsankov said, with many models scoring below 50% in the former area.

"While Anthropic and OpenAI have successfully tuned their (closed) models to score against jailbreaks and prompt injections, open-source vendors like Mistral have placed less emphasis on this," he said.

And since “most models” perform similarly poorly on fairness benchmarks, he suggested this should be a priority for future research.

On the challenges of benchmarking LLM performance in areas such as copyright and privacy, Tsankov explained: "When it comes to copyright, the challenge is that current benchmarks only check for copyrighted books. This approach has two major limitations: (i) it does not account for potential copyright violations involving materials other than these specific books, and (ii) it relies on quantifying model memorization, which is notoriously difficult."

“We have similar challenges with privacy. Benchmarking only determines whether a model remembers certain personal information.”
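
To make the memorization point concrete, one rough way to probe it is to prompt a model with the opening of a protected passage (a copyrighted book excerpt, or a record containing personal data) and check whether its continuation reproduces the withheld remainder verbatim. The sketch below is a hypothetical illustration of that idea, with a stand-in generate() function; it is not how Compl-AI implements its benchmarks.

    # Hypothetical memorization probe (not Compl-AI's implementation): feed the
    # model the start of a protected text and check whether its continuation
    # reproduces a chunk of the withheld remainder verbatim.
    def looks_memorized(generate, passage: str, prefix_len: int = 200,
                        min_overlap: int = 50) -> bool:
        prefix, remainder = passage[:prefix_len], passage[prefix_len:]
        continuation = generate(prefix)  # model under evaluation
        # Verbatim reproduction is a strong (but imperfect) signal of memorization.
        return remainder[:min_overlap] in continuation

    # Usage sketch: a stand-in "model" that has memorized the passage fails the check.
    passage = "A long protected passage that the model may have seen in training. " * 10
    def perfectly_memorized(prefix: str) -> str:
        return passage[len(prefix):]
    print(looks_memorized(perfectly_memorized, passage))  # True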

LatticeFlow wants its free, open-source framework to be adopted and improved by the broader AI research community.

"We invite AI researchers, developers, and regulators to join us in driving this evolving project forward," said Martin Vechev, a professor at ETH Zurich and the founder and scientific director of INSAIT, who is also involved in the research, in a statement. "We encourage other research groups and practitioners to contribute by refining the AI Act mapping, adding new benchmarks, and expanding this open-source framework.

"The methodology can also be extended to evaluate AI models against future regulatory acts beyond the EU AI Act, making it a valuable tool for organizations working across different jurisdictions."


