Open-Source AI Models Versus AI Regulation

Raphaëlle d'Ornano
4 min read · Dec 13, 2023
Photo credit: Markus Winkler

To get a sense of the technological and cultural impact generative AI (“genAI”) has had over the past year, one doesn’t need to know much about the underlying technology. Just look at The New York Times, which splashed stories about the drama over the ouster of OpenAI’s CEO and his subsequent reinstatement across its front page.

This kind of breathless coverage of a startup that rapidly evolved from an obscure Silicon Valley non-profit into a tech titan in just a couple of years is a testament to the stakes surrounding genAI. While the company has raised more than €10.5 billion from a mix of VCs and Microsoft, its technology, along with competitors like Google’s Bard, has incited moral panic among some and been hailed as revolutionary by others.

Amid this uncertainty comes a push to regulate. In October, the Biden Administration issued an executive order requiring AI developers to share safety test results with the U.S. government. Europe went even further last week by agreeing on the EU AI Act, a sweeping piece of legislation.

European policymakers finally reached a compromise, even though the Act has yet to be finalized, but the process demonstrated just how difficult it is to design appropriate rules for a technology that is developing so rapidly and moving in directions that are impossible to fully predict. Almost everyone in the tech sector agrees that genAI is a “big” thing, but it is less clear what it means and where it is going. The challenge is that a region like Europe wants to capture the benefits of AI and establish economic and technical leadership while at the same time protecting its citizens and society from risks and potential harms such as job loss, misinformation, and threats to national security.

In other words, regulators want to regulate a moving object while preserving its ability to keep moving.

The EU is typically a slow, deliberative institution, and discussions about AI regulation indeed started a couple of years ago. Generative AI emerged after an initial draft of the rules had been written, and its seismic impact sent policymakers scrambling back to include rules around issues such as the large language models that power tools like OpenAI’s ChatGPT and DALL-E. That process quickly splintered member states: France, Germany, and Italy favored less restrictive rules to allow homegrown startups to develop, while others wanted to require detailed disclosures about LLMs and focus on pre-emptively mitigating risk.

The details of the compromise haven’t been published, but the general agreement targets “high-risk AI”: it bans certain types of biometric capabilities, creates ways for citizens to lodge complaints, and imposes greater disclosure obligations for AI used in sensitive areas like banking and insurance. Negotiators dropped references to “foundation models,” so the Act would not apply to open-source models like the one from newly minted French unicorn Mistral unless they are considered high risk or used for banned purposes. The rules instead target “general-purpose AI with systemic risk.” As Politico noted, many of these terms must still be defined and the specific rules actually written.

Still, it could take two years before any rules are formally adopted and take effect: an eternity in the fast-moving world of AI. But the underlying goal here seems to be greater transparency, and in that respect, we may be seeing a better solution emerging when it comes to regulating models: open source.

Open-source foundation models are built on open technology and promise just the kind of transparency regulators are seeking, while still developing at a breakneck pace. Even better, investors have been converted to the cause. Paris-based Mistral AI just raised a $415 million Series A round at a reported $2 billion valuation. Berlin-based Aleph Alpha recently raised a $500 million Series B round. And over the summer, Hugging Face, a U.S.-based startup with French founders, raised $235 million at a $4.5 billion valuation for its open-source platform that lets developers train machine learning models. Throw in the new AI research institute Kyutai, backed with €300 million from French entrepreneurial godfather Xavier Niel, shipping scion Rodolphe Saadé, and former Google CEO Eric Schmidt, and you have a formidable movement behind the open-model approach to AI.

This transparency can be a key step towards building trust among the public. That’s important because we can’t overlook the many benefits that AI promises. As Marc Andreessen recently wrote in his Techno-Optimist Manifesto, AI is poised to deliver tremendous improvements to medicine, health care, productivity, safety, and even the challenge of climate change. “We believe Artificial Intelligence is best thought of as a universal problem solver. And we have a lot of problems to solve,” Andreessen wrote.

AI needs guardrails to ensure proper governance and to monitor risks and potential harms so that the worst actors cannot pursue harmful uses. That is true for open-source models as well. GenAI leaders such as OpenAI and Google have argued that open source is too risky because it lacks moderation and safety measures. Mistral AI is counting on its community of users to spot those problems and correct them. The debate is still open.

In any case, AI’s impact on areas like pay and employment will be enormous, so it is reasonable for people to ask questions and be cautious. But it would be a tragedy if unfounded fears led to the blocking of a technology that promises innovations capable of radically improving so many facets of society and the economy.


Raphaëlle d'Ornano

Managing Partner + Founder, D’Ornano + Co. A pioneer in Hybrid Growth Diligence. Paris - NY. Young Leader, French American Foundation 2022. Marathon runner.