President Joe Biden has issued a “landmark” executive order on AI, an expansive directive that seeks to balance the advancement of AI technology with the mitigation of its many risks.
In the order, the White House makes it very clear that it doesn’t want to stop the AI gravy train. Describing AI as “one of the most powerful technologies of our time,” the mandate specifically notes that it intends to ensure that America “leads the way in seizing the promise” of AI amid the global arms race for the technology. And though it does list a number of specific AI concerns, it mostly uses vague, sweeping language to discuss its plans to address them, providing little in the way of solutions or timelines.
The order breaks the White House’s AI priorities into eight distinct categories, each with its own subset of concerns. These broader categories include renewed safety and security standards for private AI developers, civil rights and equity in AI algorithms and their applications, consumer privacy protections, and support for workers who might find themselves displaced by AI systems, among others.
One of the more notable provisions in the order includes new requirements for AI “red-teaming” practices, or the broad risk assessment processes by which AI makers test their technologies for issues like racial bias, misinformation generation, and other potentially problematic outputs.
Until now, there’s been little to no required government oversight of AI red-teaming, meaning that we’ve been relying on AI companies to develop their own processes, test their technology themselves, and check their own work.
Under the new mandates, though, companies “developing any foundation model” — basically, any large language model like OpenAI’s GPT-4 or Meta’s LLaMa — are required to “notify the federal government when training the model, and must share the results of all red-team safety tests.”
In short, AI red teams will have to submit their AI tests to the government. Then, ideally, the government will do its due diligence and give them a grade.
“At the end of the day, the companies can’t grade their own homework here,” White House chief of staff Jeff Zients told NPR. “So we’ve set the new standards on how we work with the private sector on AI, and those are standards that we’re going to make sure the private companies live up to.”
But other firm rules for companies are few and far between in Biden’s executive order. Mostly, the White House talks broadly about what it’s going to eventually do.
On the issue of protecting against the threat of AI-designed bioweapons, for example, the administration notes that it will develop “strong new standards for biological synthesis screening,” and that “agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.”
The order, however, provides nothing in the way of an expected timetable for that plan and doesn’t go into any further detail than that.
This pattern repeats throughout the document. In the civil rights section, for example, the White House accurately states that “irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing.” The administration then promises to do things like “provide clear guidance” to landlords and federal benefits programs on how AI should be used to avoid bias, “address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and federal civil rights offices,” and “ensure fairness throughout the criminal justice system by developing best practices on the use of AI.”
But again, it fails to offer any details regarding what that guidance or those best practices might look like, or when we should expect to see them roll out.
Elsewhere, the administration promises to produce a report soon on how AI might impact workforces, which we’ll definitely be keeping an eye out for. And speaking of workforces, the order also declares that the US will ease immigration pathways for foreign AI researchers and experts as it seeks to build out its federal AI sector.
On the whole, it’s good to see the administration make a strong statement about AI’s importance, its very real and current risks, and the likelihood of its impact on our future.
Still, let’s not mistake this order for a firm regulatory crackdown. In fact, the order even admits that these promises are only “vital steps forward” and that “more action will be required.”
The administration “will continue to work with Congress to pursue bipartisan legislation,” it adds, “to help America lead the way in responsible innovation.”
Passing any legislation, especially federal regulation, is difficult and tedious in pretty much any sector. That’s particularly true in tech, a field that lawmakers on Capitol Hill tend to understand poorly, if at all. While this document makes the White House’s belief in the transformative power of AI very clear, it’s still a very early step on the road from Silicon Valley lawlessness to specific, enforceable federal AI regulation.
And that’s especially relevant as companies like OpenAI have actively been calling on Congress to come up with meaningful AI regulations.
If anything, it offers a glimpse into the Biden administration’s thinking about AI, while attempting to plug the one or two holes it feasibly can at this stage in a woefully regulation-free marketplace.
That said, it’s worth wondering how effective an order like this can even be at this point. The AI market has moved incredibly quickly over the past year, and in many ways, from the omnipresence of open-source systems to the public availability of still-unregulated AI technologies, the Pandora’s box of AI has already been wrenched open.
All of the mitigation measures presented by the administration will surely take time, a reality that raises the question: by the time most of the propositions in the executive action finally take effect, how far gone will AI already be?