Perhaps it was inevitable, as the AI world absorbed the news of GPT-4, that some people would think of Frankenstein’s monster. Or HAL 9000. Or the Terminator: any of science fiction’s great stories of technologies that wrought havoc before human beings had thought through their implications. (GPT-4 is the new system from OpenAI that appears able to answer complicated questions with unprecedented precision, describe what it sees in visual images, and carry on written conversations that are deceptively similar to those of an actual human.)
Even as the latest large language model has taken the tech world by surprise, the industry is scrambling to burnish its ethical AI credentials and to keep its standards for AI ethics ahead of the rapid advances in the field. A prime case: Microsoft, which has had a Responsible AI initiative since 2017, has just added new open-source applications to what it calls its Responsible AI Toolbox, code intended, in the company’s words, “to make it easier and faster for developers to incorporate responsible AI principles into their solutions.” (Not unrelatedly, in a recent round of layoffs, Microsoft shut down an Ethics and Society team that it said had guided early AI efforts. A spokesperson, contacted by Spectrum, says there has been no letup in “the interdisciplinary way in which we work across research, policy, and engineering.”)
“AI may well represent the most consequential technology advance of our lifetime,” wrote Brad Smith, Microsoft’s vice chair and president, in a blog post in February. His words were tempered: “Will all the changes be good? While I wish the answer were yes, of course that’s not the case.”
Separately, the Partnership on AI, or PAI, a nonprofit that seeks to promote discussions of AI issues, has just published “Responsible Practices for Synthetic Media,” a set of guidelines for how to create and share multimedia content generated by AI. Members of the partnership include companies such as OpenAI, Adobe, TikTok, and the BBC’s R&D arm, as well as several AI startups.
But how effective can major tech companies be in policing AI’s development, especially given how widely the use of AI tools is spreading beyond the tech giants? If you’re concerned about deepfakes, watch the spread of “cheap fakes”: images or videos fabricated with AI’s help that may often be crude but can be made, for free, by anyone who finds an AI app online. The largest social media companies, including Meta, Twitter, and Google (which owns YouTube), have committed to removing misinformation and offensive posts. But the job keeps getting harder as more malefactors use increasingly sophisticated AI technologies.
Last month, for instance, a video turned up on Twitter of President Biden announcing he was going to start drafting American troops to protect Ukraine. It was, of course, fake—the conservative influencer who posted it came on camera after Biden to say so. He claimed it was an AI-powered warning of what the White House might do. As of this week it was still online, viewed more than four million times. It didn’t violate Twitter’s rules because it didn’t claim to be real. But a lot of people who reacted on Twitter apparently didn’t watch long enough to see the disclaimer.
How to decide, in such cases, what to do? Can the big tech companies—can anyone—set rules in advance that will work for everything that might be done with AI in the future?
“Everyone, I think, is operating in this Wild West and is eager to have some set of guidelines,” says Claire Leibowicz of the Partnership on AI. “I think, for good reasons, people are understandably skeptical of voluntary standards. At the same time, based on the swell of interest, and guidance from people from many different sectors, there’s clearly a need to develop guidelines and move more swiftly than regulation.”
Government, particularly in the United States, has moved slowly to make AI rules. That’s fine with many developers, who argue that regulators tend to be heavy-handed and behind the curve. For now, that leaves companies in charge, and so far they have tended to set fairly general standards. The PAI’s framework, for example, recommends that content creators be transparent when they’ve altered or faked something, perhaps using labels or digital watermarks so that users can easily tell. The companies agree, at least in public, that they cannot go it alone.
“Microsoft believes that some regulation of AI, particularly for high-risk uses of the technology, is necessary,” says Besmira Nushi, a principal researcher at Microsoft Research, in an email. “As governments worldwide debate approaches to regulating certain uses of AI, Microsoft is committed to doing our part to develop and deploy AI responsibly.”
Leibowicz, at the PAI, says that if companies agree on a list of harmful and responsible uses of AI, it needs to be a living document, adaptable in a fast-changing field. “And it’s our hope that that will catalyze or galvanize the field of people who have a major role to play in this effort. And, to that end, it will be a complement to regulation that’s absolutely necessary.
“But,” she adds, “I think there’s also a degree of maintaining some humility at being unable to predict the future.”