In the AI world, everyone always seems to be going for broke. It’s AGI or bust — or as the gloomier title of a recent book has it, If Anyone Builds It, Everyone Dies. This rhetorical severity is backed up by big bets and bigger asks: hundreds of billions of dollars already invested by companies that now say they’ll need trillions to build, essentially, the only companies that matter. To put it another way: They’re really going for it.
This is as clear in the scope of the infrastructure as it is in stories about the post-human singularity, but it’s happening somewhere else, too: in the quite human realm of law and regulation, where AI firms are making bids and demands that are, in their way, no less extreme. From The Wall Street Journal:
OpenAI is planning to release a new version of its Sora video generator that creates videos featuring copyright material unless copyright holders opt out of having their work appear, according to people familiar with the matter …
The opt-out process for the new version of Sora means that movie studios and other intellectual property owners would have to explicitly ask OpenAI not to include their copyright material in videos the tool creates.
This is pretty close to the maximum possible bid OpenAI can make here, in terms of its relationship to copyright — a world in which rights holders must opt out of inclusion in OpenAI’s model is one in which OpenAI is all but asking to opt out of copyright as a concept. Such a proposal also seems to take for granted that a slew of extremely contentious legal and regulatory questions will be settled in OpenAI’s favor, particularly around the concept of “fair use.” AI firms are arguing in court — and via lobbyists, who are pointing to national-security concerns and the AI race with China — that they should be permitted not just to train on copyrighted data but to reproduce similar and competitive outputs. By default, according to this report, OpenAI’s video generator will be able to produce images of a character like Nintendo’s Mario unless Nintendo takes action to opt out. Questions one might think would precede such a conversation — how did OpenAI’s model know about Mario in the first place? What sorts of media did it scrape and train on? — are here considered resolved or irrelevant.
As many experts have already noted, various rights holders and their lawyers might not agree, and there are plenty of legal battles ahead (hence the simultaneous lobbying effort, to which the Trump administration seems at least somewhat sympathetic). But copyright isn’t the only area where OpenAI is making startlingly ambitious bids to alter the legal and regulatory landscape. In a deeply strange recent interview with Tucker Carlson, Sam Altman forced the conversation back around to an idea he and his company have been floating for a while now: AI “privilege.”
If I could get one piece of policy passed right now relative to AI, the thing I would most like, and this is in tension with some of the other things that we’ve talked about, is I’d like there to be a concept of AI privilege.
When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information …
We have decided that society has an interest in that being privileged and that we don’t, and that a subpoena can’t get, that the government can’t come asking your doctor for it or whatever. I think we should have the same concept for AI. I think when you talk to an AI about your medical history or your legal problems or asking for legal advice or any of these other things, I think the government owes a level of protection to its citizens there that is the same as you’d get if you’re talking to the human version of this.
Coming from anyone else, this could be construed as an interesting philosophical detour through questions of theoretical machine personhood, the effect of AI anthropomorphism on users’ expectations of privacy, and how to manage incriminating or embarrassing information revealed in the course of intimate interactions with a novel sort of software. People already use chatbots for medical advice and legal consultation, and it’s interesting to think about how a company might offer or limit such services responsibly and without creating existential legal peril.
Coming from Altman, though, it assumes an additional meaning: He would very much prefer that his company not be liable for potentially risky or damaging conversations that its software has with users. In other words, he’d like to operate a mass-market product that dispenses medical and legal advice with the legal protections of a doctor, therapist, or lawyer but with as little responsibility as possible for its outputs or its users’ inputs. There are genuinely interesting issues to work out here. But against the backdrop of numerous reports and lawsuits accusing chatbot makers of goading users into self-harm or triggering psychosis, it’s not hard to imagine why getting blanket protections might feel rather urgent right now.
On both copyright and privacy, his vision is maximalist: not just total freedom for his company to operate as it pleases, but additional regulatory protections for it as well. It’s also probably aspirational — we don’t get to a copyright free-for-all without a lot of big fights, and a chatbot version of attorney-client privilege is the sort of thing that will likely arrive with a lot of qualifications and caveats. Still, each bid is characteristic of the industry and the moment it’s in. So long as they’re building something, they believe they might as well ask for everything.