Are we all going to get wet?
I think it will be a while. Sam Altman talks copyright in Congress, more American pressure on European regulators, some thoughts on world government, and a little grounding
The dubious copyright status of LLMs already gives lawmakers plenty of leverage, should they really want to grapple with the hard questions about foundation models. When even the leading legal experts once cited as saying AI training is fair use are now starting to say that an LLM fair use defense will surely fail, the sustainability of the current approach must be deeply in doubt. (See previous newsletters here and here for discussion of the Henderson paper.)
The question of copyright did come up during Sam Altman’s “listening tour”, sorry, testimony, at the Senate Judiciary Committee, but it wasn’t really given any space in the press coverage or commentary. That’s partially because Altman’s answers were so vague that there wasn’t much to hook onto. The first Senator to raise the issue was Marsha Blackburn (R-TN), who mobilized the case of Nashville singers and songwriters having their works used for training, and their voices and styles appropriated by model outputs. “Then, then let me ask you this. How do you compensate the artist?”
To which Altman replied: “Okay. We’d like to, we’re working with the artists now, visual artists, musicians, to figure out what people want there. There’s a lot of different opinions, unfortunately.”
Really? Artists seem to want fair compensation and the right to opt out of having their works used for AI training.
Blackburn: “Can you commit, as you’ve done with consumer data, not to train ChatGPT, OpenAI Jukebox or other AI models on artists’ and songwriters’ copyrighted works, or use their voices and their likenesses, without first receiving their consent?“
There follows some further disjointed exchange as Altman dodges, seemingly unprepared to have OpenAI Jukebox used against him. Blackburn brings up Napster, “which cost a lot of artists a lot of money” and then we get this:
Blackburn: “…I think it’s a fair warning to you all if we are not involved in this from the get-go, and you all already are a long way down the path on this, but if we don’t step in, then this gets away from you. So are you working with a copyright office? Are you considering protections for content generators and creators in generative AI?“
It’s the most directly challenging statement from a lawmaker to Altman in the session. But no problem…
Altman: “Yes, we are absolutely engaged on that. Again, to reiterate my earlier point, we think that content creators, content owners, need to benefit from this technology. Exactly what the economic model is... We’re still talking to artists and content owners about what they want. I think there’s a lot of ways this can happen, but very clearly, no matter what the law is, the right thing to do is to make sure people get significant upside benefit from this new technology. And we believe that it’s really going to deliver that. But that content owners, likenesses, people totally deserve control over how that’s used and to benefit from it.“
It’s amazing to me how Sam Altman manages both to sound reasonable and to give the impression that his own firm’s efforts are all that matter. “Exactly what the economic model is… we’re still talking to artists and content owners about what they want…” The fact that content creators and content owners have real economic and moral interests protected by copyright (among other legal mechanisms) would seem to be a lot more important than OpenAI’s determinations on the matter! And gee, there are important public venues for these conversations. The only accountability Altman seems to think is important is his firm’s accountability to itself. “Ownself check ownself”, as we say in Singapore.
This blind spot comes up again when Lindsey Graham (R-SC) asks Altman: “Okay. So under the law as it exists today, this tool you create, if I’m harmed by it, can I sue you?“
Altman hesitates… “That is beyond my area of legal…”
Graham was looking for a Section 230 safe harbour claim, but he’s not getting that. Still, he can’t quite believe the response… “You’ve never been sued?”
“Not for that, no.” Only for “pretty frivolous things, like happens to any company”.
I guess you wouldn’t expect Altman to dive into this with gusto, but it still seems revealing that he doesn’t try to portray OpenAI as a firm like any other, standing up and being responsible in its product offerings, as IBM does in the same hearing.
And Altman neglects to mention that OpenAI is being sued over the legitimacy of training its software-writing tool on open source software repositories like GitHub, a lawsuit with at least a non-trivial chance of causing some serious issues. Or is that lawsuit also just “pretty frivolous”?
The Flip Side
The flip side of Altman’s appearance on Capitol Hill came a few days later, speaking to reporters in London. He threatened to pull OpenAI out of Europe if the EU’s AI Act proved too burdensome. “We will try to comply, but if we can’t comply we will cease operating.” The Financial Times reported that Google’s Sundar Pichai was in Europe the same week, lobbying on the same issues. In fact, Google released a white paper setting out principles for the sort of regulation it would like to see.
Two elements of the European draft law have excited particular comment, as covered previously: 1) that creators of foundation models should share responsibility for their outputs with the companies or individuals that deploy them, and 2) that the companies also reveal what copyrighted material was used in their training. The two points seem rather innocuous in the context of Altman’s comments on Capitol Hill. But we must grant Altman a fair point: “the details matter”. Still, to read the FT story is to see the US tech industry lined up against European regulators. The story quotes a senior Salesforce executive:
Brussels “will act without reference to reality, as it has before” and that, without any European companies leading the charge in advanced AI, the bloc’s politicians have little incentive to support the growth of the industry. “It will basically be European regulators regulating American companies, as it has been throughout the IT era.”
Guess Salesforce doesn’t much like GDPR? The American version is so much better…no, wait…
And on the flip flip side…
The Google line is “AI is too important not to regulate, and too important not to regulate well.” So let me include a quick report from the other side of the equation, looking ahead to how regulation might be done. The Guardian just ran a piece profiling Rumman Chowdhury, a former MD at Accenture and a senior figure at Twitter, as the sort of person best placed to drive regulation efforts. (“She has always found people much more interesting than what they build or do.”)
But Chowdhury’s ambition for a global AI regulator (at least as set out in an editorial she wrote for Wired) seems as unmoored as Altman’s desire to have OpenAI just sort everything out themselves. Chowdhury calls for “a generative AI global governance body to solve these social, economic, and political disruptions beyond what any individual government is capable of, what any academic or civil society group can implement, or any corporation is willing or able to do…”
“Second, this group’s job is not just to identify problems, but to experiment with novel ways to fix them. Using the “tax” that corporations pay to join, this group might set up an education or living support fund for displaced workers that people can apply for to supplement unemployment benefits, or a universal basic income based on income levels, regardless of employment status, or proportional payout compared to the data that could be attributed to you as a contributing member of digital society.“
So these societal solutions will come from the global regulator of AI? At least on a trial basis…? These are her own words, mind you. The whole thing seems off the rails…
—
Well, there’s a good chance I am the one off the rails, as I am writing this from a friend’s exquisite little house in Bali, where I am locked up in isolation after testing positive for COVID-19 five days ago. Yes, there could be (there have been!) worse places, and my symptoms were mild and have now largely passed. I imagine I caught the coronavirus eight days ago at the crowded and woefully under-ventilated domestic air terminal in Denpasar, catching a turboprop to Waingapu, East Sumba, Nusa Tenggara Timur. But really, who knows? Here I go, seeking to impose a narrative on the complex stochastic process of disease transmission.
Sumba is one of Indonesia’s least developed islands, though we saw only one of its more developed edges, along the northeastern coast from Waingapu to Malolo, served by an excellent metalled road. 4G coverage faded out as we got five or six miles up the different river valleys we assayed, again on good roads. OK, by no means a scientific sample. Sumba has a semi-dry, savannah-like climate, with wet-rice cultivation confined to the greener river valleys carved out of the limestone. It is not densely populated, and it would be expensive to cover the whole territory with a power grid. I did see solar panels on many of the houses not obviously connected by power lines – cheap solar would seem to have lots of potential for an island like Sumba.
I was reminded of a conversation earlier in the trip, in Bali with Indonesian publishing powerhouse Laura Prinsloo, as we discussed the tenure of our mutual acquaintance, Indonesia’s wunderkind Gojek founder and Minister of Education Nadiem Makarim. His political career stumbled a bit when his ambitions to digitise education were caught out by the real limits of infrastructure in an archipelago that stretches 14,000 islands across an eighth of the world’s circumference. Always worth a reminder…
I come back to Bali, my laptop and this newsletter really wondering about the hype, as Bard, Bing and ChatGPT continue to hallucinate each time I use them. How does the assumption that this is all going to be earth-changing still have such a hold? These are language models…. (Bill Gates says that they will enable us to have personal digital assistants! What a vision!)
Maybe it was the impact of hanging out with some rather more grounded Sumbanese, and finally experiencing what it feels like to be in the magnificent central ritual house of a true “house society”1, lending my presence to the hundreds coming to mourn the King of Rinde, his body right there, across the massive smoking hearth at the centre of the house, surrounded by mourners, and wrapped this whole last year in hand-woven textiles, to be interred the next day in one of a row of megalithic tombs in the centre of the village.
Or maybe it was the effect of reading Meghan O’Gieblyn’s God, Human, Animal, Machine, a book I can’t recommend highly enough. It is a thoroughly enjoyable philosophical inoculation for discussions on AI, putting transhumanism into historical context, tracing the roots of our search for an artificial intelligence in religious thought. She cites John Searle:
“Computers, he argued, have always been used to simulate natural phenomena—digestion, weather patterns—and they can be useful to study these processes. But we veer into superstition when we conflate the simulation with reality. ‘Nobody thinks, well, if we do a simulation of a rainstorm, we’re all going to get wet,’ he said. ‘And similarly, a computer simulation of consciousness isn’t thereby conscious.’”
1. See Roxana Waterson, The Living House: An Anthropology of Architecture in South-East Asia (Singapore: Oxford University Press, 1990).