

Enlighten me. I hope I read it wrong.
It sounds like the EFF is advocating stripping/ignoring copyright information (as is currently done) when training LLMs, to ease the burden on small startups of tracking down copyright owners. Tracking down owners is something I had to do in productions, and yeah, it sucked, but that's how it works. (Radio is a tad different.)
The first article makes some good points, taken very literally, and I can see how they arrive at some of their conclusions; they break it down step by step very well. Copyright is murky as hell, I'll give them that, but the final generated product is what matters in court.
The second paper, while well written, is more of a press piece. But they do touch on one important part relevant to this conversation:
This is important because a prompt like "create a picture of ____ in the style of _____" can absolutely generate output from specific sampled copyrighted material, for which courts have required royalty payments in the past. An AI model can also sample a voice actor's voice so accurately that it can be confused with the real thing. There have been union strikes over this.
All in all, this is new territory, part of the fun of evolving laws. If you remove the generative part of AI, would that be enough?