

Just because something is theoretically circumventable doesn’t mean we shouldn’t make it as hard as possible to circumvent.
Misinformation is so common these days because of a concerted effort by fascists to gain control of media companies. Once they hold power and significant influence within those companies, they can poison them, turning them into massive misinformation engines churning out content faster than we ever thought possible. This problem has existed since the rise of mass media, especially in the 19th century, but social media offers far faster and more direct channels for spreading misinformation to the masses.
And those masses do not care whether something is labeled as AI or not; they will believe it either way. That still doesn’t change the fact that AI-generated content needs to be labeled as such. What is and isn’t made by a human matters enormously. We cannot equate algorithms with people, and we need to draw that distinction as clearly as possible.
I’m curious what you would suggest for identifying generated content, if not clear labeling. Sure, it’s circumventable, but again, it’s more than what already exists, and it establishes legal precedent for imposing repercussions on companies that try to pass off AI-generated content as human-created.