Starting Monday, YouTube creators will be required to label when realistic-looking videos were made using artificial intelligence, part of a broader effort by the company to be transparent about content that could otherwise confuse or mislead users. From a report: When a user uploads a video to the site, they will see a checklist asking if their content makes a real person say or do something they didn’t do, alters footage of a real place or event, or depicts a realistic-looking scene that didn’t actually occur. The disclosure is meant to help prevent users from being confused by synthetic content amid a proliferation of new, consumer-facing generative AI tools that make it quick and easy to create compelling text, images, video and audio that can often be hard to distinguish from the real thing.

Online safety experts have raised alarms that the proliferation of AI-generated content could confuse and mislead users.
