For a “vibrant content ecosystem,” Google’s VP of Trust says web publishers need “choice and control over their content, and opportunities to derive value from participating in the web ecosystem.” (Does this mean Google wants to buy the right to scrape your content?)

In a blog post, Google’s VP of Trust starts by saying that unfortunately, “existing web publisher controls” like your robots.txt file (a community-developed web standard) were created nearly 30 years ago, “before new AI and research use cases…”
We believe it’s time for the web and AI communities to explore additional machine-readable means for web publisher choice and control for emerging AI and research use cases. Today, we’re kicking off a public discussion, inviting members of the web and AI communities to weigh in on approaches to complementary protocols. We’d like a broad range of voices from across web publishers, civil society, academia and more fields…
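For context, robots.txt is already a machine-readable publisher control: a plain text file at the root of a site that tells crawlers, by user agent, which paths they may fetch. Here is a minimal sketch of how a well-behaved crawler consults it, using Python's standard library; the crawler name "ExampleAIBot" and the site URL are hypothetical, not anything named in Google's post.

# Sketch: checking a site's robots.txt before crawling, assuming a
# hypothetical crawler called "ExampleAIBot".
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

# A well-behaved crawler asks whether its user agent may fetch a URL
# before requesting it.
allowed = parser.can_fetch("ExampleAIBot", "https://example.com/articles/some-page")
print("ExampleAIBot may crawl:", allowed)

Google's point is that this mechanism predates AI training and research crawling, so it offers publishers no finer-grained way to say, for example, "index my pages for search, but don't use them for model training."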

Link to original post https://tech.slashdot.org/story/23/07/08/2158211/google-suggests-robotstxt-file-updates-for-emerging-ai-use-cases?utm_source=rss1.0mainlinkanon&utm_medium=feed from Teknoids News
