Looks like the Telegraph is the latest publisher none too happy with Google for the way it chews up and indexes its news stories and then hangs them out alongside everyone else’s for all the world to see.
While it hasn’t come out and directly said “Oi, Google – stop pinching our stuff!”, editor Will Lewis made comments at the 6th Newsroom Summit about the paper’s ability to protect its content, saying it was under attack from the likes of Google and Yahoo, which want to access it for free.
Google has been here before, of course, with a court case in Belgium which prevented it aggregating newspaper content and a public spat with Agence France-Presse which was later settled, although both are keeping schtum on the details.
It’s a tricky issue – on the one hand, The Telegraph owns the copyright on the content. Just because it’s published it on the internet doesn’t mean it’s relinquished those rights. But does Google’s use actually infringe copyright? It’s not as if Google is taking entire articles and republishing them verbatim. And what about the increase in traffic that results from a top listing?
All good questions, and ones that will remain up in the air until it all gets hammered out in court – well, if it ever gets that far.
However, as WebProNews points out, The Telegraph could stop Google right now, if it really wanted to. How? Through a robots.txt file – a simple text file on a web server that says what automated programs (or robots) are allowed to do.
It allows the web host to tell any unwelcome bots to turn around and go back to where they came from. With just a couple of lines of text, The Telegraph could be completely untouched by Google. Of course, that also means it would fall out of the search results – but then that wouldn’t be a big deal, would it?
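For the curious, those “couple of lines” would look something like this – a hypothetical robots.txt placed at the site root (not the Telegraph’s actual file), telling Google’s crawler to keep out of the whole site:

```
# Hypothetical example: block Google's crawler from the entire site
User-agent: Googlebot
Disallow: /
```

Swap `Googlebot` for `*` and the same two lines would shut out every well-behaved crawler, not just Google’s. Compliance is voluntary, mind – robots.txt is a polite request, not a lock on the door – but the major search engines do honour it.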