According to Google, almost one trillion images now exist online, thanks to the explosion in popularity of digital cameras and camera phones, and the company is looking at ways to improve how users search for the pictures they want.
Currently, Google’s image search relies on textual information stored in and around images on web pages. This works up to a point, but it can be abused by people trying to make their pages more popular, and it depends on a human correctly describing each picture and what it contains.
Google’s idea is to use new image-processing techniques that “crawl” the pixels of an image, recognise patterns and so work out what’s in the photo.
In an interesting video interview, Google’s Director of Product Management for Consumer Search Properties, R.J. Pittman, explains how the system might be used to find new images that match one the user already has – a group photo for example – using advanced face recognition.
He also foresees much greater use of “geotagging” for photos taken with cameras that contain a GPS chip. That location data can be fed into software such as Google’s own Panoramio, letting people search for photos of a specific place.
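Geotagging works because GPS-equipped cameras write the shot’s latitude and longitude into the photo’s EXIF metadata, stored as degree/minute/second fractions plus a hemisphere letter. As a rough sketch of what location-aware software has to do with those values (the helper below is hypothetical; real code would read the tags with a library such as Pillow or exifread), converting them to the signed decimal degrees a mapping service expects looks like this:

```python
# Convert EXIF-style GPS coordinates (degrees, minutes and seconds as
# numerator/denominator pairs, plus a hemisphere reference) into signed
# decimal degrees. Hypothetical helper for illustration only; actual
# EXIF parsing would use a library such as Pillow or exifread.

def dms_to_decimal(dms, ref):
    """dms: ((deg_num, deg_den), (min_num, min_den), (sec_num, sec_den))
    ref: 'N', 'S', 'E' or 'W'."""
    degrees, minutes, seconds = (num / den for num, den in dms)
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# Example: 51 deg 30' 26" N, 0 deg 7' 39" W (central London)
lat = dms_to_decimal(((51, 1), (30, 1), (26, 1)), "N")
lon = dms_to_decimal(((0, 1), (7, 1), (39, 1)), "W")
print(round(lat, 4), round(lon, 4))
```

Once every photo carries coordinates like these, “show me pictures taken here” becomes a simple proximity query rather than a guess from captions.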
Sounds like a really useful service, with the downside that, if it gets too advanced, it’s going to be near impossible to remain “untagged” from your embarrassing Facebook photos — the intelligent image search engine will find you anyway. Even with a red face and wearing nothing but your underwear.