Recently, Google changed the way they view sites. In the past, search engines tended to see a website much as a user with a text-only browser would.
This has since changed. Search engines now look at sites as a human using a modern web browser would: rich with imagery, video content and a variety of other media.
If you block these resources, such as CSS or JavaScript files, in your robots.txt, Google have gone on record as saying that doing so directly harms how their algorithms render and index your content, and can result in suboptimal rankings.
This is a major change and indicates that how a site looks to a human user now factors into rankings. After all, Google want to provide end users with the best possible sites, and in this day and age a text-based site simply won't do. We have come to expect imagery, video and a host of other points of interaction.
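To illustrate, a robots.txt along these lines would cause exactly this problem (the /css/ and /js/ paths are hypothetical; your own CMS may organise its assets differently):

    # Hypothetical example: blocking stylesheet and script directories
    # prevents Google from rendering the page the way a visitor sees it.
    User-agent: *
    Disallow: /css/
    Disallow: /js/

With rules like these in place, Google can fetch the HTML but not the assets that make the page look and behave like a modern site.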
Google now provide tools as part of Google Webmaster Tools, such as Fetch as Google, that let you see the results of a Google crawl and whether any elements on a page are blocked.
You or your developers should not just use the default robots.txt for a particular CMS without checking each line to ensure it is required, and considering whether additional lines need to be added.
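As an example, a typical default robots.txt for a WordPress install looks something like this (exact rules vary by version and configuration, so treat it as illustrative):

    # Typical WordPress defaults: keep crawlers out of the admin area,
    # but allow the AJAX endpoint that front-end features rely on.
    User-agent: *
    Disallow: /wp-admin/
    Allow: /wp-admin/admin-ajax.php

Each of those lines is there for a reason, and the same review should be applied to whatever your CMS generates before anything is added or removed.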
Make sure that every site has a robots.txt, even if it is empty. If Googlebot requests the file and the server fails to respond properly, Google may treat the whole site as unavailable and stop crawling it, and you could take a substantial rankings hit and ultimately lose traffic.
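A minimal "allow everything" robots.txt is all it takes to return a proper response; an empty Disallow rule places no restrictions on crawlers:

    # Minimal robots.txt: applies to all crawlers and blocks nothing.
    User-agent: *
    Disallow: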
Another common problem with robots.txt files arises during the launch of a new site. Developers put a robots.txt in place to block all crawlers during the development phase, so that a half-built site cannot rank. Forgetting to remove it at launch is a problem I see more often than I would like.
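The offending file is usually just the two-line "block everything" directive, which is exactly what you want in development and exactly what you don't want live:

    # Development-phase robots.txt: blocks ALL crawlers from the ENTIRE site.
    # Remove or replace this file at launch.
    User-agent: *
    Disallow: /

Adding a robots.txt check to your launch checklist is the simplest safeguard.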
If you would like any further clarification on this issue, do not hesitate to get in contact with a member of the team. We'd be happy to help.