But I'm not sure what will happen with the /sitemap.xml.gz file telling the spiders not to crawl and the robots.txt file telling them they can...
I'm guessing that the /sitemap.xml.gz file is created by the XML Sitemap plugin, but I don't understand why it would be set to User-agent: *
...but reading further, it says that a robots.txt file should ONLY be uploaded if you want to block Googlebot (or any other bot) from accessing certain pages. I don't want anything blocked at the moment; I want Googlebot to access all the pages.
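For what it's worth, a minimal robots.txt that blocks nothing and still points crawlers at the sitemap might look like this (a sketch only — the sitemap URL assumes the file sits at the site root, as your plugin seems to place it):

```
# Applies to all crawlers, including Googlebot
User-agent: *
# An empty Disallow value means nothing is blocked
Disallow:

# Optional: tell crawlers where the sitemap lives (assumed location)
Sitemap: http://www.mysite.org/sitemap.xml.gz
```

Note that `Disallow:` with no value allows everything, whereas `Disallow: /` would block the entire site — an easy mix-up to check for.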
Another thing that has me concerned is that when I look up my site on Google, it just shows as the domain "http://www.mysite.org".
It has no anchor text or short description, just the domain name. Why is this? Is it because Googlebot is unable to crawl the site and get any more information?
...and found that I had used the wrong affiliate link and was sending the traffic to a completely unrelated offer, no less! It happens. I just keep building and learning every day :)
This topic was started on Dec 26, 2009 and has been closed due to inactivity. If you want to discuss this topic further, please create a new forum topic.