11-19-2011, 03:30 PM
Hello all
I have lists of literally tens of thousands of URLs with links pointing at my pages.
Most of them were generated by SEnuke X.
I have used ScrapeBox successfully to confirm the links are there, using the Check Links radio button, loading a list of my money site URLs and then the lists I have pulled off my 6 SEnuke machines.
So my list is clean: it only contains URLs that actually have links on them.
I have done ScrapeBox runs against over a million blogs, using lists I have bought from scrapeboxlist.com.
However, taking random samples, I have still found it difficult to get large numbers of URLs cached.
Has anyone got any tips?
I get a success rate of over 50%, going by the data in the Status box in the bottom right-hand corner of ScrapeBox.
Should I harvest my own lists?
How do I go about getting only dofollow links?
Do nofollow links help get pages cached that are not currently cached?
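On the dofollow question: whether a link passes value comes down to whether the anchor tag carries rel="nofollow". A rough sketch of sorting your confirmed links into the two buckets, again with made-up sample HTML and a hypothetical target domain, nothing to do with how ScrapeBox does it internally:

```python
from html.parser import HTMLParser

class FollowChecker(HTMLParser):
    """Count dofollow vs nofollow links to a target domain on a page."""
    def __init__(self, target):
        super().__init__()
        self.target = target
        self.dofollow = 0
        self.nofollow = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        d = dict(attrs)
        href = d.get("href") or ""
        if self.target not in href:
            return  # link points somewhere else, ignore it
        rel = (d.get("rel") or "").lower().split()
        if "nofollow" in rel:
            self.nofollow += 1
        else:
            self.dofollow += 1

# Made-up page: one nofollow and one dofollow backlink
sample = ('<a rel="nofollow" href="http://example-money-site.com/a">x</a>'
          '<a href="http://example-money-site.com/b">y</a>')
checker = FollowChecker("example-money-site.com")
checker.feed(sample)
print(checker.dofollow, checker.nofollow)  # 1 1
```

Most auto-approve blog comments are nofollow, which is why filtering down to dofollow-only lists takes extra work.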
If I hit 1 million blogs with 22,000 URLs, surely that would be enough to get most of those 22,000 URLs cached?
Also, has anyone used the ScrapeBox Google Cache Extractor to look at the cached links to their site, and compared that with the data for your money site in Webmaster Tools?
Does Webmaster Tools show half, or even less than half, of the cached links?
I always use the following:
A large list of names
A large list of email addresses
A selection of very well spun comments
There must be a lot of people out there with exactly the same problem with SEnuke X, I am sure!