Hi thanks for that.
Still though, SB returns duplicate sites. Why not just skip a site it has already found? Wouldn't that be easier and save time?
Also, regarding removing more than one type of URL from the harvested URLs, and your comment "I'm not sure if Scrapebox can do that":
There is an option called "Remove URL's containing", so you can remove, say, any URL containing blogspot.com or wordpress.com, or remove any URLs that end in .pdf or .doc or .txt, for example.
What I mean is, instead of having to do them one by one, is it possible to enter all the patterns at once? For example, paste in something like wordpress.com,blogger.com,livejournal.com,.pdf,.doc,.txt and have every URL containing any of those removed in a single pass. Or even better, add these patterns to a blacklist so matching URLs are automatically removed during harvesting.
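To illustrate what I'm asking for, here is a minimal sketch of that one-pass filtering idea in Python. The blacklist entries are just the examples from above, and the function name is my own invention, not anything in ScrapeBox:

```python
# Hypothetical sketch: remove any harvested URL containing a blacklisted
# substring, all in one pass instead of one filter at a time.
blacklist = ["wordpress.com", "blogger.com", "livejournal.com",
             ".pdf", ".doc", ".txt"]

def filter_urls(urls, blacklist):
    """Keep only URLs that contain none of the blacklisted substrings."""
    return [u for u in urls if not any(bad in u for bad in blacklist)]

harvested = [
    "http://example.com/page",
    "http://myblog.wordpress.com/post",
    "http://site.org/report.pdf",
]
print(filter_urls(harvested, blacklist))
# Only http://example.com/page survives the filter.
```

The point is that a single pass over the list with all patterns loaded at once would be much faster than running the "Remove URL's containing" option separately for each pattern.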