05-19-2010, 06:00 AM
hey guys, thanks for an awesome place to share and find solutions.
I'm using ScrapeBox for blog commenting. Obviously I open SB
and start scraping using keywords. I get a huge list and remove dupes,
then start posting on what's left. Of that huge list, only a small percentage gets submitted.
I take that list and check for links. Finally, I save the list of blogs my comment was approved on like so... posted052010.txt
next I start a new scrape w/ new keywords and start again.
my questions are...
what's the best way to optimize this process?
should I save a "not posted" list?
do I need to save by platform type, e.g. "wordpress-posted052010.txt"?
in other words, with newly scraped URLs, how can I
a. remove URLs that already exist in my not-posted list
b. remove URLs that I've already posted to
and finally, how can I compile a master list by combining all the lists?
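here's roughly what I'm picturing, as a minimal Python sketch (the file names are just placeholders for whatever lists you've saved; this isn't a ScrapeBox feature, just plain text-file handling outside the tool):

```python
def load_urls(path):
    """Read one URL per line from a saved list, ignoring blanks and dupes."""
    try:
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        return set()  # list not created yet -> nothing to exclude

def filter_new(scraped, *exclude_files):
    """Drop any freshly scraped URL that appears in an exclusion list
    (e.g. posted052010.txt or a not-posted list)."""
    seen = set().union(*(load_urls(p) for p in exclude_files)) if exclude_files else set()
    return sorted(set(scraped) - seen)

def merge_master(out_path, *list_files):
    """Combine every saved list into one de-duplicated master file."""
    master = set().union(*(load_urls(p) for p in list_files)) if list_files else set()
    with open(out_path, "w") as f:
        f.write("\n".join(sorted(master)))
    return len(master)
```

then each new scrape would just be `filter_new(scraped_urls, "posted052010.txt", "notposted052010.txt")` before posting, and `merge_master("master.txt", ...)` whenever you want the combined list.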
if anyone's streamlined this process I'd really appreciate some tips...
thanks in advance