05-22-2018, 05:40 PM
(05-22-2018, 03:54 PM)adam110 Wrote:
(05-18-2018, 02:40 AM)loopline Wrote: The expired domain finder uses proxies for me when it's selected, but make sure you are running ScrapeBox from a folder on your desktop or a folder in your Documents folder.
It's possible that there are no write permissions.
Thanks Loopline - I'm using ScrapeBox on the same VPS I've been running it on for many months now. I usually use it to harvest URLs from the search engines (very successfully, it works a treat), with no read/write issues. It's also saved on the desktop.
I have just started a brand new crawl today - I'm scraping 2 sites (crawl depth 25) using 10 threads and 100 proxies. I will post back here after a few days or when the scrape is complete. I'm actually scraping from 2 new domains now because I already managed to scrape the last couple of domains with another tool.
Fingers crossed all goes well.
OK, so the scrape has already completed, which surprised me - here are the results.
I scraped 2 sites: kickstarter.com and indiegogo.com - crawl depth was set to the max (25).
ScrapeBox has not crashed but has stopped, so I believe it has completed the task. It only got me 61 expired domains (I was hoping for many more).
I have now loaded the same 2 sites into the other tool (the one I used for the URLs that ScrapeBox was giving me issues with) to see how it compares to the 61 results ScrapeBox provided. So far the other tool has been running for less than 10 minutes and it has already found 10 sites (to be honest, ScrapeBox found about 10 sites in its first 10 minutes too).
I have had a look at the crawl error file and I continue to see the error "too many requests". I can't understand how this can be the case - I have 100 proxies added, I'm running 10 threads, and I have it checked to work with proxies.
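Just to illustrate my understanding of it (not claiming this is how ScrapeBox handles proxies internally): the "too many requests" (HTTP 429) response comes from the target site whenever requests from a single IP arrive too quickly, so proxies only prevent it if each request actually goes out through a different, sufficiently rested proxy. A rough Python sketch of that idea - the proxy list and delay are just placeholder assumptions:

[code]
import time
import itertools
import requests

# Hypothetical example values - not real ScrapeBox settings
PROXIES = ["http://1.2.3.4:8080", "http://5.6.7.8:8080"]  # placeholder proxy list
PER_PROXY_DELAY = 5  # seconds to rest a proxy before reusing it on the same site

last_used = {p: 0.0 for p in PROXIES}   # when each proxy last hit the target
proxy_cycle = itertools.cycle(PROXIES)  # simple round-robin rotation

def fetch(url):
    proxy = next(proxy_cycle)
    # If this proxy hit the site too recently, wait - otherwise the site sees
    # rapid repeat requests from one IP and answers 429 Too Many Requests.
    wait = PER_PROXY_DELAY - (time.time() - last_used[proxy])
    if wait > 0:
        time.sleep(wait)
    resp = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=30)
    last_used[proxy] = time.time()
    if resp.status_code == 429:
        print(f"429 from {url} via {proxy} - site is rate limiting this IP")
    return resp
[/code]

With 100 proxies and only 10 threads I'd expect rotation like that to be enough, so if the 429s keep coming it may just be that these sites rate limit very aggressively, or that some of the proxies share IP ranges - but that's only a guess on my part.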