07-10-2019, 07:58 AM
(06-28-2019, 04:33 PM)loopline Wrote: It's locked threads. Does it always do it on the same 2 urls?
Here is a copy and paste from support:
That means that something has locked 1 or more of the threads. This can be security software such as anti-virus, malware checkers and firewalls. So you should whitelist scrapebox in all security software and then you can whitelist the entire scrapebox folder as well.
Further, any program that accesses the internet can lock threads, things like Skype, uTorrent, etc. So you can try closing down any unneeded programs. Then, if it's working, you can turn programs back on one by one to find the culprit.
Further, computer optimization software can lock threads, so you can shut any such software down.
Take note that disabling security software (such as anti-virus, malware checkers and firewalls) often only stops new rules from forming, but allows existing rules to still fire. So you have to fully whitelist scrapebox in the security software, or uninstall the security software (as a test).
Further, some security software requires you to whitelist in more than one place before it takes effect.
Also note that disabling a router firewall does actually fully disable it.
Basically, you have to sort out what is locking the threads, because scrapebox is forced to wait until all threads are released. On occasion it can be your operating system that does it, so you can try restarting your machine and/or lowering the total connections.
One other thing to note is that this can happen with proxies that keep returning small amounts of data; it won't trigger the timeout because the connection is still active. So try a test using no proxies, or make sure you are using quality private proxies.
Lastly, if you're running Mac, you can try lowering the connections. Mac has terrible error handling when it comes to lots of errors stacking up quickly. If too many errors stack up too fast, the Mac version can choke, so lowering the threads fixes this. This is a non-issue on Windows.
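To illustrate the proxy point above (this is just a generic Python sketch, not ScrapeBox's actual code, and every name in it is made up): a per-read timeout never fires against a peer that trickles bytes, because each individual read returns in time. Only an overall deadline catches the stuck connection, which is why a slow proxy can hold a thread "reading" indefinitely.

```python
# Sketch: a server trickles one byte every 0.05 s, so each recv() easily
# beats the 1 s per-read timeout, yet the transfer never finishes. An
# overall deadline is what actually stops the read.
import socket
import threading
import time

def trickle_server(srv):
    """Accept one client and send a single byte every 0.05 s."""
    conn, _ = srv.accept()
    try:
        for _ in range(200):
            conn.sendall(b"x")
            time.sleep(0.05)
    except OSError:
        pass
    finally:
        conn.close()

def read_with_deadline(host, port, per_read_timeout, total_deadline):
    """Read until EOF, but give up once total_deadline seconds elapse."""
    start = time.monotonic()
    data = b""
    with socket.create_connection((host, port), timeout=per_read_timeout) as c:
        while True:
            if time.monotonic() - start > total_deadline:
                return data, "deadline"      # trickling connection caught here
            try:
                chunk = c.recv(4096)
            except socket.timeout:
                return data, "read-timeout"  # a fully idle connection ends here
            if not chunk:
                return data, "eof"
            data += chunk

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=trickle_server, args=(srv,), daemon=True).start()

data, reason = read_with_deadline("127.0.0.1", srv.getsockname()[1],
                                  per_read_timeout=1.0, total_deadline=0.5)
print(reason)
srv.close()
```

The per-read timeout alone never trips here; the run ends with "deadline", not "read-timeout". That matches the support explanation: a proxy drip-feeding data keeps the connection "active" forever unless something enforces a total time budget.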
OK, so this is still happening. I looked at all of your suggestions (thanks) and they don't really seem to apply here. Check out the video.
It seems that in most large batches, there are at least one or two urls that get stuck "reading". Even when I hit Stop, it won't stop; I have to go kill the process. It's a mystery to me, and it's happening often enough that it's a pain.