Zhihu recently rolled out some anti-crawler measures; scrape too fast and this is what happens.
I tweaked the program a little and added a 3s delay, which got me through 600+ answers, but I still ended up blocked...
That's not the main problem, though.
The question is whether the number of answers to download can be made configurable. I don't need all 700+ answers; 100 or so would probably be enough for me. Also, as soon as an error occurs, all the data is lost and the whole download has to start over, which is really frustrating.
So, could this be added?
@YaoZeyuan
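Roughly the kind of loop I have in mind (just a sketch to make the request concrete; `fetch_answer`, `save_answer`, and the file name are hypothetical placeholders, not this project's actual code):

```python
import json
import time

MAX_ANSWERS = 100    # only download this many answers instead of all 700+
DELAY_SECONDS = 3    # per-request delay, like the 3s one mentioned above

def fetch_answer(index):
    # Placeholder: the project's real request + parse logic would go here.
    return {"index": index, "content": "..."}

def save_answer(answer):
    # Append each answer to disk immediately, so anything fetched before an
    # error is kept and does not need to be re-downloaded.
    with open("answers.jsonl", "a", encoding="utf-8") as out:
        out.write(json.dumps(answer, ensure_ascii=False) + "\n")

def crawl():
    for index in range(MAX_ANSWERS):
        try:
            save_answer(fetch_answer(index))
        except Exception as exc:
            print(f"stopped at answer {index}: {exc}")
            break                     # everything saved so far stays on disk
        time.sleep(DELAY_SECONDS)

if __name__ == "__main__":
    crawl()
```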
@ZhangSanMo Could you share how you changed the delay, or send me your modified file?...
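In case it helps while waiting for a reply: the change is usually just a `time.sleep(3)` placed after each HTTP request in the fetch loop. A rough illustration with `requests` (hypothetical URLs, not this project's actual code):

```python
import time

import requests

# Hypothetical page list; the project builds its own URLs internally.
page_urls = [
    "https://www.zhihu.com/question/12345",
    # ...
]

session = requests.Session()
session.headers["User-Agent"] = "Mozilla/5.0"

for url in page_urls:
    html = session.get(url).text
    # ... parse and store this page's answers ...
    time.sleep(3)  # wait 3 seconds before the next request to slow the crawl
```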