Becomes slow with huge text #156
Comments
Hi, well, it's hard to say from the description. Can you provide the example text and the command/code you tried?
ping @deepaksinghtopwal 🏓 🙂
Run this on it and upload prof.txt.
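The snippet this comment refers to is not preserved in the thread. A minimal profiling sketch with the standard-library `cProfile`/`pstats` modules that writes a `prof.txt` report might look like this (`slow_summarize` is a hypothetical stand-in for the actual summarizer call):

```python
import cProfile
import io
import pstats

def slow_summarize(text):
    # Hypothetical stand-in for the real summarizer call being profiled.
    return text.split(".")[:3]

profiler = cProfile.Profile()
profiler.enable()
slow_summarize("Sentence one. Sentence two. Sentence three. Sentence four.")
profiler.disable()

# Render the 20 most expensive calls, sorted by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(20)
report = stream.getvalue()

with open("prof.txt", "w") as f:
    f.write(report)
```

The resulting `prof.txt` shows which functions dominate the runtime, which is exactly the information a maintainer needs to diagnose a slowdown.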
I can summarise books in 30 secs, with segments of ~10 sentences, with LSA.
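Splitting a long document into small segments before summarizing, as described above, can be sketched like this (`chunk_sentences` is an illustrative helper, not part of any library):

```python
def chunk_sentences(sentences, size=10):
    """Split a list of sentences into segments of at most `size` sentences."""
    for i in range(0, len(sentences), size):
        yield sentences[i:i + size]

# Example: 25 sentences split into segments of ~10 sentences each;
# each segment could then be summarized independently.
sentences = [f"Sentence {n}." for n in range(25)]
segments = list(chunk_sentences(sentences, size=10))
```

Summarizing many small segments keeps each individual run cheap, which matters for algorithms like LSA whose cost grows quickly with the number of sentences processed at once.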
It seems to work fine with small text data. However, when I tried to use it on larger documents (approx. 2000 lines each), it became way too slow and took around 20 minutes to summarize 50 documents.
Is there a parameter or a specific algorithm that could be used to solve this issue?