I wrote this program to scrape sitemaps, and the links those sitemaps contain, across multiple servers. To save time it was packaged with pip for easy repeated use.
Follow the installation instructions below; the docstrings contain detailed explanations for use.
This program uses Python 3.8.
The package is distributed with pip. Install it and use as needed:
pip install samssimplescraper==0.1.3
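To confirm the installation worked, a quick import check can be run (a minimal sketch; it only verifies that the two classes used in the examples below can be imported):
# sanity check after installation: both imports should succeed without error
from samssimplescraper import LinksRetriever, Scraper
print(LinksRetriever, Scraper)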
The package has two modules:
- sitemapscraper is used to scrape sitemaps and can also scrape further levels of sub-sitemaps. Its methods return lists of the scraped links, which can then be used to scrape the pages you actually want.
- scraper is used to scrape the list returned by sitemapscraper, or a user-made list of links. It also provides a method that reports how many of the links have been scraped so far out of the total.
- Find the sitemap for the site you are looking to scrape. Some tips can be found here: https://writemaps.com/blog/how-to-find-your-sitemap/
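If the sitemap is not linked anywhere obvious, one common place to check is the site's robots.txt, which often lists sitemap URLs. A minimal sketch using only the standard library (the example.com address is a placeholder):
from urllib.request import urlopen

# robots.txt frequently contains one or more "Sitemap:" lines
with urlopen('https://www.example.com/robots.txt') as response:
    robots_txt = response.read().decode('utf-8')

sitemap_urls = [line.split(':', 1)[1].strip()
                for line in robots_txt.splitlines()
                if line.lower().startswith('sitemap:')]
print(sitemap_urls)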
- Scrape the sitemap:
from samssimplescraper import LinksRetriever
# instantiate LinksRetriever with the sitemap you wish to scrape
links_retriever = LinksRetriever(url='https://www.example.com/sitemap_index.xml')
# get a list of the links with the .get_sitemap_links method; a filter can also be added
mainpage_links = links_retriever.get_sitemap_links(tag='loc')
# if the website has more layers of sub-sitemaps, use this method to get the links on those pages
final_links = links_retriever.get_next_links(links=mainpage_links, tag='loc')
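The filter mentioned in the comment above belongs to the package and its exact parameters are not shown here; as a plain-Python alternative, the returned list can simply be narrowed before scraping. A small sketch (the '/blog/' pattern is only an illustration):
# keep only the links that match a pattern of interest
blog_links = [link for link in final_links if '/blog/' in link]
print(f'{len(blog_links)} of {len(final_links)} links kept')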
Note: If you are not going to continue scraping in the same script, be sure to save your list using pickle:
import pickle
# the data folder is automatically created when LinksRetriever is instantiated
with open('./data/pickled_lists/sitemap_links_list.pkl', 'wb') as fp:
    pickle.dump(final_links, fp)
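In a later script, the pickled list can be loaded back before continuing:
import pickle

# reload the previously saved list of links
with open('./data/pickled_lists/sitemap_links_list.pkl', 'rb') as fp:
    final_links = pickle.load(fp)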
- Now you can scrape the list of links that LinksRetriever has produced for you. The scraped files will be saved in the data/scraped_html folder.
from samssimplescraper import Scraper
# pass the list of links and, for naming purposes, the root_url
Scraper.get_html(link_list=final_links, root_url='https://www.example.com/')
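Once the pages have been downloaded to data/scraped_html, they can be processed with whatever parser you prefer. A minimal sketch using pathlib and BeautifulSoup (BeautifulSoup is not part of this package and is only one possible choice; the exact file names and extensions depend on how Scraper saves them):
from pathlib import Path
from bs4 import BeautifulSoup  # assumption: beautifulsoup4 installed separately

# iterate over the saved HTML files and print each page title
for html_file in Path('./data/scraped_html').glob('*.html'):
    soup = BeautifulSoup(html_file.read_text(encoding='utf-8'), 'html.parser')
    title = soup.title.string if soup.title else 'no title'
    print(html_file.name, '->', title)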
See the open issues for a full list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
Distributed under the MIT License. See LICENSE for more information.
Samuel Adams McGuire - [email protected]
Pypi Link: https://pypi.org/project/samssimplescraper/0.1.3/
LinkedIn: LinkedIn
Project Link: https://github.com/SamuelAdamsMcGuire/simplescraper