A Simple Web Crawler πŸ•·

A simple crawler with an option to crawl a particular number of links. It only crawls links under the root URL and stores all discovered links in a CSV file.

Features:

  • Crawls all links and stores them in a CSV file
  • Can crawl a limited number of links
  • Skips URLs outside the root domain

Dependency:

  • BeautifulSoup
  • pip

How to Crawl:

  • Install pip in your environment (the code will auto-install BeautifulSoup using pip)
  • Run crawler.py in your terminal and provide the inputs, i.e. the root URL and the number of URLs to crawl
  • If you want the crawler to crawl all links of a website, simply press Enter when asked for the number of links.
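The crawl behaviour described above (follow only root-domain links, stop after an optional limit, save everything to CSV) can be sketched roughly like this. This is a minimal illustration, not the actual crawler.py; the function names, the CSV layout, and the use of urllib alongside BeautifulSoup are assumptions:

```python
import csv
from collections import deque
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

from bs4 import BeautifulSoup  # the repo's stated dependency


def crawl(root_url, max_links=None):
    """Breadth-first crawl that follows only links on the root URL's domain.

    max_links=None means "crawl everything", matching pressing Enter
    at the number-of-links prompt.
    """
    root_netloc = urlparse(root_url).netloc
    seen = {root_url}
    queue = deque([root_url])
    found = []
    while queue and (max_links is None or len(found) < max_links):
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue  # skip unreachable or malformed URLs
        for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            # skip links outside the root domain, and duplicates
            if urlparse(link).netloc != root_netloc or link in seen:
                continue
            seen.add(link)
            found.append(link)
            queue.append(link)
            if max_links is not None and len(found) >= max_links:
                break
    return found


def save_csv(links, path="links.csv"):
    """Write one discovered link per CSV row."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows([link] for link in links)
```

A run would then look like `save_csv(crawl("https://example.com", 50))`, mirroring the two prompts (root URL, number of links) that crawler.py asks for.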

