A web crawler is a relatively simple automated program or script that methodically scans or "crawls" web pages to create an index of the data it is looking for. Other names for a web crawler include web spider, web robot, bot, crawler, and automatic indexer.
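To make this concrete, here is a minimal sketch of what such a crawler might look like in Python. The seed URL and page limit are hypothetical examples, and a real crawler would also need to respect robots.txt, rate limits, and duplicate content:

```python
# A minimal sketch of a breadth-first web crawler (illustrative only).
# The seed URL and page limit below are hypothetical examples.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed_url, max_pages=10):
    visited = set()            # pages already fetched, to avoid loops
    queue = deque([seed_url])  # frontier of URLs still to visit
    pages = {}                 # url -> raw HTML, our tiny "index"

    while queue and len(pages) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            response = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # skip pages that fail to load
        pages[url] = response.text

        # Extract links on the page and add them to the frontier.
        soup = BeautifulSoup(response.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            queue.append(urljoin(url, anchor["href"]))

    return pages

if __name__ == "__main__":
    results = crawl("https://example.com")
    print(f"Crawled {len(results)} pages")
```

The crawler keeps a queue of URLs to visit and a set of pages it has already seen, which is the basic loop behind every spider, from this sketch up to the ones search engines run at scale.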
Web crawlers can be used for a variety of purposes. The most common use is in search engines, which employ crawlers to gather information about what is on public web pages. Their main purpose is to collect data so that when Internet users type search terms, the search engine can immediately display relevant websites.
Web crawlers extract every piece of data found on a page, such as metadata, keywords, and so on. The crawler (or "spider") then indexes all of this data into the search engine's database. Eventually, the website's pages can be displayed on the SERP (search engine results page).
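As an illustration of this indexing step, the sketch below pulls the title and meta description from a page's HTML and adds each word to a simple inverted index. The sample HTML and the in-memory dictionary are hypothetical stand-ins for a real search engine's database:

```python
# Illustrative sketch: extract metadata from a page and index its words.
# The sample HTML and the in-memory index stand in for a real database.
import re
from collections import defaultdict

from bs4 import BeautifulSoup

inverted_index = defaultdict(set)  # word -> set of URLs containing it

def index_page(url, html):
    soup = BeautifulSoup(html, "html.parser")

    # Metadata a crawler typically records.
    title = soup.title.string if soup.title else ""
    description_tag = soup.find("meta", attrs={"name": "description"})
    description = description_tag["content"] if description_tag else ""

    # Add every word on the page (plus its metadata) to the index.
    text = " ".join([title, description, soup.get_text()])
    for word in re.findall(r"[a-z0-9]+", text.lower()):
        inverted_index[word].add(url)

sample_html = """<html><head><title>Example Page</title>
<meta name="description" content="A page about web crawlers"></head>
<body>Web crawlers index pages for search engines.</body></html>"""

index_page("https://example.com", sample_html)
print(inverted_index["crawlers"])  # {'https://example.com'}
```

An inverted index like this, mapping each word to the pages that contain it, is what lets a search engine answer a query instantly instead of re-reading the web every time someone searches.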