Basics of Web Scraping

18 January 2021 at 10:00 by ParTech Media

In the 19th and early 20th centuries, the people who owned oil wells and rigs were among the richest on earth. But with the digital revolution, and with oil reserves expected to run out somewhere in the near future, a new definition of wealth has emerged. This new form of wealth is so valuable that it will drive the world. Yes, we are talking about data.

Clive Humby, a British mathematician, coined the phrase 'Data is the new oil'. Many other renowned businessmen and industrialists agree that data can be as valuable as crude oil. But, like crude oil, data has to be refined before it yields products as useful as fuel, chemicals, and so on.

Before we get into the refinement of data, where does the data come from? Data can be present anywhere and in any form. It could be structured, as in a relational database, or unstructured, such as a review comment from a user who has purchased a product on an e-commerce website.

So data is not always structured, and it is not always readily available. How, then, can the process of collecting data be made more efficient and effective? The answer is web scraping. Let's look in detail at what web scraping is and how it is useful.

Table of contents

  1. What is web scraping?
  2. Components in web scraping
  3. Use cases of web scraping
  4. How to do web scraping
  5. Conclusion

What is web scraping?

Most websites contain voluminous amounts of valuable data, available in different formats: stock prices, sports stats, product details, and so on. To make use of this data, you either have to copy it manually or scrape it from the web.

Web scraping refers to the process of extracting data from websites in an automated fashion. The extracted data can be copied to a local machine and formatted based on your needs. Web scrapers differ based on the webpage from which the data has to be scraped.

Data derived from websites can be used for text mining. Similarly, data analysts use web scraped data to derive conclusions to enhance their business and operations.

Components in web scraping

There are two main components in web scraping. They are

  1. Crawler
  2. Scraper

The crawler and the scraper are like cars in a convoy, where the security cars come first, followed by the president's car. Here, the security cars are the crawlers and the president's car is the scraper.


Crawler

The crawler is also called a spider. Its primary job is to discover content by following links; one or more links are crawled before scraping begins. Usually, the crawler first locates the URL where the data is present and then passes it to the scraper for further work.


Scraper

Scrapers are designed to extract data from web pages accurately and quickly. The key capability of a scraper is locating the data that needs to be extracted from the webpage. Usually, a scraper uses XPath, CSS selectors, regular expressions, or a combination of these to locate and extract the data.
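To make the locator idea concrete, here is a minimal Python sketch (the upcoming implementation post uses C#, but the principle is identical) that pulls prices out of an HTML snippet with an XPath-style query and pulls names out with a regular expression. The markup, product names, and prices are all invented for illustration.

```python
import re
import xml.etree.ElementTree as ET

# Invented, well-formed HTML snippet standing in for a scraped page
html = (
    '<html><body>'
    '<div class="product"><span class="name">Widget</span>'
    '<span class="price">9.99</span></div>'
    '<div class="product"><span class="name">Gadget</span>'
    '<span class="price">19.99</span></div>'
    '</body></html>'
)

# Locate data with the limited XPath subset ElementTree supports
root = ET.fromstring(html)
prices = [float(s.text) for s in root.findall(".//span[@class='price']")]

# Locate the same page's names with a regular expression instead
names = re.findall(r'<span class="name">(.*?)</span>', html)

print(prices)  # [9.99, 19.99]
print(names)   # ['Widget', 'Gadget']
```

Real pages are rarely well-formed XML, so production scrapers typically use a forgiving HTML parser (lxml or Beautiful Soup in the Python world) rather than `ElementTree`; the locator concepts are the same.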

Use cases of web scraping

In this section, let's see the areas in which web scraping comes in handy.


E-commerce and price monitoring

In the e-commerce industry, web scraping is used to extract product and pricing details. You can perform competitor pricing analysis for the same product, manage dynamic pricing, and optimize revenue. The minimum advertised price of a product across sites can also be measured and managed, since it is difficult to manually track product prices at scale.
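As an illustration, once competitor prices have been scraped, the analysis itself can be a few lines of Python. The site names, prices, and minimum advertised price (MAP) below are all made up:

```python
# Hypothetical prices scraped for the same product from competitor sites
scraped_prices = {
    "site-a.example": 24.99,
    "site-b.example": 22.50,
    "site-c.example": 26.00,
}

MAP = 23.00  # assumed minimum advertised price for this product

# Sites advertising below the MAP, and the cheapest competitor overall
violations = {site: p for site, p in scraped_prices.items() if p < MAP}
cheapest = min(scraped_prices, key=scraped_prices.get)

print(violations)  # {'site-b.example': 22.5}
print(cheapest)    # site-b.example
```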

News and content monitoring

To understand the pulse of the public, data can be scraped from social networking websites. Based on the data obtained, sentiment analysis, political campaign planning, investment analysis, and election predictions can be carried out. This also helps industries, governments, and sports teams understand what people think about a new product, policy, rule, or even a trophy win.

As a tool for automation

Web scraping can be used to combine data from two different websites when there is a need to merge the data and utilize it for a different purpose.


Finance

Commodity prices, cryptocurrency prices, and much other price-related information are published on many websites. Web scraping can be used to collect this data and derive information such as a currency's rate of change over a week, its highest and lowest values, and how stable it is.
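Given a week of scraped prices, those metrics reduce to simple arithmetic. A sketch in Python, with invented figures:

```python
# Hypothetical daily closing prices for one week, oldest first
weekly_prices = [101.2, 99.8, 103.5, 102.1, 98.7, 104.0, 100.5]

high = max(weekly_prices)
low = min(weekly_prices)
# Rate of change over the week, as a percentage of the starting price
change_pct = (weekly_prices[-1] - weekly_prices[0]) / weekly_prices[0] * 100
# Crude stability proxy: how wide the week's range is relative to the low
spread_pct = (high - low) / low * 100

print(high, low)             # 104.0 98.7
print(round(change_pct, 2))  # -0.69
print(round(spread_pct, 2))  # 5.37
```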

Real estate

Using web scraping, you can monitor property values, the number of properties available in an area, properties available for rent, asking prices, and the direction of the market.


Sports

Sports-related information posted on websites is scraped and used to analyze the performance of a player or team, and to uncover indirect achievements of a player from the data.

How to do web scraping?

The first and foremost step is to fetch the URL that contains the data to be extracted. The response returns the content of the website in HTML format.

After the HTML is received, it is parsed. Based on locators on the page (either the id attribute of an HTML tag or the HTML tag itself), the data is extracted and can be stored on the local machine for further processing.
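The two steps above (fetch, then parse and locate by id) can be sketched with Python's standard library alone. In practice you would fetch the page with something like `urllib.request.urlopen`, but here the HTML is a hard-coded string so the example is self-contained; the element ids and the price value are invented.

```python
from html.parser import HTMLParser

class IdExtractor(HTMLParser):
    """Collects the text inside the element carrying a given id."""
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.capturing = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("id") == self.target_id:
            self.capturing = True

    def handle_endtag(self, tag):
        # Stops at the first closing tag: fine for elements with no
        # nested markup, as in this sketch
        self.capturing = False

    def handle_data(self, data):
        if self.capturing:
            self.chunks.append(data)

# Stand-in for HTML fetched from a URL (e.g. via urllib.request.urlopen)
html = ('<html><body><h1 id="title">Stock quote</h1>'
        '<p id="price">42.17</p></body></html>')

parser = IdExtractor("price")
parser.feed(html)
price = float("".join(parser.chunks).strip())
print(price)  # 42.17
```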


Conclusion

Web scrapers can be pre-built or self-built. A self-built scraper can be written to suit the exact needs of your project, while a pre-built one can be tedious to configure but solves the purpose without any coding. In this post, we have seen the basics of web scraping and how it is useful. In the next post, we will get into the details of how to implement it using C#.