Learning Outcomes

  • Understand the benefits and use cases of web scraping.
  • Learn how to parse the HTML content of a webpage using BeautifulSoup to extract specific elements.
  • Learn how to scan the HTML for specific keywords.
  • Learn how to scrape multiple web pages.
  • Learn how to store your web scraped data into a pandas dataframe.
  • Learn how to save the web scraped data as a local .csv file.

The following installation commands are for a Jupyter Notebook; if you are using the command line, simply exclude the ! symbol:

!pip install beautifulsoup4
!pip install requests
# Library Imports
import pandas as pd
from bs4 import BeautifulSoup
import requests

Why Learn Web Scraping?

Learning web scraping is a useful skill, whether you work as a programmer, marketer or analyst.

It’s a fantastic way for you to analyse websites. Web scraping should never replace a tool such as ScreamingFrog, but when you’re creating data pipelines with Python or JavaScript scripts, you’ll likely want to write a custom scraper.

After all, what’s the point of running a full website crawl if you only need a few pieces of information per page?

Once you have acquired advanced web scraping skills, you can:

  • Accurately monitor your competitors.
  • Create data pipelines that push fresh HTML data into a data warehouse such as BigQuery.
  • Blend that data with other sources such as Google Search Console or Google Analytics.
  • Create your own APIs for websites that don’t publicly expose an API.

There are many other reasons why web scraping is a powerful skill to possess.

Challenges of Web Scraping

Firstly, every website is different, which makes it difficult to build a robust web scraper that will work everywhere. You’ll likely need to create unique selectors for each website, which can be time-consuming.

Secondly, your scripts are more likely to fail over time because websites change. Whenever a marketer, owner or developer makes changes to their website, your script could break. Therefore, for larger projects it’s essential that you create a monitoring system so that you can fix these problems as they arise.

How To Web Scrape A Single HTML Page:

In order to scrape a web page in Python, or any programming language, we will first need to download its HTML content.

The library that we’ll be using is requests.

url = 'https://www.indeed.co.uk/jobs?q=data%20scientist&l=london&start=40&advn=2102673149993430&vjk=40339845379bc411'
response = requests.get(url)
<Response [200]>

As long as the status code is 200 (which means OK), we’ll be able to access the web page. You can always check the status code with:

if response.status_code == 200:
    print('Request succeeded!')

To access the content of a request, simply use:

# This will store the HTML content as a stream of bytes:
html_content = response.content
# This will store the HTML content as a string:
html_content_string = response.text

Parsing the HTML Content to a Parser

Simply downloading the HTML page is not enough, particularly if we would like to extract elements from it. Therefore we will use a Python package called BeautifulSoup, which provides us with a large number of DOM (Document Object Model) parsing methods.

In order to parse the DOM of a page, simply use:

soup = BeautifulSoup(html_content, 'html.parser')

We can now see that instead of an HTML byte string, we have a BeautifulSoup object with many useful methods on it!

In our example, we’ll be web scraping job information from Indeed.co.uk

  • The job title will be: data scientist.
  • The area will be: London.

Investigate The URL

url = 'https://www.indeed.co.uk/jobs?q=data%20scientist&l=london&start=40&advn=2102673149993430&vjk=40339845379bc411'

There can be a lot of information inside of a URL.

It’s important for you to be able to identify the structure of URLs and to reverse engineer how they might have been created.

  1. The base URL is the path to the jobs functionality of the website, which in this case is: https://www.indeed.co.uk/
  2. Query parameters are a way for the job search to be dynamic; in the above example they are: ?q=data%20scientist&l=london&start=40&advn=2102673149993430&vjk=40339845379bc411

Query parameters consist of:

  • A question mark (?) that marks the start of the query string.
  • A key and value for each query parameter (e.g. l=london or start=40).
  • An ampersand (&) that separates the key + value pairs from each other.
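If you’d rather not pull these apart by hand, Python’s built-in urllib.parse module can split a URL and decode its query string. A minimal sketch using the example URL from above:

```python
from urllib.parse import urlparse, parse_qs

url = 'https://www.indeed.co.uk/jobs?q=data%20scientist&l=london&start=40'

parsed = urlparse(url)
# parse_qs returns a list per key, because query parameters can repeat:
params = parse_qs(parsed.query)

print(parsed.netloc)  # www.indeed.co.uk
print(params['q'])    # ['data scientist'] - note %20 is decoded to a space
print(params['l'])    # ['london']
```

This is also a handy way to rebuild URLs programmatically when looping over many search queries.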

Visually Inspect The Webpage In Google Chrome Dev Tools

Before jumping straight into coding, it’s worthwhile visually inspecting the HTML page content within your browser. This will give you a sense of how the website is constructed and what repeating patterns you can see within the HTML.

Google Chrome Developer Tools is a freely available tool that allows you to visually inspect a page’s HTML code.

Navigate to it by:

  1. Opening up Google Chrome.
  2. Right clicking on a webpage.
  3. Clicking Inspect.

Find Element By HTML ID

It is possible to select specific HTML elements by using the #id CSS selector.

appPromoBanner = soup.find('div', {'id':'appPromoBanner'})

Find Element By HTML Class Name

Alternatively, you can find elements by their class selector.

container_div = soup.find('div', class_='tab-container')
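Both selectors can be tried offline against a small, hand-written HTML snippet. The id and class values below mirror the examples above, but the snippet itself is invented for illustration:

```python
from bs4 import BeautifulSoup

html = """
<div id="appPromoBanner">Get the app</div>
<div class="tab-container">Jobs tab</div>
"""
soup = BeautifulSoup(html, 'html.parser')

# Select by the id attribute:
banner = soup.find('div', {'id': 'appPromoBanner'})
# Select by the class attribute (class_ avoids clashing with Python's class keyword):
container = soup.find('div', class_='tab-container')

print(banner.text)     # Get the app
print(container.text)  # Jobs tab
```

Because ids should be unique per page, find() by id returns at most one element, whereas a class may match many.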

How To Extract Text From HTML Elements

As well as selecting the entire HTML element, you can also easily extract the text using BeautifulSoup.

Let’s see how this might work whilst scraping a single job advertisement:

job_url = 'https://www.indeed.co.uk/viewjob?cmp=Crowd-Link-Consulting&t=Business+Intelligence+Engineer&jk=9129263166da1718&q=data+engineer&vjs=3'
resp = requests.get(job_url)
soup = BeautifulSoup(resp.content, 'html.parser')

Extracting The Title Tag

Firstly let’s extract the title tag and then use .text to obtain the text:

title_tag_text = soup.title.text
Business Intelligence Engineer - Woking - Indeed.co.uk

Or we can extract the first paragraph on the webpage, then get the text for that element:

first_paragraph = soup.find('p')
<p><b>Business Intelligence Engineer – Woking, Surrey</b></p>
first_paragraph_text = first_paragraph.text
'Business Intelligence Engineer – Woking, Surrey'

How To Extract Multiple HTML Elements

Sometimes you’ll want to store multiple elements, for example if there is a list of job advertisements on the same page. The following method will return a list of elements rather than just the first element:

all_paragraphs = soup.find_all('p')
[<p><b>Business Intelligence Engineer – Woking, Surrey</b></p>, <p><b>Objective </b></p>, <p>This role needs to work closely with our client’s customers to turn data into critical information and knowledge that can be used to make sound business decisions. They provide data that is accurate, congruent, reliable and is easily accessible.</p>]

If we wanted to extract the text of every paragraph element, we could just use a list comprehension:

all_paragraphs_text = [paragraph.text.strip() for paragraph in all_paragraphs]

It’s also possible to remove paragraph tags that contain empty strings, by only including paragraphs which are truthy (i.e. non-empty strings).

# This will only return paragraphs that don't have empty strings!
full_paragraphs = [paragraph for paragraph in all_paragraphs_text if paragraph]
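Putting those three steps together, here’s a self-contained sketch that runs on a static HTML string instead of a live page (the snippet is invented for illustration):

```python
from bs4 import BeautifulSoup

html = "<p><b>Business Intelligence Engineer</b></p><p>   </p><p>Objective</p>"
soup = BeautifulSoup(html, 'html.parser')

# 1. Collect every paragraph element:
all_paragraphs = soup.find_all('p')
# 2. Extract and strip the text of each one:
all_paragraphs_text = [p.text.strip() for p in all_paragraphs]
# 3. Keep only the non-empty strings:
full_paragraphs = [p for p in all_paragraphs_text if p]

print(full_paragraphs)  # ['Business Intelligence Engineer', 'Objective']
```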

How To Web Scrape Multiple HTML Pages:

If you’d like to web scrape multiple pages, simply create a for loop and build a BeautifulSoup object per page.

The important things are:

  • Keep a results dictionary or list(s) outside of the loop.
  • Extract either the result or a placeholder such as N/A or NaN (not a number). This is especially important when you’re using Python lists, as it ensures that all of your lists stay the same length.
urls = ['http://understandingdata.com/', 'https://understandingdata.com/about-me/', 'https://understandingdata.com/contact/']

# 1. Create a results list to store all of the web scraped data:
results = []
for url in urls:
    # 2. Obtain the HTML response:
    response = requests.get(url)
    # 3. Create a BeautifulSoup object:
    soup = BeautifulSoup(response.content, 'html.parser')
    # 4. Extract the elements per URL:
    title_tag = soup.title
    # 5. Store the result, falling back to 'N/A' if the page has no <title>:
    results.append(title_tag.text if title_tag else 'N/A')

[Screenshot: website titles extracted with BeautifulSoup and Python]
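To make the loop pattern concrete without hitting the network, here’s a sketch where static HTML strings stand in for the downloaded pages (the URLs and page contents are invented):

```python
from bs4 import BeautifulSoup

# Static HTML stands in for requests.get(url).content so the pattern runs offline:
pages = {
    'http://example.com/a': '<html><head><title>Page A</title></head></html>',
    'http://example.com/b': '<html><body>No title on this page</body></html>',
}

results = []  # lives outside the loop
for url, html in pages.items():
    soup = BeautifulSoup(html, 'html.parser')
    title_tag = soup.title
    # Fall back to 'N/A' when an element is missing, so every URL still gets an entry:
    results.append(title_tag.text if title_tag else 'N/A')

print(results)  # ['Page A', 'N/A']
```

The 'N/A' fallback is what keeps the results aligned with the URL list even when a page is missing the element.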

How To Scan HTML Content For Specific Keywords

Particularly in a marketing context, if one of your web pages is ranking for 5 keywords, it would be beneficial to know:

  • Whether every keyword appears on a given HTML page.
  • Which keywords are present on, or missing from, the HTML page.

By writing a web scraper we can easily answer these questions at scale.

Let’s say that our keyword is Understanding Data; we will normalise it to lowercase with .lower()

url_dict = {}

keyword = 'Understanding Data'.lower()

for url in urls:
    # Creating a new item in the dictionary:
    url_dict[url] = {'in_title': False, 'in_html': False}
    # Obtaining the HTML page with python requests:
    response = requests.get(url)
    if response.status_code == 200:
        # Parse the HTML content using BeautifulSoup:
        soup = BeautifulSoup(response.content, 'html.parser')
        # Extract the HTML content into a string and normalise it to be lowercase:
        cleaned_html_text = response.text.lower()
        # Extract the title tag using BeautifulSoup:
        title_tag = soup.title
        # Check whether the keyword appears in the <title> tag and in the HTML content:
        if title_tag and keyword in title_tag.text.lower():
            url_dict[url]['in_title'] = True
        if keyword in cleaned_html_text:
            url_dict[url]['in_html'] = True

[Screenshot: keyword in HTML content detection with Python]

Notice above how easy it is to web scrape multiple pages and search both the HTML content and the title tag.

This can be extended to search many more HTML elements rather than just two.

If we would like to check 30 or 50 elements, it would be better to use this structure:

for item_name, data in zip(['in_html', 'in_title'], [cleaned_html_text, title_tag.text.lower()]):
    if keyword in data:
        url_dict[url][item_name] = True
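Here’s that structure run end-to-end on made-up strings, so you can see the dictionary it produces (the keyword, texts and URL are invented for illustration):

```python
keyword = 'understanding data'

# Stand-ins for the lowercased HTML and title text of a scraped page:
cleaned_html_text = 'welcome to understanding data, a data engineering blog'
title_text = 'about me'

url = 'https://understandingdata.com/'
url_dict = {url: {'in_html': False, 'in_title': False}}

# One (flag name, text to search) pair per check - easy to extend to 30 or 50:
for item_name, data in zip(['in_html', 'in_title'], [cleaned_html_text, title_text]):
    if keyword in data:
        url_dict[url][item_name] = True

print(url_dict[url])  # {'in_html': True, 'in_title': False}
```

Adding another check is just another entry in each of the two lists, rather than another if statement.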

How To Create A Pandas Dataframe From Web Scraped Data

After web scraping and collecting your data from many web pages, it’s ideal to store it within a pandas dataframe. From there you’ll be able to push it directly to BigQuery or store it locally as a .csv.

!pip install pandas
Requirement already satisfied: pandas in /opt/anaconda3/lib/python3.7/site-packages (1.1.3)
Requirement already satisfied: python-dateutil>=2.7.3 in /opt/anaconda3/lib/python3.7/site-packages (from pandas) (2.8.1)
Requirement already satisfied: numpy>=1.15.4 in /opt/anaconda3/lib/python3.7/site-packages (from pandas) (1.19.1)
Requirement already satisfied: pytz>=2017.2 in /opt/anaconda3/lib/python3.7/site-packages (from pandas) (2020.1)
Requirement already satisfied: six>=1.5 in /opt/anaconda3/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)
import pandas as pd
master_df = pd.DataFrame.from_dict(url_dict, orient='index')
# Resetting the index:
master_df.reset_index(drop=False, inplace=True)
master_df.rename(columns={'index': 'URL'}, inplace=True)
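With a small hand-made url_dict, the whole dataframe step can be sketched as follows (the URLs and flags are invented sample data):

```python
import pandas as pd

url_dict = {
    'http://understandingdata.com/': {'in_title': True, 'in_html': True},
    'https://understandingdata.com/contact/': {'in_title': False, 'in_html': True},
}

# orient='index' makes each dictionary key a row index and each inner dict a row:
master_df = pd.DataFrame.from_dict(url_dict, orient='index')
# Move the URLs out of the index and into a named column:
master_df.reset_index(drop=False, inplace=True)
master_df.rename(columns={'index': 'URL'}, inplace=True)

print(master_df.columns.tolist())  # ['URL', 'in_title', 'in_html']
print(len(master_df))              # 2
```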

[Screenshot: web scraped data stored in a pandas dataframe]

How To Save The Web Scraped Data To A .CSV

Now that the data is inside of a pandas dataframe, we can easily save it with the .to_csv() method (the filename here is just an example; choose whatever suits your project):

master_df.to_csv('web_scraped_data.csv', index=False)
Hopefully this tutorial has sparked your curiosity about web scraping. To learn more, I’d recommend reviewing the following resources: