How to Scrape YouTube Data with Python

YouTube is an excellent data source for video performance, audience engagement, and content trends. Scraping YouTube data can help you gain deeper insights and make well-informed project decisions.

In today’s article, you’ll learn how to:

  • Use Python and the yt-dlp library to extract video metadata and download videos
  • Export this vital information into a CSV file
  • Use ScraperAPI to bypass YouTube’s anti-scraping measures effectively and gather more comprehensive data

Get ready to enhance your data scraping skills and take advantage of the wealth of information available on YouTube!

TL;DR: Full YouTube Scraper

For those familiar with web scraping or automating tasks in Python, this script provides a comprehensive guide on scraping various types of YouTube data.

Here’s a quick summary of the code’s functionalities:

  1. Downloading a YouTube Video – Uses yt-dlp to download a video from a specified URL.
  2. Extracting YouTube Comments – Extracts comments from a YouTube video using yt-dlp.
  3. Extracting Metadata – Retrieves metadata (e.g., title, width, height, language, channel, likes) from a YouTube video without downloading it.
  4. Scraping Channel Information – Uses requests and BeautifulSoup to scrape a YouTube channel’s “About” section, extracting the channel name and description.
import requests
from yt_dlp import YoutubeDL
from bs4 import BeautifulSoup
 
## Downloading a YouTube Video
def download_video(video_url):
    opts = {}
    with YoutubeDL(opts) as yt:
        yt.download([video_url])
    print(f"Downloaded video: {video_url}")
 
## Extracting YouTube Comments    
def extract_comments(video_url):
    opts = {"getcomments": True}
    with YoutubeDL(opts) as yt:
        info = yt.extract_info(video_url, download=False)
        comments = info["comments"]
        thread_count = info["comment_count"]
        print("Number of threads: {}".format(thread_count))
        for comment in comments:
            print(comment['text'])
 
## Extracting Metadata
def extract_metadata(video_url):
    opts = {}
    with YoutubeDL(opts) as yt:
        info = yt.extract_info(video_url, download=False)
        data = {
            "URL": video_url,
            "Title": info.get("title"),
            "Width": info.get("width"),
            "Height": info.get("height"),
            "Language": info.get("language"),
            "Channel": info.get("channel"),
            "Likes": info.get("like_count")
        }
        print("Metadata:", data)
        return data
 
## Scraping Channel Information
def scrape_channel_info(channel_url, api_key):
    params = {
        'api_key': api_key,
        'url': channel_url,
        'render': 'true'
    }
    response = requests.get('https://api.scraperapi.com', params=params)
    if response.status_code == 200:
        soup = BeautifulSoup(response.text, 'html.parser')
        channel_name = soup.find('yt-formatted-string', {'id': 'text', "class":"style-scope ytd-channel-name"})
        channel_desc = soup.find('div', {'id': 'wrapper', "class":"style-scope ytd-channel-tagline-renderer"})
        if channel_name and channel_desc:
            channel_info = {
                "channel_name": channel_name.text.strip(),
                "channel_desc": channel_desc.text.strip(),
            }
            print("Channel Info:", channel_info)
            return channel_info
        else:
            print("Failed to retrieve channel info")
    else:
        print("Failed to retrieve the page:", response.status_code)
 
## Example Usage
if __name__ == "__main__":
    # Download a video
    video_url =  "ANY_YOUTUBE_VIDEO_URL"
    download_video(video_url)
 
    # Extract comments
    video_url_for_comments = "https://www.youtube.com/watch?v=hzXabRASYs8"
    extract_comments(video_url_for_comments)
 
    # Extract metadata
    video_url_for_metadata = "ANY_YOUTUBE_VIDEO_URL"
    extract_metadata(video_url_for_metadata)
 
    # Scrape channel information
    api_key = 'YOUR_API_KEY'
    channel_url = 'https://www.youtube.com/@scraperapi/about'
    scrape_channel_info(channel_url, api_key)

This code provides a comprehensive way to scrape and analyze various types of YouTube data using Python. To get started, replace the placeholder values (e.g., YOUR_API_KEY and ANY_YOUTUBE_VIDEO_URL) with your actual data.

Interested in diving deeper into scraping YouTube? Continue reading!

Scraping YouTube Data Step-by-Step

Project Requirements

Here are the tools and libraries you’ll need for this project:

  • Python 3.8 or newer – Make sure you have the latest version installed
  • yt-dlp – For downloading YouTube videos
  • ScraperAPI – To help with web scraping without getting blocked
  • requests – A simple HTTP library for Python
  • BeautifulSoup4 – For parsing HTML and extracting the data you need
  • json – Python’s built-in module for handling JSON data (no installation required)

Step 1: Setup Your Project

Let’s start by setting up your project environment. First, create a new directory to keep everything organized and navigate into it.

Open your terminal and run these commands:

mkdir youtube-scraper
cd youtube-scraper

Ensure you have Python installed on your system—Python 3.8 or newer is required. Now, let’s install the essential libraries for downloading and scraping YouTube data.

Use the following command to install the libraries:

pip install yt-dlp requests beautifulsoup4

With these steps, your project directory is ready, and you have all the necessary tools installed to start downloading and scraping YouTube data.

Now, let’s dive into the exciting part!

Step 2: Download YouTube Videos with Python

We’ll use the yt_dlp library to download YouTube videos. This library is a powerful tool for downloading and extracting information from YouTube content in Python.

First, we’ll specify the URL of the YouTube video we want to download. You can replace the video_url variable with any YouTube video URL you wish to download.

from yt_dlp import YoutubeDL
 
video_url = "https://www.youtube.com/watch?v=xy5OQ2IWNtc"

After that, we’ll set up options for the downloader using a Python dictionary. In this snippet, we’ve specified two options:

  • 'format': 'best': This option instructs the downloader to choose the best quality format available for the video
  • 'outtmpl': '%(title)s.%(ext)s': This option specifies the output filename template. Here, %(title)s will be replaced by the title of the video, and %(ext)s will be replaced by the file extension (e.g., mp4, webm).
# Set options for the downloader
opts = {
    'format': 'best',  # Choose the best quality format available
    'outtmpl': '%(title)s.%(ext)s'  # Output template for the filename
}

Finally, we create a YoutubeDL object with the specified options and use it to download the video using the download() method. The method accepts a list of URLs as its parameter so you can download multiple videos simultaneously.

Now, here’s the complete code snippet:

# Create a YoutubeDL object and download the video
with YoutubeDL(opts) as yt:
    yt.download([video_url])

You can download the specified YouTube video with the desired quality and filename template by running this code snippet.

Step 3: Scrape YouTube Metadata

Important Note

The information provided here is for educational purposes only. This guide does not grant any rights to the videos or images, which may be protected by copyright, intellectual property, or other legal rights. Before downloading content, consult legal advisors and review the website’s terms of service.

Without downloading the video, we can use the yt_dlp library to get details about a YouTube video, like its title, size, channel, and language.

As an example, let’s scrape the details of this video using the extract_info() method, setting the download parameter to False so we don’t download the video again. The method will return a dictionary with all the information about the video.

Here’s a snippet for extracting the data:

from yt_dlp import YoutubeDL
 
video_url = "https://www.youtube.com/watch?v=xy5OQ2IWNtc"
opts = dict()
 
with YoutubeDL(opts) as yt:
    info = yt.extract_info(video_url, download=False)
    video_title = info.get("title")
    width = info.get("width")
    height = info.get("height")
    language = info.get("language")
    channel = info.get("channel")
    likes = info.get("like_count")
     
    data = {
        "URL": video_url,
        "Title": video_title,
        "Width": width,
        "Height": height,
        "Language": language,
        "Channel": channel,
        "Likes": likes
    }
    print(data)

To get more details about the video, you can use other keys from the dictionary returned by extract_info(). Here are some useful ones:

  • id – Video ID
  • title – Video title
  • description – Video description
  • uploader – Name of the uploader
  • uploader_id – Uploader’s ID
  • upload_date – Upload date (YYYYMMDD)
  • duration – Duration in seconds
  • view_count – Number of views
  • like_count – Number of likes
  • dislike_count – Number of dislikes
  • comment_count – Number of comments
  • thumbnail – URL of the thumbnail image
  • formats – List of available formats
  • subtitles – Available subtitles
  • age_limit – Age restriction
  • categories – Categories related to the video
  • tags – Tags related to the video
  • is_live – Whether the video is a live stream

These keys will help you get all the information you need about a YouTube video.
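
The introduction promised exporting this data to a CSV file, so here’s a minimal sketch using Python’s built-in csv module. It assumes you’ve collected one metadata dictionary per video (shaped like the data dict above) into a list; the metadata.csv filename and the sample values are purely illustrative:

```python
import csv

# A list of metadata dictionaries, one per video (same shape as the
# `data` dict built above; these values are placeholders)
videos = [
    {"URL": "https://www.youtube.com/watch?v=xy5OQ2IWNtc", "Title": "Sample",
     "Width": 1920, "Height": 1080, "Language": "en",
     "Channel": "ScraperAPI", "Likes": 100},
]

with open("metadata.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["URL", "Title", "Width", "Height", "Language", "Channel", "Likes"]
    )
    writer.writeheader()      # first row: column names
    writer.writerows(videos)  # one row per video
```

csv.DictWriter maps each dictionary’s keys onto the column order given in fieldnames, so the rows stay aligned no matter how the dictionaries were built.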

Step 4: Scrape YouTube Comments

To scrape comments from a YouTube video, we’ll use the yt_dlp library to extract the comments without downloading the video itself.

In this example, we’ll be scraping the comments from this video: A brief introduction to Git for beginners

First, we import the YoutubeDL class from the yt_dlp library and specify the URL of the YouTube video we want to scrape:

from yt_dlp import YoutubeDL
 
# URL of the YouTube video
video_url = "https://www.youtube.com/watch?v=r8jQ9hVA2qs"

Next, we create an options dictionary opts with the key "getcomments" set to True. This tells yt_dlp that we want to extract comments:

# Options for YoutubeDL to get comments
opts = {
    "getcomments": True,
}

Using the YoutubeDL class with our specified options, we extract the video information, including comments, by calling the extract_info() method we used earlier.

Remember to set the download parameter to False so we only retrieve the metadata and comments without downloading the actual video:

# Extract video information, including comments
 
with YoutubeDL(opts) as yt:
    info = yt.extract_info(video_url, download=False)

Next, we use the get() method on the dictionary with the key "comments" to access the comments. We also retrieve the total number of comment threads using the key "comment_count":

# Get the comments
comments = info.get("comments")
thread_count = info["comment_count"]

Finally, we print the total number of comment threads and the comments themselves:

print("Number of threads: {}".format(thread_count))
print(comments)

Here is the complete code together:

from yt_dlp import YoutubeDL
 
# URL of the YouTube video
video_url = "https://www.youtube.com/watch?v=r8jQ9hVA2qs"
 
# Options for YoutubeDL to get comments
opts = {
    "getcomments": True,
}
 
# Extract video information, including comments
with YoutubeDL(opts) as yt:
    info = yt.extract_info(video_url, download=False)
     
    # Get the comments
    comments = info.get("comments")
    thread_count = info["comment_count"]
     
    # Print the number of comment threads and the comments
    print("Number of threads: {}".format(thread_count))
    print(comments)
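
Each comment yt-dlp returns is a dictionary; useful keys typically include id, parent (set to "root" for top-level comments), author, and text, though the exact fields can vary by video. As a sketch of what you might do next, here’s one way to group replies under their parent threads; the sample data below is made up for illustration:

```python
from collections import defaultdict

def group_threads(comments):
    """Group reply comments under their top-level parent comment."""
    threads = defaultdict(list)
    for c in comments:
        parent = c.get("parent", "root")
        if parent == "root":
            threads.setdefault(c["id"], [])   # top-level comment starts a thread
        else:
            threads[parent].append(c)         # reply: attach to its parent's thread
    return dict(threads)

# Hypothetical sample mimicking yt-dlp's comment dictionaries
sample = [
    {"id": "c1", "parent": "root", "author": "alice", "text": "Great video!"},
    {"id": "c1.r1", "parent": "c1", "author": "bob", "text": "Agreed."},
    {"id": "c2", "parent": "root", "author": "carol", "text": "Thanks!"},
]
threads = group_threads(sample)
print(len(threads), "threads;", len(threads["c1"]), "reply to c1")
# → 2 threads; 1 reply to c1
```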

Step 5: Scrape YouTube Channel Information

To scrape information about a YouTube channel, we can use ScraperAPI to avoid getting blocked, the requests library to handle HTTP requests, and the BeautifulSoup library to parse HTML content.

We’ll scrape ScraperAPI’s YouTube channel’s “About” page and extract the channel name and description for this example.

First, we set up the API key for ScraperAPI and the URL of the YouTube channel’s “About” page.

Note: Don’t have an API key? Create a free ScraperAPI account and get 5,000 API credits to test all our tools.

We also define the parameters needed for the API request; setting the render parameter to true ensures all JavaScript on the page is fully rendered:

api_key = 'YOUR_API_KEY'
url = 'https://www.youtube.com/@scraperapi/about'
params = {
    'api_key': api_key,
    'url': url,
    'render': 'true'
}

We then make a GET request to the ScraperAPI endpoint, passing in our parameters. This allows us to bypass any restrictions and get the HTML content of the page:

response = requests.get('https://api.scraperapi.com', params=params)

If the request is successful (status code 200), we use BeautifulSoup to parse the HTML content and extract the channel’s name and description. We locate these elements using their specific HTML tags and classes we identified earlier:

if response.status_code == 200:
    soup = BeautifulSoup(response.text, 'html.parser')
    channel_name = soup.find('yt-formatted-string', {'id': 'text', 'class': 'style-scope ytd-channel-name'})
    channel_desc = soup.find('div', {'id': 'wrapper', 'class': 'style-scope ytd-channel-tagline-renderer'})
    if channel_name and channel_desc:
        channel_info = {
            "channel_name": channel_name.text.strip(),
            "channel_desc": channel_desc.text.strip(),
        }
    else:
        print("Failed to retrieve channel info")
else:
    print("Failed to retrieve the page:", response.status_code)

Finally, we print the extracted channel information:

print(channel_info)

Here’s the complete code all together:

import requests
from bs4 import BeautifulSoup
 
# API key for ScraperAPI
api_key = 'YOUR_API_KEY'
# URL of the YouTube channel's About page
url = 'https://www.youtube.com/@scraperapi/about'
# Parameters for the API request
params = {
    'api_key': api_key,
    'url': url,
    'render': 'true'
}
 
# Make a GET request to ScraperAPI
response = requests.get('https://api.scraperapi.com', params=params)
 
# If the request is successful, parse the HTML and extract the channel info
if response.status_code == 200:
    soup = BeautifulSoup(response.text, 'html.parser')
    channel_name = soup.find('yt-formatted-string', {'id': 'text', 'class': 'style-scope ytd-channel-name'})
    channel_desc = soup.find('div', {'id': 'wrapper', 'class': 'style-scope ytd-channel-tagline-renderer'})
 
    if channel_name and channel_desc:
        channel_info = {
            'channel_name': channel_name.text.strip(),
            'channel_desc': channel_desc.text.strip(),
        }
        # Print the extracted channel information
        print(channel_info)
    else:
        print("Failed to retrieve channel info")
else:
    print("Failed to retrieve the page:", response.status_code)

Using ScraperAPI makes it easy to retrieve web content, helping us to scrape data from websites like YouTube that may have restrictions.

ScraperAPI handles all the complexities of web scraping, such as managing IP rotation, handling CAPTCHAs, and rendering JavaScript, which can often block traditional web scraping techniques. Thus, we can focus on extracting the necessary data without worrying about these common issues.

For instance, YouTube often uses various mechanisms to prevent automated access, including rate limiting and complex page structures that require JavaScript rendering. By using ScraperAPI, we can effortlessly bypass these restrictions, allowing us to obtain data such as channel information, video details, and comments without being blocked.
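
Even with a scraping API in the loop, individual requests can still fail transiently (timeouts, 5xx responses), so it’s worth wrapping calls in a small retry helper. The fetch_with_retries function below is a sketch of ours, not part of any library’s API; the get callable is injectable (you’d normally pass requests.get) so the retry logic is easy to test:

```python
import time

def fetch_with_retries(get, url, params, retries=3, backoff=1.0):
    """Call `get` (e.g. requests.get) up to `retries` times with exponential backoff."""
    last_status = None
    for attempt in range(retries):
        response = get(url, params=params, timeout=60)
        if response.status_code == 200:
            return response
        last_status = response.status_code
        time.sleep(backoff * (2 ** attempt))  # wait 1s, 2s, 4s... between attempts
    raise RuntimeError(f"Giving up after {retries} attempts (last status: {last_status})")

# Usage with requests and ScraperAPI would look like:
# response = fetch_with_retries(requests.get, 'https://api.scraperapi.com', params=params)
```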

How to Scrape YouTube Search Results

Scraping YouTube search results can provide valuable insights into popular content and trends related to specific keywords. In this example, we will use ScraperAPI to scrape the top YouTube search results for the query “scraperapi” and extract the videos’ titles and links.

First, we import the necessary libraries:

import requests
from bs4 import BeautifulSoup
import json

Next, we set up our ScraperAPI key and specify the search query. We also construct the URL for the YouTube search results page:

api_key = 'YOUR_API_KEY'
search_query = 'scraperapi'
url = f'https://www.youtube.com/results?search_query={search_query}'
params = {
    'api_key': api_key,
    'url': url,
    'render': 'true',
}

We initialize an empty list called video_data to store the extracted video information and make a request to the ScraperAPI endpoint with our parameters. This will allow us to retrieve the HTML content of the search results page:

video_data = []
response = requests.get('https://api.scraperapi.com', params=params)

If the request is successful (status code 200), we parse the HTML content using BeautifulSoup:

if response.status_code == 200:
    soup = BeautifulSoup(response.text, 'html.parser')
    videos = soup.find_all('div', {"id": "title-wrapper"})

We use soup.find_all() to find all div elements with the id of title-wrapper, which contain the video titles and links.

After that, we check if any video elements were found. If videos are found, we print the number of videos found and loop through each element to extract the title and link.

We get the title attribute of the anchor tag, which contains the video’s title, and the href attribute of the anchor tag, which contains the video’s relative URL. We concatenate them with the base URL “https://www.youtube.com” to form the full URL.

We then store the extracted title and link in a dictionary and append it to the video_data list.

if videos:
    print(f"Found {len(videos)} videos")
 
    for video in videos:
        video_details = video.find("a", {"id": "video-title"})
        if video_details:
            title = video_details.get('title')
            link = video_details['href']
            video_info = {"title": title, "link": f"https://www.youtube.com{link}"}
            video_data.append(video_info)

After extracting all video details, we save the video_data list to a JSON file for further use.

with open('videos.json', 'w') as json_file:
    json.dump(video_data, json_file, indent=4)

Here’s the complete code all together:

import requests
from bs4 import BeautifulSoup
import json
 
# API key for ScraperAPI
api_key = 'YOUR_API_KEY'
# Search query for YouTube
search_query = 'scraperapi'
# URL of the YouTube search results page
url = f'https://www.youtube.com/results?search_query={search_query}'
# Parameters for the API request
params = {
    'api_key': api_key,
    'url': url,
    'render': 'true',
}
 
# Initialize an empty list to store video data
video_data = []
# Make a GET request to ScraperAPI
response = requests.get('https://api.scraperapi.com', params=params)
 
# If the request is successful, parse the HTML and extract video info
if response.status_code == 200:
    soup = BeautifulSoup(response.text, 'html.parser')
    videos = soup.find_all('div', {"id": "title-wrapper"})
     
    if videos:
        print(f"Found {len(videos)} videos")
        for video in videos:
            video_details = video.find("a", {"id": "video-title"})
            if video_details:
                title = video_details.get('title')
                link = video_details['href']
                video_info = {"title": title, "link": f"https://www.youtube.com{link}"}
                video_data.append(video_info)
         
        # Save the extracted video data to a JSON file
        with open('videos.json', 'w') as json_file:
            json.dump(video_data, json_file, indent=4)
    else:
        print("No videos found")
else:
    print("Failed to retrieve the page:", response.status_code)
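
One detail to watch: the search query is interpolated directly into the URL, which breaks for multi-word queries. Encoding it with Python’s built-in urllib.parse keeps the URL valid; the query string below is just an example:

```python
from urllib.parse import quote_plus

search_query = 'web scraping tutorial'
# quote_plus encodes spaces as '+' and escapes special characters
url = f'https://www.youtube.com/results?search_query={quote_plus(search_query)}'
print(url)
# → https://www.youtube.com/results?search_query=web+scraping+tutorial
```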

Legal Considerations When Scraping YouTube

Scraping data from websites like YouTube can offer valuable insights, but it’s crucial to be aware of the legal considerations and potential consequences. Here are some key points to keep in mind:

Terms of Service

  1. Compliance: Ensure that your activities comply with YouTube’s Terms of Service. Scraping content that violates these terms can lead to your IP being blocked, your YouTube account being suspended, or legal action being taken against you.
  2. User Agreements: Review YouTube’s user agreements and community guidelines to understand what is permissible. Unauthorized scraping can breach these agreements, leading to potential legal ramifications.

Copyright and Intellectual Property

  1. Content Ownership: Copyright and other intellectual property laws protect YouTube videos, thumbnails, and metadata. Scraping and using this content without proper authorization can result in copyright infringement.
  2. Fair Use: While some uses of copyrighted material might fall under “fair use” (e.g., for commentary, criticism, or educational purposes), this is a complex legal doctrine that varies by jurisdiction.

Data Privacy

  1. Personal Information: Be cautious when scraping data that includes personal information. Privacy laws, such as the General Data Protection Regulation (GDPR) in the European Union, impose strict regulations on collecting, storing, and using personal data.
  2. Anonymization: If you collect data that might include personal information, consider anonymizing it to protect individuals’ privacy and comply with data protection laws.

Ethical Considerations

  1. Respectfulness: Ethical scraping involves respecting the target website’s robots.txt file, which specifies which parts of the site should not be scraped. While not legally binding, respecting this file demonstrates good faith and ethical behavior.
  2. Impact: Be mindful of the impact of your scraping activities on YouTube’s servers and the user experience. High-frequency scraping can strain servers and degrade service quality for other users.
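
You can check robots.txt rules programmatically with Python’s built-in urllib.robotparser. The sketch below feeds sample rules directly via parse() so it runs offline; in practice you’d point set_url() at the site’s real robots.txt (e.g., https://www.youtube.com/robots.txt) and call read(). The rules and URLs here are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for illustration
rules = """
User-agent: *
Disallow: /comment
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/watch?v=abc"))  # allowed path → True
print(rp.can_fetch("*", "https://example.com/comment"))      # disallowed path → False
```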

Scraping YouTube data can be beneficial but must be done responsibly and legally. Always ensure compliance with YouTube’s Terms of Service, respect copyright and intellectual property rights, protect personal data, and consider the ethical implications of your actions.

By adhering to these guidelines, you can minimize legal risks and ensure that your data scraping practices are ethical and lawful.

Wrapping Up – How to Scrape YouTube

In this YouTube scraping guide, you’ve learned:

  • How to use Python to scrape valuable YouTube data, including video metadata, comments, channel information, and search results
  • How to leverage libraries such as yt-dlp, Requests, and BeautifulSoup to efficiently gather and analyze a wealth of information from YouTube while navigating the complexities of web scraping
  • How to use ScraperAPI to bypass anti-bot protections and build resilient data pipelines

Now that you have the skills to scrape YouTube data, here are some ideas on what to do with this data:

  • Content Analysis: Perform in-depth content performance and audience engagement analyses to optimize your video strategies
  • Market Research: Use search results and metadata to understand current trends and what types of content resonate with your target audience
  • Sentiment Analysis: Analyze comments to gauge audience reactions and opinions on various topics or videos
  • Competitive Analysis: Study competitor channels and their content to identify market strengths, weaknesses, and opportunities

To streamline and enhance your web scraping projects, sign up for a free ScraperAPI account. With ScraperAPI, you can easily bypass IP blocks, handle CAPTCHAs, and render JavaScript, ensuring your scraping activities are efficient and uninterrupted.

Remember always to respect copyright and legal considerations when scraping and using data from YouTube.

Until next time, happy scraping!

About the author

Ize Majebi

Ize Majebi is a Python developer and data enthusiast who delights in unraveling code intricacies and exploring the depths of the data world. She transforms technical challenges into creative solutions, possessing a passion for problem-solving and a talent for making the complex feel like a friendly chat. Her ability brings a touch of simplicity to the realms of Python and data.