
Python Response.iter_content() Method

The Response.iter_content() method of the Python Requests module allows us to iterate over the response content in chunks.

This method is particularly useful for handling large responses, such as when downloading large files, as it avoids loading the entire response content into memory at once.

We can specify the size of each chunk (in bytes) with the chunk_size parameter. This method is efficient for streaming data and helps in processing the content incrementally.

Syntax

Following is the syntax of the Response.iter_content() method of the Python Requests module −

response.iter_content(chunk_size=1, decode_unicode=False)

Parameters

Following are the parameters of Response.iter_content() method of the Python Requests module −

  • chunk_size (optional): The size of each chunk to be read, as an integer number of bytes. If set to None, data is read in whatever size the chunks arrive.
  • decode_unicode (optional): If this parameter is True, the content will be decoded to Unicode using the response's encoding. By default the value is False.
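The effect of chunk_size can be seen without any network access. In the sketch below, the Response object is built by hand and its body bytes are made up for illustration; it stands in for a real streamed response −

```python
import io

import requests

# Build a Response by hand so the example runs offline; the body
# bytes are made-up stand-ins for real response data.
response = requests.models.Response()
response.status_code = 200
response.raw = io.BytesIO(b'abcdefghij')  # 10-byte body

# chunk_size=4 yields chunks of at most 4 bytes each
chunks = list(response.iter_content(chunk_size=4))
print(chunks)  # [b'abcd', b'efgh', b'ij']
```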

Return value

This method returns an iterator that yields the response content in chunks of bytes (or str, if decode_unicode is True).
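With decode_unicode=True, the iterator yields str instead of bytes, decoded with response.encoding. A minimal offline sketch (hand-built Response, made-up body) −

```python
import io

import requests

# Hand-built Response so the example runs offline
response = requests.models.Response()
response.status_code = 200
response.encoding = 'utf-8'  # used when decode_unicode=True
response.raw = io.BytesIO('héllo wörld'.encode('utf-8'))

# chunk_size=None reads the data as it arrives; here, in one chunk
text = ''.join(response.iter_content(chunk_size=None, decode_unicode=True))
print(text)  # héllo wörld
```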

Example 1

When downloading a large file using the requests module in Python, it's important to handle the response in chunks to avoid using too much memory.

Following is the example of the Response.iter_content() method of the Python Requests module for downloading a large file −

import requests

# Define the URL of the file to be downloaded
url = 'https://example.com/largefile.zip'

# Send a GET request to the URL with stream=True
response = requests.get(url, stream=True)

# Define the local filename to save the file
local_filename = 'largefile.zip'

# Open a local file in binary write mode
with open(local_filename, 'wb') as f:
    # Iterate over the response in chunks
    for chunk in response.iter_content(chunk_size=8192):
        if chunk:
            f.write(chunk)  # Write the chunk to the file

print(f'File downloaded as {local_filename}')

Output

File downloaded as largefile.zip
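The download loop above can be wrapped in a small reusable helper. The sketch below is illustrative: download_to() is a hypothetical helper (not part of Requests), and the Response is built by hand with a made-up body so it runs without a network −

```python
import io

import requests

def download_to(response, fileobj, chunk_size=8192):
    """Write the streamed response body to fileobj; return bytes written.

    Hypothetical helper for illustration, not part of the Requests API.
    """
    total = 0
    for chunk in response.iter_content(chunk_size=chunk_size):
        if chunk:
            fileobj.write(chunk)
            total += len(chunk)
    return total

# Stand-in for requests.get(url, stream=True): a hand-built Response
# with a made-up 20,000-byte body
response = requests.models.Response()
response.status_code = 200
response.raw = io.BytesIO(b'x' * 20000)

buf = io.BytesIO()
print(download_to(response, buf))  # 20000
```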

Example 2

To process data line by line in Python, we can use a similar approach to the one we used for downloading large files in the above example. By reading and processing each line individually, we can handle large datasets efficiently without loading the entire file into memory.

Here is an example of how to do this −

import requests

# Define the URL of the file to be downloaded
url = 'https://example.com/largefile.txt'

# Send a GET request to the URL with stream=True
response = requests.get(url, stream=True)

# Check if the request was successful
if response.status_code == 200:
    # Iterate over the response line by line
    for line in response.iter_lines():
        if line:  # filter out keep-alive new lines
            # Process each line (decode bytes to string)
            line = line.decode('utf-8')
            # Example processing: print each line
            print(line)
else:
    print(f'Failed to download file: {response.status_code}')

Output

Failed to download file: 404
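The iter_lines() method used above can also be demonstrated offline. Here the Response is built by hand with a made-up multi-line body standing in for a streamed download −

```python
import io

import requests

# Hand-built Response standing in for a streamed download
response = requests.models.Response()
response.status_code = 200
response.raw = io.BytesIO(b'first line\nsecond line\n\nthird line\n')

# iter_lines() splits the streamed body on line boundaries;
# filtering out empty values skips keep-alive newlines
lines = [line.decode('utf-8') for line in response.iter_lines() if line]
print(lines)  # ['first line', 'second line', 'third line']
```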