Requests: The Most Powerful HTTP Library in Python
Hello everyone, I am a Python developer. Today, I want to introduce you to the most popular HTTP request library in Python – requests. Calling it the “most powerful” is not an exaggeration, as it makes sending HTTP requests as natural as breathing. Whether it’s scraping web pages, calling APIs, or downloading files, requests can help us easily accomplish these tasks.
What is Requests?
Requests is a third-party library in Python specifically designed for sending HTTP requests. Its design philosophy is “HTTP For Humans”, which means it is an HTTP library that is easy for humans to understand and use. First, let’s install requests:
pip install requests
Sending Your First Request
Let’s start with the simplest GET request, just like accessing a web page with a browser:
import requests
# Send a GET request
response = requests.get('https://api.github.com')
print(f"Status Code: {response.status_code}")
print(f"Response Content: {response.text[:100]}...") # Only display the first 100 characters
Tip: If response.status_code is 200, the request succeeded; 404 means the resource was not found; 500 means a server error occurred.
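The status-code ranges from the tip above can be captured in a small helper. Note that describe_status is a name I'm introducing here for illustration; it is not part of requests:

```python
def describe_status(code):
    """Map an HTTP status code to a rough category, per the tip above."""
    if code < 200:
        return 'informational'
    if 200 <= code < 300:
        return 'success'
    if 300 <= code < 400:
        return 'redirect'
    if 400 <= code < 500:
        return 'client error (e.g. 404 not found)'
    return 'server error (e.g. 500)'

print(describe_status(200))  # success
print(describe_status(404))  # client error (e.g. 404 not found)
```

You could call this as `describe_status(response.status_code)` after any request.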
Different Types of HTTP Requests
In real-world development, the most commonly used HTTP methods are GET, POST, PUT, and DELETE:
# GET request with parameters
params = {
    'key1': 'value1',
    'key2': 'value2'
}
response = requests.get('https://httpbin.org/get', params=params)
# POST request to send data
data = {
    'username': 'python_lover',
    'password': '123456'
}
response = requests.post('https://httpbin.org/post', data=data)
# Send JSON data
json_data = {
    'name': 'Xiao Ming',
    'age': 18
}
response = requests.post('https://httpbin.org/post', json=json_data)
Note: When sending a POST request, the data parameter sends form-encoded data, while the json parameter sends JSON data.
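You can see this difference without sending anything over the network: requests lets you build a PreparedRequest and inspect the body and headers it would transmit. A sketch (httpbin.org is just a placeholder URL here, since nothing is actually sent):

```python
import requests

# Form data: body is URL-encoded, Content-Type is x-www-form-urlencoded
form_req = requests.Request(
    'POST', 'https://httpbin.org/post', data={'username': 'python_lover'}
).prepare()
print(form_req.headers['Content-Type'])  # application/x-www-form-urlencoded
print(form_req.body)                     # username=python_lover

# JSON data: body is a JSON document, Content-Type is application/json
json_req = requests.Request(
    'POST', 'https://httpbin.org/post', json={'name': 'Xiao Ming', 'age': 18}
).prepare()
print(json_req.headers['Content-Type'])  # application/json
print(json_req.body)
```

This is also a handy debugging trick when a server rejects your request and you want to check exactly what requests would have sent.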
Handling Response Content
Once requests receives the server’s response, it provides various ways to handle the returned data:
import requests
response = requests.get('https://api.github.com')
# Get response content
print(response.text) # Text format
print(response.json()) # JSON format
print(response.content) # Binary format
# Get response headers
print(response.headers)
# Get encoding
print(response.encoding)
Tip: Before calling response.json(), it’s best to confirm that the response body is actually JSON; otherwise it raises a ValueError.
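One defensive pattern is to check the Content-Type header first and catch the ValueError that response.json() raises on non-JSON bodies. The helper name safe_json below is my own, not part of requests:

```python
import requests

def safe_json(response):
    """Return the parsed JSON body, or None if the body isn't JSON."""
    content_type = response.headers.get('Content-Type', '')
    if 'json' not in content_type:
        return None
    try:
        return response.json()
    except ValueError:  # also covers json.JSONDecodeError
        return None

# Usage:
# data = safe_json(requests.get('https://api.github.com'))
# if data is None:
#     print("Response was not JSON")
```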
Customizing Request Headers
Sometimes we need to simulate a browser sending requests, which requires customizing request headers:
# Custom request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
}
response = requests.get('https://www.python.org', headers=headers)
Cookie and Session Handling
On websites that require login, we often need to handle cookies and sessions:
# Use Session to maintain session
session = requests.Session()
# Login
login_data = {
    'username': 'your_username',
    'password': 'your_password'
}
session.post('https://example.com/login', data=login_data)
# Access a protected page
response = session.get('https://example.com/protected_page')
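A Session stores cookies from each response and sends them back on later requests automatically. You can also pre-seed or inspect its cookie jar, or pass cookies on a one-off request. A sketch (example.com and the token value are placeholders):

```python
import requests

session = requests.Session()

# Pre-seed a cookie, e.g. a token you obtained elsewhere
session.cookies.set('token', 'abc123', domain='example.com')
print(dict(session.cookies))  # {'token': 'abc123'}

# Cookies can also be sent without a Session, per request:
# response = requests.get('https://example.com/profile',
#                         cookies={'token': 'abc123'})
# ...and cookies set by the server are available in response.cookies.
```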
File Uploading and Downloading
Requests also supports file uploading and downloading:
# Upload a file (use with so the file handle is closed automatically)
with open('photo.jpg', 'rb') as f:
    files = {'file': f}
    response = requests.post('https://httpbin.org/post', files=files)
# Download file
response = requests.get('https://example.com/file.pdf')
with open('downloaded_file.pdf', 'wb') as f:
    f.write(response.content)
Note: Remember to close file objects properly when handling files; it’s best to use a with statement.
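For large downloads, response.content loads the whole body into memory at once. A safer pattern is stream=True together with iter_content, which yields the body in chunks. The helper name save_stream below is hypothetical, and the URL is a placeholder:

```python
import requests

def save_stream(chunks, path):
    """Write an iterable of byte chunks to disk."""
    with open(path, 'wb') as f:
        for chunk in chunks:
            if chunk:  # skip empty keep-alive chunks
                f.write(chunk)

# Typical usage with a large file (URL is a placeholder):
# with requests.get('https://example.com/file.pdf', stream=True) as response:
#     response.raise_for_status()
#     save_stream(response.iter_content(chunk_size=8192), 'file.pdf')
```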
Exception Handling
Network requests may encounter various issues, so exception handling is important:
import requests
from requests.exceptions import RequestException
try:
    response = requests.get('https://api.github.com', timeout=5)
    response.raise_for_status()  # Raise an exception for 4xx/5xx status codes
except RequestException as e:
    print(f"An error occurred during the request: {e}")
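Beyond catching exceptions, transient failures (connection errors, 5xx responses) are often worth retrying. requests supports this through urllib3's Retry mounted on a Session via HTTPAdapter. A minimal sketch:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry up to 3 times on connection errors and the listed status codes,
# with exponential backoff between attempts
retry = Retry(total=3, backoff_factor=0.5,
              status_forcelist=[500, 502, 503, 504])

session = requests.Session()
adapter = HTTPAdapter(max_retries=retry)
session.mount('https://', adapter)
session.mount('http://', adapter)

# session.get('https://api.github.com', timeout=5) now retries automatically
```

Combined with a timeout, this handles most flaky-network situations without any manual retry loop.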
Practical Exercises
Let’s do a few small exercises:
- Get the homepage content of a website and save it as an HTML file
- Call a public API (like a weather API) and parse the returned JSON data
- Download an image file to the local system
Summary
Today we learned the main uses of requests:
- Sending various HTTP requests (GET, POST, etc.)
- Handling request parameters and response content
- Customizing request headers
- Handling cookies and sessions
- File uploading and downloading
- Exception handling
The learning curve for requests is very gentle, yet its functionality is very powerful. I recommend practicing as much as possible: start with simple web scraping and gradually move on to more complex API calls.
Lastly, a reminder: When using requests, be sure to pay attention to the website’s usage policies, avoid sending requests too frequently, and comply with the website’s robots.txt rules.
Next time, I will introduce how to combine requests and BeautifulSoup to achieve more powerful web parsing capabilities. Stay tuned!