Python is a high-level programming language that is used for web development, mobile application development, and also for scraping the web.
Python is widely considered one of the best programming languages for web scraping because it handles the whole crawling process smoothly. When you combine Python's capabilities with a web proxy, you can carry out your scraping activities without the fear of an IP ban.
In this article, you will understand how proxies are used for web scraping with Python. But, first, let’s understand the basics.
What is web scraping?
Web scraping is the method of extracting data from websites. Generally, web scraping is done either by sending HyperText Transfer Protocol (HTTP) requests directly or with the help of a web browser.
Web scraping works by first crawling the target URLs and then downloading the page data one by one. The extracted data is typically stored in a structured format such as a spreadsheet. Automating what would otherwise be manual copying and pasting saves a huge amount of time, and you can extract data from thousands of URLs to stay ahead of your competitors.
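As a rough sketch of that flow, here is a minimal crawler that fetches a couple of example URLs with Requests and writes a summary row for each page to a CSV file (the URLs and column names are just placeholders):
***start of code***
import csv
import requests

# Placeholder URLs -- swap in whatever pages you actually need to scrape
urls = ['http://books.toscrape.com/', 'http://quotes.toscrape.com/']

rows = []
for url in urls:
    resp = requests.get(url)
    # Keep the URL, the HTTP status code, and the page size for each page
    rows.append([url, resp.status_code, len(resp.content)])

# Store everything in a spreadsheet-friendly CSV file
with open('scraped_data.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['url', 'status', 'page_size_bytes'])
    writer.writerows(rows)
***end of code***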
Example of web scraping
An example of web scraping would be downloading a list of all pet owners in California. You could scrape a web directory that lists the names and email addresses of people in California who own a pet. Web scraping software can do this task for you: it crawls the required URLs, extracts the required data, and stores it in a spreadsheet.
Why use a proxy for web scraping?
- A proxy lets you bypass content-related geo-restrictions because you can route your requests through a location of your choice.
- You can make a high number of connection requests without getting banned.
- It can increase the speed at which you request and copy data, because slowdowns caused by your own ISP matter less.
- Your crawling program can run smoothly and download data with a much lower risk of getting blocked.
Now that you understand the basics of web scraping and proxies, let's learn how you can perform web scraping through a proxy with the Python programming language.
Configure a proxy for web scraping with Python
Scraping with Python starts by sending an HTTP request. HTTP is based on a client/server model: your Python program (the client) sends a request to the server asking for the contents of a page, and the server returns a response.
The basic method of sending an HTTP request is to open a socket and send the request manually:
***start of code***
import socket

HOST = 'www.mysite.com'  # Server hostname or IP address
PORT = 80                # Standard HTTP port

# Open a TCP socket and connect to the server
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = (HOST, PORT)
client_socket.connect(server_address)

# Send a minimal HTTP GET request by hand
request_header = b'GET / HTTP/1.0\r\nHost: www.mysite.com\r\n\r\n'
client_socket.sendall(request_header)

# Read the response in 1024-byte chunks until the server closes the connection
response = ''
while True:
    recv = client_socket.recv(1024)
    if not recv:
        break
    response += recv.decode('utf-8', errors='replace')

print(response)
client_socket.close()
***end of code***
You can also send HTTP requests in Python using built-in modules such as urllib (urllib2 in Python 2). However, these modules are not particularly convenient to use.
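For comparison, a bare-bones GET with the built-in urllib.request module looks roughly like this:
***start of code***
from urllib.request import urlopen

# Fetch a page using only the standard library
with urlopen('http://toscrape.com') as resp:
    html = resp.read().decode('utf-8')

print(html[:200])  # first 200 characters of the page
***end of code***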
Hence, there is a third option called Requests, a simple HTTP library for Python.
You can easily configure proxies with Requests.
Here is the code to enable the use of a proxy in Requests:
***start of code***
import requests

proxies = {
    'http': 'http://10.XX.XX.10:8000',
    'https': 'http://10.XX.XX.10:8000',
}

r = requests.get('http://toscrape.com', proxies=proxies)
***end of code***
In the proxies dictionary, you specify the proxy address and port for each protocol.
If you wish to use sessions and a proxy at the same time, use the code below:
***start of code***
import requests

s = requests.Session()
s.proxies = {
    'http': 'http://10.XX.XX.10:8000',
    'https': 'http://10.XX.XX.10:8000',
}

r = s.get('http://toscrape.com')
***end of code***
However, using the Requests package on its own can be slow because requests are sent one at a time: each request only goes out after the previous one has completed. If you have to scrape 100 URLs, that means 100 sequential requests.
To solve this problem and speed up the process, there is another package called grequests that lets you send multiple requests at the same time. grequests combines Requests with gevent so that requests can be sent asynchronously.
Here is code that shows how grequests works. Suppose we have to scrape 100 URLs. We keep all 100 URLs in a list and set the batch length to 10, so the URLs are fetched in 10 batches of 10 concurrent requests rather than 100 sequential requests.
***start of code***
import grequests

BATCH_LENGTH = 10

# A list holding the 100 URLs to scrape
urls = [...]

# Responses will be collected in this list
results = []

while urls:
    # Take the next batch of 10 URLs
    batch = urls[:BATCH_LENGTH]
    # Create a set of unsent requests for the batch
    rs = (grequests.get(url) for url in batch)
    # Send all the requests in the batch at the same time
    batch_results = grequests.map(rs)
    # Append the batch's responses to the main results list
    results += batch_results
    # Remove the fetched URLs from the list
    urls = urls[BATCH_LENGTH:]

print(results)
# [<Response [200]>, <Response [200]>, ..., <Response [200]>, <Response [200]>]
***end of code***
Final Thoughts
Web scraping is a necessity for several businesses, especially eCommerce websites. Real-time data needs to be captured from a variety of sources to make better business decisions at the right time. Python offers different frameworks and libraries that make web scraping easy. You can extract data fast and efficiently. Moreover, it is crucial to use a proxy to hide your machine’s IP address to avoid blacklisting. Python along with a secure proxy should be the base for successful web scraping.
This post will walk through how to use the requests_html package to scrape options data from a JavaScript-rendered webpage. requests_html serves as an alternative to Selenium and PhantomJS, and provides a clear syntax similar to the awesome requests package. The code we’ll walk through is packaged into functions in the options module in the yahoo_fin package, but this article will show how to write the code from scratch using requests_html so that you can use the same idea to scrape other JavaScript-rendered webpages.
Note:
requests_html requires Python 3.6+. If you don’t have requests_html installed, you can download it using pip:
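***start of code***
pip install requests-html
***end of code***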
Motivation
Let’s say we want to scrape options data for a particular stock. As an example, let’s look at Netflix (since it’s well known). If we go to the below site, we can see the option chain information for the earliest upcoming options expiration date for Netflix:
On this webpage there’s a drop-down box allowing us to view data by other expiration dates. What if we want to get all the possible choices – i.e. all the possible expiration dates?
We can try using requests with BeautifulSoup, but that won’t work quite the way we want. To demonstrate, let’s try doing that to see what happens.
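A sketch of that attempt might look like the following; the Netflix options URL used here is an assumption about the page in question:
***start of code***
import requests
from bs4 import BeautifulSoup

# Assumed URL of the Yahoo Finance options page for Netflix
url = 'https://finance.yahoo.com/quote/NFLX/options?p=NFLX'

resp = requests.get(url)
soup = BeautifulSoup(resp.text, 'html.parser')

# Look for the <option> tags that should hold the expiration dates
option_tags = soup.find_all('option')
print(option_tags)
***end of code***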
Running the above code shows us that option_tags is an empty list. This is because there are no option tags found in the HTML we scraped from the webpage above. However, if we look at the source via a web browser, we can see that there are, indeed, option tags:
Why the disconnect? The reason why we see option tags when looking at the source code in a browser is that the browser is executing JavaScript code that renders that HTML i.e. it modifies the HTML of the page dynamically to allow a user to select one of the possible expiration dates. This means if we try just scraping the HTML, the JavaScript won’t be executed, and thus, we won’t see the tags containing the expiration dates. This brings us to requests_html.
Using requests_html to render JavaScript
Now, let’s use requests_html to run the JavaScript code in order to render the HTML we’re looking for.
Similar to the requests package, we can use a session object to get the webpage we need. This gets stored in a response variable, resp. If you print out resp you should see <Response [200]>, which means the connection to the webpage was successful (otherwise you'll get a different message).
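A minimal sketch of that step, assuming the same Netflix options URL as before:
***start of code***
from requests_html import HTMLSession

session = HTMLSession()

# Assumed URL of the Yahoo Finance options page for Netflix
url = 'https://finance.yahoo.com/quote/NFLX/options?p=NFLX'

resp = session.get(url)
print(resp)  # <Response [200]> on success
***end of code***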
Running resp.html will give us an object that allows us to print out, search through, and perform several functions on the webpage’s HTML. To simulate running the JavaScript code, we use the render method on the resp.html object. Note how we don’t need to set a variable equal to this rendered result i.e. running the below code:
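***start of code***
resp.html.render()
***end of code***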
stores the updated HTML as an attribute in resp.html. Specifically, we can access the rendered HTML like this:
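***start of code***
rendered_html = resp.html.html
***end of code***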
So now resp.html.html contains the rendered HTML, including the option tags we need. From here, we can parse out the expiration dates from these tags using the find method.
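A minimal sketch of that step:
***start of code***
# Find all <option> tags in the rendered HTML and collect their text
option_tags = resp.html.find('option')
expiration_dates = [tag.text for tag in option_tags]
print(expiration_dates)
***end of code***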
Similarly, if we wanted to search for other HTML tags we could just input whatever those are into the find method e.g. anchor (a), paragraph (p), header tags (h1, h2, h3, etc.) and so on.
Alternatively, we could also use BeautifulSoup on the rendered HTML (see below). However, the awesome point here is that we can create the connection to this webpage, render its JavaScript, and parse out the resultant HTML all in one package!
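A sketch of that alternative, reusing the rendered HTML from above:
***start of code***
from bs4 import BeautifulSoup

# Parse the JavaScript-rendered HTML with BeautifulSoup instead
soup = BeautifulSoup(resp.html.html, 'html.parser')
option_tags = soup.find_all('option')
expiration_dates = [tag.text for tag in option_tags]
***end of code***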
Lastly, we could scrape this particular webpage directly with yahoo_fin, which provides functions that wrap around requests_html specifically for Yahoo Finance’s website.
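For example, sketched with the get_expiration_dates helper from yahoo_fin's options module:
***start of code***
from yahoo_fin import options

# Pull the expiration dates for Netflix directly from Yahoo Finance
dates = options.get_expiration_dates('NFLX')
print(dates)
***end of code***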
Scraping options data for each expiration date
Once we have the expiration dates, we could proceed with scraping the data associated with each date. In this particular case, the pattern of the URL for each expiration date’s data requires the date be converted to Unix timestamp format. This can be done using the pandas package.
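A minimal sketch of that conversion; the date string below is just an illustrative example:
***start of code***
import pandas as pd

# Convert an example expiration date to a Unix timestamp (seconds since the epoch)
date = 'January 15, 2021'  # illustrative date
unix_ts = int(pd.Timestamp(date).timestamp())
print(unix_ts)
***end of code***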
Similarly, we could scrape this data using yahoo_fin. In this case, we just input the ticker symbol, NFLX and associated expiration date into either get_calls or get_puts to obtain the calls and puts data, respectively.
Note: here we don’t need to convert each date to a Unix timestamp as these functions will figure that out automatically from the input dates.
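For example, sketched with an illustrative expiration date:
***start of code***
from yahoo_fin import options

# Calls and puts for Netflix for one expiration date (the date is illustrative)
calls = options.get_calls('NFLX', 'January 15, 2021')
puts = options.get_puts('NFLX', 'January 15, 2021')
***end of code***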
That’s it for this post! To learn more about requests-html, check out my web scraping course on Udemy here!
To see the official documentation for requests_html, click here.