YouTube is the most popular video-sharing platform in the world, and as a hacker you may encounter a situation where you want to script video downloads. For this, I present to you pytube.
pytube is a lightweight library written in Python. It has no third-party dependencies and aims to be highly reliable.
pytube also makes pipelining easy, allowing you to specify callback functions for different download events, such as on progress or on complete.
Finally pytube also includes a command-line utility, allowing you to quickly download videos right from the terminal.
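The callback hooks mentioned above can be sketched as follows. This is a minimal sketch, assuming pytube is installed via pip: the callback signatures match pytube's on_progress_callback and on_complete_callback hooks, but the YouTube(...) wiring is left commented out because it needs network access, and the video URL is a placeholder.

```python
# Sketch of pytube's progress/complete callbacks. The function signatures
# match what pytube passes to on_progress_callback / on_complete_callback.

def on_progress(stream, chunk, bytes_remaining):
    # pytube calls this after each chunk is written to disk.
    done = stream.filesize - bytes_remaining
    print(f"{done * 100 // stream.filesize}% downloaded")

def on_complete(stream, file_path):
    # pytube calls this once the file has been fully written.
    print(f"Saved to {file_path}")

# Wiring it up (requires pytube and network access; URL is a placeholder):
# from pytube import YouTube
# yt = YouTube(
#     "https://www.youtube.com/watch?v=<video-id>",
#     on_progress_callback=on_progress,
#     on_complete_callback=on_complete,
# )
# yt.streams.get_highest_resolution().download()
```

Because the callbacks are plain functions, you can swap in anything here, such as a progress bar or a log write, without touching the download code.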
The simplest way to install pytube is with pip (pip install pytube); you can also grab the source from the project's repository if you prefer. After a successful installation, you can confirm it worked with the following:
>>> import pytube
The Python ecosystem provides fully featured libraries to help us make network requests. The primary HTTP libraries are urllib, httplib2, requests, treq, etc.
With the urllib library you only need to supply the target URL, the parameters to pass, and optional request headers; you don't need to drop down to the transport layer and understand how the connection is made. With it, two lines of code can complete a request and retrieve the contents of a webpage.
In Python 2 there were two libraries for sending requests, urllib and urllib2. In Python 3, urllib2 no longer exists; everything was consolidated into urllib, and the official documentation is at https://docs.python.org/3/library/urllib.html.
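The "two lines of code" claim can be sketched with urllib.request. A live fetch needs network access, so that part is shown in comments; below it, a Request object is built offline to show where the URL, query parameters, and headers go. The example.com host, the q parameter, and the User-Agent string are all placeholders of my own.

```python
from urllib import parse, request

# The two-line fetch (requires network access):
# response = request.urlopen("http://example.com")
# html = response.read().decode("utf-8")

# Building the same request offline: URL, query parameters, and a header.
params = parse.urlencode({"q": "pytube"})
req = request.Request(
    "http://example.com/search?" + params,
    headers={"User-Agent": "my-crawler/0.1"},
)
print(req.full_url)                   # http://example.com/search?q=pytube
print(req.get_header("User-agent"))   # my-crawler/0.1
```

Passing a Request object to urlopen() instead of a bare string is how you attach custom headers, which many sites require before they will answer a scripted client.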
The urllib library is the HTTP request library included with Python, which means it can be used without any additional installation.
It contains the following four modules.
request: The most basic HTTP request module, used to simulate sending a request. Just like entering a URL into the browser and pressing Enter, you only need to pass the URL and additional parameters to the library's methods to simulate this process.
error: The exception-handling module. If a request fails, you can catch these exceptions and then retry or take other action to ensure the program does not terminate unexpectedly.
parse: A utility module that provides several URL-processing methods, such as splitting, parsing, and merging.
robotparser: Mainly used to parse a website's robots.txt file and determine which pages can be crawled and which cannot. It is less commonly used.
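The error, parse, and robotparser modules from the list above can be sketched together. This is a minimal sketch under a few assumptions of my own: the URLs and robots.txt rules are made up for illustration, the .invalid hostname is used because that top-level domain never resolves (so the request reliably fails), and RobotFileParser.parse() is fed raw lines so no network access is needed.

```python
from urllib import error, parse, robotparser
from urllib.request import urlopen

# parse: splitting, merging, and encoding URLs.
parts = parse.urlparse("https://example.com/videos?id=42")
print(parts.netloc, parts.path, parts.query)          # example.com /videos id=42
print(parse.urljoin("https://example.com/a/b", "c"))  # https://example.com/a/c

# error: catch URLError so a failed request does not kill the program.
try:
    urlopen("http://nonexistent.invalid/", timeout=5)  # .invalid never resolves
except error.URLError as e:
    print("request failed:", e.reason)

# robotparser: decide which paths a crawler may fetch.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])
print(rp.can_fetch("*", "https://example.com/private/page"))  # False
print(rp.can_fetch("*", "https://example.com/index.html"))    # True
```

In a real crawler you would call rp.set_url(...) and rp.read() to fetch the site's actual robots.txt, then check can_fetch() before each request.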
Since urllib ships with Python's standard library, there is nothing to download or pip-install. You can confirm it is available with:
>>> import urllib