Web Scraping with Python
Selectors
We already know how to locate any part of an HTML file with an XPath. But how can we actually extract text using XPaths? For this purpose, you can import Selector from the scrapy library, which helps select the data. Then create a selector object, passing the HTML you want to work with as the text parameter:
Here the first line imports the class we need, and the second creates a selector object to work with.
To get any part of the HTML file you want, just call the .xpath() method of the selector object with your path as the parameter:

title = sel.xpath("//title")
print(title)
The call returns a list of all matching title tags as selector objects, which can be inconvenient to use directly. To get a tag as a string, index into the list and apply the .extract() method:
print(title[0].extract())
Here we selected the first of the extracted title tags and converted it to a string.
BeautifulSoup doesn't provide functions for working with XPaths the way it does for CSS locators (which we will consider later). However, you should know how XPaths work: they are an extremely powerful tool, and many other libraries popular with advanced web scrapers support them (the lxml library, for example).
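For comparison, the same XPath syntax works in lxml; a brief sketch (the HTML string here is hypothetical):

```python
from lxml import etree

# Hypothetical HTML fragment for demonstration
html = "<html><body><p>Hello</p></body></html>"

# Parse the HTML and run the same kind of XPath query
tree = etree.HTML(html)
texts = tree.xpath("//p/text()")
print(texts)  # → ['Hello']
```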
Task
Let's return to our websites. Here we work with the following page. You should:

- Import Selector from scrapy to extract the text using XPaths.
- Get all p tags using XPaths and save the list of tags in the variable p_tags.
- Get the fourth element of the list p_tags as a string and print it.