---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "man/figures/README-",
out.width = "100%"
)
```
# ralger <a><img src='man/figures/logo.png' align="right" height="200" /></a>
<!-- badges: start -->
[![CRAN_Status_Badge](https://www.r-pkg.org/badges/version/ralger)](https://cran.r-project.org/package=ralger)
[![CRAN_time_from_release](https://www.r-pkg.org/badges/ago/ralger)](https://cran.r-project.org/package=ralger)
[![CRAN_latest_release_date](https://www.r-pkg.org/badges/last-release/ralger)](https://cran.r-project.org/package=ralger)
[![metacran downloads](https://cranlogs.r-pkg.org/badges/ralger)](https://cran.r-project.org/package=ralger)
[![metacran downloads](https://cranlogs.r-pkg.org/badges/grand-total/ralger)](https://cran.r-project.org/package=ralger)
<!-- [![license](https://img.shields.io/github/license/mashape/apistatus.svg)](https://choosealicense.com/licenses/mit/) -->
[![R badge](https://img.shields.io/badge/Build%20with-♥%20and%20R-blue)](https://github.com/feddelegrand7/ralger)
[![R badge](https://img.shields.io/badge/-Sponsor-brightgreen)](https://www.buymeacoffee.com/Fodil)
[![R build status](https://github.com/feddelegrand7/ralger/workflows/R-CMD-check/badge.svg)](https://github.com/feddelegrand7/ralger/actions)
[![Codecov test coverage](https://codecov.io/gh/feddelegrand7/ralger/branch/master/graph/badge.svg)](https://codecov.io/gh/feddelegrand7/ralger?branch=master)
<!-- badges: end -->
The goal of **ralger** is to facilitate web scraping in R. For a quick video tutorial, see the talk I gave at useR! 2020, available [here](https://www.youtube.com/watch?v=OHi6E8jegQg).
## Installation
You can install the `ralger` package from [CRAN](https://cran.r-project.org/) with:
```{r eval=FALSE}
install.packages("ralger")
```
or you can install the development version from [GitHub](https://github.com/) with:
``` r
# install.packages("devtools")
devtools::install_github("feddelegrand7/ralger")
```
## `scrap()`
This is an example which shows how to extract [top ranked universities' names](http://www.shanghairanking.com/rankings/arwu/2021) according to the ShanghaiRanking Consultancy:
```{r example}
library(ralger)
my_link <- "http://www.shanghairanking.com/rankings/arwu/2021"
my_node <- "a span" # the CSS selector; I recommend SelectorGadget if you're not familiar with CSS selectors
clean <- TRUE # should the function clean the extracted vector? Defaults to FALSE
best_uni <- scrap(link = my_link, node = my_node, clean = clean)
head(best_uni, 10)
```
Thanks to the [robotstxt](https://github.com/ropensci/robotstxt) package, you can set `askRobot = TRUE` to ask the `robots.txt` file whether it's permitted to scrape a specific web page.
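A minimal sketch, reusing `my_link` and `my_node` from the example above:

``` r
# Same call as before, but consulting robots.txt before scraping
best_uni <- scrap(link = my_link, node = my_node, askRobot = TRUE)
```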
If you want to scrape multiple list pages, just use `scrap()` in conjunction with `paste0()`.
Suppose that you want to scrape all `RStudio::conf 2021` speakers:
```{r}
base_link <- "https://global.rstudio.com/student/catalog/list?category_ids=1796-speakers&page="
links <- paste0(base_link, 1:3) # the speakers are listed from page 1 to 3
node <- ".pr-1"
head(scrap(links, node), 10) # printing the first 10 speakers
```
## `attribute_scrap()`
If you need to scrape some elements' attributes, you can use the `attribute_scrap()` function as in the following example:
```{r}
# Getting all class names from the anchor elements
# on the ropensci website
attributes <- attribute_scrap(
  link = "https://ropensci.org/",
  node = "a",    # the <a> tag
  attr = "class" # getting the class attribute
)
head(attributes, 10) # NA values correspond to <a> tags without a class attribute
```
As another example, let's say we want to get all the JavaScript dependencies within the same web page:
```{r}
js_depend <- attribute_scrap(
  link = "https://ropensci.org/",
  node = "script",
  attr = "src"
)
js_depend
```
## `table_scrap()`
If you want to extract an __HTML table__, you can use the `table_scrap()` function. Take a look at this [webpage](https://www.boxofficemojo.com/chart/top_lifetime_gross/?area=XWW), which lists the highest gross revenues in the cinema industry. You can extract the HTML table as follows:
```{r}
data <- table_scrap(link = "https://www.boxofficemojo.com/chart/top_lifetime_gross/?area=XWW")
head(data)
```
__When you deal with a web page that contains many HTML tables, you can use the `choose` argument to target a specific table.__
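A minimal sketch, assuming the page exposed more than one table and you wanted the second one:

``` r
# choose = 2 targets the second HTML table on the page
second_table <- table_scrap(
  link = "https://www.boxofficemojo.com/chart/top_lifetime_gross/?area=XWW",
  choose = 2
)
```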
## `tidy_scrap()`
Sometimes you'll find useful information on the internet that you want to extract in tabular form; however, this information is not provided as an HTML table. In this context, you can use the `tidy_scrap()` function, which returns a tidy data frame according to the arguments that you introduce. The function takes five arguments:
- **link**: the link of the website you're interested in;
- **nodes**: a vector of CSS elements that you want to extract. These elements will form the columns of your data frame;
- **colnames**: this argument represents the vector of names you want to assign to your columns. Note that you should respect the same order as within the **nodes** vector;
- **clean**: if `TRUE`, the function will clean the tibble's columns;
- **askRobot**: ask the robots.txt file if it's permitted to scrape the web page.
### Example
We'll work on the famous [IMDb website](https://www.imdb.com/). Let's say we need a data frame composed of:
- The title of the 50 best ranked movies of all time
- Their release year
- Their rating
We will need to use the `tidy_scrap()` function as follows:
```{r example3, message=FALSE, warning=FALSE}
my_link <- "https://www.imdb.com/search/title/?groups=top_250&sort=user_rating"
my_nodes <- c(
  ".lister-item-header a",      # The title
  ".text-muted.unbold",         # The year of release
  ".ratings-imdb-rating strong" # The rating
)
names <- c("title", "year", "rating") # respect the nodes order
tidy_scrap(link = my_link, nodes = my_nodes, colnames = names)
```
Note that all columns will be of *character* class. You'll have to convert them according to your needs.
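For instance, a minimal base R sketch, assuming the result of the call above is stored first and the year values look like `"(1994)"`:

``` r
movies <- tidy_scrap(link = my_link, nodes = my_nodes, colnames = names)
# rating arrives as character; convert it to numeric
movies$rating <- as.numeric(movies$rating)
# keep only the digits of the year, e.g. "(1994)" -> 1994
movies$year <- as.numeric(gsub("\\D", "", movies$year))
```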
## `titles_scrap()`
Using `titles_scrap()`, one can efficiently scrape the titles that correspond to the _h1_, _h2_ and _h3_ HTML tags.
### Example
If we go to the [New York Times](https://www.nytimes.com/), we can easily extract the titles displayed within a specific web page:
```{r example4}
titles_scrap(link = "https://www.nytimes.com/")
```
Further, it's possible to filter the results using the `contain` argument:
```{r}
titles_scrap(link = "https://www.nytimes.com/", contain = "TrUMp", case_sensitive = FALSE)
```
## `paragraphs_scrap()`
In the same way, we can use the `paragraphs_scrap()` function to extract paragraphs. This function relies on the `p` HTML tag.
Let's get some paragraphs from the lovely [ropensci.org](https://ropensci.org/) website:
```{r}
paragraphs_scrap(link = "https://ropensci.org/")
```
If needed, it's possible to collapse the paragraphs into one bag of words:
```{r}
paragraphs_scrap(link = "https://ropensci.org/", collapse = TRUE)
```
## `weblink_scrap()`
`weblink_scrap()` is used to scrape the web links available within a web page. It is useful in some cases, for example, for getting a list of the available PDFs:
```{r}
weblink_scrap(
  link = "https://www.worldbank.org/en/access-to-information/reports/",
  contain = "PDF",
  case_sensitive = FALSE
)
```
## `images_scrap()` and `images_preview()`
`images_preview()` allows you to scrape the URLs of the images available within a web page, so that you can choose which image __extensions__ (see below) you want to focus on.
Let's say we want to list all the images from the official [RStudio](https://rstudio.com/) website:
```{r}
images_preview(link = "https://rstudio.com/")
```
`images_scrap()`, on the other hand, downloads the images. It takes the following arguments:
+ **link**: the URL of the web page;
+ **imgpath**: the destination folder for your images; defaults to `getwd()`;
+ **extn**: the extension of the images: jpg, png, jpeg... among others;
+ **askRobot**: ask the `robots.txt` file if it's permitted to scrape the web page.
In the following example, we extract all the `png` images from [RStudio](https://rstudio.com/):
```{r, eval=FALSE}
# Suppose we're in a project which has a folder called my_images:
images_scrap(
  link = "https://rstudio.com/",
  imgpath = here::here("my_images"),
  extn = "png" # without the dot
)
```
# Accessibility-related functions
## `images_noalt_scrap()`
`images_noalt_scrap()` can be used to get the images within a specific web page that don't have an `alt` attribute, which can be annoying for people using a screen reader:
```{r}
images_noalt_scrap(link = "https://www.r-consortium.org/")
```
If no images without `alt` attributes are found, the function returns `NULL` and displays an informative message:
```{r}
# WebAim is the reference website for web accessibility
images_noalt_scrap(link = "https://webaim.org/techniques/forms/controls")
```
## Code of Conduct
Please note that the ralger project is released with a [Contributor Code of Conduct](https://contributor-covenant.org/version/2/0/CODE_OF_CONDUCT.html). By contributing to this project, you agree to abide by its terms.