From c30113599b57a0df082bb4a4926f3dacb12b008a Mon Sep 17 00:00:00 2001 From: Mark Edmondson Date: Sun, 17 Jun 2018 21:23:45 +0200 Subject: [PATCH] documentation update for #20 --- R/natural-language.R | 2 +- README.Rmd | 8 +- docs/CONTRIBUTING.html | 31 ++- docs/LICENSE-text.html | 31 ++- docs/articles/index.html | 31 ++- docs/articles/nlp.html | 171 ++++++++------ docs/articles/setup.html | 30 +-- docs/articles/speech.html | 30 +-- docs/articles/text-to-speech.html | 30 +-- docs/articles/translation.html | 30 +-- docs/authors.html | 31 ++- docs/docsearch.css | 148 ++++++++++++ docs/docsearch.js | 85 +++++++ docs/index.html | 64 +++-- docs/news/index.html | 33 +-- docs/pkgdown.css | 13 ++ docs/pkgdown.js | 165 ++++++------- docs/pkgdown.yml | 4 +- docs/reference/gl_auth.html | 35 +-- docs/reference/gl_nlp.html | 48 ++-- docs/reference/gl_speech.html | 35 +-- docs/reference/gl_speech_op.html | 35 +-- docs/reference/gl_talk.html | 35 +-- docs/reference/gl_talk_languages.html | 35 +-- docs/reference/gl_talk_player.html | 35 +-- docs/reference/gl_talk_shiny.html | 259 +++++++++++++++++++++ docs/reference/gl_talk_shinyUI.html | 221 ++++++++++++++++++ docs/reference/gl_translate.html | 35 +-- docs/reference/gl_translate_detect.html | 35 +-- docs/reference/gl_translate_languages.html | 35 +-- docs/reference/googleLanguageR.html | 35 +-- docs/reference/index.html | 66 ++++-- docs/reference/is.NullOb.html | 35 +-- docs/reference/rmNullObs.html | 35 +-- docs/sitemap.xml | 9 + man/gl_nlp.Rd | 2 +- 36 files changed, 1458 insertions(+), 504 deletions(-) create mode 100644 docs/docsearch.css create mode 100644 docs/docsearch.js create mode 100644 docs/reference/gl_talk_shiny.html create mode 100644 docs/reference/gl_talk_shinyUI.html diff --git a/R/natural-language.R b/R/natural-language.R index a390921..6965fcb 100644 --- a/R/natural-language.R +++ b/R/natural-language.R @@ -25,7 +25,7 @@ #' \item{tokens - }{\href{https://cloud.google.com/natural-language/docs/reference/rest/v1/Token}{Tokens, along with their syntactic information, in the input document}} #' \item{entities - }{\href{https://cloud.google.com/natural-language/docs/reference/rest/v1/Entity}{Entities, along with their semantic information, in the input document}} #' \item{documentSentiment - }{\href{https://cloud.google.com/natural-language/docs/reference/rest/v1/Sentiment}{The overall sentiment for the document}} -#' \item{classifyText -}{\href{https://cloud.google.com/natural-language/docs/classifying-text}} +#' \item{classifyText -}{\href{https://cloud.google.com/natural-language/docs/classifying-text}{Classification of the document}} #' \item{language - }{The language of the text, which will be the same as the language specified in the request or, if not specified, the automatically-detected language} #' \item{text - }{The original text passed into the API. \code{NA} if not passed due to being zero-length etc. } #' } diff --git a/README.Rmd b/README.Rmd index 824718f..59dd27f 100644 --- a/README.Rmd +++ b/README.Rmd @@ -110,7 +110,7 @@ The Natural Language API returns natural language understanding technolgies. Yo You can pass a vector of text which will call the API for each element. The return is a list of responses, each response being a list of tibbles holding the different types of analysis. 
-```{r} +```r texts <- c("to administer medicince to animals is frequently a very difficult matter, and yet sometimes it's necessary to do so", "I don't know how to make a text demo that is sensible") nlp_result <- gl_nlp(texts) @@ -127,7 +127,7 @@ You can detect the language via `gl_translate_detect`, or translate and detect l Note this is a lot more refined than the free version on Google's translation website. -```{r} +```r text <- "to administer medicine to animals is frequently a very difficult matter, and yet sometimes it's necessary to do so" ## translate British into Danish gl_translate(text, target = "da")$translatedText @@ -145,7 +145,7 @@ A test audio file is installed with the package which reads: The file is sourced from the [University of Southampton's speech detection](http://www-mobile.ecs.soton.ac.uk/newcomms/) group and is fairly difficult for computers to parse, as we see below: -```{r} +```r ## get the sample source file test_audio <- system.file("woman1_wb.wav", package = "googleLanguageR") @@ -165,7 +165,7 @@ A test audio file is installed with the package which reads: The file is sourced from the [University of Southampton's speech detection](http://www-mobile.ecs.soton.ac.uk/newcomms/) group and is fairly difficult for computers to parse, as we see below: -``` r +```r ## get the sample source file test_audio <- system.file("woman1_wb.wav", package = "googleLanguageR") diff --git a/docs/CONTRIBUTING.html b/docs/CONTRIBUTING.html index 77b767b..3f3989d 100644 --- a/docs/CONTRIBUTING.html +++ b/docs/CONTRIBUTING.html @@ -21,13 +21,19 @@ + + + - + + + + @@ -37,20 +43,16 @@ - + + - - @@ -140,7 +142,12 @@ diff --git a/docs/LICENSE-text.html b/docs/LICENSE-text.html index 7112d73..e962c97 100644 --- a/docs/LICENSE-text.html +++ b/docs/LICENSE-text.html @@ -21,13 +21,19 @@ + + + - + + + + @@ -37,20 +43,16 @@ - + + - - @@ -140,7 +142,12 @@ diff --git a/docs/articles/index.html b/docs/articles/index.html index af959ba..f5a8282 100644 --- a/docs/articles/index.html +++ b/docs/articles/index.html @@ -21,13 +21,19 @@ + + + - + + + + @@ -37,20 +43,16 @@ - + + - - @@ -140,7 +142,12 @@ diff --git a/docs/articles/nlp.html b/docs/articles/nlp.html index 404fcdd..b73b2eb 100644 --- a/docs/articles/nlp.html +++ b/docs/articles/nlp.html @@ -8,22 +8,19 @@ Google Natural Language API • googleLanguageR - - + + @@ -110,7 +107,14 @@ - + @@ -125,9 +129,9 @@

Google Natural Language API

Mark Edmondson

-

2018-04-05

- +

2018-06-17

+ Source: vignettes/nlp.Rmd @@ -139,11 +143,13 @@

2018-04-05

The Natural Language API returns natural language understanding technologies. You can call them individually, or the default is to return them all. The available returns are:

@@ -151,82 +157,109 @@

You can pass a vector of text which will call the API for each element. The return is a list of responses, each response being a list of tibbles holding the different types of analysis.

library(googleLanguageR)
 
-texts <- c("to administer medicince to animals is frequently a very difficult matter,
-         and yet sometimes it's necessary to do so", 
-         "I don't know how to make a text demo that is sensible")
+# random text from Wikipedia
+texts <- c("Norma is a small constellation in the Southern Celestial Hemisphere between Ara and Lupus, one of twelve drawn up in the 18th century by French astronomer Nicolas Louis de Lacaille and one of several depicting scientific instruments. Its name refers to a right angle in Latin, and is variously considered to represent a rule, a carpenter's square, a set square or a level. It remains one of the 88 modern constellations. Four of Norma's brighter stars make up a square in the field of faint stars. Gamma2 Normae is the brightest star with an apparent magnitude of 4.0. Mu Normae is one of the most luminous stars known, but is partially obscured by distance and cosmic dust. Four star systems are known to harbour planets. ", 
+         "Solomon Wariso (born 11 November 1966 in Portsmouth) is a retired English sprinter who competed primarily in the 200 and 400 metres.[1] He represented his country at two outdoor and three indoor World Championships and is the British record holder in the indoor 4 × 400 metres relay.")
 nlp_result <- gl_nlp(texts)

Each text has its own entry in the returned tibbles

str(nlp_result, max.level = 2)
-#List of 6
-# $ sentences        :List of 2
-#  ..$ :'data.frame':   1 obs. of  4 variables:
-#  ..$ :'data.frame':   1 obs. of  4 variables:
-# $ tokens           :List of 2
-#  ..$ :'data.frame':   21 obs. of  17 variables:
-#  ..$ :'data.frame':   13 obs. of  17 variables:
-# $ entities         :List of 2
-#  ..$ :Classes ‘tbl_df’, ‘tbl’ and 'data.frame':   3 obs. of  9 variables:
-#  ..$ :Classes ‘tbl_df’, ‘tbl’ and 'data.frame':   1 obs. of  9 variables:
-# $ language         : chr [1:2] "en" "en"
-# $ text             : chr [1:2] "to administer medicince to animals is frequently a very difficult matter,\n   #      and yet sometimes it's necessary to do so" "I don't know how to make a text demo that is sensible"
-# $ documentSentiment:Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 2 obs. of  2 variables:
-#  ..$ magnitude: num [1:2] 0.5 0.3
-#  ..$ score    : num [1:2] 0.5 -0.3
+List of 7
+ $ sentences        :List of 2
+  ..$ :'data.frame':   7 obs. of  4 variables:
+  ..$ :'data.frame':   1 obs. of  4 variables:
+ $ tokens           :List of 2
+  ..$ :'data.frame':   139 obs. of  17 variables:
+  ..$ :'data.frame':   54 obs. of  17 variables:
+ $ entities         :List of 2
+  ..$ :Classes ‘tbl_df’, ‘tbl’ and 'data.frame':   52 obs. of  9 variables:
+  ..$ :Classes ‘tbl_df’, ‘tbl’ and 'data.frame':   8 obs. of  9 variables:
+ $ language         : chr [1:2] "en" "en"
+ $ text             : chr [1:2] "Norma is a small constellation in the Southern Celestial Hemisphere between Ara and Lupus, one of twelve drawn "| __truncated__ "Solomon Wariso (born 11 November 1966 in Portsmouth) is a retired English sprinter who competed primarily in th"| __truncated__
+ $ documentSentiment:Classes ‘tbl_df’, ‘tbl’ and 'data.frame':   2 obs. of  2 variables:
+  ..$ magnitude: num [1:2] 2.4 0.1
+  ..$ score    : num [1:2] 0.3 0.1
+ $ classifyText     :Classes ‘tbl_df’, ‘tbl’ and 'data.frame':   1 obs. of  2 variables:
+  ..$ name      : chr "/Science/Astronomy"
+  ..$ confidence: num 0.93

Sentence structure and sentiment:

## sentences structure
 nlp_result$sentences[[2]]
-#                                                content beginOffset magnitude score
-#1 I don't know how to make a text demo that is sensible           0       0.3  -0.3
+
+content
+1 Solomon Wariso (born 11 November 1966 in Portsmouth) is a retired English sprinter who competed primarily in the 200 and 400 metres.[1] He represented his country at two outdoor and three indoor World Championships and is the British record holder in the indoor 4 × 400 metres relay.
+  beginOffset magnitude score
+1           0       0.1   0.1

Information on what words (tokens) are within each text:

# word tokens data
 str(nlp_result$tokens[[1]])
-#'data.frame':  6 obs. of  17 variables:
-# $ content       : chr  "to" "administer" "medicince" "to" ...
-# $ beginOffset   : int  0 3 14 24 27 35
-# $ tag           : chr  "PRT" "VERB" "NOUN" "ADP" ...
-# $ aspect        : chr  "ASPECT_UNKNOWN" "ASPECT_UNKNOWN" "ASPECT_UNKNOWN" "ASPECT_UNKNOWN" ...
-# $ case          : chr  "CASE_UNKNOWN" "CASE_UNKNOWN" "CASE_UNKNOWN" "CASE_UNKNOWN" ...
-# $ form          : chr  "FORM_UNKNOWN" "FORM_UNKNOWN" "FORM_UNKNOWN" "FORM_UNKNOWN" ...
-# $ gender        : chr  "GENDER_UNKNOWN" "GENDER_UNKNOWN" "GENDER_UNKNOWN" "GENDER_UNKNOWN" ...
-# $ mood          : chr  "MOOD_UNKNOWN" "MOOD_UNKNOWN" "MOOD_UNKNOWN" "MOOD_UNKNOWN" ...
-# $ number        : chr  "NUMBER_UNKNOWN" "NUMBER_UNKNOWN" "SINGULAR" "NUMBER_UNKNOWN" ...
-# $ person        : chr  "PERSON_UNKNOWN" "PERSON_UNKNOWN" "PERSON_UNKNOWN" "PERSON_UNKNOWN" ...
-# $ proper        : chr  "PROPER_UNKNOWN" "PROPER_UNKNOWN" "PROPER_UNKNOWN" "PROPER_UNKNOWN" ...
-# $ reciprocity   : chr  "RECIPROCITY_UNKNOWN" "RECIPROCITY_UNKNOWN" "RECIPROCITY_UNKNOWN" "RECIPROCITY_UNKNOWN" #...
-# $ tense         : chr  "TENSE_UNKNOWN" "TENSE_UNKNOWN" "TENSE_UNKNOWN" "TENSE_UNKNOWN" ...
-# $ voice         : chr  "VOICE_UNKNOWN" "VOICE_UNKNOWN" "VOICE_UNKNOWN" "VOICE_UNKNOWN" ...
-# $ headTokenIndex: int  1 5 1 1 3 5
-# $ label         : chr  "AUX" "CSUBJ" "DOBJ" "PREP" ...
-# $ value         : chr  "to" "administer" "medicince" "to" ...
+'data.frame':   139 obs. of  17 variables:
+ $ content       : chr  "Norma" "is" "a" "small" ...
+ $ beginOffset   : int  0 6 9 11 17 31 34 38 47 57 ...
+ $ tag           : chr  "NOUN" "VERB" "DET" "ADJ" ...
+ $ aspect        : chr  "ASPECT_UNKNOWN" "ASPECT_UNKNOWN" "ASPECT_UNKNOWN" "ASPECT_UNKNOWN" ...
+ $ case          : chr  "CASE_UNKNOWN" "CASE_UNKNOWN" "CASE_UNKNOWN" "CASE_UNKNOWN" ...
+ $ form          : chr  "FORM_UNKNOWN" "FORM_UNKNOWN" "FORM_UNKNOWN" "FORM_UNKNOWN" ...
+ $ gender        : chr  "GENDER_UNKNOWN" "GENDER_UNKNOWN" "GENDER_UNKNOWN" "GENDER_UNKNOWN" ...
+ $ mood          : chr  "MOOD_UNKNOWN" "INDICATIVE" "MOOD_UNKNOWN" "MOOD_UNKNOWN" ...
+ $ number        : chr  "SINGULAR" "SINGULAR" "NUMBER_UNKNOWN" "NUMBER_UNKNOWN" ...
+ $ person        : chr  "PERSON_UNKNOWN" "THIRD" "PERSON_UNKNOWN" "PERSON_UNKNOWN" ...
+ $ proper        : chr  "PROPER" "PROPER_UNKNOWN" "PROPER_UNKNOWN" "PROPER_UNKNOWN" ...
+ $ reciprocity   : chr  "RECIPROCITY_UNKNOWN" "RECIPROCITY_UNKNOWN" "RECIPROCITY_UNKNOWN" "RECIPROCITY_UNKNOWN" ...
+ $ tense         : chr  "TENSE_UNKNOWN" "PRESENT" "TENSE_UNKNOWN" "TENSE_UNKNOWN" ...
+ $ voice         : chr  "VOICE_UNKNOWN" "VOICE_UNKNOWN" "VOICE_UNKNOWN" "VOICE_UNKNOWN" ...
+ $ headTokenIndex: int  1 1 4 4 1 4 9 9 9 5 ...
+ $ label         : chr  "NSUBJ" "ROOT" "DET" "AMOD" ...
+ $ value         : chr  "Norma" "be" "a" "small" ...

What entities within the text have been identified, with an optional Wikipedia URL if it's available.

nlp_result$entities
-#[[1]]
-# A tibble: 3 x 9
-#       name  type  salience    mid wikipedia_url magnitude score beginOffset mention_type
-#      <chr> <chr>     <dbl> <fctr>        <fctr>     <dbl> <dbl>       <int>        <chr>
-#1   animals OTHER 0.2449778   <NA>          <NA>        NA    NA          27       COMMON
-#2    matter OTHER 0.2318689   <NA>          <NA>        NA    NA          66       COMMON
-#3 medicince OTHER 0.5231533   <NA>          <NA>        NA    NA          14       COMMON
-
-#[[2]]
-# A tibble: 1 x 9
-#       name        type salience    mid wikipedia_url magnitude score beginOffset mention_type
-#      <chr>       <chr>    <int> <fctr>        <fctr>     <dbl> <dbl>       <int>        <chr>
-#1 text demo WORK_OF_ART        1   <NA>          <NA>        NA    NA          27       COMMON
+[[1]]
+# A tibble: 52 x 9
+   name           type         salience mid   wikipedia_url magnitude score beginOffset mention_type
+   <chr>          <chr>           <dbl> <chr> <chr>             <dbl> <dbl>       <int> <chr>
+ 1 angle          OTHER         0.0133  NA    NA                  0     0           261 COMMON
+ 2 Ara            ORGANIZATION  0.0631  NA    NA                  0     0            76 PROPER
+ 3 astronomer     NA           NA       NA    NA                 NA    NA           144 COMMON
+ 4 carpenter      PERSON        0.0135  NA    NA                  0     0           328 COMMON
+ 5 constellation  OTHER         0.150   NA    NA                  0     0            17 COMMON
+ 6 constellations OTHER         0.0140  NA    NA                  0.9   0.9         405 COMMON
+ 7 distance       OTHER         0.00645 NA    NA                  0     0           649 COMMON
+ 8 dust           OTHER         0.00645 NA    NA                  0.3  -0.3         669 COMMON
+ 9 field          LOCATION      0.00407 NA    NA                  0.6  -0.6         476 COMMON
+10 French         LOCATION      0.0242  NA    NA                  0     0           137 PROPER
+# ... with 42 more rows
+
+[[2]]
+# A tibble: 8 x 9
+  name                type         salience mid         wikipedia_url    magnitude score beginOffset mention_type
+  <chr>               <chr>           <dbl> <chr>       <chr>                <dbl> <dbl>       <int> <chr>
+1 British             LOCATION       0.0255 NA          NA                     0     0           226 PROPER
+2 country             LOCATION       0.0475 NA          NA                     0     0           155 COMMON
+3 English             OTHER          0.0530 NA          NA                     0     0            66 PROPER
+4 Portsmouth          LOCATION       0.0530 /m/0619_    https://en.wiki…       0     0            41 PROPER
+5 record holder       PERSON         0.0541 NA          NA                     0     0           234 COMMON
+6 Solomon Wariso      ORGANIZATION   0.156  /g/120x5nf6 https://en.wiki…       0     0             0 PROPER
+7 sprinter            PERSON         0.600  NA          NA                     0     0            74 COMMON
+8 World Championships EVENT          0.0113 NA          NA                     0.1   0.1         195 PROPER

Sentiment of the entire text:

nlp_result$documentSentiment
 # A tibble: 2 x 2
   magnitude score
       <dbl> <dbl>
-1       0.5   0.5
-2       0.3  -0.3
+1 2.4 0.3 +2 0.1 0.1 +

The category for the text as defined by the list here.

+nlp_result$classifyText
+# A tibble: 1 x 2
+  name               confidence
+  <chr>                   <dbl>
+1 /Science/Astronomy       0.93
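If only the classification is needed, a minimal sketch (using the `classifyText` option that this update adds to `gl_nlp()`'s `nlp_type` argument) restricts the call to that single analysis:

```r
# request only the document classification, skipping the other analyses.
# Note: classification works best on longer documents; very short strings
# (such as the second demo text) may come back without a category.
class_only <- gl_nlp(texts, nlp_type = "classifyText")
class_only$classifyText
```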

The language for the text:

nlp_result$language
 # [1] "en" "en"

The original text passed in, to aid with working with the output:

nlp_result$text
-# [1] "to administer medicince to animals is frequently a very difficult matter,\n         and yet sometimes  it's necessary to do so"
-# [2] "I don't know how to make a text demo that is sensible"  
+[1] "Norma is a small constellation in the Southern Celestial Hemisphere between Ara and Lupus, one of twelve drawn up in the 18th century by French astronomer Nicolas Louis de Lacaille and one of several depicting scientific instruments. Its name refers to a right angle in Latin, and is variously considered to represent a rule, a carpenter's square, a set square or a level. It remains one of the 88 modern constellations. Four of Norma's brighter stars make up a square in the field of faint stars. Gamma2 Normae is the brightest star with an apparent magnitude of 4.0. Mu Normae is one of the most luminous stars known, but is partially obscured by distance and cosmic dust. Four star systems are known to harbour planets." +[2] "Solomon Wariso (born 11 November 1966 in Portsmouth) is a retired English sprinter who competed primarily in the 200 and 400 metres.[1] He represented his country at two outdoor and three indoor World Championships and is the British record holder in the indoor 4 × 400 metres relay." diff --git a/docs/articles/setup.html b/docs/articles/setup.html index 0ae7bd7..ff70262 100644 --- a/docs/articles/setup.html +++ b/docs/articles/setup.html @@ -8,22 +8,19 @@ Introduction to googleLanguageR • googleLanguageR - - + + @@ -110,7 +107,14 @@ - + @@ -125,9 +129,9 @@

Introduction to googleLanguageR

Mark Edmondson

-

2018-04-05

- +

2018-06-17

+ Source: vignettes/setup.Rmd diff --git a/docs/articles/speech.html b/docs/articles/speech.html index b1fcaf9..534806e 100644 --- a/docs/articles/speech.html +++ b/docs/articles/speech.html @@ -8,22 +8,19 @@ Google Cloud Speech API • googleLanguageR - - + + @@ -110,7 +107,14 @@ - + @@ -125,9 +129,9 @@

Google Cloud Speech API

Mark Edmondson

-

2018-04-05

- +

2018-06-17

+ Source: vignettes/speech.Rmd diff --git a/docs/articles/text-to-speech.html b/docs/articles/text-to-speech.html index c448378..646df28 100644 --- a/docs/articles/text-to-speech.html +++ b/docs/articles/text-to-speech.html @@ -8,22 +8,19 @@ Google Cloud Text-to-Speech API • googleLanguageR - - + + @@ -110,7 +107,14 @@ - + @@ -125,9 +129,9 @@

Google Cloud Text-to-Speech API

Mark Edmondson

-

2018-04-05

- +

2018-06-17

+ Source: vignettes/text-to-speech.Rmd diff --git a/docs/articles/translation.html b/docs/articles/translation.html index 6f9b4c0..4bf468f 100644 --- a/docs/articles/translation.html +++ b/docs/articles/translation.html @@ -8,22 +8,19 @@ Google Cloud Translation API • googleLanguageR - - + + @@ -110,7 +107,14 @@ - + @@ -125,9 +129,9 @@

Google Cloud Translation API

Mark Edmondson

-

2018-04-05

- +

2018-06-17

+ Source: vignettes/translation.Rmd diff --git a/docs/authors.html b/docs/authors.html index ccda772..7e7a975 100644 --- a/docs/authors.html +++ b/docs/authors.html @@ -21,13 +21,19 @@ + + + - + + + + @@ -37,20 +43,16 @@ - + + - - @@ -140,7 +142,12 @@ diff --git a/docs/docsearch.css b/docs/docsearch.css new file mode 100644 index 0000000..e5f1fe1 --- /dev/null +++ b/docs/docsearch.css @@ -0,0 +1,148 @@ +/* Docsearch -------------------------------------------------------------- */ +/* + Source: https://github.com/algolia/docsearch/ + License: MIT +*/ + +.algolia-autocomplete { + display: block; + -webkit-box-flex: 1; + -ms-flex: 1; + flex: 1 +} + +.algolia-autocomplete .ds-dropdown-menu { + width: 100%; + min-width: none; + max-width: none; + padding: .75rem 0; + background-color: #fff; + background-clip: padding-box; + border: 1px solid rgba(0, 0, 0, .1); + box-shadow: 0 .5rem 1rem rgba(0, 0, 0, .175); +} + +@media (min-width:768px) { + .algolia-autocomplete .ds-dropdown-menu { + width: 175% + } +} + +.algolia-autocomplete .ds-dropdown-menu::before { + display: none +} + +.algolia-autocomplete .ds-dropdown-menu [class^=ds-dataset-] { + padding: 0; + background-color: rgb(255,255,255); + border: 0; + max-height: 80vh; +} + +.algolia-autocomplete .ds-dropdown-menu .ds-suggestions { + margin-top: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion { + padding: 0; + overflow: visible +} + +.algolia-autocomplete .algolia-docsearch-suggestion--category-header { + padding: .125rem 1rem; + margin-top: 0; + font-size: 1.3em; + font-weight: 500; + color: #00008B; + border-bottom: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--wrapper { + float: none; + padding-top: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--subcategory-column { + float: none; + width: auto; + padding: 0; + text-align: left +} + +.algolia-autocomplete .algolia-docsearch-suggestion--content { + float: none; + width: auto; + padding: 0 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--content::before { + display: none +} + +.algolia-autocomplete .ds-suggestion:not(:first-child) .algolia-docsearch-suggestion--category-header { + padding-top: .75rem; + margin-top: .75rem; + border-top: 1px solid rgba(0, 0, 0, .1) +} + +.algolia-autocomplete .ds-suggestion .algolia-docsearch-suggestion--subcategory-column { + display: block; + padding: .1rem 1rem; + margin-bottom: 0.1; + font-size: 1.0em; + font-weight: 400 + /* display: none */ +} + +.algolia-autocomplete .algolia-docsearch-suggestion--title { + display: block; + padding: .25rem 1rem; + margin-bottom: 0; + font-size: 0.9em; + font-weight: 400 +} + +.algolia-autocomplete .algolia-docsearch-suggestion--text { + padding: 0 1rem .5rem; + margin-top: -.25rem; + font-size: 0.8em; + font-weight: 400; + line-height: 1.25 +} + +.algolia-autocomplete .algolia-docsearch-footer { + width: 110px; + height: 20px; + z-index: 3; + margin-top: 10.66667px; + float: right; + font-size: 0; + line-height: 0; +} + +.algolia-autocomplete .algolia-docsearch-footer--logo { + background-image: url("data:image/svg+xml;utf8,"); + background-repeat: no-repeat; + background-position: 50%; + background-size: 100%; + overflow: hidden; + text-indent: -9000px; + width: 100%; + height: 100%; + display: block; + transform: translate(-8px); +} + +.algolia-autocomplete .algolia-docsearch-suggestion--highlight { + color: #FF8C00; + background: rgba(232, 189, 54, 0.1) +} + + +.algolia-autocomplete .algolia-docsearch-suggestion--text .algolia-docsearch-suggestion--highlight { + 
box-shadow: inset 0 -2px 0 0 rgba(105, 105, 105, .5) +} + +.algolia-autocomplete .ds-suggestion.ds-cursor .algolia-docsearch-suggestion--content { + background-color: rgba(192, 192, 192, .15) +} diff --git a/docs/docsearch.js b/docs/docsearch.js new file mode 100644 index 0000000..b35504c --- /dev/null +++ b/docs/docsearch.js @@ -0,0 +1,85 @@ +$(function() { + + // register a handler to move the focus to the search bar + // upon pressing shift + "/" (i.e. "?") + $(document).on('keydown', function(e) { + if (e.shiftKey && e.keyCode == 191) { + e.preventDefault(); + $("#search-input").focus(); + } + }); + + $(document).ready(function() { + // do keyword highlighting + /* modified from https://jsfiddle.net/julmot/bL6bb5oo/ */ + var mark = function() { + + var referrer = document.URL ; + var paramKey = "q" ; + + if (referrer.indexOf("?") !== -1) { + var qs = referrer.substr(referrer.indexOf('?') + 1); + var qs_noanchor = qs.split('#')[0]; + var qsa = qs_noanchor.split('&'); + var keyword = ""; + + for (var i = 0; i < qsa.length; i++) { + var currentParam = qsa[i].split('='); + + if (currentParam.length !== 2) { + continue; + } + + if (currentParam[0] == paramKey) { + keyword = decodeURIComponent(currentParam[1].replace(/\+/g, "%20")); + } + } + + if (keyword !== "") { + $(".contents").unmark({ + done: function() { + $(".contents").mark(keyword); + } + }); + } + } + }; + + mark(); + }); +}); + +/* Search term highlighting ------------------------------*/ + +function matchedWords(hit) { + var words = []; + + var hierarchy = hit._highlightResult.hierarchy; + // loop to fetch from lvl0, lvl1, etc. + for (var idx in hierarchy) { + words = words.concat(hierarchy[idx].matchedWords); + } + + var content = hit._highlightResult.content; + if (content) { + words = words.concat(content.matchedWords); + } + + // return unique words + var words_uniq = [...new Set(words)]; + return words_uniq; +} + +function updateHitURL(hit) { + + var words = matchedWords(hit); + var url = ""; + + if (hit.anchor) { + url = hit.url_without_anchor + '?q=' + escape(words.join(" ")) + '#' + hit.anchor; + } else { + url = hit.url + '?q=' + escape(words.join(" ")); + } + + return url; +} diff --git a/docs/index.html b/docs/index.html index 59a9c24..871d81a 100644 --- a/docs/index.html +++ b/docs/index.html @@ -9,8 +9,8 @@ 'Cloud Speech' API and 'Cloud Text-to-Speech' API • googleLanguageR - - + + into sound files."> @@ -118,7 +115,14 @@ - + @@ -250,21 +254,6 @@

# two results of lists of tibbles
str(nlp_result, max.level = 2)
-## List of 6
-##  $ sentences        :List of 2
-##   ..$ :'data.frame': 1 obs. of  4 variables:
-##   ..$ :'data.frame': 1 obs. of  4 variables:
-##  $ tokens           :List of 2
-##   ..$ :'data.frame': 21 obs. of  17 variables:
-##   ..$ :'data.frame': 13 obs. of  17 variables:
-##  $ entities         :List of 2
-##   ..$ :Classes 'tbl_df', 'tbl' and 'data.frame': 3 obs. of  9 variables:
-##   ..$ :Classes 'tbl_df', 'tbl' and 'data.frame': 1 obs. of  9 variables:
-##  $ language         : chr [1:2] "en" "en"
-##  $ text             : chr [1:2] "to administer medicince to animals is frequently a very difficult matter, and yet sometimes it's necessary to do so" "I don't know how to make a text demo that is sensible"
-##  $ documentSentiment:Classes 'tbl_df', 'tbl' and 'data.frame':   2 obs. of  2 variables:
-##   ..$ magnitude: num [1:2] 0.5 0.3
-##   ..$ score    : num [1:2] 0.5 -0.3

See more examples and details on the website or via vignette("nlp", package = "googleLanguageR")

@@ -276,7 +265,6 @@

text <- "to administer medicine to animals is frequently a very difficult matter, and yet sometimes it's necessary to do so"
 ## translate British into Danish
 gl_translate(text, target = "da")$translatedText
-## [1] "at administrere medicin til dyr er ofte en meget vanskelig sag, og dog er det undertiden nødvendigt at gøre det"

See more examples and details on the website or via vignette("translate", package = "googleLanguageR")

@@ -293,10 +281,6 @@

## its not perfect but...:)
gl_speech(test_audio)$transcript

-## # A tibble: 1 x 2
-##   transcript                                                    confidence
-##   <chr>                                                         <chr>     
-## 1 to administer medicine to animals is frequency of very diffi… 0.918081

See more examples and details on the website or via vignette("speech", package = "googleLanguageR")

@@ -312,11 +296,13 @@

test_audio <- system.file("woman1_wb.wav", package = "googleLanguageR")
## its not perfect but...:)
-gl_speech(test_audio)$transcript

-## # A tibble: 1 x 2
-##   transcript                                                    confidence
-##   <chr>                                                         <chr>     
-## 1 to administer medicine to animals is frequency of very diffi… 0.9180294
+gl_speech(test_audio)$transcript
+
+
+## # A tibble: 1 x 2
+##   transcript                                                    confidence
+##   <chr>                                                         <chr>
+## 1 to administer medicine to animals is frequency of very diffi… 0.9180294

See more examples and details on the website or via vignette("speech", package = "googleLanguageR")

diff --git a/docs/news/index.html b/docs/news/index.html index 9c55567..cc7e1ab 100644 --- a/docs/news/index.html +++ b/docs/news/index.html @@ -21,13 +21,19 @@ + + + - + + + + @@ -37,20 +43,16 @@ - + + - - @@ -140,7 +142,12 @@ @@ -154,7 +161,7 @@
diff --git a/docs/pkgdown.css b/docs/pkgdown.css index fcd97bb..6ca2f37 100644 --- a/docs/pkgdown.css +++ b/docs/pkgdown.css @@ -217,3 +217,16 @@ a.sourceLine:hover { .hasCopyButton:hover button.btn-copy-ex { visibility: visible; } + +/* mark.js ----------------------------*/ + +mark { + background-color: rgba(255, 255, 51, 0.5); + border-bottom: 2px solid rgba(255, 153, 51, 0.3); + padding: 1px; +} + +/* vertical spacing after htmlwidgets */ +.html-widget { + margin-bottom: 10px; +} diff --git a/docs/pkgdown.js b/docs/pkgdown.js index 362b060..de9bd72 100644 --- a/docs/pkgdown.js +++ b/docs/pkgdown.js @@ -1,101 +1,110 @@ -$(function() { - - $("#sidebar") - .stick_in_parent({offset_top: 40}) - .on('sticky_kit:bottom', function(e) { - $(this).parent().css('position', 'static'); - }) - .on('sticky_kit:unbottom', function(e) { - $(this).parent().css('position', 'relative'); +/* http://gregfranko.com/blog/jquery-best-practices/ */ +(function($) { + $(function() { + + $("#sidebar") + .stick_in_parent({offset_top: 40}) + .on('sticky_kit:bottom', function(e) { + $(this).parent().css('position', 'static'); + }) + .on('sticky_kit:unbottom', function(e) { + $(this).parent().css('position', 'relative'); + }); + + $('body').scrollspy({ + target: '#sidebar', + offset: 60 }); - $('body').scrollspy({ - target: '#sidebar', - offset: 60 - }); - - $('[data-toggle="tooltip"]').tooltip(); - - var cur_path = paths(location.pathname); - $("#navbar ul li a").each(function(index, value) { - if (value.text == "Home") - return; - if (value.getAttribute("href") === "#") - return; + $('[data-toggle="tooltip"]').tooltip(); + + var cur_path = paths(location.pathname); + var links = $("#navbar ul li a"); + var max_length = -1; + var pos = -1; + for (var i = 0; i < links.length; i++) { + if (links[i].getAttribute("href") === "#") + continue; + var path = paths(links[i].pathname); + + var length = prefix_length(cur_path, path); + if (length > max_length) { + max_length = length; + pos = i; + } + } - var path = paths(value.pathname); - if (is_prefix(cur_path, path)) { - // Add class to parent
<li>, and enclosing <li> if in dropdown
-      var menu_anchor = $(value);
+  // Add class to parent <li>, and enclosing <li>
  • if in dropdown + if (pos >= 0) { + var menu_anchor = $(links[pos]); menu_anchor.parent().addClass("active"); menu_anchor.closest("li.dropdown").addClass("active"); } }); -}); -function paths(pathname) { - var pieces = pathname.split("/"); - pieces.shift(); // always starts with / + function paths(pathname) { + var pieces = pathname.split("/"); + pieces.shift(); // always starts with / - var end = pieces[pieces.length - 1]; - if (end === "index.html" || end === "") - pieces.pop(); - return(pieces); -} + var end = pieces[pieces.length - 1]; + if (end === "index.html" || end === "") + pieces.pop(); + return(pieces); + } -function is_prefix(needle, haystack) { - if (needle.length > haystack.lengh) - return(false); + function prefix_length(needle, haystack) { + if (needle.length > haystack.length) + return(0); - // Special case for length-0 haystack, since for loop won't run - if (haystack.length === 0) { - return(needle.length === 0); - } + // Special case for length-0 haystack, since for loop won't run + if (haystack.length === 0) { + return(needle.length === 0 ? 1 : 0); + } - for (var i = 0; i < haystack.length; i++) { - if (needle[i] != haystack[i]) - return(false); - } + for (var i = 0; i < haystack.length; i++) { + if (needle[i] != haystack[i]) + return(i); + } - return(true); -} + return(haystack.length); + } -/* Clipboard --------------------------*/ + /* Clipboard --------------------------*/ -function changeTooltipMessage(element, msg) { - var tooltipOriginalTitle=element.getAttribute('data-original-title'); - element.setAttribute('data-original-title', msg); - $(element).tooltip('show'); - element.setAttribute('data-original-title', tooltipOriginalTitle); -} + function changeTooltipMessage(element, msg) { + var tooltipOriginalTitle=element.getAttribute('data-original-title'); + element.setAttribute('data-original-title', msg); + $(element).tooltip('show'); + element.setAttribute('data-original-title', tooltipOriginalTitle); + } -if(Clipboard.isSupported()) { - $(document).ready(function() { - var copyButton = ""; + if(Clipboard.isSupported()) { + $(document).ready(function() { + var copyButton = ""; - $(".examples").addClass("hasCopyButton"); + $(".examples, div.sourceCode").addClass("hasCopyButton"); - // Insert copy buttons: - $(copyButton).prependTo(".hasCopyButton"); + // Insert copy buttons: + $(copyButton).prependTo(".hasCopyButton"); - // Initialize tooltips: - $('.btn-copy-ex').tooltip({container: 'body'}); + // Initialize tooltips: + $('.btn-copy-ex').tooltip({container: 'body'}); - // Initialize clipboard: - var clipboardBtnCopies = new Clipboard('[data-clipboard-copy]', { - text: function(trigger) { - return trigger.parentNode.textContent; - } - }); + // Initialize clipboard: + var clipboardBtnCopies = new Clipboard('[data-clipboard-copy]', { + text: function(trigger) { + return trigger.parentNode.textContent; + } + }); - clipboardBtnCopies.on('success', function(e) { - changeTooltipMessage(e.trigger, 'Copied!'); - e.clearSelection(); - }); + clipboardBtnCopies.on('success', function(e) { + changeTooltipMessage(e.trigger, 'Copied!'); + e.clearSelection(); + }); - clipboardBtnCopies.on('error', function() { - changeTooltipMessage(e.trigger,'Press Ctrl+C or Command+C to copy'); + clipboardBtnCopies.on('error', function() { + changeTooltipMessage(e.trigger,'Press Ctrl+C or Command+C to copy'); + }); }); - }); -} - + } +})(window.jQuery || window.$) diff --git a/docs/pkgdown.yml b/docs/pkgdown.yml index 6f2c5e8..24cc8ff 100644 --- a/docs/pkgdown.yml +++ b/docs/pkgdown.yml 
@@ -1,6 +1,6 @@ pandoc: 1.19.2.1 -pkgdown: 0.1.0.9000 -pkgdown_sha: d6ebaea156244a88bee4f14b4e8bcfb9e36f7727 +pkgdown: 1.1.0 +pkgdown_sha: ~ articles: nlp: nlp.html setup: setup.html diff --git a/docs/reference/gl_auth.html b/docs/reference/gl_auth.html index 69b4fae..384b337 100644 --- a/docs/reference/gl_auth.html +++ b/docs/reference/gl_auth.html @@ -21,16 +21,22 @@ + + + - + + + + @@ -40,20 +46,16 @@ - + + - - @@ -143,7 +145,12 @@ @@ -157,13 +164,15 @@
    +

    Authenticate with Google language API services

    +
    gl_auth(json_file)
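A one-line usage sketch; the JSON filename is illustrative, so point it at your own downloaded service account key:

```r
library(googleLanguageR)

# authenticate with a service account key (hypothetical path)
gl_auth("my-service-key.json")
```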
    diff --git a/docs/reference/gl_nlp.html b/docs/reference/gl_nlp.html index 46c35c1..424dc42 100644 --- a/docs/reference/gl_nlp.html +++ b/docs/reference/gl_nlp.html @@ -21,16 +21,22 @@ + + + - + + + - + + @@ -40,20 +46,16 @@ - + + - - @@ -143,7 +145,12 @@
    @@ -157,19 +164,21 @@
    +
    -

    Analyse text entities, sentiment, and syntax using the Google Natural Language API

    +

    Analyse text entities, sentiment, syntax and categorisation using the Google Natural Language API

    +
    gl_nlp(string, nlp_type = c("annotateText", "analyzeEntities",
    -  "analyzeSentiment", "analyzeSyntax", "analyzeEntitySentiment"),
    -  type = c("PLAIN_TEXT", "HTML"), language = c("en", "zh", "zh-Hant", "fr",
    -  "de", "it", "ja", "ko", "pt", "es"), encodingType = c("UTF8", "UTF16",
    -  "UTF32", "NONE"))
+  "analyzeSentiment", "analyzeSyntax", "analyzeEntitySentiment",
+  "classifyText"), type = c("PLAIN_TEXT", "HTML"), language = c("en", "zh",
+  "zh-Hant", "fr", "de", "it", "ja", "ko", "pt", "es"),
+  encodingType = c("UTF8", "UTF16", "UTF32", "NONE"))

    Arguments

    @@ -204,6 +213,7 @@

    Value

  • tokens - Tokens, along with their syntactic information, in the input document

  • entities - Entities, along with their semantic information, in the input document

  • documentSentiment - The overall sentiment for the document

  • +
  • classifyText - Classification of the document

  • language - The language of the text, which will be the same as the language specified in the request or, if not specified, the automatically-detected language

  • text - The original text passed into the API. NA if not passed due to being zero-length etc.

  • diff --git a/docs/reference/gl_speech.html b/docs/reference/gl_speech.html index f9da563..d5c7bbb 100644 --- a/docs/reference/gl_speech.html +++ b/docs/reference/gl_speech.html @@ -21,16 +21,22 @@ + + + - + + + + @@ -40,20 +46,16 @@ - + + - - @@ -143,7 +145,12 @@ @@ -157,13 +164,15 @@
    +

    Turn audio into text

    +
    gl_speech(audio_source, encoding = c("LINEAR16", "FLAC", "MULAW", "AMR",
       "AMR_WB", "OGG_OPUS", "SPEEX_WITH_HEADER_BYTE"), sampleRateHertz = NULL,
    diff --git a/docs/reference/gl_speech_op.html b/docs/reference/gl_speech_op.html
    index 05b2f44..01ca8fc 100644
    --- a/docs/reference/gl_speech_op.html
    +++ b/docs/reference/gl_speech_op.html
    @@ -21,16 +21,22 @@
     
     
     
    +
    +
    +
     
     
    -
     
    +
    +
    +
     
     
     
     
     
     
    +
     
     
     
    @@ -40,20 +46,16 @@
     
     
     
    -
    +
    +
     
     
    -
    -
       
     
       
    @@ -143,7 +145,12 @@
           
           
           
           
         
    @@ -157,13 +164,15 @@
    +

    For asynchronous calls of audio over 60 seconds, this returns the finished job

    +
    gl_speech_op(operation)
    diff --git a/docs/reference/gl_talk.html b/docs/reference/gl_talk.html index 4d30bde..e1438e3 100644 --- a/docs/reference/gl_talk.html +++ b/docs/reference/gl_talk.html @@ -21,16 +21,22 @@ + + + - + + + + @@ -40,20 +46,16 @@ - + + - - @@ -143,7 +145,12 @@
    @@ -157,13 +164,15 @@
    +

    Synthesizes speech synchronously: receive results after all text input has been processed.

    +
    gl_talk(input, output = "output.wav", languageCode = "en",
       gender = c("SSML_VOICE_GENDER_UNSPECIFIED", "MALE", "FEMALE", "NEUTRAL"),
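A short usage sketch based on the arguments above; the spoken text is illustrative:

```r
library(googleLanguageR)

# synthesise speech to a wav file (the default output name)
gl_talk("Have a nice day", output = "output.wav", languageCode = "en")

# then listen to the result in the browser via the player helper
gl_talk_player("output.wav")
```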
    diff --git a/docs/reference/gl_talk_languages.html b/docs/reference/gl_talk_languages.html
    index d9821c5..4322569 100644
    --- a/docs/reference/gl_talk_languages.html
    +++ b/docs/reference/gl_talk_languages.html
    @@ -21,16 +21,22 @@
     
     
     
    +
    +
    +
     
     
    -
     
    +
    +
    +
     
     
     
     
     
     
    +
     
     
     
    @@ -40,20 +46,16 @@
     
     
     
    -
    +
    +
     
     
    -
    -
       
     
       
    @@ -143,7 +145,12 @@
           
           
           
           
         
    @@ -157,13 +164,15 @@
    +

    Returns a list of voices supported for synthesis.

    +
    gl_talk_languages(languageCode = NULL)
    diff --git a/docs/reference/gl_talk_player.html b/docs/reference/gl_talk_player.html index f4fc1ff..7afa76b 100644 --- a/docs/reference/gl_talk_player.html +++ b/docs/reference/gl_talk_player.html @@ -21,16 +21,22 @@ + + + - + + + + @@ -40,20 +46,16 @@ - + + - - @@ -143,7 +145,12 @@
    @@ -157,13 +164,15 @@
    +

    This uses HTML5 audio tags to play audio in your browser

    +
    gl_talk_player(audio = "output.wav", html = "player.html")
    diff --git a/docs/reference/gl_talk_shiny.html b/docs/reference/gl_talk_shiny.html new file mode 100644 index 0000000..936cff4 --- /dev/null +++ b/docs/reference/gl_talk_shiny.html @@ -0,0 +1,259 @@ + + + + + + + + +Speak in Shiny module (server) — gl_talk_shiny • googleLanguageR + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + +
    + +
    +
    + + + + +
    gl_talk_shiny(input, output, session, transcript, ..., autoplay = TRUE,
    +  controls = TRUE, loop = FALSE, keep_wav = FALSE)
    + +

    Arguments

    +
    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    input

    shiny input

    output

    shiny output

    session

    shiny session

    transcript

The (reactive) text to speak

    ...

    Arguments passed on to gl_talk

    +
    input

    The text to turn into speech

    +
    output

    Where to save the speech audio file

    +
    languageCode

    The language of the voice as a BCP-47 language code

    +
    name

Name of the voice; see the list of supported voices via gl_talk_languages. Set to NULL to make the service choose a voice based on languageCode and gender.

    +
    gender

    The gender of the voice, if available

    +
    audioEncoding

    Format of the requested audio stream

    +
    speakingRate

    Speaking rate/speed between 0.25 and 4.0

    +
    pitch

    Speaking pitch between -20.0 and 20.0 in semitones.

    +
    volumeGainDb

Volume gain in dB

    +
    sampleRateHertz

    Sample rate for returned audio

    +
    autoplay

    passed to the HTML audio player - default TRUE plays on load

    controls

    passed to the HTML audio player - default TRUE shows controls

    loop

    passed to the HTML audio player - default FALSE does not loop

    keep_wav

    keep the generated wav files if TRUE.
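The page ships no example, so here is a minimal usage sketch assuming the standard Shiny module pattern (gl_talk_shinyUI() in the UI paired with callModule() in the server); the "speech" id and the input$text wiring are illustrative only:

```r
library(shiny)
library(googleLanguageR)

ui <- fluidPage(
  textInput("text", "Text to speak"),
  gl_talk_shinyUI("speech")  # module UI, documented at gl_talk_shinyUI
)

server <- function(input, output, session) {
  # transcript must be reactive, per the argument description above
  transcript <- reactive(input$text)
  callModule(gl_talk_shiny, "speech", transcript = transcript)
}

shinyApp(ui, server)
```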

    + + +
    + + + + + + + + + + + diff --git a/docs/reference/gl_talk_shinyUI.html b/docs/reference/gl_talk_shinyUI.html new file mode 100644 index 0000000..25fac00 --- /dev/null +++ b/docs/reference/gl_talk_shinyUI.html @@ -0,0 +1,221 @@ + + + + + + + + +Speak in Shiny module (ui) — gl_talk_shinyUI • googleLanguageR + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    + + + +
    + +
    +
    + + +
    + +

    Speak in Shiny module (ui)

    + +
    + +
    gl_talk_shinyUI(id)
    + +

    Arguments

    + + + + + + +
    id

    The Shiny id

    + +

    Details

    + +

    Shiny Module for use with gl_talk_shiny.

    + + +
    + +
    + + +
    + + + + + + diff --git a/docs/reference/gl_translate.html b/docs/reference/gl_translate.html index 00cc253..4260907 100644 --- a/docs/reference/gl_translate.html +++ b/docs/reference/gl_translate.html @@ -21,16 +21,22 @@ + + + - + + + + @@ -40,20 +46,16 @@ - + + - - @@ -143,7 +145,12 @@ @@ -157,13 +164,15 @@
    +

    Translate character vectors via the Google Translate API

    +
    gl_translate(t_string, target = "en", format = c("text", "html"),
       source = "", model = c("nmt", "base"))
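A brief usage sketch, repeating the README example that appears elsewhere in this patch:

```r
library(googleLanguageR)

text <- "to administer medicine to animals is frequently a very difficult matter, and yet sometimes it's necessary to do so"

# translate into Danish; the result holds a translatedText column
gl_translate(text, target = "da")$translatedText
```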
    diff --git a/docs/reference/gl_translate_detect.html b/docs/reference/gl_translate_detect.html index a0eba82..5400952 100644 --- a/docs/reference/gl_translate_detect.html +++ b/docs/reference/gl_translate_detect.html @@ -21,16 +21,22 @@ + + + - + + + + @@ -40,20 +46,16 @@ - + + - - @@ -143,7 +145,12 @@
    @@ -157,13 +164,15 @@
    +

    Detect the language of text within a request

    +
    gl_translate_detect(string)
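A one-line sketch; the Danish input string is purely illustrative:

```r
library(googleLanguageR)

# detect the source language of a character vector
gl_translate_detect("katten sad på måtten")
```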
    diff --git a/docs/reference/gl_translate_languages.html b/docs/reference/gl_translate_languages.html index 84814b8..54d138a 100644 --- a/docs/reference/gl_translate_languages.html +++ b/docs/reference/gl_translate_languages.html @@ -21,16 +21,22 @@ + + + - + + + + @@ -40,20 +46,16 @@ - + + - - @@ -143,7 +145,12 @@
    @@ -157,13 +164,15 @@
    +

    Returns a list of supported languages for translation.

    +
    gl_translate_languages(target = "en")
    diff --git a/docs/reference/googleLanguageR.html b/docs/reference/googleLanguageR.html index 1e9a74b..3c5ce01 100644 --- a/docs/reference/googleLanguageR.html +++ b/docs/reference/googleLanguageR.html @@ -21,10 +21,15 @@ + + + - + + + + @@ -41,20 +47,16 @@ - + + - - @@ -144,7 +146,12 @@
    @@ -158,14 +165,16 @@
    +

    This package contains functions for analysing language through the Google Cloud Machine Learning APIs

    +

    Details

    diff --git a/docs/reference/index.html b/docs/reference/index.html index cf65d26..0a11d7d 100644 --- a/docs/reference/index.html +++ b/docs/reference/index.html @@ -21,13 +21,19 @@ + + + - + + + + @@ -37,20 +43,16 @@ - + + - - @@ -140,7 +142,12 @@
    @@ -159,6 +166,7 @@

    Reference

    + @@ -171,67 +179,79 @@

    gl_auth()

    - + - + - + - + - + - + - + + + + + + + + + - + - + - + diff --git a/docs/reference/is.NullOb.html b/docs/reference/is.NullOb.html index a9ba4c2..f46ca86 100644 --- a/docs/reference/is.NullOb.html +++ b/docs/reference/is.NullOb.html @@ -22,10 +22,15 @@ + + + - + + + @@ -34,6 +39,7 @@ + @@ -43,20 +49,16 @@ - + + - - @@ -146,7 +148,12 @@ @@ -161,14 +168,16 @@ +

    A helper function that tests whether an object is either NULL _or_ a list of NULLs

    +
    is.NullOb(x)
    diff --git a/docs/reference/rmNullObs.html b/docs/reference/rmNullObs.html index e8c6ba5..8c640a5 100644 --- a/docs/reference/rmNullObs.html +++ b/docs/reference/rmNullObs.html @@ -21,16 +21,22 @@ + + + - + + + + @@ -40,20 +46,16 @@ - + + - - @@ -143,7 +145,12 @@ @@ -157,13 +164,15 @@
    +

    Recursively step down into list, removing all such objects

    +
    rmNullObs(x)
    diff --git a/docs/sitemap.xml b/docs/sitemap.xml index 69f1a4f..8ffe305 100644 --- a/docs/sitemap.xml +++ b/docs/sitemap.xml @@ -1,5 +1,8 @@ + + https://code.markedmondson.me/googleLanguageR//index.html + https://code.markedmondson.me/googleLanguageR//reference/gl_auth.html @@ -21,6 +24,12 @@ https://code.markedmondson.me/googleLanguageR//reference/gl_talk_player.html + + https://code.markedmondson.me/googleLanguageR//reference/gl_talk_shiny.html + + + https://code.markedmondson.me/googleLanguageR//reference/gl_talk_shinyUI.html + https://code.markedmondson.me/googleLanguageR//reference/gl_translate.html diff --git a/man/gl_nlp.Rd b/man/gl_nlp.Rd index f42d4fe..00833a8 100644 --- a/man/gl_nlp.Rd +++ b/man/gl_nlp.Rd @@ -29,7 +29,7 @@ A list of the following objects, if those fields are asked for via \code{nlp_typ \item{tokens - }{\href{https://cloud.google.com/natural-language/docs/reference/rest/v1/Token}{Tokens, along with their syntactic information, in the input document}} \item{entities - }{\href{https://cloud.google.com/natural-language/docs/reference/rest/v1/Entity}{Entities, along with their semantic information, in the input document}} \item{documentSentiment - }{\href{https://cloud.google.com/natural-language/docs/reference/rest/v1/Sentiment}{The overall sentiment for the document}} - \item{classifyText -}{\href{https://cloud.google.com/natural-language/docs/classifying-text}} + \item{classifyText -}{\href{https://cloud.google.com/natural-language/docs/classifying-text}{Classification of the document}} \item{language - }{The language of the text, which will be the same as the language specified in the request or, if not specified, the automatically-detected language} \item{text - }{The original text passed into the API. \code{NA} if not passed due to being zero-length etc. } }

    Authenticate with Google language API services

    gl_nlp()

    Perform Natural Language Analysis

    gl_speech()

    Call Google Speech API

    gl_speech_op()

    Get a speech operation

    gl_talk()

    Perform text to speech

    gl_talk_languages()

    Get a list of voices available for text to speech

    gl_talk_player()

    Play audio in a browser

    +

    gl_talk_shiny()

    +

    Speak in Shiny module (server)

    +

    gl_talk_shinyUI()

    +

    Speak in Shiny module (ui)

    gl_translate()

    Translate the language of text within a request

    gl_translate_detect()

    Detect the language of text within a request

    gl_translate_languages()

    Lists languages from Google Translate API

    googleLanguageR