Add 'require(pkgname)' in examples and fix some typos
nkoenen committed Jun 13, 2023
1 parent a3616dc commit 1eaaa84
Showing 17 changed files with 795 additions and 733 deletions.
10 changes: 5 additions & 5 deletions R/ConnectionWeights.R
@@ -1,7 +1,7 @@
-#' Connection Weights method
+#' Connection weights method
 #'
 #' @description
-#' This class implements the *Connection Weights* method investigated by
+#' This class implements the *Connection weights* method investigated by
 #' Olden et al. (2004), which results in a relevance score for each input
 #' variable. The basic idea is to multiply all path weights for each
 #' possible connection between an input feature and the output node and then
@@ -14,7 +14,7 @@
 #'
 #' In this package, we extended this method to a local method inspired by the
 #' method *Gradient\eqn{\times}Input* (see [`Gradient`]). Hence, the local variant is
-#' simply the point-wise product of the global *Connection Weights* method and
+#' simply the point-wise product of the global *Connection weights* method and
 #' the input data. You can use this variant by setting the `times_input`
 #' argument to `TRUE` and providing input data.
 #'
@@ -39,7 +39,7 @@ ConnectionWeights <- R6Class(
   public = list(
     #' @field times_input (`logical(1)`)\cr
     #' This logical value indicates whether the results from
-    #' the *Connection Weights* method were multiplied by the provided input
+    #' the *Connection weights* method were multiplied by the provided input
     #' data or not. Thus, this value specifies whether the original global
     #' variant of the method or the local one was applied. If the value is
     #' `TRUE`, then data is provided in the field `data`.
@@ -52,7 +52,7 @@
     #'
     #' @param times_input (`logical(1)`)\cr
     #' Multiplies the results with the input features.
-    #' This variant turns the global *Connection Weights* method into a local
+    #' This variant turns the global *Connection weights* method into a local
     #' one. Default: `FALSE`.\cr
     initialize = function(converter,
                           data = NULL,
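The "multiply all path weights, then sum" idea described in the hunk above can be sketched outside the package: for a purely linear network, summing the per-path weight products over all input-to-output paths is just the product of the layer weight matrices. The toy weights below are arbitrary illustrations, not taken from the package; the real method lives in the `ConnectionWeights` R6 class.

```r
# Toy network: 2 inputs -> 3 hidden units -> 1 output (hypothetical weights)
W1 <- matrix(c(0.5, -1.0,
               0.2,  0.8,
              -0.3,  0.4), nrow = 2)       # 2 x 3 input-to-hidden weights
W2 <- matrix(c(1.0, -0.5, 0.25), nrow = 3) # 3 x 1 hidden-to-output weights

# Relevance per input: sum over all paths of the per-path weight products
relevance_paths <- sapply(1:2, function(i) {
  sum(sapply(1:3, function(h) W1[i, h] * W2[h, 1]))
})

# ... which equals the plain matrix product W1 %*% W2
relevance_matmul <- as.vector(W1 %*% W2)

all.equal(relevance_paths, relevance_matmul)  # TRUE
```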
6 changes: 3 additions & 3 deletions R/DeepLift.R
@@ -1,7 +1,7 @@
-#' @title Deep Learning Important Features (DeepLift)
+#' @title Deep learning important features (DeepLift)
 #'
 #' @description
-#' This is an implementation of the \emph{Deep Learning Important Features
+#' This is an implementation of the \emph{deep learning important features
 #' (DeepLift)} algorithm introduced by Shrikumar et al. (2017). It's a local
 #' method for interpreting a single element \eqn{x} of the dataset concerning
 #' a reference value \eqn{x'} and returns the contribution of each input
@@ -10,7 +10,7 @@
 #' decompose the difference-from-reference prediction with respect to the
 #' input features, i.e.,
 #' \deqn{\Delta y = y - y' = \sum_i C(x_i).}
-#' Compared to \emph{Layer-wise Relevance Propagation} (see [LRP]), the
+#' Compared to \emph{Layer-wise relevance propagation} (see [LRP]), the
 #' DeepLift method is an exact decomposition and not an approximation, so we
 #' get real contributions of the input features to the
 #' difference-from-reference prediction. There are two ways to handle
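The exact decomposition quoted in the hunk above, Delta y = y - y' = sum_i C(x_i), can be illustrated in its simplest setting: for a linear model, the DeepLift contributions reduce to w_i * (x_i - x'_i) and sum exactly to the difference-from-reference prediction. The weights and inputs below are made-up toy values, not the package's implementation.

```r
# Linear model f(x) = sum(w * x) + b with arbitrary toy parameters
w <- c(0.7, -1.2, 0.3)
b <- 0.5
f <- function(x) sum(w * x) + b

x     <- c(1.0, 2.0, -1.0)  # input to explain
x_ref <- c(0.0, 0.0,  0.0)  # reference value x'

# Per-feature contributions for the linear case
C <- w * (x - x_ref)

# Completeness: contributions sum exactly to Delta y = y - y'
delta_y <- f(x) - f(x_ref)
all.equal(sum(C), delta_y)  # TRUE
```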
4 changes: 2 additions & 2 deletions R/LRP.R
@@ -1,7 +1,7 @@
-#' @title Layer-wise Relevance Propagation (LRP)
+#' @title Layer-wise relevance propagation (LRP)
 #'
 #' @description
-#' This is an implementation of the \emph{Layer-wise Relevance Propagation
+#' This is an implementation of the \emph{layer-wise relevance propagation
 #' (LRP)} algorithm introduced by Bach et al. (2015). It's a local method for
 #' interpreting a single element of the dataset and calculates the relevance
 #' scores for each input feature to the model output. The basic idea of this
4 changes: 2 additions & 2 deletions R/innsight.R
@@ -13,10 +13,10 @@
 #' This package implements several model-specific interpretability
 #' (feature attribution) methods based on neural networks in R, e.g.,
 #'
-#' * *Layer-wise Relevance Propagation ([LRP])*
+#' * *Layer-wise relevance propagation ([LRP])*
 #'   * Including propagation rules: \eqn{\epsilon}-rule and
 #'     \eqn{\alpha}-\eqn{\beta}-rule
-#' * *Deep Learning Important Features ([DeepLift])*
+#' * *Deep learning important features ([DeepLift])*
 #'   * Including propagation rules for non-linearities: *Rescale* rule and
 #'     *RevealCancel* rule
 #' * Gradient-based methods:
117 changes: 60 additions & 57 deletions man-roxygen/examples-ConnectionWeights.R
@@ -26,69 +26,72 @@
 #' plot(cw)
 #'
 #' #----------------------- Example 2: Neuralnet ------------------------------
-#' library(neuralnet)
-#' data(iris)
-#'
-#' # Train a Neural Network
-#' nn <- neuralnet((Species == "setosa") ~ Petal.Length + Petal.Width,
-#'   iris,
-#'   linear.output = FALSE,
-#'   hidden = c(3, 2), act.fct = "tanh", rep = 1
-#' )
-#'
-#' # Convert the trained model
-#' converter <- Converter$new(nn)
-#'
-#' # Apply the Connection Weights method
-#' cw <- ConnectionWeights$new(converter)
-#'
-#' # Get the result as a torch tensor
-#' get_result(cw, type = "torch.tensor")
-#'
-#' # Plot the result
-#' plot(cw)
+#' if (require("neuralnet")) {
+#'   library(neuralnet)
+#'   data(iris)
+#'
+#'   # Train a Neural Network
+#'   nn <- neuralnet((Species == "setosa") ~ Petal.Length + Petal.Width,
+#'     iris,
+#'     linear.output = FALSE,
+#'     hidden = c(3, 2), act.fct = "tanh", rep = 1
+#'   )
+#'
+#'   # Convert the trained model
+#'   converter <- Converter$new(nn)
+#'
+#'   # Apply the Connection Weights method
+#'   cw <- ConnectionWeights$new(converter)
+#'
+#'   # Get the result as a torch tensor
+#'   get_result(cw, type = "torch.tensor")
+#'
+#'   # Plot the result
+#'   plot(cw)
+#' }
 #' @examplesIf keras::is_keras_available() & torch::torch_is_installed()
 #' # ------------------------- Example 3: Keras -------------------------------
-#' library(keras)
-#'
-#' # Make sure keras is installed properly
-#' is_keras_available()
-#'
-#' data <- array(rnorm(10 * 32 * 32 * 3), dim = c(10, 32, 32, 3))
-#'
-#' model <- keras_model_sequential()
-#' model %>%
-#'   layer_conv_2d(
-#'     input_shape = c(32, 32, 3), kernel_size = 8, filters = 8,
-#'     activation = "softplus", padding = "valid") %>%
-#'   layer_conv_2d(
-#'     kernel_size = 8, filters = 4, activation = "tanh",
-#'     padding = "same") %>%
-#'   layer_conv_2d(
-#'     kernel_size = 4, filters = 2, activation = "relu",
-#'     padding = "valid") %>%
-#'   layer_flatten() %>%
-#'   layer_dense(units = 64, activation = "relu") %>%
-#'   layer_dense(units = 16, activation = "relu") %>%
-#'   layer_dense(units = 2, activation = "softmax")
-#'
-#' # Convert the model
-#' converter <- Converter$new(model)
-#'
-#' # Apply the Connection Weights method
-#' cw <- ConnectionWeights$new(converter)
-#'
-#' # Get the head of the result as a data.frame
-#' head(get_result(cw, type = "data.frame"), 5)
-#'
-#' # Plot the result for all classes
-#' plot(cw, output_idx = 1:2)
-#'
+#' if (require("keras")) {
+#'   library(keras)
+#'
+#'   # Make sure keras is installed properly
+#'   is_keras_available()
+#'
+#'   data <- array(rnorm(10 * 32 * 32 * 3), dim = c(10, 32, 32, 3))
+#'
+#'   model <- keras_model_sequential()
+#'   model %>%
+#'     layer_conv_2d(
+#'       input_shape = c(32, 32, 3), kernel_size = 8, filters = 8,
+#'       activation = "softplus", padding = "valid") %>%
+#'     layer_conv_2d(
+#'       kernel_size = 8, filters = 4, activation = "tanh",
+#'       padding = "same") %>%
+#'     layer_conv_2d(
+#'       kernel_size = 4, filters = 2, activation = "relu",
+#'       padding = "valid") %>%
+#'     layer_flatten() %>%
+#'     layer_dense(units = 64, activation = "relu") %>%
+#'     layer_dense(units = 16, activation = "relu") %>%
+#'     layer_dense(units = 2, activation = "softmax")
+#'
+#'   # Convert the model
+#'   converter <- Converter$new(model)
+#'
+#'   # Apply the Connection Weights method
+#'   cw <- ConnectionWeights$new(converter)
+#'
+#'   # Get the head of the result as a data.frame
+#'   head(get_result(cw, type = "data.frame"), 5)
+#'
+#'   # Plot the result for all classes
+#'   plot(cw, output_idx = 1:2)
+#' }
 #' @examplesIf torch::torch_is_installed() & Sys.getenv("RENDER_PLOTLY", unset = 0) == 1
 #' #------------------------- Plotly plots ------------------------------------
 #'
-#' # You can also create an interactive plot with plotly.
-#' # This is a suggested package, so make sure that it is installed
-#' library(plotly)
-#' plot(cw, as_plotly = TRUE)
+#' if (require("plotly")) {
+#'   # You can also create an interactive plot with plotly.
+#'   # This is a suggested package, so make sure that it is installed
+#'   library(plotly)
+#'   plot(cw, as_plotly = TRUE)
+#' }
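The pattern this commit applies throughout the example files — guarding code that uses a suggested package with `require()` so the example degrades gracefully when the package is absent — can be isolated in a small sketch. The helper name `run_if_available` is hypothetical, introduced here only for illustration; the key fact is that `require()` returns `TRUE`/`FALSE` instead of throwing an error like `library()` does.

```r
# Hypothetical helper: run a thunk only if the named package is installed.
run_if_available <- function(pkg, code) {
  # require() returns a logical rather than erroring, unlike library()
  if (require(pkg, character.only = TRUE, quietly = TRUE)) {
    code()
  } else {
    message("Package '", pkg, "' not installed; skipping example.")
    invisible(NULL)
  }
}

# Runs because 'stats' ships with base R:
run_if_available("stats", function() var(c(1, 2, 3)))  # 1
```

This is why the diffs above wrap each `neuralnet`, `keras`, and `plotly` example in `if (require("pkgname")) { ... }`: on systems without the suggested package, the guarded block is simply skipped instead of failing the example run.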
80 changes: 42 additions & 38 deletions man-roxygen/examples-Converter.R
@@ -21,49 +21,53 @@
 #'
 #'
 #' #----------------------- Example 2: Neuralnet ------------------------------
-#' library(neuralnet)
-#' data(iris)
-#'
-#' # Train a neural network
-#' nn <- neuralnet((Species == "setosa") ~ Petal.Length + Petal.Width,
-#'   iris,
-#'   linear.output = FALSE,
-#'   hidden = c(3, 2), act.fct = "tanh", rep = 1
-#' )
-#'
-#' # Convert the model
-#' converter <- Converter$new(nn)
-#'
-#' # Print all the layers
-#' converter$model$modules_list
+#' if (require("neuralnet")) {
+#'   library(neuralnet)
+#'   data(iris)
+#'
+#'   # Train a neural network
+#'   nn <- neuralnet((Species == "setosa") ~ Petal.Length + Petal.Width,
+#'     iris,
+#'     linear.output = FALSE,
+#'     hidden = c(3, 2), act.fct = "tanh", rep = 1
+#'   )
+#'
+#'   # Convert the model
+#'   converter <- Converter$new(nn)
+#'
+#'   # Print all the layers
+#'   converter$model$modules_list
+#' }
 #'
 #' @examplesIf keras::is_keras_available() & torch::torch_is_installed()
 #' #----------------------- Example 3: Keras ----------------------------------
-#' library(keras)
-#'
-#' # Make sure keras is installed properly
-#' is_keras_available()
-#'
-#' # Define a keras model
-#' model <- keras_model_sequential() %>%
-#'   layer_conv_2d(
-#'     input_shape = c(32, 32, 3), kernel_size = 8, filters = 8,
-#'     activation = "relu", padding = "same") %>%
-#'   layer_conv_2d(
-#'     kernel_size = 8, filters = 4,
-#'     activation = "tanh", padding = "same") %>%
-#'   layer_conv_2d(
-#'     kernel_size = 4, filters = 2,
-#'     activation = "relu", padding = "same") %>%
-#'   layer_flatten() %>%
-#'   layer_dense(units = 64, activation = "relu") %>%
-#'   layer_dense(units = 1, activation = "sigmoid")
-#'
-#' # Convert this model and save model as list
-#' converter <- Converter$new(model, save_model_as_list = TRUE)
-#'
-#' # Print the converted model as a named list
-#' str(converter$model_as_list, max.level = 1)
+#' if (require("keras")) {
+#'   library(keras)
+#'
+#'   # Make sure keras is installed properly
+#'   is_keras_available()
+#'
+#'   # Define a keras model
+#'   model <- keras_model_sequential() %>%
+#'     layer_conv_2d(
+#'       input_shape = c(32, 32, 3), kernel_size = 8, filters = 8,
+#'       activation = "relu", padding = "same") %>%
+#'     layer_conv_2d(
+#'       kernel_size = 8, filters = 4,
+#'       activation = "tanh", padding = "same") %>%
+#'     layer_conv_2d(
+#'       kernel_size = 4, filters = 2,
+#'       activation = "relu", padding = "same") %>%
+#'     layer_flatten() %>%
+#'     layer_dense(units = 64, activation = "relu") %>%
+#'     layer_dense(units = 1, activation = "sigmoid")
+#'
+#'   # Convert this model and save model as list
+#'   converter <- Converter$new(model, save_model_as_list = TRUE)
+#'
+#'   # Print the converted model as a named list
+#'   str(converter$model_as_list, max.level = 1)
+#' }
 #'
 #' @examplesIf torch::torch_is_installed()
 #' #----------------------- Example 4: List ----------------------------------
