
lime: Local Interpretable Model-Agnostic Explanations

When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot explain why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) <doi:10.48550/arXiv.1602.04938>.
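
For context, a minimal sketch of the typical workflow, following the usage pattern documented in the package README (assumes the caret and randomForest packages are installed; the data split and parameters are purely illustrative):

library(caret)  # one of the model frameworks lime supports out of the box
library(lime)

# Hold out a few observations to explain later
iris_test  <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab   <- iris[[5]][-(1:5)]

# Fit any supported black box model; a random forest here
model <- train(iris_train, iris_lab, method = "rf")

# lime() records the training data distribution used to perturb points
explainer <- lime(iris_train, model)

# explain() perturbs each observation and fits a weighted local model
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)

# Visualise which features drove each prediction
plot_features(explanation)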

Version: 0.5.3
Imports: glmnet, stats, ggplot2, tools, stringi, Matrix, Rcpp, assertthat, methods, grDevices, gower
LinkingTo: Rcpp, RcppEigen
Suggests: xgboost, testthat, mlr, h2o, text2vec, MASS, covr, knitr, rmarkdown, sessioninfo, magick, keras, htmlwidgets, shiny, shinythemes, ranger
Published: 2022-08-19
DOI: 10.32614/CRAN.package.lime
Author: Emil Hvitfeldt [aut, cre], Thomas Lin Pedersen [aut], Michaël Benesty [aut]
Maintainer: Emil Hvitfeldt <emilhhvitfeldt at gmail.com>
BugReports: https://github.com/thomasp85/lime/issues
License: MIT + file LICENSE
URL: https://lime.data-imaginist.com, https://github.com/thomasp85/lime
NeedsCompilation: yes
Materials: README NEWS
In views: MachineLearning
CRAN checks: lime results

Documentation:

Reference manual: lime.pdf
Vignettes: Understanding lime

Downloads:

Package source: lime_0.5.3.tar.gz
Windows binaries: r-devel: lime_0.5.3.zip, r-release: lime_0.5.3.zip, r-oldrel: lime_0.5.3.zip
macOS binaries: r-release (arm64): lime_0.5.3.tgz, r-oldrel (arm64): lime_0.5.3.tgz, r-release (x86_64): lime_0.5.3.tgz, r-oldrel (x86_64): lime_0.5.3.tgz
Old sources: lime archive

Reverse dependencies:

Reverse imports: grafzahl
Reverse suggests: DALEXtra, innsight

Linking:

Please use the canonical form https://CRAN.R-project.org/package=lime to link to this page.