
Revisiting the research compendium: testing, automation, and review #5

noamross opened this issue Feb 8, 2017 · 75 comments

@noamross
noamross commented Feb 8, 2017

In the 2015 unconf, one output was a document of best practices for research compendia, a self-contained repository for a research analysis. Many of the ideas in this, and similar work, derive from best practices from R packages. In the past few years, there have been advances and wider adoption of a number of R package development practices, notably in package testing, automation of testing/checking/building, and best practices for code review. R-based research compendia have not coalesced around a similar set of practices yet. I would aim to build tools that would help address this, with these questions.

  • What is the state of analysis workflow automation? Are there gaps, and how can they be addressed?
  • What should testing workflow and tooling for a research compendium look like? Should we separate testing from processing and analysis, and if so, how?
  • Is there a standard set of checks, a la R CMD check, that could be widely adopted?
  • What would an rrtools package, similar to devtools, contain to aid in creating reproducible research compendia?
  • How would we change or improve CI infrastructure or adopt it for research compendia?
  • What practices/checklists for analysis code review should we pilot or adopt?

(I have thoughts on most of these that I'll add below or in a linked document in a bit.)

Possible outputs include

  • another best practices document
  • code review checklist
  • the start of an rrtools package
  • templates for CI or a PR to the current Travis-CI R engine
  • contributions to remake, tic, pkgdown, or other infrastructure packages
@njtierney
njtierney commented Mar 21, 2017

I would be really interested in building tools to help facilitate the process of creating reproducible research compendia.

Recently I adopted the approaches described in rrrpkg, and it was great! (rough example here). I happened to have a bit more time on my hands, and was really disciplined about how I built the analysis. This paid off when the input data needed to change, or I needed to update one model. It's very, very satisfying to just type make paper, take a lunch break, and come back to a paper that is all shiny and new.

However, recently I've been more pressed for time, and have started to do things in a way that "works for now", in an attempt to claw back some of the time that I need to get the analysis done. Then, a day, week, or a few months later, I'm suffering at my own hand because now I need to untangle the reproducibility issues that I've created.

I've heard it said that one of the benefits of doing things in a reproducible way is that it slows you down and makes you think. This is a good point, but in my experience you don't always have the time to think when deadlines are fast approaching.

Personally, my biggest problem with being reproducible is that the awesome frameworks can be fragile in the face of a scary deadline.

So, I really want to work on a "pit of success" for reproducibility, so that the default option is to fall into doing the right thing, rather than having to struggle to do it.


Two further thoughts:

  • Maybe there are many different paths (or "pits") for reproducibility, with some working better than others for different data/different analysis.
  • Managing input/output data can be really painful. One approach might be to create many different forks, one for each garden path you take when building your model. But then if each model has output that is 2GB, your git repo can blow out pretty massively.

@noamross
Author

One of the tensions I've found in this area is the difference between a "working" repo where one is developing ideas and doing exploratory work, and a fully formed workflow and analysis where you know what the outcomes should look like. I'd like to reduce the friction of moving from the former to the latter.

@Pakillo
Member
Pakillo commented Mar 21, 2017

Hi,

I think having a template (folder structure, etc.) to use for all your projects helps a lot with working reproducibly. You always start from it and just have to follow the design; there's no more extra time once you have designed everything to your liking. E.g. my template is here. See also @benmarwick's research compendium here.

We do all exploratory analyses as Rmarkdown documents in an analyses folder. Once we are clear what we want to include in the paper/report, we work in another Rmd document in a different folder (manuscript or report or whatever).
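For concreteness, a rough sketch of the kind of layout being described (directory and file names are illustrative, not prescriptive):

```
mycompendium/
├── DESCRIPTION        # project metadata and dependencies
├── R/                 # functions used across the project
├── data/              # raw input data
├── analyses/          # exploratory Rmd notebooks
└── manuscript/        # the Rmd that becomes the paper/report
```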

Of course large outputs are always a problem (see also ropensci/rrrpkg#8). We choose not to git track them, and share them through dropbox, figshare, or similar.

Hope this helps, and always interested to hear other ideas!

@njtierney

@noamross I think you're right on about the tension between exploratory analysis and final report.

Looking back, that is why the first paper was so much easier for me. I already had all the exploratory analysis done, so when I changed over to the other template, I just filled in the template, shoved all the functions into an R/ folder, and away I went. Mostly.

Having a template is definitely a great idea, @Pakillo, and like you say, it saves a lot of cognitive load, so you can spend more time thinking about the analysis. I think this is why the idea of using the R package workflow catches on nicely. People can rely on the workflow outlined by @hadley's devtools and move through the steps.

This is a big topic! It's kind of hard to know where to start. But one thought is that R packages are generally a growing product - many (most?) will continue iterating over what their purpose is, and will improve over time. Whereas a paper or a report has a finite ending, and the iterative process of writing, analysing, and re-writing is different to package development. I wonder if perhaps it might be useful to compare and contrast package development to analysis development?

@cboettig
Member

Great thread, something I struggle with regularly as well.

My default is to stick with the R package structure for a new project, e.g. any exploration that's grown bigger than a single self-contained .Rmd and for which I can think of a functional name for the repo. Like @njtierney says, this skeleton is quick to create with devtools or a template (I have https://github.com/cboettig/template, slightly dated).

I tend to omit formal tests; I refactor code a lot early on, and having to refactor the unit tests is just a pain. Maybe that's not ideal, but oh well. I do tend to write at least minimal roxygen docs with an example; these provide easier-to-maintain tests, can be helpful, and mean I can leave travis-ci turned on to do at least some sanity checking without it being a burden. Functions I don't want to document/test I just don't export, so CHECK doesn't complain.
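For concreteness, the kind of minimally documented, exported function this implies (the function itself is made up for illustration); because R CMD check runs roxygen examples, the example doubles as a cheap smoke test:

```r
#' Fit the main model
#'
#' @param df a data frame with columns x and y
#' @return a fitted lm object
#' @export
#' @examples
#' fit_main_model(data.frame(x = 1:10, y = rnorm(10)))
fit_main_model <- function(df) {
  stats::lm(y ~ x, data = df)  # running the example exercises this code under CHECK
}
```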

Like @Pakillo, I do exploratory analysis in separate .Rmds (in an inst/notebook dir for me), and then start writing the manuscript in /manuscript (if it's messy/long-running) or /vignette (if it can run quickly).

Caching for large data &/or long-running outputs is a special problem. I think remake is a good general-purpose strategy here, but I tend to make sure my functions output some standard, text-based format which I store either in the package or on a separate GitHub repo (neither is ideal; but these are strictly intermediate objects which can be regenerated with enough cpu effort).

I think both papers and packages have finite life cycles with similar challenges. Early on it's figuring out where to draw the box -- stuff starts in one repo and then gets split out into multiple repos for different packages / papers as the project grows and becomes better defined. Knowing when to split is always hard for either research or software; I wish someone would give me some rules of thumb. Since we don't sweat software 'releases', I agree that there's no sharp delineation point in software the way there is in publishing, but after submitting a paper there's still the iteration of peer review, and after acceptance there's still some lifetime of the repo where I'm at least continuing to re-use and maybe tweak things on that project. And I think there comes a time for both a software package and a paper/compendium where one transitions from active maintenance to sunset & archive, though perhaps that happens more immediately after publication for most papers/compendia than for most software packages.

@noamross
Author
noamross commented Mar 21, 2017

I'm thinking of testing with a bit of a broader scope, including model diagnostics and data validation as done with assertr or validate. These tests may have boolean, text, or visual outputs, but I think it's important to have a workflow that (a) separates them from the main products, and (b) ensures they are run and viewed with updates.

One paradigm I was considering was saving the environments of each of the notebook/ and manuscript/ knitr documents, and having scripts or Rmds in a test directory that load these environments and run standard tests on the objects in them. These could be run on CI and be available as artifacts, or pushed back to github, perhaps on a gh-pages branch.
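A minimal sketch of that paradigm, assuming the manuscript Rmd creates an object called fit and that environments get saved under tests/environments/ (all of these names are illustrative):

```r
# at the end of manuscript/manuscript.Rmd: save the knit environment
save(list = ls(), file = "tests/environments/manuscript.RData")

# tests/test-manuscript.R: load that environment and test the objects in it
library(testthat)
load("tests/environments/manuscript.RData")

test_that("the fitted model in the manuscript looks sane", {
  expect_true(inherits(fit, "lm"))              # 'fit' is assumed to be created in the Rmd
  expect_true(all(is.finite(residuals(fit))))
})
```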

I've had trouble with remake, because of the overhead of moving from the script-based organization that characterizes many projects to the functional approach. make is easier to transition to, and I sometimes work around long runtimes by keeping my knitr cache on CI. One potential unconf idea is working on some options to run scripts and save their environments as targets, which I think might help more people get aboard. We had an open issue discussing this but I think it disappeared in the renaming of "maker" to "remake".
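For illustration, the "run a script and save its environment as a target" idea could be as small as this (the helper name and the target-file convention are hypothetical):

```r
run_script_target <- function(script, target = sub("\\.R$", ".RData", script)) {
  env <- new.env()
  sys.source(script, envir = env)                   # run the script in its own environment
  save(list = ls(env), envir = env, file = target)  # the saved environment is the target
  invisible(target)
}

# e.g. run_script_target("02-fit-model.R") produces 02-fit-model.RData,
# which downstream scripts or tests can load()
```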

On the lifecycle issue, I think something like rrtools::use_docker to put in local rocker infrastructure with a fixed R version, or rrtools::use_packrat/use_checkpoint, would be helpful for "freezing" an analysis in place.
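As a very rough sketch of what such a use_docker() might write (the function doesn't exist yet; the rocker image tag and paths are illustrative assumptions):

```r
use_docker <- function(r_version = "3.4.0") {
  writeLines(c(
    sprintf("FROM rocker/verse:%s", r_version),  # pin the R version via a versioned rocker image
    "COPY . /home/rstudio/compendium",
    "RUN Rscript -e 'devtools::install(\"/home/rstudio/compendium\", dependencies = TRUE)'"
  ), "Dockerfile")
}
```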

@benmarwick

I'm following this discussion with great interest, especially the comments about testing and re/make, since I haven't found a natural place for those in my work, and I'm curious to see how others are using them.

I think there's scope for some easy wins with tooling to increase the convenience of using CI for research-compendia-packages. There was a nice paper on using CI for research pipelines recently: http://www.nature.com/nbt/journal/vaop/ncurrent/abs/nbt.3780.html

Related to CI, I like @noamross's idea of rrtools::use_docker. I once proposed something vaguely similar for devtools, and there are a few related projects mentioned in that thread that may be relevant here.

Seems like @stephlocke's excellent new https://github.com/stephlocke/pRojects could be a good prototype for rrtools

@stephlocke

If y'all want to make use of pRojects I'm happy to do the work to fold it into rOpenSci - I've made a lot of the Issues etc. up for grabs and am keen to get others' opinions and requirements built in :)

@noamross
Author
noamross commented Apr 20, 2017

One idea that we might be able to make progress on here is figuring out what subset of package checks from various tools (R CMD check, lintr, goodpractice, etc.) can be easily applied to non-package projects, and possibly creating some lightweight wrappers, extensions, and/or docs for using them on directories at various levels of rigor (e.g., something with a full-blown build system and package-style DESCRIPTION, something with standalone scripts, data, and Rmds).

A possible home for this would be @stephlocke's https://github.com/lockedata/PackageReviewR

@hadley
Member
hadley commented Apr 20, 2017

@noamross random aside: rigger could be a fun name for this package (sounds like both rigger and rigour)

@noamross
Author

@hadley +1, as in a former career, I actually had the title of chief rigger. ⛵️

@njtierney

It seems to me that there should be some sort of golden path to tread for reproducibility, sort of like

@hadley's (approximate) Tolstoy quote:

Tidy data are all alike; Untidy data is untidy in its own way

It seems that the same could mostly be said for reproducibility. There are very many ways that you can do reproducibility wrong, but my sense is that there should be a small set of ways to do it right.

That said, I think talking about what went wrong in the past could still be helpful, sort of like Etsy's "Guilt-Free Post Mortems" that I have heard about on NSSD.

So, do you think it might be useful to collect anti-patterns to reproducibility? I, ahem, may have a few horror stories to share. We could also, of course, share success stories and things that went well. My thought is that this would help identify common problems, strong patterns for reproducibility, and common anti-patterns.

Thoughts?

@batpigandme

@njtierney I hadn't heard of the guilt-free post-mortems, but I definitely think the repro research autopsies would fit well with/be of interest for #64.

@benmarwick

There's a nice recent collection of success stories in https://www.practicereproducibleresearch.org, with a few chapters from R users (disclaimer: including me).

I'm not aware of any surveys of anti-patterns; that sounds like it would be very useful, and would help to give some priority to the items in various how-to lists for reproducible research (e.g. http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003285).

If we could identify the 20% of patterns or behaviors that are responsible for 80% of irreproducibility in research (if such a thing is quantifiable), and target those with some tooling to change those behaviors, there could be a big win.

@noamross
Author
noamross commented Apr 29, 2017

Let me try to summarize some of the stuff above as well as my own evolving thoughts on this. I realize that this project has several components, any one of which could be an unconf project on its own:

Bootstrapping compendia

The "project template" approach has been tackled repeatedly, and well, but beyond data/ and R/ (functions) directories and maybe DESCRIPTION, there's lots of heterogeneity to analysis project structure. Some projects consists of a series of R notebooks, some a series of scripts which may or may not be linked together. Output types vary a lot. There are several options for build systems, including Make and remake. For this reason, its hard to automatically bootstrap projects that are already underway with use_* functions - you can't overlay a build system unless you know the project logic.

A good start might be creating a set of project templates (make_project, remake_project, notebook_project?) that match a few common idioms. @gaborcsardi's mason may provide a good approach here, providing project setup with a small set of questions. With the choice of idiom set, functions to enhance a repo's reproducibility with use_docker or use_packrat/checkpoint and associated CI templates should be easier. Maybe information on the idiom can be stored in a dotfile or DESCRIPTION so that other bootstrapping and CI functions can use it.

Code analysis and testing

  • Static code analysis is relatively low-hanging fruit. A package that wraps various static analysis tools, much like goodpractice, to report on linting, spelling, grammar (The grammar of grammar #53), use of common anti-patterns, etc., in all R and Rmd files in a project would be helpful (a rough sketch follows this list). We could also think of other static metrics to report, or things to test/lint, that are specific to analyses rather than packages. (Comment/code ratio?)
  • Dynamic analysis and testing is trickier, because automatically running all the code in the project requires knowing the project logic. If, as above, the project is using one of the common idioms/build systems, then a test_project() function would be able to run all the code. If the project has tests in scripts using testthat/assertr/assertive or Rmd test blocks (testthat chunks for R Markdown documents #38), their results could be collected and returned. What other types of dynamic analysis could be run? covr runs dynamically but doesn't quite apply.
  • A beginner-friendly way to do code analysis might be to have "tiers" of tests, and have a package suggest moving up a tier when all tests in the first tier are passed. ("All your code passes! How about adding a build system?"). This could eventually lead to something like different colored badges.
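A rough sketch of the static-analysis wrapper idea, using only lintr's existing lint() function; the project_lint() wrapper itself is hypothetical:

```r
library(lintr)

# lint every .R and .Rmd file under a project directory and pool the results
project_lint <- function(path = ".") {
  files <- list.files(path, pattern = "\\.(R|Rmd)$",
                      recursive = TRUE, full.names = TRUE)
  do.call(rbind, lapply(files, function(f) as.data.frame(lint(f))))
}

# most frequent lint messages across the whole compendium
# head(sort(table(project_lint()$message), decreasing = TRUE))
```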

Training and review guides

  • The above will be most helpful when accompanied by peer- and self-review. We've found checklists and guides very helpful for this in our package reviews. Adapting the Mozilla Science Lab and rOpenSci guides into checklists specifically for peer-reviewing research compendia would be very useful. This could be the other side of the coin of the anti-pattern idea described above.

@stephlocke

I'm really interested in this and have started tackling the project setup and the testing/review of packages. I'd love to make these more community-developed so they're not limited by my time, as I think the community at large can benefit.

@ateucher
Member
ateucher commented May 3, 2017

I really love this idea. In my work it is my mission to get people using tools to make their work more reproducible - it can be overwhelming for beginners to learn best practices and tools, so having a starting point is great. I think we have to be careful to make this approachable enough that it doesn't scare new users off (as @njtierney mentioned when talking about the 'pit of success').

+1 to building on @stephlocke's pRojects package. I wonder too if there is something that can be done to help users create their own templates, perhaps built off some boilerplates? I've often heard people say that they would use ProjectTemplate etc but it's not quite the template they need. In government, we often have to do very similar projects repeatedly so custom templates would be very helpful.

As an aside, another build system I've recently heard about is drake - I've never tried it but heard an interview with the author on the r-podcast.

@stephlocke
stephlocke commented May 3, 2017

So I'm aiming to add a bunch of flags to the starter projects so folks can set exactly what they want in them.
At the moment, each function has parameters and these can also be tuned in the add-in. Of course, someone has to know about add-ins to be able to get the benefit of the GUI which might be a catch-22.

I'm hoping to extend the list of options to include, for instance, Makefiles. My attempts at Makefiles have been dismal failures, though.

The package can also be wrapped, so if someone wants a bespoke project function with internal standards, they can depend on pRojects and use createBasicProject as a starter, like the Analysis and Training functions do: https://github.com/lockedata/pRojects/blob/master/R/createAnalysisProject.R

@batpigandme

@ateucher “it can be overwhelming to beginners to learn best practices and tools so having a starting point is great” Yes, a thousand times yes-- and as someone who got overwhelmed by learning so many "best practices" in the beginning, this can lead to bad habits and/or just wasted time because the workflow you've got going satisfices.

@stephlocke It's interesting that you mention Makefiles, because when I tweeted a "Reproducible workflows in R" post I'd found a few weeks ago (which included make-related things), one of the responses was that there's a related conceptual obstacle of some sort that's common to R users.
Orig tweet: https://twitter.com/dataandme/status/856916251196764162
Unfortunately, some quote-tweeting and whatnot has me unable to find the rest of the thread, but I'm wondering if there's an opportunity here to help fill in that conceptual gap too (which could even be through really good documentation).

@ateucher
Member
ateucher commented May 3, 2017

@batpigandme interesting that post is by Will Landau, as he is the author of drake. Maybe that was the start of his inspiration for it.

@batpigandme
batpigandme commented May 3, 2017 via email

@stephlocke

My memory of trying to make the Makefile work is hazy, but I had a shell script that worked and I was trying to convert it over with no luck whatsoever. And this was my trivial case!
https://github.com/stephlocke/ReproducibleGLM-step0/blob/master/deploy.sh
https://github.com/stephlocke/ReproducibleGLM-step0/blob/master/Makefile

@noamross
Author
noamross commented May 4, 2017

A possible Makefile idiom might be this example of a set of Rmd notebook files from Lincoln Mullen: https://github.com/lmullen/rmd-notebook/blob/master/Makefile

Another possible Makefile idiom would be a set of scripts that should be run in order. If the scripts have numbered filenames (01-clean-data.R, 02-fit-model.R), that might make an easy template setup possible. I'm not sure how to make an easy setup for the output/intermediate files, though, as they'll vary so much project-to-project.
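For illustration, a minimal Makefile for that numbered-scripts idiom might look something like this (the file names and the .done sentinel files are assumptions, not a standard; recipe lines must be indented with tabs):

```make
# each script writes its results into output/ and touches a sentinel file,
# so make can tell which steps are up to date
all: output/02-fit-model.done

output/01-clean-data.done: 01-clean-data.R data/raw.csv
	Rscript 01-clean-data.R
	touch $@

output/02-fit-model.done: 02-fit-model.R output/01-clean-data.done
	Rscript 02-fit-model.R
	touch $@

clean:
	rm -f output/*.done
```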

As for testing, I think I would separate an analysis-repo testing package from a package reviewing package, though the package reviewing task is something I'm interested in, too!

@cboettig
Member

ha, that's yet another topic in which I could greatly use the wisdom of @jennybc. My students start from such different backgrounds that my grading tries to reflect the amount of effort &/or learning much more than what is objectively accomplished...

@hadley
Member
hadley commented May 17, 2017

One simple thing would be to add some metadata to DESCRIPTION that identifies this as a compendium (not a package), and then travis could either:

  • call rmarkdown::render() on each .Rmd in vignettes/
  • if a Makefile is present, call make

@hadley
Member
hadley commented May 17, 2017

Also see usethis, where I've been pulling out all the use_* functions from devtools in a way that's easier to reuse.

@noamross
Author

I'm more of a "sensible defaults with config" person, but I really like the metadata in DESCRIPTION convention. e.g., Compendium: default, but also Compendium: remake.

I also just started wrapping @jennybc's tree implementation mostly for the purpose of showing project directory navigation in a README.

@cboettig
Member
cboettig commented May 17, 2017

@hadley brilliant. I love the idea of just leveraging DESCRIPTION metadata & use_travis() here. I've just added the line script: R -f test.R to .travis.yml to skip the R CMD build/CHECK stuff and run a tiny R script which calls rmarkdown on any .Rmd in the directory: https://github.com/cboettig/compendium
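The test.R itself can be as small as this sketch (an illustration of the idea described above, not the actual script in that repo):

```r
# render every .Rmd in the repository; travis fails the build if any of them error
rmds <- list.files(pattern = "\\.Rmd$", recursive = TRUE)
for (f in rmds) {
  message("Rendering ", f)
  rmarkdown::render(f)
}
```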

Yay! My students can now face the 😂 and 😭 and 🔥 of travis builds on their very own homework assignments.

@hadley
Member
hadley commented May 17, 2017

I think the place to start would be to try and use the existing type field. If that doesn't break install.packages(), then you could have:

```
Type: compendium
Type: compendium/remake
# etc
```

Or maybe

```
Type: article
```

Figuring out how to cleanly parameterise travis builds in this way is useful enough that we should do it independently of the other compendium issues. We just need some simple convention + default that makes this easily extensible.

Maybe we could have the value of the field be the name of the package, and then travis just installs that package then calls pkgname::build("."). What do you think @jimhester ?

@gaborcsardi
Contributor

If that doesn't break install.packages(), then you could have:

Not only is R CMD install fine with this, R CMD check is "fine", too. Although it does nothing.

If you use TypeNote, then you can even run R CMD check on it. (I am not sure if you would always want to be able to, just an observation.)

@cboettig
Member

oh right, using a custom script: command in .travis.yml replaces R travis's call to install as well as to check. I think this is a minimal example that now both installs based on the DESCRIPTION file and knits any .Rmds: https://github.com/cboettig/compendium

Of course a utility that generated such a template .travis.yml and any additional test.R script based just on the Type: designation in DESCRIPTION would be way cool.

@stephlocke

I did some work on this type of generation on the weekend (got a post scheduled on it) but this custom travis file plus shell scripts generates each Rmd in a specific dir
https://github.com/lockedata/pres-stub

Dynamic file generation, like generating DESCRIPTION and .travis.yml, is the next thing on the pRojects todo list for us to get to grips with, so that we can start producing files whose content is based on the user's input.

Having the use_* functions in a separate package will help, as we make extensive use of them in pRojects.

(Thanks for the kind words about pRojects @cboettig btw - very much appreciated!)

@hadley
Member
hadley commented May 17, 2017

One problem with using Type is that it prevents a project from being both a package and a compendium.

@cboettig I think such a capability should be baked into travis. Auto-building an .travis.yml is going to be fragile.

@jimhester
Contributor

oh right, using a custom script: command in .travis.yml replaces R travis's call to install as well as to check.

No it doesn't; you should be able to use the default install: step (assuming you have a DESCRIPTION file) and just override the script: step to do something other than run R CMD check, e.g. script: R -e 'pkgname::build()' in Hadley's example.

@Pakillo
Member
Pakillo commented May 18, 2017

Hi,
Independently of using vignettes or another folder to hold the Rmd reports, we have found it convenient to have two separate folders: one for preliminary stuff and one for the final report or manuscript. For example, we use an analyses folder to store all the exploratory data analyses, complete model runs with residual plots, etc. And then a manuscript folder in which the Rmd only includes the stuff that goes in the manuscript. I would find it very messy to have all the Rmds in the same folder. But that's just our experience, of course :)

For the record, here are other repos/templates for projects structured as R packages still not mentioned (I think):

@hadley
Member
hadley commented May 18, 2017

I have summarised my takeaways from this discussion into a research compendia proposal (google doc). Comments welcome!

@benmarwick

Some observations on directory naming practices of research compendia spotted in the wild:

| name of main analysis directory | n | sources |
| --- | --- | --- |
| analysis | 4 | https://github.com/duffymeg/BroodParasiteDescription, https://github.com/cylerc/AP_SC, https://github.com/benmarwick/mjbtramp, https://github.com/benmarwick/ktc11 |
| vignettes | 3 | https://github.com/famuvie/ArchaeologicalFloors, https://github.com/benmarwick/1989-excavation-report-Madjebebe, https://github.com/sje30/eglen2015/ |
| manuscript/s | 2 | https://github.com/cboettig/nonparametric-bayes, https://github.com/benmarwick/Pleistocene-aged-stone-artefacts-from-Jerimalai--East-Timor |
| vignettes/manuscript | 1 | https://github.com/USEPA/LakeTrophicModelling/ |

It's a very small sample, but it seems that analysis and manuscript/s are popular non-standard names for the core directory of the research compendium. This reflects the orientation of these compendia (and my interest) toward scholarly publication as the final product. This can be contrasted with other research contexts, such as reports for business applications, which I'm less aware of. But I guess a manuscripts directory doesn't make much sense for researchers in commercial settings.

Naming the main compendium directory analysis would seem to be a natural choice that makes sense for both academic and commercial research contexts.

@hadley
Member
hadley commented May 18, 2017

Is there something more general than analysis but more specific than scripts? process/? activity/? task/?

Should it be a verb or a noun? (I think a noun because all the other directories are nouns). Should it be singular or plural? (This is harder because they're mostly singular apart from vignettes/).

OTOH scripts/ is nice because the directory name usually defines the type of its contents. OTOOH is a notebook a script? (yes?) Is a data file a script? (no)

@karthik
Member
karthik commented May 18, 2017

compose?

@jhollist
Member

Thanks for the "in the wild" summary @benmarwick! "Wild" very definitely describes https://github.com/USEPA/LakeTrophicModelling/ and that ended up being a bit of a mess by the end. Wish I could say the decision to have a subfolder in vignettes was a conscious one, but it wasn't. Since that time I have used a separate manuscript folder.

@cboettig
Member

Maybe stating the obvious, but while this sample is great it's worth noting they aren't very independent. E.g. I think Megan's original layout was more representative of what I see people doing before we all, um, piled in duffymeg/BroodParasiteDescription#1

Okay, so I'm all for establishing convention over configuration, but I'm not actually clear on why we need to specify a choice here. I see that using the existing vignettes saves config, but here it seems the same amount of config could just treat any nonstandard top level dir this way?

@cboettig
Member

One significant issue that isn't addressed in Hadley's excellent doc summary is publishing / sharing of output. I think this is one area that could benefit hugely from both more normative conventions and additional tooling.

Personally, I like to see the analysis/ dir (or whatever it is called) use github_document as the output format (it diffs & displays nicely on GitHub). For a final product like a manuscript Rmd, a PDF is probably more appropriate, but during development (or perhaps for purely supplementary material content) it's nice to commit something more text-based and diff-able (with no risk of code being cut off at the margin).

Even so, this approach has problems, or at least open questions about how to do it. The default of dumping output .md / .pdf etc. into the same working dir as the input .Rmd does save new users lots of headaches about paths, but it also defies the convention of separating inputs and outputs, makes it harder to find relevant content (particularly with GitHub rendering .Rmd versions as well now -- students coming from Jupyter click on these and then ask: but where are the figures??), and complicates a manual version of make clean (e.g. deleting an 'output' dir).

(Aside to the RStudio team: it would also be easier to make github_document more normative if the option wasn't quite so buried in the RStudio menu!)

Contrast this situation to the case in R packages where we now have pkgdown as a nice way to share vignette output while keeping a clean package repo containing input and output separately.

@noamross
Author
noamross commented May 18, 2017

@cboettig This, to me, gets back to the tension between "working" and "final" compendia. I like the solution of .md output files in the analysis/ directory, but more "final" documents in vignettes/ where they have other outputs, including pkgdown docs. The .md aren't quite "outputs", but working intermediates. Similarly, my working repositories usually end up having things like model objects saved as .rds files, which aren't ultimate outputs but are important to retain during the working phase so that they can be shared and inspected.

A bigger challenge in input/output is data/. The idea of data as output is not adequately addressed in a lot of workflow templates, and having that data documented and installable is great. But if you want to document and test both input and output data, that puts them in the same place, and it's not immediately obvious how to distinguish between the two.

(I think getting GitHub to render .Rmd files was a mistake, myself.)

@cboettig
Member

Just wanted to second the issue on data output. The Google doc comments reflect a lot of variation in how we view derived data:

  • Does it belong in data/, or is that just for raw/input data?
  • Should every/most analyses save derived/tidy data, or is cleaned data just one more intermediate object to ignore?
  • Should we have a mechanism to save/share the (derived) data behind each figure?

Closing off my earlier comment about publishing/discovering final results, I agree with the general proposal that pkgdown provides a good solution for this; meanwhile more intermediate / low-stakes products can be left in more messy form as, say, github_markdown in an analysis dir.

@gshotwell

I'm a little late to the party on this one, but I wanted to sketch out the approach I took in easyMake in case it's helpful for the "working" and "final" compendia tension. What easyMake does is read the source files in an R project for input (like read.csv()) and output (like write.csv()) functions in order to automatically detect dependencies between the different files. It then uses this to produce a Makefile for that project.
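A toy sketch of that dependency-detection idea (this is not easyMake's actual code; the regexes only cover literal file names passed to read.csv()/write.csv()):

```r
detect_io <- function(script) {
  code <- paste(readLines(script), collapse = "\n")
  # pull out the quoted file names in read.csv() / write.csv() calls
  reads  <- regmatches(code, gregexpr('read\\.csv\\([^"]*"[^"]+',  code))[[1]]
  writes <- regmatches(code, gregexpr('write\\.csv\\([^"]*"[^"]+', code))[[1]]
  list(script = script,
       reads  = sub('.*"', "", reads),   # keep only the quoted file name
       writes = sub('.*"', "", writes))
}

# a file written by one script and read by another becomes a Makefile dependency
```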

I think this is a pretty promising approach to resolving the "working" and "final" tension that @noamross mentioned because so long as you break your analysis into scripts you should be able to generate a working Makefile from those scripts. This helps new users get started with Makefiles and also is a good way to take a look at the whole project and see how your workflow could be improved.

I'm not sure if easyMake is an optimal implementation of this approach but the idea of automatically detecting dependencies based on the IO functions might be worth incorporating into the research compendia packages.

@hadley
Member
hadley commented May 22, 2017

I've fleshed out some notes on what a reactive build system might look like for R: https://docs.google.com/document/d/1avYAqjTS7zSZn7JAAOZhFPkhkPvYwaPVrSpo31Cu0Yc/edit#. This started out like "make for R", but has ended up fairly far away, drawing just as much inspiration from shiny. Your comments are greatly appreciated!

@noamross
Author

Testing package development happening here: https://github.com/ropenscilabs/checkers
Review guide happening here: https://docs.google.com/document/d/1vSbT9dcGTeUYDvSHclr3U8fLAqKxQsSBk3DJdMaK5ak/edit#

@benmarwick

Inspired by the discussion here, I recently worked with @danki, @MartinHinz and others at @ISAAKiel to make a start on an rrtools package for bootstrapping a research compendium: https://github.com/benmarwick/rrtools

We carefully reviewed the literature on best practices and tried to boil it down to a novice-friendly workflow. It's an opinionated approach, for sure, but it also gives options at key decision points that try to capture the diversity we see in the discussion above and elsewhere.

No doubt we've missed some variants, but we're happy to take suggestions to make it more broadly useful!

@stefaniebutland
Member

Blog post about this project: https://ropensci.org/blog/2017/06/20/checkers/
