Perspectives on Data Science for Software Engineering 2016
- Tim Menzies, Laurie A. Williams, Thomas Zimmermann: Perspectives on Data Science for Software Engineering. Academic Press 2016, ISBN 978-0-12-804206-9
- Perspectives on data science for software engineering. 3-6
- Dongmei Zhang, Tao Xie: Software analytics and its application in practice. 7-11
- Tim Menzies: Seven principles of inductive software engineering. 13-17
- Barbara Russo: The need for data analysis patterns (in software engineering). 19-23
- Jim Whitehead: From software data to software theory. 25-28
- Dag I. K. Sjøberg, Gunnar R. Bergersen, Tore Dybå: Why theory matters. 29-33
- Andreas Zeller: Mining apps for anomalies. 37-42
- Venkatesh-Prasad Ranganath: Embrace dynamic artifacts. 43-46
- Meiyappan Nagappan, Emad Shihab: Mobile app store analytics. 47-49
- Earl T. Barr, Premkumar T. Devanbu: The naturalness of software. 51-55
- Pete Rotella: Advances in release readiness. 57-62
- Qingwei Lin, Jian-Guang Lou, Hongyu Zhang, Dongmei Zhang: How to tame your online services. 63-65
- Thomas Fritz: Measuring individual productivity. 67-71
- Christopher Theisen, Laurie A. Williams: Stack traces reveal attack surfaces. 73-76
- Zhitao Hou, Hongyu Zhang, Haidong Zhang, Dongmei Zhang: Visual analytics for software engineering data. 77-80
- Jeff Huang: Gameplay data plays nicer when divided into cohorts. 81-84
- Ayse Bener, Burak Turhan, Ayse Tosun, Bora Caglayan, Ekrem Kocaguneli: A success story in applying data science in practice. 85-90
- Kim Herzig: There's never enough time to do all the testing you want. 91-95
- Abram Hindle: The perils of energy mining. 97-102
- Elaine J. Weyuker, Thomas J. Ostrand: Identifying fault-prone files in large industrial software systems. 103-106
- Olga Baysal: A tailored suit. 107-110
- Günther Ruhe, Maleknaz Nayebi: What counts is decisions, not numbers - Toward an analytics design sheet. 111-114
- Baishakhi Ray, Daryl Posnett: A large ecosystem study to understand the effect of programming languages on code quality. 115-118
- Jacek Czerwonka: Code reviews are not for finding defects - Even established tools need occasional evaluation. 119-122
- Christian Bird: Interviews. 125-131
- Reid Holmes: Look for state transitions in temporal data. 133-135
- Thomas Zimmermann: Card-sorting. 137-141
- Diomidis Spinellis: Tools! Tools! We need tools! 143-148
- Tore Dybå, Gunnar R. Bergersen, Dag I. K. Sjøberg: Evidence-based software engineering. 149-153
- Leandro L. Minku: Which machine learning method do you need? 155-159
- Alberto Bacchelli: Structure your unstructured data first! 161-168
- Philip J. Guo: Parse that data! Practical tips for preparing your raw data for analysis. 169-173
- Stefan Wagner: Natural language processing is no free lunch. 175-179
- David Budgen: Aggregating empirical evidence for more trustworthy decisions. 181-186
- Ayse Bener, Ayse Tosun: If it is software engineering, it is (probably) a Bayesian factor. 187-191
- Fayola Peters: Becoming Goldilocks. 193-197
- Leandro L. Minku: The wisdom of the crowds in predictive modeling for software engineering. 199-204
- Massimiliano Di Penta: Combining quantitative and qualitative methods (when mining software data). 205-211
- Titus Barik, Emerson R. Murphy-Hill: A process for surviving survey design and sailing through survey deployment. 213-219
- Gail C. Murphy: Log it all? 223-225
- Michael W. Godfrey: Why provenance matters. 227-231
- Georgios Gousios: Open from the beginning. 233-237
- Trevor Carnahan: Reducing time to insight. 239-243
- Miryung Kim: Five steps for success. 245-248
- Bram Adams: How the release process impacts your software analytics. 249-253
- Andrew Meneely: Security cannot be measured. 255-259
- Sascha Just, Kim Herzig: Gotchas from mining bug reports. 261-265
- Stephan Diehl: Make visualization part of your analysis process. 267-269
- Alessandro Orso: Don't forget the developers! (and be careful with your assumptions). 271-275
- Brendan Murphy: Limitations and context of research. 277-281
- Andrew Meneely: Actionable metrics are better metrics. 283-287
- Martin J. Shepperd: Replicated results are more trustworthy. 289-293
- Harold Valdivia Garcia, Meiyappan Nagappan: Diversity in software engineering research. 295-298
- Natalia Juristo: Once is not enough. 299-302
- Per Runeson: Mere numbers aren't enough. 303-307
- Christian Bird: Don't embarrass yourself. 309-315
- Audris Mockus: Operational data are missing, incorrect, and decontextualized. 317-322
- Markku Oivo: Data science revolution in process improvement and assessment? 323-325
- Tim Menzies: Correlation is not causation (or, when not to scream "Eureka!"). 327-330
- Romain Robbes: Software analytics for small software companies. 331-335
- Nenad Medvidovic, Alessandro Orso: Software analytics under the lamp post (or what Star Trek teaches us about the importance of asking the right questions). 337-340
- Sira Vegas, Natalia Juristo: What can go wrong in software engineering experiments? 341-345
- Thomas Zimmermann: One size does not fit all. 347-348
- Venkatesh-Prasad Ranganath: While models are good, simple explanations are better. 349-352
- Lutz Prechelt: The white-shirt effect. 353-357
- Burak Turhan, Kari Kuutti: Simpler questions can lead to better insights. 359-363
- Jürgen Münch: Continuously experiment to assess values early on. 365-368
- Margaret-Anne D. Storey: Lies, damned lies, and analytics. 369-374
- Amy J. Ko: The world is your test suite. 375-378