The 9th International Workshop on

Search-Based
Software Testing


Austin, TX, United States - May 16-17, 2016

Held in conjunction with ICSE 2016 - ACM/IEEE International Conference on Software Engineering


About the Workshop


Search-Based Software Testing (SBST) is the application of optimizing search techniques (for example, Genetic Algorithms) to solve problems in software testing. SBST is used to generate test data, prioritize test cases, minimize test suites, optimize software test oracles, reduce human oracle cost, verify software models, test service-orientated architectures, construct test suites for interaction testing, and validate real-time properties (among many other things).
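
As a deliberately simplified illustration of this idea, the sketch below (not taken from any workshop paper; the method under test, class names, constants, and parameter settings are all invented) uses a small genetic algorithm to evolve a pair of integer test inputs until a hard-to-reach branch of a hypothetical method is covered, with the classic branch-distance measure serving as the fitness function.

    // Illustrative only: a tiny genetic algorithm that searches for test inputs
    // (x, y) covering a hard-to-reach branch in a hypothetical method under test.
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.Random;

    public class SearchBasedTestDataGeneration {

        static final Random RND = new Random(7);

        // Hypothetical unit under test: we want an input that takes the "true" branch.
        static boolean underTest(int x, int y) {
            return x >= 10000 && y <= -10000;
        }

        // Branch distance for the conjunction above: 0 means the branch is covered,
        // larger values mean the predicate is "further" from being true.
        static long fitness(int[] c) {
            long dx = Math.max(0, 10000L - c[0]);
            long dy = Math.max(0, c[1] + 10000L);
            return dx + dy;
        }

        static int[] randomIndividual() {
            return new int[] { RND.nextInt(2001) - 1000, RND.nextInt(2001) - 1000 };
        }

        static int[] crossover(int[] a, int[] b) {
            return new int[] { a[0], b[1] };              // single-point crossover (two genes)
        }

        static int[] mutate(int[] c) {
            int[] child = c.clone();
            int gene = RND.nextInt(child.length);
            child[gene] += RND.nextInt(1001) - 500;       // random perturbation of one gene
            return child;
        }

        public static void main(String[] args) {
            int[][] population = new int[50][];
            for (int i = 0; i < population.length; i++) population[i] = randomIndividual();

            for (int gen = 0; gen < 1000; gen++) {
                Arrays.sort(population, Comparator.comparingLong(SearchBasedTestDataGeneration::fitness));
                if (fitness(population[0]) == 0) break;   // target branch covered
                // Keep the fitter half, breed the other half from it by crossover + mutation.
                int half = population.length / 2;
                for (int i = half; i < population.length; i++) {
                    population[i] = mutate(crossover(population[RND.nextInt(half)], population[RND.nextInt(half)]));
                }
            }
            Arrays.sort(population, Comparator.comparingLong(SearchBasedTestDataGeneration::fitness));
            System.out.println("Best input " + Arrays.toString(population[0])
                    + " covers the branch: " + underTest(population[0][0], population[0][1]));
        }
    }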

The objectives of this workshop are to bring together researchers and industrial practitioners both from SBST and the wider software engineering community to collaborate, to share experience, to provide directions for future research, and to encourage the use of search techniques in novel aspects of software testing in combination with other aspects of the software engineering lifecycle.

The 9th International Workshop on Search-Based Software Testing (SBST) will be co-located with ICSE 2016 in Austin, Texas on May 16-17, 2016.



Past Workshops


Call for Submissions


Practitioners and researchers are invited to submit:

Full Papers

Maximum of 10 pages, describing original research - either empirical or theoretical - in SBST, practical experiences using SBST, or SBST tools.

Short Papers

Maximum of 4 pages, describing novel techniques, ideas, or positions that have yet to be fully developed; or that discuss the importance of recently published SBST work by another author in setting a direction for the SBST community.

Position Papers

Maximum of 2 pages, analyzing trends in SBST and raising issues of importance. Position papers are intended to seed discussion and debate at the workshop, and will be reviewed with respect to relevance and their ability to spark discussions.

Competition Reports

Maximum of 4 pages. We invite researchers, students, and industrial developers to design innovative new approaches to software test generation. Find out more.


In all cases, papers should address a problem in the software testing/verification/validation domain or combine elements of those domains with other concerns in the software engineering lifecycle. Examples of problems in the software testing/verification/validation domain include (but are not limited to) generating test data, prioritizing test cases, constructing test oracles, minimizing test suites, verifying software models, testing service-orientated architectures, constructing test suites for interaction testing, and validating real-time properties.

The solution should apply a metaheuristic search strategy such as (but not limited to) random search, local search (e.g. hill climbing, simulated annealing, and tabu search), evolutionary algorithms (e.g. genetic algorithms, evolution strategies, and genetic programming), ant colony optimization, and particle swarm optimization.
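
As a minimal sketch of the first of these strategies, the hill climber below (again with invented names, constants, and budget) starts from a random integer input and keeps any neighboring input whose branch distance is no worse, stopping once the target branch is covered or a fixed evaluation budget is spent.

    // Illustrative only: hill climbing on the branch distance for "x == 100".
    import java.util.Random;

    public class HillClimbingTestDataGeneration {

        // Hypothetical unit under test; the search looks for an input that reaches the branch.
        static boolean underTest(int x) {
            return x == 100;                              // target branch
        }

        // Branch distance for "x == 100": |x - 100|; 0 means the branch is covered.
        static int branchDistance(int x) {
            return Math.abs(x - 100);
        }

        public static void main(String[] args) {
            Random rnd = new Random();
            int candidate = rnd.nextInt(2_000_001) - 1_000_000;   // random starting input
            int budget = 100_000;                                 // fitness evaluation budget
            int evaluations = 0;

            while (branchDistance(candidate) > 0 && evaluations < budget) {
                // Neighborhood: step up or down by 1..1000.
                int neighbor = candidate + (rnd.nextBoolean() ? 1 : -1) * (1 + rnd.nextInt(1000));
                evaluations++;
                if (branchDistance(neighbor) <= branchDistance(candidate)) {
                    candidate = neighbor;                         // accept non-worsening moves
                }
            }
            System.out.println("x = " + candidate + " covers the branch: " + underTest(candidate)
                    + " (after " + evaluations + " evaluations)");
        }
    }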

Help spread the word! Download the Call for Submissions in PDF format!


Format and Submission

All papers must conform, at time of submission, to the ICSE 2016 Formatting Guidelines.

All submissions must be in PDF format. Make sure that you are using the correct ACM style file (for LaTeX, "option 2" style) and that the paper is in the US letter page format. All submissions should be performed electronically through EasyChair.

Accepted papers will be published as an ICSE 2016 Workshop Proceedings in the ACM and IEEE Digital Libraries. The official publication date of the workshop proceedings is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of ICSE 2016. The official publication date affects the deadline for any patent filings related to published work.


Important Dates


Submission Deadline: January 29, 2016 (extended from January 22, 2016)
Competition Report Deadline: February 18, 2016
Author Notification: February 19, 2016
Camera-Ready: February 26, 2016
Workshop: May 16-17, 2016

Program

Sabine Room


Monday, May 16

  • 8:45 - 9:00 AM: Introduction [Slides]
  • 9:00 - 10:30 AM: Keynote - Tim Menzies [Slides]
  • 10:30 - 11:00 AM: Break
  • 11:00 AM - 12:30 PM: Paper Session 1
    • "Evolutionary Testing for Crash Reproduction" - Mozhan Soltani, Annibale Panichella and Arie van Deursen [Slides]
    • "A Preliminary Empirical Assessment of Similarity for Combinatorial Interaction Testing of Software Product Lines" - Stefan Fischer, Roberto Erick Lopez-Herrejon, Rudolf Ramler and Alexander Egyed [Slides]
    • "On the Diffusion of Test Smells in Automatically Generated Test Code: An Empirical Study" - Fabio Palomba, Dario Di Nucci, Annibale Panichella, Rocco Oliveto and Andrea De Lucia [Slides]
  • 12:30 - 2:00 PM: Lunch
  • 2:00 - 3:30 PM: Tool Competition Session
    • "Unit Testing Tool Competition - Round Four" - Urko Rueda, René Just, Juan Galleoti Galleoti and Tanja E.J. Vos [Slides]
    • "Budget-aware Random Testing with T3: Benchmarking at the SBST2016 Testing Tool Contest" - Wishnu Prasetya [Slides]
    • "EvoSuite at the SBST 2016 Tool Competition" - Gordon Fraser and Andrea Arcuri
    • "JTExpert at the Fourth Unit Testing Tool Competition" - Abdelilah Sakti, Gilles Pesant and Yann-Gaël Guéhéneuc
  • 3:30 - 4:00 PM: Break
  • 4:00 - 5:30 PM: Discussion Panel - "Rethinking SBST - Non-standard Approaches"
  • 6:30 - 8:30 PM: Social Event at Manuel's Great Hills

Tuesday, May 17

  • 9:00 - 10:30 AM: Keynote - Claire Le Goues [Slides]
  • 10:30 - 11:00 AM: Break
  • 11:00 AM - 12:30 PM: Paper Session 2
    • "Extending Search-Based Software Testing Techniques to Big Data Applications" - Erik M. Fredericks and Reihaneh H. Hariri [Slides]
    • "Automated Search for Good Coverage Criteria" - Phil McMinn, Mark Harman, Gordon Fraser and Gregory Kapfhammer [Slides]
    • "Strong Mutation-Based Test Data Generation using Hill Climbing" - Francisco Carlos M. Souza, Mike Papadakis, Yves Le Traon and Marcio Eduardo Delamaro
    • "Hitchhikers Need Free Vehicles! Shared Repositories for Statistical Analysis in SBST" - Gregory Kapfhammer, Phil McMinn and Chris Wright [Slides]
  • 12:30 - 2:00 PM: Lunch
    • Steering Committee Meeting
  • 2:00 - 3:30 PM: EvoSuite Tutorial - Gordon Fraser [Slides]
  • 3:30 - 4:00 PM: Break
  • 4:00 - 5:30 PM: Awards and Closing Remarks [Slides]

Keynote Speakers


Claire Le Goues
Passing tests is easy: when full coverage isn't enough

[View Slides]

Research in automated program improvement seeks to improve programs by, e.g., fixing bugs, porting functionality, or improving non-functional properties. Most such techniques, whether search-based or semantic, rely on test cases to validate transformation correctness, in the absence of formal correctness specifications. In this talk I will discuss the progression of the area of automated bug repair in particular. I will especially focus on the key challenge of assuring, measuring, and reasoning about the quality of bug-fixing patches. I will outline recent results on the relationship between test suite quality and origin, and output quality, with observations about both semantic and heuristic approaches. I will conclude with a discussion of potentially promising future directions and open questions, especially focusing on the potential synergies with search-based automated testing.

Claire Le Goues is an Assistant Professor in the School of Computer Science at Carnegie Mellon University in the Institute for Software Research. She is broadly interested in how engineers can construct, maintain, evolve, and then assure high-quality, real-world and open-source systems. Her research is in Software Engineering, inspired/informed by program analysis and transformation, with a side of search-based software engineering. She focuses on automatic program improvement and repair (using stochastic or search-based as well as more formal approaches such as SMT-informed semantic code search); assurance and testing, especially in light of the scale and complexity of modern evolving systems; and quality metrics. For more, see her vita, list of publications, or home page.

Tim Menzies
Data Science² = (Test * Data Science)

[View Slides]

I will argue that the limits to test are really the limits to science and, also, the limits to data science.

This is an important point since half of “data science” is “science”, and science is about communities studying each other's models while trying to refute or improve those models. Yet much of the current work is concerned with either (1) the systems layer required to reason over data sets or (2) the creation of dashboards that just let anyone view the data.

Missing in much of that work are the tools required to continually share, critique, and maybe refute and improve the models generated by others. Accordingly, this talk explores what extra is required in order to perpetually test the models generated by communities working on data science.

Tim Menzies (Ph.D., UNSW) is a full Professor in CS at North Carolina State University where he teaches software engineering and automated software engineering. His research relates to synergies between human and artificial intelligence, with particular application to data mining for software engineering. He is the author of over 230 refereed publications and is one of the 100 most cited authors in software engineering out of over 80,000 researchers (http://goo.gl/BnFJs). In his career, he has been a lead researcher on projects for NSF, NIJ, DoD, NASA, and USDA, as well as joint research work with private companies. Prof. Menzies is the co-founder of the PROMISE conference series devoted to reproducible experiments in software engineering (http://openscience.us/repo). He is an associate editor of IEEE Transactions on Software Engineering, Empirical Software Engineering, the Information and Software Technology journal, the Automated Software Engineering Journal, the Software Quality Journal, and the Big Data Research Journal. In 2015, he served as co-chair for the ICSE'15 NIER track. In 2016, he serves as co-general chair of ICSME'16. In 2017 he will serve as PC co-chair for SSBSE'17. For more, see his vita, list of publications or home page.

Discussion Panel


Topic: “Rethinking SBST - Non-standard Approaches”


Phil McMinn (Moderator)

University of Sheffield, UK

Judith Bishop

Microsoft Research, USA

Betty H.C. Cheng

Michigan State University, USA

Robert Feldt (Virtual Participant)

Blekinge Institute of Technology, Sweden

Mark Harman

University College London, UK

Annibale Panichella

Delft University of Technology, Netherlands

Tutorial


Gordon Fraser - A Tutorial on EvoSuite

EvoSuite is a tool that automatically generates test cases with assertions for classes written in Java. To achieve this, EvoSuite applies a novel hybrid approach that generates and optimizes whole test suites towards satisfying a coverage criterion. For the produced test suites, EvoSuite suggests possible oracles by adding small and effective sets of assertions that concisely summarize the current behavior; these assertions allow the developer to detect deviations from expected behavior, and to capture the current behavior in order to protect against future defects breaking this behavior. In this tutorial, Gordon Fraser will discuss how to use EvoSuite, how to integrate it into other tools, and how to extend it.
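
To give a flavor of the output, the hand-written JUnit 4 test below illustrates the style of test case EvoSuite generates: a short sequence of calls on the class under test followed by regression assertions that pin down the behavior observed at generation time. It is not actual EvoSuite output; the class under test (java.util.ArrayDeque) and the scenario are chosen only to keep the snippet self-contained, and JUnit 4 is assumed to be on the classpath.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertFalse;

    import java.util.ArrayDeque;
    import org.junit.Test;

    public class ArrayDequeRegressionTest {

        // One generated-style test case: exercise the class under test, then assert
        // on the observed results so that future behavioral changes are detected.
        @Test
        public void pushThenPopReturnsLastPushedElement() {
            ArrayDeque<Integer> deque = new ArrayDeque<>();
            deque.push(42);
            deque.push(7);
            int popped = deque.pop();
            assertEquals(7, popped);           // pins down the currently observed behavior
            assertEquals(1, deque.size());
            assertFalse(deque.isEmpty());
        }
    }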

Gordon Fraser is a lecturer in Computer Science at the University of Sheffield, UK. He received a PhD in computer science from Graz University of Technology, Austria, in 2007, and worked as a post-doc researcher at Saarland University, Germany. The central theme of his research is improving software quality, and his recent research concerns the prevention, detection, and removal of defects in software. More specifically, he develops techniques to generate test cases automatically, and to guide the tester in validating the output of tests by producing test oracles and specifications. He is chair of the steering committees of the International Conference on Software Testing, Verification, and Validation (ICST) and the International Symposium on Search-Based Software Engineering (SSBSE).

Tool Competition


After three successful competitions, we again invite developers of tools for Java unit testing at the class level (both SBST and non-SBST) to participate in the 4th round of our tool competition!

The contest is targeted at developers/vendors of testing tools that generate test input data for unit testing Java programs at the class level. Each tool will be applied to a set of Java classes taken from open-source projects and selected by the contest organizers. The participating tools will be compared on statement and branch coverage ratios, fault detection and mutation scores, and preparation, generation, and execution times.

Competition entries are in the form of short papers (maximum of 4 pages) describing an evaluation of your tool against a benchmark supplied by the workshop organizers. In addition to comparing your tool to other popular and successful tools such as Randoop, we will manually create unit tests for the classes under test, to be able to obtain and compare benchmark scores for manual and automated test generation.


Social Event



May 16, 6:30 - 8:30 PM
Manuel's Great Hills



10201 Jollyville Road,
Austin, TX 78759
Directions from the Workshop Venue (7 minute/0.3 mile walk)

One entree is included in the cost of workshop registration. Beverages must be purchased separately. Additional (non-registered) guests must purchase their own food.

Workshop Organizers


Gregory Gay

University of South Carolina, USA
Workshop Co-Chair

Justyna Petke

University College London, UK
Workshop Co-Chair

Tanja Vos

Universidad Politecnica de Valencia, Spain
Tool Competition Chair

Program Committee


Wasif Afzal (Malardalen University, Sweden)
Rob Alexander (University of York, UK)
Giuliano Antoniol (École Polytechnique de Montréal, Canada)
Andrea Arcuri (Scienta, Norway)
Earl Barr (University College London, UK)
Mariano Ceccato (FBK (Fondazione Bruno Kessler) Trento, Italy)
Francisco Chicano (University of Málaga, Spain)
Massimiliano Di Penta (University of Sannio, Italy)
Robert Feldt (Blekinge Institute of Technology, Sweden)
Erik Fredericks (Oakland University, USA)
Juan Pablo Galeotti (University of Buenos Aires, Argentina)
Mark Harman (University College London, UK)
Gregory Kapfhammer (Allegheny College, USA)
Zheng Li (Beijing University of Chemical Technology, China)
Phil McMinn (University of Sheffield, UK)
Changhai Nie (Nanjing University, China)
Annibale Panichella (Delft University of Technology, Netherlands)
Simon Poulding (Blekinge Institute of Technology, Sweden)
Marc Roper (University of Strathclyde, UK)
Federica Sarro (University College London, UK)
David White (University College London, UK)

Steering Committee


Phil McMinn (University of Sheffield, UK), Chair
Myra Cohen (University of Nebraska at Lincoln, USA)
Andrea Arcuri (Scienta, Norway)
John Clark (University of York, UK)
Wasif Afzal (Malardalen University, Sweden)
Simon Poulding (Blekinge Institute of Technology, Sweden)
Tanja Vos (Universidad Politecnica de Valencia, Spain)
Mark Harman (University College London, UK)
Gregory Gay (University of South Carolina, USA)
Giuliano Antoniol (Polytechnique Montreal, Canada)

Sponsors


About This Page


Page content and design by Gregory Gay, based on the Creative template by Start Bootstrap (licensed under Apache 2.0)

Header image by Roy Niswanger (licensed under Creative Commons 2.0)