Diving in to Educational Experiments: Process, Evaluation, and Reasoning in Support of Learning (DEEPER Support of Learning)

Call for Participation, in conjunction with the Learning Analytics and Knowledge Conference (LAK 2019), March 4-8, Tempe, AZ

Overview

Randomized experiments in educational settings are an accepted standard for making causal claims about learning and pedagogical decisions. Widespread adoption of online learning has opened new opportunities to conduct large-scale experimentation at lower costs (Kizilcec and Brooks 2017). In light of these unique affordances, Stamper and colleagues proclaimed a new era in experimentation, putting forward the Super Experiment Framework (SEF), in which multiple experiments can be conducted online at the same time in authentic learning contexts (Stamper et al. 2012). Beyond the evolution of the infrastructure underpinning large-scale experimentation, exciting developments have also taken place in how experiments can be designed, such as micro-randomized trials in health research. Finally, the extension of mainstream statistical toolkits to Bayesian methods and the development of machine learning algorithms have offered new avenues for more rigorous analysis of data collected through experimental research.
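To give a concrete flavour of the kind of analysis such toolkits enable, the sketch below shows a minimal Bayesian comparison of two conditions in a hypothetical randomized experiment, using a Beta-Binomial model in Python with NumPy and SciPy. The condition names, counts, and prior are invented for illustration only and are not drawn from any study discussed here.

# A minimal sketch, assuming hypothetical outcome counts for two conditions
# of a randomized experiment; not part of the workshop materials.
from scipy import stats
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcomes: successes out of participants per condition.
successes = {"control": 48, "treatment": 61}
participants = {"control": 100, "treatment": 100}

# Beta(1, 1) prior; the posterior for each condition is
# Beta(1 + successes, 1 + failures).
posterior = {
    arm: stats.beta(1 + successes[arm], 1 + participants[arm] - successes[arm])
    for arm in successes
}

# Monte Carlo estimate of P(treatment rate > control rate).
draws = {arm: dist.rvs(size=100_000, random_state=rng)
         for arm, dist in posterior.items()}
print("P(treatment > control) ~", np.mean(draws["treatment"] > draws["control"]))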

Despite the promise of experimentation to strengthen the quality of insights gleaned through learning analytics, the uptake of experimental research in our community remains low. This may be due in part to tensions in educational research around evidence-based approaches (Nelson and Campbell 2017). Despite the prominence of experimentation as a method for establishing cause and effect in science, a counter-narrative challenges its relevance and application in education. For instance, Biesta (2010) critiqued key assumptions of evidence-based education (including experiments) as limiting the scope of educational effectiveness and restricting opportunities for participation in educational decision making. Besides the issues of whose evidence counts and how professional practices interact with evidence, highlighted by Biesta, educational researchers also raise concerns about the ethical considerations underpinning experimentation in authentic learning settings. The issues around the adoption and use of evidence from experimentation in authentic educational settings are well illustrated by the recent controversial study by Pearson presented at AERA, which attracted significant press coverage (Strauss 2018; Herold 2018) and ended with Pearson disavowing the inquiry as experimentation, arguing instead that the “messages weren’t psychological experiments, but product tests” (Fussell 2018).

The Learning Analytics and Knowledge (LAK) conference offers a unique venue where researchers interested in informing action to improve learner experiences through evidence can engage in dialogue about the use of experimental research in education. This workshop is envisioned as a stepping stone towards stronger use of experimental research within the learning analytics community. The workshop will also set the tone for a broader discussion of issues associated with participatory and rigorous experimental research in educational settings. The workshop aims to bridge the knowledge gap around experimental thinking and to broker connections among members of the learning analytics community who are open to and interested in conducting experiments. The workshop will briefly cover the fundamental concepts required for experimentation but will focus on introducing innovative experimental approaches. During the workshop we will create opportunities for researchers and practitioners to partner and design experiments.

Preliminary Schedule

(Times to be confirmed closer to date of event)

Note that there will be pre-conference activities as well.

Organizers

References

Biesta, Gert J. J. 2010. “Why ‘What Works’ Still Won’t Work: From Evidence-Based Education to Value-Based Education.” Studies in Philosophy and Education 29 (5): 491–503.

Chow, Shein-Chung, and Mark Chang. 2008. “Adaptive Design Methods in Clinical Trials–a Review.” Orphanet Journal of Rare Diseases 3 (1): 11.

Dweck, Carol S. 2009. “Mindsets: Developing Talent through a Growth Mindset.” Olympic Coach 21 (1): 4–7.

Fussell, Sidney. 2018. “Pearson Embedded a ‘Social-Psychological’ Experiment in Students’ Educational Software [Updated].” Gizmodo, April 18, 2018. https://gizmodo.com/pearson-embedded-a-social-psychological-experiment-in-s-1825367784.

Herold, Benjamin. 2018. “Pearson Tested ‘Social-Psychological’ Messages in Learning Software, With Mixed Results.” Education Week - Digital Education. April 17, 2018. https://blogs.edweek.org/edweek/DigitalEducation/2018/04/pearson_growth_mindset_software.html.

Kizilcec, René F., and Christopher Brooks. 2017. “Diverse Big Data and Randomized Field Experiments in Massive Open Online Courses: Opportunities for Advancing Learning Research.” In G. Siemens & C. Lang (eds.), Handbook on Learning Analytics & Educational Data Mining.

Murphy, S. A. 2005. “An Experimental Design for the Development of Adaptive Treatment Strategies.” Statistics in Medicine 24 (10): 1455–81.

Nelson, Julie, and Carol Campbell. 2017. “Evidence-Informed Practice in Education: Meanings and Applications.” Educational Research 59 (2): 127–35.

Stamper, John C., Derek Lomas, Dixie Ching, Steve Ritter, Kenneth R. Koedinger, and Jonathan Steinhart. 2012. “The Rise of the Super Experiment.” International Educational Data Mining Society, June. http://files.eric.ed.gov/fulltext/ED537230.pdf.

Strauss, Valerie. 2018. “Pearson Conducts Experiment on Thousands of College Students without Their Knowledge.” The Washington Post, April 23, 2018. https://www.washingtonpost.com/news/answer-sheet/wp/2018/04/23/pearson-conducts-experiment-on-thousands-of-college-students-without-their-knowledge/.