We thank the reviewers for their careful attention and feedback. Below, we describe adjustments that we believe address the reviewers’ central concerns. Although the changes are relatively minor, we believe they will substantially improve the manuscript. We have room in the paper to make the changes described.
1
R2, R3, & R4 criticized the way we present our findings in terms of learning. We agree that we overstated our findings in this regard and will make a series of minor changes to address this issue:
- In our background section, we will clarify that “wide walls” support increased engagement, which can lead to learning, but that our study only directly measures engagement.
- We will explain that our results provide no direct evidence of learning but that we believe our findings for H2 provide “evidence in support of the theory that wider walls can also support learning.”
- We will articulate the reasoning behind the latter point: a) previous quantitative studies of Scratch measure learning as the presence of certain blocks; b) Moreno-Leon et al. (CHI 2017) validated these approaches by comparing them to expert assessments; c) because all users in our sample had access to variables before the treatment, an increase in non-SCV variable use is difficult to explain except through increased familiarity with data structures in general; d) Dasgupta et al. (CSCW 2016) and others described an increase in block use associated with exposure as evidence in support of learning.
- We will remove the phrase “strong evidence of learning” [R2]. We intended to convey the large effect sizes for our 2SLS models.
- We will remove the word “learning” from our title [R2].
2
R4 & R1 suggested that our discussion of our methodology was too dense and difficult to follow. We will address this in several ways:
- We will revise our analytic strategy section for clarity. We will have colleagues without econometric training read our revision to ensure that it is accessible and understandable.
- We will add a citation, with a short description, to a methodologically similar econometric study from education research.
- We will standardize our terminology (e.g., “quasi-experiment” rather than “natural experiment”; we previously used the terms interchangeably).
- Per R1, we will edit our threats section to clearly explain that our method produces a local treatment effect on affected users—i.e., a subset of Scratch users who differ systematically from all Scratchers in observable and likely unobservable ways (e.g., they are more experienced [R4, R1]).
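To make the intuition we will convey in the revised analytic strategy section concrete, the following is a minimal, generic sketch of two-stage least squares. Every variable name and the data-generating process are hypothetical and are not the specification, data, or instrument used in the paper; the sketch only shows why instrumenting an endogenous treatment corrects the confounding bias of a naive regression.

  # Generic 2SLS sketch; hypothetical names and data, not the paper's model.
  import numpy as np

  rng = np.random.default_rng(0)
  n = 20000

  z = rng.integers(0, 2, n).astype(float)   # binary instrument (hypothetical)
  u = rng.normal(size=n)                    # unobserved confounder
  d = (z + 0.5 * u + rng.normal(size=n) > 0.5).astype(float)  # endogenous treatment
  y = 1.0 * d + u + rng.normal(size=n)      # outcome; true treatment effect is 1.0

  X = np.column_stack([np.ones(n), d])      # (intercept, treatment)
  Z = np.column_stack([np.ones(n), z])      # (intercept, instrument)

  # Naive OLS is biased upward because d is correlated with the confounder u.
  ols = np.linalg.lstsq(X, y, rcond=None)[0]

  # Stage 1: project the endogenous treatment onto the instrument.
  d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]

  # Stage 2: regress the outcome on the fitted treatment values.
  tsls = np.linalg.lstsq(np.column_stack([np.ones(n), d_hat]), y, rcond=None)[0]

  print("naive OLS estimate:", round(ols[1], 2))   # biased upward
  print("2SLS estimate:", round(tsls[1], 2))       # approximately 1.0

In a real analysis a dedicated estimator should be used rather than a manual second stage (whose standard errors are incorrect), and with heterogeneous effects the resulting estimate is the local effect for the affected users described in the bullet above.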
3
R1, R2 & R4 were concerned that our findings might be driven by novelty. We will add a new paragraph to our threats section to describe this limitation. We will note that SCV introduces minimal structural novelty (no new blocks, just one new checkbox) but that the functional novelty introduced by SCV is significant and a possible alternative explanation for our findings, especially for H1. To some degree, this limitation extends to any causal inference technique (lab or quasi-experimental) that relies on measuring relatively short-term effects, an old criticism of experimental evaluation and user testing in HCI. We will explain that our analysis for H2, where no structure or functionality captured in the dependent variable is new, suggests that our findings are not only a function of novelty.
4
To more precisely express what we mean by wide walls [R3], we will quote text describing the design rationale from the SCV systems paper. Specifically, we will describe how SCV sought to support a broader range of projects connected to existing Scratch community practices and needs (per Resnick’s definition). We believe this will make clear that SCV was designed as more than just a new feature in a toolkit. In the discussion, we will expand the section on the tension inherent in widening walls in terms of learnability. We will cite Resnick and Silverman, who frame this as a tension between wide walls and low floors.
5
R3 raised concerns about generalizability. We will edit our methods and threats sections to explain that 2SLS achieves strong internal validity (i.e., unbiased estimation of a local causal effect) at the potential expense of external validity. We will explain that this is an important trade-off in quasi-experimental field studies, which are best understood as complementary to lab and qualitative studies. We will remind readers that no single study proves a theory, that questions of generalizability are common to every study, and that confidence in a theory grows with multiple studies in different settings. We will edit our manuscript to carefully convey that this is /a/ test and reflects only a first piece of contingent evidence in support of the widely-cited theory.
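Concretely, the revised text can note the standard identification result (Imbens & Angrist 1994), stated here for a single binary instrument Z without covariates: if the instrument is relevant, excluded, and monotone, the 2SLS (Wald) estimand converges to the average treatment effect for compliers only,

  \[
  \beta_{\mathrm{2SLS}} \;=\; \frac{\operatorname{Cov}(Y, Z)}{\operatorname{Cov}(D, Z)} \;=\; \mathbb{E}\bigl[\,Y(1) - Y(0) \mid \text{complier}\,\bigr],
  \]

so internal validity applies to that subpopulation, and whether the effect extends beyond it is precisely the external-validity question R3 raises.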
6
We will address R3’s comment on constructivism and transmission of knowledge.
7
We will fix the stylistic errors [R2], remove unnecessary quotes [R1], and fix the minor issues raised by R1. We will have our work professionally proof-read.