AC review

Score: 3/5
Confidence: Confident
First Round Overall Recommendation: 3 - Maybe acceptable (with significant modifications)

Contribution and Criteria for Evaluation
The authors present an empirical study of how gender and project feedback affect sharing in the Scratch online community. The most important evaluation criteria for this kind of work are:
1) Plausibility of the study (soundness of the methods followed)
2) Description and analysis of the findings
3) The implications for CSCW theory and/or practice that result from the study

First Round Review from AC (if needed)
Overall, this is a good piece of writing about sharing behaviors in online communities (in this case, Scratch projects). For anyone familiar with this online environment, the paper is of high interest. However, I strongly suggest that the authors better situate the context of the study (in particular, the cooperative and collective nature of the Scratch platform) for readers who are not familiar with it. All reviewers have stressed the importance of the intended contribution. While the submission generated some divergence among the reviewers, both external reviewers agreed that the current manuscript might benefit from further work during the R&R stage. The authors should note, however, that papers with this score distribution often end up being rejected if they do not properly address the main concerns raised by the reviewers.

Coordinator's First-Round Report to Authors
The following points should be clarified and/or reworked to be reconsidered during the second round:
- The analysis of people's decision choices seems superficial, although it is well executed in statistical terms. In that respect, 2AC suggests considering further factors, such as the characteristics of the projects, the social relationships among users within the Scratch community, and the potential "negative feedback" that project creators could receive.
- R1 questions how the dataset supports the dependent variable of "sharing a project". In particular, he/she raises the issue of how critical this variable is in the Scratch community as opposed to other contexts of informal learning (where this work has situated itself in the literature). This should be clarified and justified. Furthermore, R2 asks that the authors explain how the dataset was obtained for analysis.
- R1 also misses a discussion of whether there is a gender gap in the sharing decision, and indicates that both the abstract and the introduction need to be rewritten to better reflect this idea. I strongly suggest that the authors improve the discussion of this topic, as I also missed it.
- R2 raises several methodological issues that need to be clarified. For instance, projects are not necessarily shared in the order they are created, it is not clear how the formal model in Section 5 was derived, and counting auto-saves as a measure of effort seems overly simplistic.
- R1 suggests that the authors clarify the limitations of their study and address some of them using complementary research methods. Furthermore, R2 questions the age distribution in the study sample (particularly as reported in Table 1), highlighting that this could account for some of the discrepancies in the analysis.
By addressing these points, the authors would certainly strengthen the value of their intended contribution, and I hope to learn more about this topic.
Requested Revisions
(blank)

Formatting and Reference Issues
(blank)

----------------------------------------------------------------

2AC review

Score: 2/5
Confidence: Confident
First Round Overall Recommendation: 2 - Probably NOT acceptable

Contribution and Criteria for Evaluation
This paper aims to reveal how children make decisions about sharing their creative artifacts in online informal learning communities. The contribution is mostly empirical. My criteria for evaluating the work focus on the motivation, the empirical study design, and the results.

First Round Review
There is a large body of literature on sharing behaviors in online communities. The paper's focus on a community of children adds some novelty in terms of context. The paper is well written, and its intended contribution is important. However, there are several major weaknesses in the execution that undermine the paper. The paper does not actually deal with people's decision choices. Most of the paper links sharing behavior to a few demographic factors, and although the statistical analyses are fairly well done and sophisticated, this does not help us much in understanding people's behavioral choices. To fix this problem, additional factors may need to be considered, in particular factors related to the characteristics of the creative artifacts. Since there is a community, the social relationships among users also help to shape people's sharing decisions. The paper also does not operationalize the factor of "negative feedback". A project's "Loves" serve as positive feedback, but as a project becomes more popular, negative feedback in the comments may also increase, which could lead to an unwillingness to share. To sum up, I do encourage the authors to continue this research, but I do not think there is enough time in the R&R cycle for them to improve the study.

----------------------------------------------------------------

Reviewer 2 review

Score: 3/5
Confidence: Confident
First Round Overall Recommendation: 3 - Maybe acceptable (with significant modifications)

Contribution and Criteria for Evaluation
Using a quantitative approach, this paper attempts to provide empirical evidence of (1) a gender gap in the decision to share Scratch (creative computing) projects, and (2) how this gender gap varies across different levels of creators' experience and the level of positive feedback received in the past. The paper also aims to make a methodological contribution by using a novel method to analyze a longitudinal process of user engagement in a specific action.

First Round Review
The intended contribution is important, as it explores gender differences using a more nuanced approach than prior literature in the field. The submission offers a compelling argument for why a gender gap might appear in the context of informal learning and, therefore, why it is important to investigate it in an online setting. The chosen dataset and methods enable a better understanding of how other factors, such as experience and positive feedback, relate to the size of the gender gap in the decision to publicly share creative projects. While this submission does quite well at achieving the intended contribution, I have some concerns and suggestions:
1) The hypothesis development is supported by literature in informal learning.
The use of the action of "sharing a project" as the dependent variable seemed adequate, given that the literature in informal learning identifies it as an important step; however, once the dataset is considered, the selection of this variable becomes more questionable. Less than a third of the projects are shared, the data analysis only considers projects of creators who have shared two or more projects (thus reducing the dataset size), and the number of "love-its" (positive feedback) is rather low (range 0-10). Therefore, I wonder how critical the action of "sharing projects" is to informal learning in Scratch. Is it possible that it is less critical than in other contexts of informal learning? Could that also explain the unexpected results? Could another variable be used as an alternative dependent variable?
2) Is there a gender gap when considering the decision to share the first project? This seems to be an essential aspect of understanding the relationship between gender and sharing projects; however, the submission does not appear to present it.
3) Given that the goal of the paper is to better understand the dynamics of the relationship among gender, feedback, and sharing, the paper would be much stronger if some of the method's limitations were addressed using complementary research methods. This seems particularly necessary given the unexpected results. For example, is there any other kind of evidence that supports the proposed explanation of "second album syndrome"? It would also be beneficial to know whether there are differences across projects' genres and complexity. If it is known that there are gender differences across those variables, then it seems necessary to consider them in this analysis as well.
4) I think that the paper is generally well written, except for the abstract and introduction, which do not explain well why it is reasonable to investigate the gender gap in this context. There is also a complete paragraph that is repeated in these two sections.
Overall, I think that this is an interesting contribution. I hope the authors can address my concerns in the R&R phase.

----------------------------------------------------------------

Reviewer 3 review

Score: 3/5
Confidence: Somewhat confident
First Round Overall Recommendation: 3 - Maybe acceptable (with significant modifications)

Contribution and Criteria for Evaluation
The authors analyze how gender and project feedback affect project sharing on the Scratch platform. To do this, they analyzed data from shared and unshared Scratch projects created by 1.1 million Scratch users. The data was analyzed in a stratified manner, separating it into groups according to the order in which the projects were shared by their users (all the projects that were shared first were analyzed together, all the projects that were shared second were analyzed together, etc.). The authors define three hypotheses relating gender, experience level, and feedback to sharing on the Scratch platform. If accepted, can the authors include an explanation of how they obtained the dataset?

First Round Review
The work is interesting and well motivated, but I have several issues with the methodology followed by the authors:
1. The paper talks about "boys and girls". However, Table 1 shows that the age range is [4, 90]. The mean and median fall within the "boys and girls" age range, but we do not know much about the distribution of user ages.
Have the authors taken into consideration in their analysis that some of the projects may have been created by teachers? This may also explain the sharing behavior exhibited by more experienced users.
2. Projects are not necessarily shared in the order they are created. It is not clear if or how this affects the model proposed by the authors (beta_4).
3. It is not clear from the paper how the formal model in Section 5 was derived.
4. Using the number of auto-saves as a measure of the effort involved in a project seems overly simplistic.
5. The order in which the authors present information in Section 5 can be improved.