


Girls Who Code Confidently and Effectively - A study of student engagement and learning in an after-school coding club

Gabriela Buraglia
Industrial and Systems Engineering
Undergraduate Research Thesis
University of Florida
[email protected]

Supervisory Committee

Christina Gardner-McCune, Ph.D., Engaging Learning Lab, University of Florida, [email protected]
Juan E. Gilbert, Ph.D., Human-Experience Research Lab, University of Florida, [email protected]
Wayne Giang, Ph.D., Human Systems Engineering Lab, University of Florida, [email protected]


Table of Contents

Abstract
Background
    Girls Who Code and Other Initiatives
    Study Results on Other After-School Coding Clubs
    Girls Who Code
    Scratch
        Figure 1: Scratch programming environment
    Learning Trajectories
        Figure 2: Sequence learning trajectory
        Figure 3: Repetition learning trajectory
Research Methods
    Overview
    Research Context
    Participants
        Table 1: Results of participants' demographics survey
    Data Collection
    Teaching Curriculum
        Table 2: Program schedule
    Weekly Lessons
        Figure 5: Example meeting agenda
        Figure 6: Example pre-assessment
        Figure 7: Example teaching guide
        Figure 9: Example starter project from Girls Who Code
        Figure 10: Example post-assessment
    Project Creation
    Exit Interview
Data Analysis
Findings
    Short-Term Learning
        Table 3: Results from short- and long-term assessments
        Figure 11: Post-Assessment Scratch Loops Conceptual Question
    Long-Term Learning
    Demonstrated Understanding of Loops Exit Interview
        Figure 12: Cumulative Assessment Loop Conceptual Question
        Figure 13: Cumulative Assessment Loop Coding Question
        Figure 14: Answer choice b from Cumulative Assessment Loop Conceptual Question
    Demonstrated Understanding of Functions Exit Interview
        Figure 15: Cumulative Assessment Function Conceptual Question
        Figure 16: Cumulative Assessment Function Coding Question
    Demonstrated Understanding of Variables Exit Interview
        Figure 17: Cumulative Assessment Variable Conceptual Question
        Figure 18: Cumulative Assessment Variable Coding Question
    Demonstrated Understanding of Conditionals Exit Interview
        Figure 19: Cumulative Assessment Conditional Conceptual Question
        Figure 20: Cumulative Assessment Conditional Coding Question
    Attitude and Perceptions Toward Computing Survey
        Table 4: Attitude and Perceptions Toward Computing Survey Results
    Statistical Analysis in Attitude and Perceptions Toward Computing Survey
        Figure 21: Box plot of questionnaire results before and after the program for question "I am good at programming"
        Figure 22: Box plot of questionnaire results before and after the program for question "Computer Jobs are Boring"
Discussion
    Short- vs. Long-Term Learning
        Table 5: Average Percent of Students Who Answered Correctly in the Post-Assessment (Short-Term) and Post-Interview (Long-Term)
    Learning Trajectories
        Figure 23: Sequence learning trajectory
        Figure 24: Repetition learning trajectory
        Figure 25: Conditionals learning trajectory
    Attitude and Perceptions Toward Computing Survey
Limitations
Future Work
Conclusion
References
Personal Biography

Abstract

Although women work close to half of all jobs in the U.S. economy, they hold less than 25% of STEM (Science, Technology, Engineering, and Mathematics) jobs.1 This is especially true in the field of Computer Science, where only 19% of Computer Science bachelor's degree recipients are female.2 3 While there are multiple explanations for the underlying causes of these statistics, one possible reason is the lack of Computer Science exposure and access for young girls. This paper focuses on Girls Who Code (GWC), an after-school Computer Science program designed specifically for young female students. This study investigates an implementation of the GWC program in Gainesville, FL and evaluates student engagement and the program's effect on short- and long-term learning. We present findings focused on 4 conceptual areas (variables, conditionals, loops, and functions) as well as comparisons between perception survey results before and after the program. We found no significant difference between short- and long-term learning and found that topic difficulties in the short term translated to similar difficulties in the long term. We found that students were able to articulate their thought process and explain their level of understanding at the completion of the program. We believe that individual project work at the end of the program helped achieve this result by allowing students to apply and solidify concepts learned in earlier weekly lessons. We also found that interest in Computer Science stayed constant throughout the program, but there was a significant change in personal perceived current programming ability. These findings suggest that the Girls Who Code curriculum is effective in teaching students in both the short term and the long term, that the program effectively engages students, and that it helps them develop coding confidence and maintain an interest in the field.

    Background

    Girls Who Code and Other Initiatives

    Girls Who Code was founded in 2012 by Reshma Saujani. Today they have served an estimated 185,000

    girls. Girls Who Code reports that today only 24% of Computer Scientists are women, down from 37% in

    1995. Their mission is to close the gender gap in technology by educating young girls.4 Many other

    programs have followed suit in contributing to this goal. Such programs include Black Girls Code founded

    in 2011, which focuses on providing workshops and opportunities to teach young girls from

    underrepresented communities basic programming skills.5 Kode with Klossy is another program that

    provides coding opportunities for young girls, specifically a 2-week summer coding camp.6

1 Beede, David N., Julian, Tiffany A., Langdon, David, McKittrick, George, Khan, Beethika, and Doms, Mark E. Women in STEM: A Gender Gap to Innovation (August 1, 2011). Economics and Statistics Administration Issue Brief No. 04-11. Available at SSRN: https://ssrn.com/abstract=1964782
2 https://ncses.nsf.gov/pubs/nsf19304/digest/field-of-degree-women
3 https://nces.ed.gov/programs/digest/d17/tables/dt17_318.30.asp
4 https://girlswhocode.com/about-us/
5 http://www.blackgirlscode.com/
6 https://www.kodewithklossy.com/impact

Other initiatives have also developed with similar ideals. These include TECHNOLOchicas, an initiative that aims to raise awareness among young Latina girls about potential opportunities and careers in technology by showcasing role models in their communities.7 Another example is the CoderDojo Girls Initiative, which aims to increase the percentage of girls attending a CoderDojo from 29% to 40% by 2020.8 Like Girls Who

    Code, the CoderDojo movement is a network of community-based informal programming clubs for young

    students aged 7-17.

    Study Results on Other After-School Coding Clubs

    Sheridan and Goggin (2016) explored and categorized the learning gained through CoderDojo coding

    activities by aligning observational data collected by CoderDojo mentors about participants across four

participating European countries with the European Qualifications Framework levels 1 to 3. They found that 93% of participants, regardless of age, attained basic computer-related skills such as starting a computer, saving and editing work unaided, and defining what a computer or programming language is. Of participants aged 7 to 12, 74% attained skills from the second level, such as understanding variables and iteration, creating an app by applying concepts previously learned in structured learning, and verbalizing issues in terms of bugs. No children younger than 16 achieved skills at the third level, such as

    transferring knowledge to other programming languages, communicating learning, or mentoring other

    students at a more junior level.

Similarly, Code Club, another non-gender-specific program, also reported findings with respect to learning. Code Club is a volunteer-run weekly after-school coding program for children aged 9 to 13 that originated in the UK. Club leaders reported children's confidence levels on different programming concepts

    by rating them on a ten-point Likert scale. They found most clubs showed that students were “at least

    reasonably confident” using a variety of programming concepts, with debugging being rated lower than

    other concepts. Both conditionals and variables were reported by a similar number of clubs at each

    confidence level, with both peaking at a confidence level of 7 although more clubs reported this

    confidence level for conditionals than variables. They found that overall students were able to cope with

    difficult programming concepts while remaining engaged and enjoying participation.

    Girls Who Code

The Girls Who Code program provides after-school coding clubs as well as summer program opportunities, and both, like TECHNOLOchicas, focus on providing role models to participants by accompanying programming lessons with women in technology spotlights that help students learn about both historical and current influential women. Their efforts seem to be paying off. Across all initiatives, Girls Who Code has served an estimated 185,000 girls to date, half of whom come from historically underrepresented groups. Girls Who Code now has approximately 5,000 college-aged alumni, who declared Computer Science related majors at a rate 15 times the national average.9

In the last year, Girls Who Code expanded their after-school programs to include elementary-aged students, recognizing the importance of reaching girls as early as possible. Clements and Gullo (1984)

    found that teaching programming to young children can help increase some aspects of problem-solving

    ability. We focused our chapter on an elementary grade level cohort to explore this subset of students,

    which for Girls Who Code is a new addition.

7 https://technolochicas.org/
8 https://coderdojo.com/girlsinitiative/
9 https://girlswhocode.com/2017report/


Scratch

Scratch is a block-based programming environment whose visual, easy-to-manipulate constructs allow novice programmers to learn Computer Science concepts without the burden of syntax errors. Scratch was developed by the Lifelong Kindergarten research group at the MIT Media Lab with the goal of providing an approach to programming that would be appealing and accessible to all ages, backgrounds, and interests (Resnick, Maloney, Monroy-Hernández, Rusk, Eastmond, Brennan, Millner, Rosenbaum, Silver, Silverman and Kafai 2009). Their current and future approach stems from wanting to "lower the floor and widen the walls" by removing conceptual barriers to entry while allowing a large breadth of options for creating projects that deepen core computational ideas. In Scratch there are no connections between conditionals and repetition in early levels, and effectively using a conditional does not require an in-depth understanding of variables and comparison operators, as is the case for traditional text-based languages (Rich, Strickland, Binkowski, Moran, and Franklin 2017).

    Figure 1: Scratch programming environment

    Learning Trajectories

Along with deciding when Computer Science should be taught, there is also a big question of how and in what order. Rich et al. (2017) present three learning trajectories on sequence, repetition, and conditionals (see Figures 2 through 4). These learning trajectories help map how students can be guided from pre-existing knowledge to more complex Computer Science concepts via a spiral curriculum connected over three levels of complexity (Rich et al. 2017). Students aren't expected to traverse all learning objectives from a single exposure to a topic; instead, the trajectories are created in a way that allows for multiple paths depending on the specific instance of exposure and the specific level that a student falls into. These will be explored in more detail later in the Discussion section in reference to the Girls Who Code program.

Figure 2: Sequence learning trajectory

Figure 3: Repetition learning trajectory

Figure 4: Conditionals learning trajectory

Research Methods

Overview

The goal of this research study is to investigate an implementation of the Girls Who Code program in Gainesville, FL and evaluate student engagement as well as the program's effect on short- and long-term learning. Learning was measured in 4 conceptual areas (variables, conditionals, loops, and functions) via written assessments and a verbal exit interview, and engagement was measured via a perception survey administered before and after the program.

    Research Context

This research was conducted within the context of an existing Girls Who Code chapter in Gainesville led by me, Natalie Remon, and Taylor Hansen. The program had two cohorts: an elementary cohort

    learning Scratch and a high school cohort learning Python. However, the participation of the high school

    group was sporadic due to a busy involvement schedule. They did participate in several lessons, along

    with the pre- and post-assessments, but they did not create a project or participate in an exit interview.

    For that reason, their results and participation will not be discussed in this thesis.

    Participants

    Students were recruited in the fall prior to the beginning of the program through outreach events in the

    community and via Facebook advertising. We volunteered at a STEM night at a local elementary school

    and promoted the club at the end of the planned activity. We also spoke to after-school clubs such as a

    robotics club and a women in STEM club at 2 local high schools. The Facebook advertising also helped to

    gain traction as well as remind previously interested individuals.

Participants recruited to join the Girls Who Code program were then given the option of also participating in the research component. IRB01900574 was created for this explicit purpose, and under that IRB we obtained consent forms from the parents and assent forms from the participants who opted to take part in the research portion of the program.

All four participants were female, ranged in age from 7 to 10 years, and were in first, third, or fifth grade. Three out of four students self-reported their ethnicity as African American, and the fourth self-reported as African American and White. Only one student reported previous programming experience, which she listed as Lego EV3. Two students checked Scratch as a program they had previously used from the list given, which included Scratch, App Inventor, Hour of Code, Alice, and Other.

Grade | Age | Ethnicity | Previous Programming Experience | Previously Used a Related Program
1st grade | 7 | African American and White | No | Scratch
3rd grade | 8 | African American | No | N/A
3rd grade | 9 | African American | No | N/A
5th grade | 10 | African American | Yes | Scratch

Table 1: Results of participants' demographics survey

    Data Collection

To collect the data we used a general demographics survey, an Attitude and Perceptions Toward Computing survey, weekly assessments, a cumulative assessment, and an exit interview. All

    data was collected verbally or on paper to eliminate any unnecessary technological complications.

The pre- and post-program Attitude and Perceptions Toward Computing survey was taken from Ericson and McKlin (2015). The survey asked students questions that gauged their opinion about Computer

    Science, their perceived current and future abilities, and their general interest in the field.

I created the weekly assessments that measured short-term learning on my own, with the aid of several online resources including Code.org10 and Programiz11. I also created the cumulative assessment that was

    used as part of the exit interview and to evaluate long-term learning. The questions in the cumulative

    assessment were modified versions of the questions asked in the weekly assessments to ensure that

    material tested was comparable, but hadn’t been previously seen by the students.

    Exit interviews were structured as a think-aloud style interview where students were asked to read aloud

    each question in the cumulative assessment, and then prompted to explain their thought process and

    understanding in different ways. The interviews were conducted one-on-one with each of the four girls

and took between 15 and 25 minutes each. Audio of their voices and video of their hands were recorded to analyze both verbal responses and any visual changes, such as circling a new answer or pointing to parts of the question, code, or answer choices.

    Teaching Curriculum

We used the curriculum provided by the Girls Who Code12 website as the base curriculum to teach the students. We used the mini club plan, which is structured for about 10 weeks of meetings, as an organizational guide. Our specific club program was structured as 7 weeks of conceptual lessons, followed by 3 weeks of project creation, and 1 last celebratory week in which we toured a local software company called SharpSpring. The first two conceptual lessons were a general introduction to programming and the block-based programming language Scratch. Each conceptual lesson thereafter focused on a core programming concept. Concepts covered were loops, variables, conditionals, and functions, each evaluated via a pre-lesson and post-lesson assessment. Finally, the last lesson was a comprehensive review accompanied by a cumulative assessment. Additionally, an attitude and perception survey was administered at the beginning of the program and at the end. Table 2 summarizes the program and research schedules. To accompany the material provided by Girls Who Code, we created or introduced all the research activities and created supplements for the program activities. The supplements we created were lesson guides, exercise guides, and a cumulative review. The lesson and exercise guides are explained in more detail below in the Weekly Lessons section.

10 https://code.org/curriculum/course2/5/Assessment5-GettingLoopy.pdf
11 https://www.programiz.com/node/675/quiz-results/226905/view
12 https://hq.girlswhocode.com/gwc#collection/clubs-club-plans

Overview of Program & Research Schedule

Week 1: Introduction to program (what Computer Science is, introducing each other, and what the program will consist of)
    Program activities: Activity: What is Computer Science?; individual introductions; expectations
    Research activities: Pre-survey: Attitude and Perceptions Toward Computing survey

Week 2: Introduction to Scratch (how to use sprites, backgrounds, and different categories for different types of blocks)
    Program activities: Video: Who is Grace Hopper?; conceptual lesson; exercises
    Research activities: Practice weekly pre- and post-assessment: Introduction to Scratch

Week 3: Loops
    Program activities: Video: Made with Code: Miral Kotb, Founder of iLuminate; conceptual lesson; exercises
    Research activities: Weekly pre- and post-assessments: Loops

Week 4: Functions
    Program activities: Video: The Cult of littleBits: How This Tech Toy is Changing the Engineering Landscape; conceptual lesson; exercises
    Research activities: Weekly pre- and post-assessments: Functions

Week 5: Variables
    Program activities: Video: Black Women In Tech: Miishe Addy; conceptual lesson; exercises
    Research activities: Weekly pre- and post-assessments: Variables

Week 6: Conditionals
    Program activities: Video: Nonny de la Peña: The Birth of VR Journalism | The Future of News; conceptual lesson; exercises
    Research activities: Weekly pre- and post-assessments: Conditionals

Week 7: Variables continued13
    Program activities: Activity: Play with Google Daydream VR headset; Video: Project Game: The Future of Education; conceptual lesson; exercises
    Research activities: Weekly pre- and post-assessments: Variables

Week 8: Review and begin project
    Program activities: Cumulative review; explain project guidelines; begin project
    Research activities: Final cumulative assessment

Week 9: Individual project
    Program activities: Continue working on project

Week 10: Finish project and exit interview
    Program activities: Continue working on project
    Research activities: Post-survey: Attitude and Perceptions Toward Computing survey

Week 11: Celebrate with tour at SharpSpring
    Program activities: Tour of offices and meet women on the software development team

Table 2: Program schedule

    Weekly Lessons

The GWC Gainesville chapter met weekly. During the seven weeks of conceptual lessons we consistently followed the same meeting structure: a women in tech spotlight video, a pre-assessment, a review and lesson, exercises, a post-assessment, and a stand-up meeting. An example agenda is illustrated below in Figure 5.

13 Variables were taught on two separate weeks because the first time 2 out of the 4 girls were absent. The same lesson and assessments were administered on two separate meeting dates.

Figure 5: Example meeting agenda (week 4)

The women in tech spotlight videos, provided by Girls Who Code, each showcased an influential woman in the field of Computer Science. These spotlights included historical women like Ada Lovelace and Grace Hopper as well as contemporary women like Miral Kotb. Each video was briefly discussed among students.

    Then, a pre-assessment I created was administered about the weekly topic. The pre-assessment was

    meant to gauge what the students knew or could understand without exposure to the topic. The

    expectation was that since most girls had minimal previous programming exposure, the topics would not

    be familiar. An example pre-assessment is illustrated below in figure 6.

Figure 6: Example pre-assessment (variables)

    Instructors would then review last week’s topic and introduce the current week’s concept. Content for

    these lessons was provided by Girls Who Code within a learning tutorial we selected entitled “Animations

    in Scratch”. The tutorial covered each concept as a separate part. We organized each lesson to cover one

    part. In each of these sections there was a portion called “Big Idea” which we made into our lesson section

and another called "Learn it" that we made into our exercise section. For the lesson portion, I took the information provided and formatted it into a presentable Word document to serve as a lesson guide.

    Definitions and examples were discussed with the girls. An example of the lesson guides created is shown

    below in figure 7.

Figure 7: Example teaching guide (loops)

After the verbal lesson, students completed exercises in Scratch from the tutorial. Girls Who Code provided a step-by-step online tutorial for each exercise. I felt that having the students read off the steps and follow the tutorial didn't foster independent thinking, and I saw that without prompting by facilitators the girls often drifted from the task out of boredom or distraction. Instead, I crafted a document for facilitators to prompt students throughout the activity and gave students a starter project from which to begin; the starter project was often provided by Girls Who Code, and when it wasn't, I created one. An example exercise guide is included below in Figure 8 and an example starter project is shown in Figure 9.

Figure 8: Example exercise guide (functions)

Figure 9: Example starter project from Girls Who Code (functions)

At the end of the tutorial, students were given the post-assessment I created, which was expected to show an improvement in their understanding of the topic. A post-assessment example is illustrated in Figure 10 below. Each session then finished with a stand-up meeting, meant to emulate the real software development practice of sharing with peers what is currently being worked on and any roadblocks or successes encountered.

Figure 10: Example post-assessment (conditionals)

    Project Creation

During the last 3 weeks, students worked on a personal project of their choosing and were

    encouraged to apply concepts learned. Students had the choice to pick a game, animation, or short movie

    clip. We gave students starting guidelines to include 1 background, 2 sprites, 2 variables, 1 conditional, 2

    loops, and 1 function. Students worked freely on their projects and instructors assisted when requested.

Exit Interview

    At the end of the program each student was interviewed verbally to better understand their programming

    reasoning. They walked researchers through their cumulative assessment. They were prompted to read

    the question aloud, explain what was being asked of them, answer parts of the question, then choose the

    best answer with an explanation of why that option was chosen. These interviews were voice recorded

    along with a video recording of the girls’ hands as they explained the assessment questions and pointed

    at different parts in the code.

    Data Analysis

    In the evaluation of short-term learning all results were based solely on the written multiple choice pre-

    and post- assessments. Each multiple-choice question had the option to respond “I do not know” in order

to avoid false positives. However, written multiple-choice assessments are limited in nature because misconceptions can only be speculated about.

For long-term learning, pre-interview results were based on the cumulative assessment, and post-interview results were based on the cumulative assessment as well as any answers changed during the interview.

    The assessment results were analyzed by studying the percentage of students who answered each

    question correctly. The exit interview results were evaluated by comparing misconceptions and areas of

    understanding across participants.

    To gauge the meaning of results from the Attitude and Perceptions Toward Computing survey, areas of

    increased or decreased scores were identified and a paired sample t-test was conducted on each of the

    survey questions to find any potential statistical significance in the results. For the t-tests we opted for an

    alpha level of 0.1 because in the scenario of a small data set, standard errors tend to be larger so an alpha

    level of 0.1 may make sense as opposed to a more stringent level of 0.05 or 0.01 (Noymer 2008).

    Findings

    Short-Term Learning

    We found evidence of short-term learning illustrated by students’ consistent improvement between pre-

    and post-assessment percentages on three of the four content areas (functions, variables, and

    conditionals) for the conceptual questions (see Table 3). We saw consistent improvement between pre-

    and post-assessments in the coding questions, with variables and conditionals demonstrating the largest

    growth. This shows that the largest improvement isn’t in learning the concepts, but in applying those

    within a coding environment.

Percent of Students Who Answered Correctly

Question type | Loops | Functions | Variables | Conditionals
Short-Term: Pre-Assessment, Conceptual | 75% | 50% | 75% | 100%
Short-Term: Post-Assessment, Conceptual | 50% | 100% | 75% | 100%
Long-Term: Pre-Interview, Conceptual | 75% | 75% | 25% | 100%
Long-Term: Post-Interview, Conceptual | 75% | 100% | 50% | 100%
Short-Term: Pre-Assessment, Coding | 75% | 50% | 0% | 25%
Short-Term: Post-Assessment, Coding | - | 75% | 50% | 100%
Long-Term: Pre-Interview, Coding | 75% | 50% | 25% | 100%
Long-Term: Post-Interview, Coding | 75% | 75% | 50% | 100%

Table 3: Results from short- and long-term assessments

The only area where we see inconsistent performance was in students' responses to the pre- and post-assessment conceptual question for loops. In the loop post-assessment, two students who had previously answered correctly on the pre-assessment answered incorrectly on the post-assessment. We believe this

    shift in understanding arose after another facilitator accidentally misguided them. The facilitator tried to

    help by illustrating the dance pattern as seen in the question below but did it incorrectly and confused

    the students (see Figure 11 below). The result also rebounded to the original level in the long-term

    evaluation, which helps to support that it was a momentary misunderstanding caused by the facilitator.

    Additionally, the loops post-assessment coding question had to be removed from our data set because

    we realized it inadvertently expected the students to apply multiplication knowledge and the results

    weren’t representative of their abilities.

For variables, the short-term results for the conceptual question are stable at 75% of students answering correctly; however, in the long term this dips (to 25% in the pre-interview) and only recovers to 50% in the post-interview results.

    Conversely in the short-term, results for the coding question show a large increase from 0% of students

    correctly answering in the pre-assessment to 50% in the post-assessment. This number is stable in the

    long-term post-interview results which are also 50%.

Figure 11: Post-Assessment Scratch Loops Conceptual Question

    Long-Term Learning

    Similar to short-term learning gains, we see consistent performance in the pre- and post-interview results

    where percentages either stay the same as seen in loops and conditionals or improve as seen in functions

    and variables. Further observation shows that for functions and variables improvements are seen in both

    the conceptual questions and the coding questions, demonstrating that the confusion was with the overall

    topic. These improvements can partly be attributed to their individual project work which they worked

    on between the final cumulative assessment and the exit interviews, as well as the students self-

    identifying misconceptions they had when taking the assessment and being able to correct them during

    the interview.

Comparing overall averages between the long- and short-term results, the post-interview (78%) and post-assessment (75%) show similar percentages of students answering correctly. Additionally, the results show that topics students found more or less difficult in the

    short-term translated to similar difficulties in the long-term. This observation is especially true for the

    most difficult concept for students, variables.

    The exit interview shed light on common misconceptions, ideas that were understood, and confusion on

    question structure or wording. In the next subsections, we will be discussing our observations of student

    misconceptions per each topic. Please note that topics are discussed in the order they were taught,

    however in the final cumulative assessment the questions were asked in a different order to eliminate

    any bias from topic order.

Demonstrated Understanding of Loops Exit Interview

In the cumulative assessment there were two questions on loops, both using Scratch code; one was categorized as conceptual and the other as a coding question. The conceptual question was based on debugging: it asked students to correct a program to act as desired. The coding question asked students

    to trace through the code and count how many times a specific action occurred.

    Figure 12: Cumulative Assessment Loop Conceptual Question. Correct answer is b.

Figure 13: Cumulative Assessment Loop Coding Question. Correct answer is b.

    Successful Learning. For loops in the post-interview analysis, 3 out of 4 students answered the conceptual

    question (Figure 12) correctly and 3 out of 4 students answered the coding question (Figure 13) correctly.

    All students were able to identify where and what the loop was. When prompted they could all point to

    what part of the code was the loop and referred to each type correctly, either a repeat n loop or a forever

    loop. All students could also correctly explain that a repeat n loop would cause some portion of code to

repeat n times. They were all also able to explain that a forever loop would cause code to repeat without end, and they correctly understood that code not affected by the loop would only happen once.
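To make the semantics the students described concrete, the short sketch below gives a rough Python analogy; the assessments themselves used Scratch blocks, and the printed action here is purely illustrative. A repeat n loop runs its body exactly n times, a forever loop repeats without end, and code outside the loop runs only once.

    # Python analogy of the Scratch loop semantics described above.
    # The printed action is illustrative; the students worked with Scratch blocks.
    def repeat_n_demo(n: int) -> None:
        print("Outside the loop: this runs once.")
        for _ in range(n):      # like a "repeat n" block: body runs exactly n times
            print("Bark!")

    def forever_demo() -> None:
        while True:             # like a "forever" block: body repeats without end
            print("Bark!")

    repeat_n_demo(3)            # one intro line, then "Bark!" three times
    # forever_demo()            # never terminates, so it is left commented out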

    Learning Misconceptions or Confusions. We observed two types of misconceptions: contents of loops

    and linguistic misunderstanding. Two out of four students did not understand that the code inside of the

    loop would repeat as a result of the looping construct. Instead, they only identified that there was a loop

    and incorrectly said that some part of the code outside of the loop would repeat as a result. For one

student the confusion persisted after the interview, but the other student, through verbal discussion during the interview, revised her reasoning and was able to correctly identify which part of the code would be affected by the loop.

    One student experienced confusion on the wording of the conceptual question (Figure 12). When she

    took the cumulative assessment, she selected the last answer choice, “I do not know”. Then during the

interview, through discussion, she picked the correct answer choice, b; however, she promptly changed it to d. Although she could explain that in Scratch the "say Bark! for 2 seconds" block makes a speech bubble appear with the word "Bark!" and the "start sound bark" block plays a dog sound, she kept thinking that it was three times in combination, so she incorrectly chose option d from Figure 12, which had 1 speech bubble and 2 sounds, thinking they should be aggregated. The correct answer choice b is reproduced below in Figure 14 in a running state for clarity. This can be viewed as a linguistic misconception: in a real-world scenario, when a dog says bark, that would be equivalent to hearing the sound bark. During the interview this student

    highlighted that she believes she is better at math than reading, which might also contribute to this

    misconception.

    Figure 14: Answer choice b from Cumulative Assessment Loop Conceptual Question (Figure 12).

    Reproduced here in a running state for clarity.

    Demonstrated Understanding of Functions Exit Interview

In the cumulative assessment there were two questions on functions, both using Scratch code; one was categorized as conceptual and the other as a coding question. The conceptual question

    asked students to identify what happened in a function, without it being incorporated into a program

    (Figure 15). The coding question required students to trace through the code and explain how a function

    incorporated into a program affected code flow (Figure 16).

Figure 15: Cumulative Assessment Function Conceptual Question. Correct answer is c.

    Figure 16: Cumulative Assessment Function Coding Question. Correct answer is c.

    Successful Learning. For functions in the post-interview analysis, 4 out of 4 students answered the

    conceptual question correctly (Figure 15) and 3 out of 4 students answered the coding question correctly

    (Figure 16). All students understood that when a function was called all code in the definition was

    executed, which was seen from their correct understanding of the conceptual question (Figure 15).

    Learning Misconceptions or Confusions. We observed three types of misconceptions: generally

misunderstanding program flow, misunderstanding program flow for linguistic reasons, and confusing functions with variables. Several students were confused about code flow when the function was incorporated into a program. One student followed the correct flow for the first function, then didn't

    connect that execution would return to the main program. Another student didn’t see that execution

    would change to the corresponding function definition when either function block was reached in the

    main program. Instead she thought the function would just produce the result in the function name. In

    the context of this question (Figure 16), she thought the first function, called SayHello, would have the cat

    say, “Hello” and the second function, called SayILike, would have the cat say, “I like”, instead of the code

    that was inside the function definitions. This is the same student that also had a linguistic misconception

    in the loop conceptual question.
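The flow the students had to trace can be sketched in Python as follows. This is only an analogy: the assessment used Scratch custom blocks named SayHello and SayILike, and the bodies given to the functions here are placeholders for illustration, not the code from Figure 16.

    # Python analogy of the function-call flow tested in Figure 16.
    # The function bodies are illustrative, not the actual Scratch code.
    def say_hello():
        # Calling the function runs everything in its definition.
        print("Hello!")          # placeholder body

    def say_i_like():
        print("I like coding!")  # placeholder body

    # Main program: execution jumps into each definition at the call site,
    # runs the body, then returns here and continues with the next line.
    say_hello()
    say_i_like()
    print("Back in the main program")   # runs once, after both calls return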

One student answered both the conceptual and coding questions correctly. However, when asked what concept was being tested, she could not recall the term function. Instead she referred to it as being like a variable: "I forgot what it's called but it's like…oh yeah, okay. It's like a variable." Although this misconception didn't negatively affect her understanding of the concept, it is interesting to note. This

    misconception is likely due to the organizational structure of the specific Girls Who Code “Animations in

    Scratch” tutorial that was used. In this tutorial variables were taught right after functions and in the

    learning exercises parallels were drawn between the similar potential of functions and variables to replace

    areas in the code where the same action or value was used in multiple instances. The resulting

    misconception of functions being like a variable didn’t create confusion in the scope of the final

    cumulative assessment, or even in the scope of this program. However, it is possible that this

    misconception could transfer to future misuse of the two concepts or mistakes by thinking that the two

    constructs are interchangeable.

Demonstrated Understanding of Variables Exit Interview

In the cumulative assessment there were two questions on variables: one in text and one using Scratch code. The conceptual question was about variable assignment and how to use the assignment

    correctly to complete a sentence (Figure 17). The coding question required students to trace through the

    code and identify the starting and ending value for a specific variable (Figure 18).

    Figure 17: Cumulative Assessment Variable Conceptual Question. Correct answer is c.

Figure 18: Cumulative Assessment Variable Coding Question. Correct answer is c.

    Successful Learning. For variables in the post-interview analysis, 2 out of 4 students answered the

    conceptual question (Figure 17) correctly and 2 out of 4 students answered the coding question correctly

(Figure 18). In the coding question all students understood the assignment statement for the variable and, as a result, could all correctly communicate what value the variable would start with. Students were

    able to correctly identify the concept as a variable, and provide a general, low-detail explanation. For

    example, one student said, “They are something that you change. You tell the code what to change.” In

    the conceptual question, one student was confused by the assignment part of the question. She read the

visual variable = value portion as variable – value instead. Another student skipped over reading that portion of the question out loud, which suggests confusion about the question or not having read it entirely and thoroughly before answering.

    Learning Misconceptions or Confusions. We observed three types of misconceptions: confusion in

    understanding conceptual assignment statement, differentiating conceptual use of variable versus

    variable name, and a linguistic misunderstanding of a Scratch coding block. In the conceptual question

    (Figure 17) two of the students were confused that the variable names also made sense when used in the

    sentence and both opted to use the name of the variable in completing the sentences instead of the

    variable’s value. This conceptual question was modified from a coding activity worksheet found on

    Code.org. In the coding question (Figure 18) two students were confused by the “change toys by 1” block.

One student thought this meant the value of the variable would become one. The other student thought this meant the value would decrease by one instead of increase. When looking for the correct answer, she said, "But they don't have a three here [as one of the answer choices]." The correct answer should have been four. This shows that she understood the block would change the value of the variable by one, but the confusion stemmed from which way the value would change. It is possible that this confusion could be avoided by a linguistic change in the Scratch block. If the block specified "increase toys by 1" or "decrease

    toys by 1” it would be easier for users to conceptualize which way the variable changes and could even

    help communicate that this would modify the variable’s existing value instead of setting it to a new value.

    Instead, currently in Scratch to decrease or increase a variable, users must specify a positive 1 or negative

    1 to correctly indicate which way to change a variable, which requires prior understanding of negative

    numbers and how to utilize them.
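The distinction the students wrestled with, setting a variable versus changing it by an amount, can be written out in Python roughly as follows; the starting value of 3 is used only for illustration and is not taken verbatim from the assessment item.

    # Python analogy of the Scratch variable blocks discussed above.
    # The starting value 3 is illustrative only.
    toys = 3          # "set toys to 3": assignment gives the variable its starting value

    toys = toys + 1   # "change toys by 1": adds to the existing value (3 -> 4);
                      # it does not set the variable to 1

    toys = toys + -1  # to decrease, Scratch needs "change toys by -1";
                      # the same idea in Python, written with a negative amount

    print(toys)       # back to 3 after one increase and one decrease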

Demonstrated Understanding of Conditionals Exit Interview

In the cumulative assessment there were two questions on conditionals: one in text and one using Scratch code. The conceptual question (Figure 19) was about identifying the outcome of a scenario

    explained with a conditional. The coding question (Figure 20) required students to understand a

    conditional in code and identify the outcome of satisfying or not satisfying the condition.

    Figure 19: Cumulative Assessment Conditional Conceptual Question. Correct answer is b.

Figure 20: Cumulative Assessment Conditional Coding Question. Correct answer is d.

    Successful Learning. For conditionals in the post-interview analysis, 4 out of 4 students answered the

    conceptual question (Figure 19) correctly and 4 out of 4 students answered the coding question correctly

    (Figure 20). All four students understood the first conceptual question and were able to correctly identify

    which part of the conditional applied to each outcome and then able to aggregate these results into a

    cumulative total. Each in their own way approached the question as a math problem. When asked what

    she thought the question was asking, one of the students answered, “I think it’s asking you to do the

    math.” All students could correctly make the distinction between something belonging to the if part of

    the condition versus the else part. All students were able to develop the if…then…else construction and

    provide a real-world example. For example, one student gave the example, “If I break my arm, then I go

    to the doctor. Else I will be happy.”
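The if…then…else structure the students articulated maps directly onto code. The sketch below is a Python analogy of the coding question's space-key condition; the printed actions are placeholders, since the exact blocks in Figure 20 are not reproduced here.

    # Python analogy of the conditional structure in the coding question (Figure 20).
    # The actions are placeholders; only the if/else structure matters here.
    space_key_pressed = True      # in Scratch, the "key space pressed?" condition

    if space_key_pressed:
        print("Run the 'if' branch")    # happens only when the condition is satisfied
    else:
        print("Run the 'else' branch")  # happens only when the condition is not satisfied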

    Learning Misconceptions or Confusions. We observed one type of misconception:

    identifying/understanding the condition within code. Two students had difficulty connecting code

behavior with meeting or not meeting the condition. In the context of the coding question (Figure 20), that meant answering what happened if the space key was or was not pressed. Both students needed prompting to connect executing the specific action that would make the condition true to applying the if part of the conditional, and vice versa. This difficulty might be addressable by explicitly explaining the weight of a condition and how it is or is not satisfied within the coding environment. It might have been easier to understand the conceptual question (Figure 19)

    because there were 3 separate scenarios, and each happened sequentially. In the conceptual question all

    students pointed to a specific outcome (one of the resulting balls) and connected it correctly to the part

of the conditional that would be applied (the if or the else). This demonstrates that the confusion is most likely in deciding whether the condition is satisfied or not. In the coding question the outcome wasn't as easily connected to the conditional: the student had to understand whether pressing the key or not would satisfy the conditional, which seemed to generate some confusion.

    Attitude and Perceptions Toward Computing Survey

    At the beginning and end of the program each student was given an Attitude and Perceptions Toward

    Computing survey. The idea was to use these results to gauge student engagement and gain insight into

    program impact and effect on students. For each question students identified their agreement level on a

    scale of 1-5, 1 being strongly disagree, 3 being neutral, and 5 being strongly agree.

Response (one row per student) | Computers are fun | Programming is hard | Computer science skills are not dependent on gender | Computer jobs are boring | I am good at programming | I am interested in computer science | I can become good at programming | I like the challenge of programming | I want to find out more about computer science
Pre-Program | 5 | 4 | 4 | 3 | 2 | 5 | 5 | 5 | 5
 | 4 | 4 | 5 | 2 | 1 | 4 | 3 | 5 | 5
 | 5 | 4 | 5 | 1 | 4 | 5 | 5 | 5 | 5
 | 5 | 4 | 5 | 4 | 1 | 4 | 3 | 4 | 5
Total (Pre) | 19 | 16 | 19 | 10 | 8 | 18 | 16 | 19 | 20
Post-Program | 5 | 2 | 5 | 1 | 3 | 5 | 5 | 5 | 5
 | 4 | 4 | 5 | 1 | 3 | 5 | 4 | 5 | 5
 | 5 | 4 | 5 | 1 | 5 | 5 | 5 | 5 | 5
 | 4 | 2 | 4 | 2 | 4 | 4 | 4 | 4 | 4
Total (Post) | 18 | 12 | 19 | 5 | 15 | 19 | 18 | 19 | 19

Table 4: Attitude and Perceptions Toward Computing survey results

    Two potential trends were identified in the students’ responses. The smallest difference was seen in

    questions that addressed their current and future interest in computer science. These questions were

    “Computers are fun”, “I am interested in computer science”, “I like the challenge of programming” and “I

    want to find out more about computer science”. There was also no difference in the question of gender

    bias, “Computer science skills are not dependent on gender.” Other studies have also identified

    consistency on current and future interest in computer science before and after program participation.

One such case is Sullivan, Byrne, Bresnihan, O'Sullivan, and Tangney (2015), who found that within their CodePlus after-school computing program there was no significant change in participants' perceived likelihood to study computer science. They identified that this could be due to the fact that the group had an existing

    interest in the subject matter. This is also the case in our program because all our students were self-

    selecting participants who searched for this Girls Who Code chapter because of their preexisting interest

    in Computer Science.

The largest difference between the pre-program and post-program responses was seen on questions that addressed personal perceived current and future ability. Those questions were "I am good at programming", "I can become good at programming", and "Programming is hard", as well as the question that asked about their opinion of jobs involving computers, "Computer jobs are boring." The same CodePlus after-school program discussed above found similar results. Sullivan et al. (2015) found that after participation in the program, students' perceived ability improved, as did their self-efficacy. They also noted that baseline comparisons in the pre-questionnaire demonstrated girls had lower ratings of self-efficacy and confidence in Computer Science. Another study also found that fewer participants found programming to be hard after participation: Jemmali (2016) piloted May's Journey, a game to teach middle and high school girls programming, and after participation more girls reported that programming wasn't hard. Out of 8 girls, before participation one believed programming wasn't hard, and after participation four reported it wasn't hard. In our case, 2 out of our 4 participants changed their answer on "programming is hard" from agree to disagree.

    Statistical Analysis in Attitude and Perceptions Toward Computing Survey

    Further analysis shows that there is even statistical significance in the findings discussed above,

    specifically for the questions “I am good at programming” and “Computer Jobs are Boring”. As explained

    previously, the questionnaire scale was from 1 to 5, where 1 corresponded to strongly disagree, 2 to

    disagree, 3 to neither agree nor disagree, 4 to agree, and 5 to strongly agree.

    Figure 21: Box plot of questionnaire results before and after the program for question “I am good at

    programming”

Figure 22: Box plot of questionnaire results before and after the program for question "Computer Jobs are Boring"

    The boxplots in Figure 21 and 22 show the distribution of results before and after the program for these

    two questions. We can see that after the program, all students answered the question “I am good at

    programming” (Figure 21) on the top half of the scale somewhere from “neither agree nor disagree” to

    “strongly agree”. Whereas previously 3 out of 4 students had answered this question as “strongly

    disagree” or “disagree” and only one student had answered “agree”. Similarly, after the program all

    students answered, “Computer jobs are boring” (Figure 22) on the lower half of the scale as either

    “strongly disagree” or “disagree”. Whereas previously students had answered all over the scale from

    “strongly disagree” to “agree”.

    I also ran a paired t-test to compare the pre- and post-program answers for these two questions. The t-

    test results for “I am good at programming” were t(3)=-3.65563 and p=0.035353. The t-test results for

    “Computer jobs are boring” were t(3)=2.611165 and p=0.079605. In the scenario of a small data set

    standard errors tend to be larger so an alpha level of 0.1 may make sense as opposed to a more stringent

level of 0.05 or 0.01 (Noymer 2008). At an alpha level of 0.1 there is a significant difference in the scores for both questions, suggesting that participation in the program increased participants' self-efficacy in relation to their programming skills and helped change their perception of jobs involving computers.
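For readers who want to check these numbers, the sketch below reruns the two paired t-tests from the per-student ratings in Table 4 using scipy. It assumes the pre- and post-program rows in Table 4 list the same four students in the same order; it is a reconstruction, not the original analysis script.

    # Re-running the reported paired t-tests from the Table 4 ratings (a sketch;
    # assumes the pre and post rows pair the same four students in the same order).
    from scipy import stats

    # Ratings on the 1-5 agreement scale, taken from Table 4.
    good_pre,   good_post   = [2, 1, 4, 1], [3, 3, 5, 4]   # "I am good at programming"
    boring_pre, boring_post = [3, 2, 1, 4], [1, 1, 1, 2]   # "Computer jobs are boring"

    for label, pre, post in [("I am good at programming", good_pre, good_post),
                             ("Computer jobs are boring", boring_pre, boring_post)]:
        t, p = stats.ttest_rel(pre, post)   # paired-sample t-test, df = n - 1 = 3
        print(f"{label}: t(3) = {t:.3f}, p = {p:.4f}")

    # Expected output, matching the reported values:
    #   I am good at programming: t(3) = -3.656, p = 0.0354
    #   Computer jobs are boring: t(3) = 2.611, p = 0.0796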

The results could have gone the other way: the girls could have become discouraged about their own abilities at the end of the program, found programming too difficult, and lost some of their existing interest in Computer Science. Instead, our results reflect a positive impact from participation in the program. The students' self-confidence improved, and they are now more likely to say they are good at programming, that they can become good at programming, and that a job involving computers would be interesting.

These results are also in line with findings from verbally discussing what the girls wanted to be when they grew up. For example, one student had said she wanted to be a dentist before starting the program, but

    now wanted to be a fashion designer that used technology. She went on to explain that she could do this

    via one of two options. By either placing lights on clothing and making the clothing technological or by

designing the clothing with technology and computers. The first option she gave is especially interesting

    because one of the women in tech spotlights was on Miral Kotb whose company iLuminate wirelessly

    controls lights on dancers’ clothing to create a unique artistic performance.14 This supports that these

    women in tech spotlights help young girls develop role models and help them envision themselves with

    similar potential to create and invent.

    Other students also shared interesting responses for what they wanted to be when they grew up at the

    end of the program. Two students said they wanted to be game developers and one of these students

    had said she wanted to be a Computer Scientist when the program began. This showed that she was able

    to further define what in Computer Science appealed to her. The fourth student had said at the beginning

    of the program that she wanted to be a veterinarian. At the end of the program she explained that she

    wanted to bring animals to the veterinarian through programming by creating websites and services. She

    also detailed how she wanted to code a program that would help bring food directly to cats and dogs. All

    students incorporated programming and coding into what they wanted to pursue as a job in the future.

    This demonstrates that not only did coding become something attainable, it became part of what they

    aimed to do in the future.

Discussion

    Short- vs. Long-Term Learning

    A summary of the average percentage of students who answered the questions correctly across concepts

    in the short-term versus the long-term shows that there is no meaningful difference in level of learning in

the short-term versus the long-term. It also shows that the areas where students struggled the most and excelled the most stayed consistent. These areas were variables and conditionals, respectively.

Average Percent of Students Who Answered Correctly

Assessment | Loops | Functions | Variables | Conditionals | Average Across Concepts
Post-Assessment (Short-Term) | 50% | 88% | 63% | 100% | 75%
Post-Interview (Long-Term) | 75% | 88% | 50% | 100% | 78%

Table 5: Average Percent of Students Who Answered Correctly in the Post-Assessment (Short-Term) and Post-Interview (Long-Term).15

These results are in line with observational data: variables were confusing in both conceptual and coding questions, conditionals were very easy to understand in conceptual questions and fairly easy to understand in coding questions, while loops and functions produced more mixed levels of understanding among students.

14 http://www.iluminate.com/bios/miral-kotb/
15 The percentage for Post-Assessment (Short-Term) Loops does not include the coding question, which was thrown out as out of scope for this course because it involved an understanding of multiplication; this might falsely lower the resulting percentage.

These findings are also in line with Smith et al. (2014), who found that across all student projects, variables were among the three most rarely used concepts while control statements were among the three most frequently used. Similarly, Maloney, Peppler, Kafai, Resnick, and Rusk (2008) found that in an analysis of 425 student projects created with no formal instruction, conditional statements were used in 26.1% of projects while variables were used in only 9.6% of projects. This suggests that students in other programs also find variables to be a more complicated, harder-to-grasp concept and conditionals to be an easier, more understandable concept.

    Learning Trajectories

Rich et al. (2017) created three learning trajectories, for sequencing, repetition, and conditionals, based on an in-depth analysis of current Computer Science research and relevant scholarly articles. A learning trajectory is a map or route of how students can navigate from existing knowledge to complete understanding of a certain topic. The learning trajectories presented are constructed in a spiral formation consisting of three levels of understanding. I mapped the students' understanding and abilities from the weekly pre- and post-assessments, the cumulative assessment, and the exit interviews conducted at the end of the program onto each of these learning trajectories.

For the sequence learning trajectory, over the course of this 10-week Girls Who Code program students navigated through the beginning-level trajectory and finished at the block of the intermediate section stating "Creating working programs requires considering both appropriate commands and their order." This trajectory is highlighted below in Figure 23. I outlined this mapping based on the material covered within the lessons and the understanding students demonstrated during the assessments and exit interviews with respect to code sequencing. For example, the last learning objective in our highlighted learning trajectory was exemplified by the first loop question (Figure 12) on the final cumulative assessment, which asked students to achieve a specific outcome by debugging the current program construction and replacing the current forever loop with a more appropriate solution. This learning objective was also exemplified by the first function question on the final cumulative assessment (Figure 15), which asked students to identify which actions occur and in what order. However, students did not move past this block and did not have a chance to explore the rest of the trajectory.

    Figure 23: Sequence learning trajectory
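The sequencing idea in the block above, that a working program depends on both the right commands and their order, can be illustrated in text form. The club itself worked in Scratch blocks, so the Python sketch below is only an analogy: move() and turn_right() are hypothetical stand-ins for Scratch's "move" and "turn" blocks, not code from the program.

# Illustrative only: the club used Scratch blocks, so move() and turn_right()
# here are hypothetical text stand-ins for Scratch's "move" and "turn" blocks.
heading = "east"

def move(steps):
    # Pretend to move the sprite forward by `steps` in its current heading.
    print(f"move {steps} steps while facing {heading}")

def turn_right():
    # Rotate the sprite 90 degrees clockwise.
    global heading
    heading = {"east": "south", "south": "west", "west": "north", "north": "east"}[heading]
    print(f"turn right, now facing {heading}")

# Ordering A: move first, then turn -- the sprite travels east before turning.
move(10)
turn_right()

# Ordering B: the same two commands in the opposite order -- the sprite travels south.
heading = "east"  # reset so both orderings start from the same state
turn_right()
move(10)

The same two commands produce different drawings depending on their order, which is exactly the distinction the highlighted block asks students to recognize.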

For the repetition learning trajectory, students in the Gainesville GWC program completed the intermediate trajectory but did not explore the beginning trajectory or move on to any concepts in the advanced level. This highlighted trajectory is shown in Figure 24. In the beginning trajectory, the second step is identifying that "Instructions like 'Step 3 times' do the same thing as 'step, step, step.'" This concept was not explicitly taught during the program, and understanding of it was not explicitly shown by any of the participants. However, students did demonstrate understanding of concepts in the intermediate level, such as understanding different kinds of repetition and that repetitions can go on forever or stop. For example, in the first loop question on the final cumulative assessment (Figure 12), participants were able to understand why the current forever loop was not an accurate solution to the goal at hand and to identify that a finitely terminating solution was more appropriate.

    Figure 24: Repetition learning trajectory
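As a text-based analogy of the repetition ideas above (again, the club used Scratch blocks, so the step() helper below is purely illustrative), a counted loop does the same work as writing a command out repeatedly, and a bounded loop terminates where a "forever" loop would not.

# Illustrative only: step() is a hypothetical stand-in for a single Scratch "move" block.
def step():
    print("step")

# Writing the command out by hand: "step, step, step"...
step()
step()
step()

# ...does the same thing as a counted loop: "repeat 3 times: step".
for _ in range(3):
    step()

# A "forever" loop, by contrast, never stops on its own, so a bounded loop is the
# appropriate replacement when an action should repeat only a fixed number of times.
# while True:   # analogue of Scratch's "forever" block -- runs until the program is stopped
#     step()

This is the same reasoning participants applied on the first loop question of the cumulative assessment when they replaced the forever loop with a finite repetition.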

For the conditionals learning trajectory, students experienced the beginning level and reached the bottom block of the intermediate level, "Conditional statements can create branches in the flow of execution," but did not explore any other concepts in the intermediate section and did not move on to any concepts in the advanced level. The highlighted path is shown in Figure 25 below. Most of the concepts in the beginning level are demonstrated in the conceptual conditional question in the final cumulative assessment (Figure 19), which exemplifies mapping a condition to its outcome. Similarly, the bottom block of the beginning level, "Conditional statements are computer commands to evaluate conditions and complete connected actions," as well as the previously mentioned bottom block of the intermediate level, are shown by the second conditionals question in the final cumulative assessment (Figure 20), where students have to evaluate a condition and follow the resulting branches of execution.

Figure 25: Conditionals learning trajectory
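The two conditional ideas students demonstrated, evaluating a condition with its connected action and following a branch in the flow of execution, can be sketched in text form as follows. The condition and messages are made up for illustration; they are not the Scratch blocks from the assessment.

# Illustrative only: a made-up condition, not the actual assessment question.
score = 7

# "Conditional statements are computer commands to evaluate conditions and
# complete connected actions": one condition paired with one connected action.
if score > 5:
    print("play a victory sound")

# "Conditional statements can create branches in the flow of execution":
# exactly one of the two branches runs, depending on how the condition evaluates.
if score > 5:
    print("branch 1: switch to the winning backdrop")
else:
    print("branch 2: switch to the try-again backdrop")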

    Attitude and Perceptions Toward Computing Survey

As discussed earlier, the two most pertinent results from the pre- and post-program Attitude and Perceptions Toward Computing survey were for the questions "I am good at programming" and "Computer jobs are boring". At an alpha level of 0.1 there is a significant difference in the scores for both questions when comparing the pre- and post-program results. This suggests that participating in the program improved the girls' coding confidence and also helped them positively change their perception of jobs involving computers.
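The survey comparison above is a paired one: the same participants answered before and after the program, and the difference is judged against alpha = 0.1. Purely as an illustration of how such a check could be run, and not as the procedure used in this thesis, the Python sketch below applies a paired t-test to invented Likert-scale responses; the scores are not the study's data.

# Hypothetical illustration only: the scores below are invented, not the study's data,
# and the paired t-test is an assumed choice of test, not necessarily the one used here.
from scipy import stats

alpha = 0.1
pre  = [2, 3, 2, 3]   # invented pre-program Likert responses for four participants
post = [4, 4, 3, 5]   # invented post-program responses from the same participants

result = stats.ttest_rel(pre, post)  # paired comparison across the same participants
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
print("significant at alpha = 0.1" if result.pvalue < alpha else "not significant at alpha = 0.1")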

This second finding was strengthened when participants were asked what they wanted to be when they grew up, because all participants incorporated something related to computers or programming into their answers. This result may be attributed to the effective engagement of the program, the success of the women in tech spotlights, and the exposure to real women in the technology workforce through the tour of the local software company SharpSpring at the end of the program.

    Limitations

One of our largest limitations was time. The entire program was a total of 11 weeks, with 7 weeks of conceptual lessons, 3 weeks of individual project work, and 1 celebration tour at SharpSpring. These sessions were each only an hour long. We had originally planned for an hour and a half, as the Girls Who Code planning structure suggests, but revised it to one hour because the potential high school participants had expressed that an hour-long commitment was easier for them to make. Although this may have been true, the high school group's participation was inconsistent anyway, and the shorter sessions lowered the quality of the lessons we were able to provide to the elementary school group.

Another limitation of our data is that we only conducted exit interviews at the end of the program. This means that our insight into students' areas of understanding and misconceptions over time is limited; we can only speculate about what may have confused them during the weekly assessments. Our data would be stronger if these interviews had been conducted after each assessment.

Future Work

Findings in this study are confined to one specific Girls Who Code chapter with only four participants. There are Girls Who Code chapters across the United States, so future work could extend the study to more participants in more chapters to support broader conclusions. Other factors in chapter success could also be studied, such as how an instructor's preexisting knowledge of Computer Science affects the overall experience of club members. Along similar lines, best practices could be studied to help volunteers across all clubs run successful programs.

It would also be insightful to study clubs across different age groups. This study was originally planned with two different cohorts, a high-school-aged group and an elementary-school-aged group. As a result of the high-school-aged group's inconsistent participation, results and analysis had to be focused solely on the elementary school participants. Investigating whether older students have more difficulty with consistent participation, and what measures can be taken to ameliorate this problem, could also yield interesting findings.

    Conclusion

We found no significant difference between short- and long-term learning, and topic difficulties in the short-term translated to similar difficulties in the long-term. Students demonstrated a satisfactory level of learning and retention, which can be seen in an increase from pre-assessment to post-assessment results in the short-term as well as an increase from pre-interview to post-interview results in the long-term.

We found that at the conclusion of the program students were able to articulate their thought process and explain their level of understanding through a one-on-one exit interview. We believe this is partly a result of the individual projects that the students worked on during the last 3 weeks of the program. In the exit interviews, the girls often explained concepts in terms of their personal projects and defended their reasoning by explaining how or what they had experienced during the creation of their projects.

We found that general interest in Computer Science as a subject stayed consistent. This is most likely because participation was not mandatory: students and their parents purposefully sought out such a program, and the students chose to stay voluntarily. However, there was a significant change in perceived current programming ability and a significant change in the perception of jobs involving computers. Although not statistically significant, there was also a visible increase in perceived future programming ability as well as a decrease in the number of students who categorized programming as hard.

The conclusions above suggest that the Girls Who Code curriculum is effective in teaching students in both the short- and long-term and has a positive impact on students, specifically in encouraging them to believe that they are and can be good at programming and that jobs involving computers can be interesting, which may eventually encourage them to consider such jobs for themselves.

    References

Clements, D. H., & Gullo, D. F. (1984). Effects of computer programming on young children's cognition. Journal of Educational Psychology, 76(6), 1051.

Ericson, B., & McKlin, T. (2015, August). Helping African American students pass advanced placement computer science: A tale of two states. In 2015 Research in Equity and Sustained Participation in Engineering, Computing, and Technology (RESPECT) (pp. 1-8). IEEE.

Jemmali, C. (2016). May's Journey: A serious game to teach middle and high school girls programming.

Maloney, J. H., Peppler, K., Kafai, Y., Resnick, M., & Rusk, N. (2008). Programming by choice: Urban youth learning programming with Scratch (Vol. 40, No. 1, pp. 367-371). ACM.

Noymer, A. (2008). Alpha, significance level of test. In P. J. Lavrakas (Ed.), Encyclopedia of survey research methods (pp. 17-19). Thousand Oaks, CA: SAGE Publications, Inc. doi:10.4135/9781412963947.n13

Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., Millner, A., Rosenbaum, E., Silver, J. S., Silverman, B., & Kafai, Y. B. (2009). Scratch: Programming for all. Communications of the ACM, 52(11), 60-67.

Rich, K. M., Strickland, C., Binkowski, T. A., Moran, C., & Franklin, D. (2017, August). K-8 learning trajectories derived from research literature: Sequence, repetition, conditionals. In Proceedings of the 2017 ACM Conference on International Computing Education Research (pp. 182-190). ACM.

Sheridan, I., Goggin, D., & O'Sullivan, L. (2016). Exploration of learning gained through CoderDojo coding activities. In INTED2016: 10th International Technology, Education and Development Conference, INTED Proceedings (pp. 6541-6548).

Smith, N., Sutcliffe, C., & Sandvik, L. (2014, March). Code Club: Bringing programming to UK primary schools through Scratch. In Proceedings of the 45th ACM Technical Symposium on Computer Science Education (pp. 517-522). ACM.

Sullivan, K., Byrne, J. R., Bresnihan, N., O'Sullivan, K., & Tangney, B. (2015, October). CodePlus—Designing an after school computing programme for girls. In 2015 IEEE Frontiers in Education Conference (FIE) (pp. 1-5). IEEE.

    Personal Biography

I am an Industrial and Systems Engineering student with a minor in Computer Science at the University of Florida. I was born in Bogota, Colombia, but grew up all over the United States. I graduate this semester and will begin a full-time job with Accenture in Chicago in the fall. I love to travel and learn about new cultures. I'm an avid reader, amateur soccer player, and food enthusiast. I enjoy data visualization, streamlined design, and teaching, especially Computer Science education. I hope to pursue a master's or PhD in Human Centered Computing in the near future.