## Contents

1. Growth and fixed mindset
2. Mindset everywhere
3. The foundation of growth mindset
4. Mindset predicts academic achievement
5. Meta-analysis and larger sample sizes needed
6. A new mindset intervention
7. Mindset’s role in the replication crisis
8. Replication attempt in China
9. Replication attempt in Norway
10. Discussion
11. References
12. Appendix

**Abstract**

The concept of the growth mindset has become increasingly well known in recent years and is presented as a key factor determining failure and success (Dweck, 2006). This paper evaluates the current research on Dweck’s mindset theory as well as its scientific foundation. Its main goal is to critically review the concept of the growth mindset and assess its status appropriately. By taking a closer look at the methods and statistical inferences used, the paper reassesses the science behind the growth mindset and discusses whether its application has gone beyond the data. It identifies a discrepancy in the literature regarding the efficacy of mindset interventions and concludes that such interventions can be helpful and effective, though not to the degree claimed. The growth mindset remains an interesting phenomenon, and future research is needed to clarify whether it is beneficial at all and, if so, under which conditions. With regard to the current replication crisis, Dweck’s research serves as a good example of work that needs replication using proper statistical methods and interpretation.

**Keywords**: mindset, growth mindset, fixed mindset, academic achievement, replication crisis

## 1 Growth and fixed mindset

The *growth mindset* is based on the belief that one’s basic qualities can be developed through hard work and dedication. *Growth-minded* people embrace challenges, persist in the face of setbacks, and learn from their failures. Academic challenges are not perceived as a threat to one’s ability but rather as an opportunity for learning and improvement. Growth-minded people are inspired by the success of others and see effort as the path to mastery. Intelligence is seen as a malleable quality which can be developed over time (Dweck, 2006).

*Fixed-minded* people, on the contrary, see their intelligence as a static, inherent trait that cannot be changed. The fixed mindset creates an urgency to prove oneself over and over. Instead of approaching academic challenges with a desire to learn, fixed-minded people want easier problems that will make them look and feel smart. Challenges are avoided rather than embraced, effort is seen as something required only in order to look smart, and the success of others is perceived as a threat. Intelligence is seen as a fixed quality that cannot be developed, which leads to the fixed-minded assumption that one’s IQ determines success and failure (Dweck, 2006).

## 2 Mindset everywhere

In recent years, the concept of the growth mindset has garnered a great deal of attention and is today well known and applied in many different settings. In Dweck’s bestselling book *Mindset: The New Psychology of Success* (Dweck, 2006), which has sold over one million copies, she gives practical advice on how and where to implement the growth mindset. According to Dweck, a growth mindset explains Michael Jordan’s success as well as Enron’s failure. Looking at today’s business world, well-known organizations such as NASA, Microsoft, Google, and Apple integrate the growth mindset into their hiring processes (Schmidt & Rosenberg, 2015; Keller & Papasan, 2013). Huge successes are attributed to endorsing a growth mindset. When Scott Forstall, former senior vice president at Apple, formed a new team, he was looking for “growth-minded” people; these people then created the iPhone (Keller & Papasan, 2013). Likewise, Google’s ideal candidates are so-called “learning animals” (Schmidt & Rosenberg, 2015, p. 77): people with a deep desire to learn who endorse a strong growth mindset.

Dweck’s TED talk “The power of believing that you can improve” has helped spread the idea of the growth mindset. The talk has been viewed over five million times and has made the growth mindset a widely known phenomenon. However, the way this concept and its effects are presented in the media raises interest in finding out more about the science behind it.

Three questions drive this paper: What is the science behind the concept of the growth mindset? Has the application of the growth mindset idea gone beyond the data? And what role does the growth mindset play in the context of the current replication crisis?

This paper begins by taking a closer look at the science behind the growth mindset idea. In particular, it focuses on the first and most influential studies that shaped the concept of mindset; their methods and statistical interpretations will be critically reviewed. The paper then discusses newer research findings on the subject and examines the growth mindset in the context of the replication crisis.

## 3 The foundation of growth mindset

Dweck’s breakthrough with her mindset theory came with a 1998 study conducted together with Claudia Mueller. The researchers predicted that *Praise for Intelligence Can Undermine Children’s Motivation and Performance* (Mueller & Dweck, 1998, p. 33). To challenge the hitherto held belief that praise for ability has beneficial effects on motivation, they set up an experiment which contrasted praise for hard work (effort praise) with praise for being smart (ability praise). 128 fifth-graders (70 girls and 58 boys), aged 10 to 12 years, participated in the study. All of them took an intelligence test and were told they had scored 80%, regardless of their actual score. In addition, all children were told that this was “a really high score” (p. 49). After this initial feedback, the children were praised differently. One third (n=41) were praised for their ability and told: “You must be smart at these problems” (fixed-mindset praise). Another third (n=41) were praised for their effort (growth-mindset praise) by being told: “You must have worked hard at these problems”. The remaining children (n=45) were in the control condition and received no additional feedback. After being praised, the children were asked whether they preferred to work on a performance goal or a learning goal, with four choice alternatives: A: “*problems that aren’t too hard, so I don’t get many wrong*”, B: “*problems that are pretty easy, so I’ll do well*”, C: “*problems that I’m very good at, so I can show that I’m smart*”, and D: “*problems that I’ll learn a lot from, even if I won’t look smart*”. A, B, and C were scored as a performance-goal preference and D as a learning-goal preference.

Goal choice was clearly affected by the content of the praise. Children who had been praised as “smart” were more likely to choose the easy problems, whereas the effort-praised children tended to choose the harder ones. The control group was evenly split.

Taking into consideration only the effort-praised children, an astonishing 92% chose the learning goal and only 8% a performance goal. This effect looks huge. Even though Mueller & Dweck (1998) did not report an effect size, a post-hoc power analysis with an estimated medium effect size of w=0.3 gives the study, according to G*Power, a power of around 81%, which is solid. However, post-hoc power analyses are problematic, since they reveal no information beyond what the data already show. The main point of critique here is the small group sizes of around n=43. Studies with small sample sizes tend to overestimate effect sizes (Simmons, Nelson & Simonsohn, 2011), which is why it is preferable to think about power, effect size, and sample size beforehand and report them in a preregistration.
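Such a power calculation can be reproduced approximately with standard tools. The sketch below (in Python, using SciPy) assumes a chi-square test with one degree of freedom over the two praise groups; the exact figure depends on which comparison and degrees of freedom G*Power was given, so the result need not match the reported 81% precisely:

```python
from scipy.stats import chi2, ncx2

def chi2_power(w, n, df=1, alpha=0.05):
    """Power of a chi-square test with effect size w and total sample size n."""
    crit = chi2.ppf(1 - alpha, df)   # critical value under the null hypothesis
    ncp = n * w ** 2                 # noncentrality parameter under the alternative
    return 1 - ncx2.cdf(crit, df, ncp)

# Medium effect w = 0.3, the two praise groups of 41 children each
power = chi2_power(0.3, 82)
print(round(power, 2))
```

With these assumptions the power lands in the high 0.7s, close to the value reported above; running the same function with a smaller w illustrates how quickly power collapses for samples of this size.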

The researchers were also interested in children’s task enjoyment, measured by questions like “How much did you like working on the first set of problems?”, rated on a scale from 1 (not at all) to 6 (very much). When the children were given another test of increased difficulty, those who had been praised as smart reported enjoying the challenging questions less (M=4.11) than the children who had been praised for their effort and hard work (M=4.89); children in the control condition fell in between (M=4.52). Unlike for the goal-choice measure, no absolute numbers were reported for task enjoyment, but the means differed significantly (p < .005).

To date, this paper has been cited over 1,200 times and forms the foundation of Dweck’s mindset theory. Ever since, much research has been conducted to examine the role of mindsets and the efficacy of mindset interventions.

## 4 Mindset predicts academic achievement

Another highly influential study by Blackwell, Trzesniewski & Dweck (2007) examined the longitudinal effects of a mindset intervention among 7th graders in the USA. The children’s ages were not reported, but students in 6th and 7th grade can be expected to be around 12-14 years old. Blackwell et al. conducted two studies to explore the relationship between implicit theories of intelligence (fixed or growth mindset) and children’s academic performance. In the first study, participants entering 7th grade filled out a questionnaire assessing their theory of intelligence, goals, beliefs about effort, and helpless versus mastery-oriented responses to failure. Each variable was measured through a set of subscales, all containing items rated on a 6-point Likert scale from 1 (*Agree Strongly*) to 6 (*Disagree Strongly*). For example, six items measured the children’s mindset: three fixed-mindset statements (e.g., *“You have a certain amount of intelligence, and you really can’t do much to change it”*) and three growth-mindset statements (e.g., *“You can always greatly change how intelligent you are”*).

Academic performance was measured through mathematics achievement test scores every term (fall and spring). Four waves of students entering 7th grade were followed for two years; the total sample of 373 students comprised four successive entering classes of 67-114 students each. At the beginning of their study, the researchers tested whether children’s mindset was related to their prior academic achievement (6th-grade test scores) and found no significant relation.

After the two years, the researchers found that students who endorsed a growth mindset at the beginning of 7th grade outperformed their peers with a fixed mindset. Regression modeling showed a significant effect of children’s mindset on change in grades (*β* = .53, *t* = 2.93, *p* < .05). This is explained by the finding that a growth mindset predicts more positive motivational patterns (such as effort beliefs, learning goals, and responses to failure), which in turn lead to increasing academic performance. Thus, children’s motivational patterns seem to mediate the relationship between their mindset and their performance.

In their second study, they first replicated their mediational model on a new, smaller sample over a shorter period of time. Second, they administered a growth mindset intervention to half of the students and examined its effects on classroom motivation and achievement. The growth-mindset condition (N=48) was compared with a control group (N=43) which had not been taught that intelligence is malleable. The researchers wanted to find out whether students in the growth-mindset condition would outperform their peers in terms of grades as well as show more positive motivation and effort in the classroom. Before the intervention, they examined how the children’s math achievement changed across the junior high school transition. They found that the sample as a whole was declining in grades, which is apparently commonly observed over the junior high school transition (Gutman & Midgley, 2000). It was therefore interesting to see how grades changed after the growth mindset intervention. Blackwell et al. (2007) found a significant effect of the experimental condition on change in grades across the intervention (*p* < .05): the decline in grades continued in the control condition but was halted, and even reversed, in the growth-mindset condition (see Appendix, Figure 1).

Additionally, the researchers hypothesized that the effect of the intervention would be greatest for those students who initially endorsed a fixed mindset more strongly. They therefore examined the interaction of the growth mindset intervention and students’ initial mindset on change in math grades from preintervention to postintervention, and they report the result as “marginally significant” (*p* < .10). A *p*-value of this size being reported as significant casts doubt on their statistical practice. To their credit, they commented on this *p*-value, explaining that it turned out only marginally significant due to their small sample size and that the effect should therefore be replicated with more power. However, reporting an obviously non-significant *p*-value as significant seems dubious and calls into question the scientific practice behind this and previous mindset studies.

The paper by Blackwell et al. (2007) has been cited over 1,500 times and counts as one of the most influential studies reporting significant effects of growth mindset interventions. However, considering that the sample consisted of N=48 and N=43 participants per condition, one might suspect this study was conducted with too little power. Given these samples, a post-hoc power analysis with G*Power yields around 65% power to detect a medium effect size at a significance level of .05. More striking, however, is the non-significant *p*-value of *p* < .10 which is reported as significant. Contrary to the authors’ interpretation, this *p*-value is diverging from, rather than approaching, significance, which means the effect may well be non-existent.
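The 65% figure can be checked with standard software. The following minimal sketch in Python uses statsmodels and assumes a two-sided, two-sample t-test with Cohen’s d = 0.5 (a conventional “medium” effect; the original paper does not specify the test G*Power was fed, so this is an approximation):

```python
from statsmodels.stats.power import TTestIndPower

# Post-hoc power for a two-sample t-test with the reported group sizes
# (N=48 intervention, N=43 control) and an assumed medium effect d = 0.5
power = TTestIndPower().solve_power(
    effect_size=0.5,   # Cohen's d (assumed)
    nobs1=48,          # intervention group
    ratio=43 / 48,     # control group size relative to nobs1
    alpha=0.05,
)
print(round(power, 2))  # roughly 0.65
```

With roughly one in three true medium-sized effects going undetected at this power, a single marginal *p*-value from such a design carries little evidential weight either way.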

A common key finding of the studies described above (Mueller & Dweck, 1998; Blackwell et al., 2007) is that children’s mindsets are related to their academic performance and that teaching the growth mindset can raise children’s grades. Dweck herself refers to these findings in her TED talk “The power of believing that you can improve”. Stated with such confidence, these findings could lead to wide application of growth mindset interventions in classrooms in order to boost children’s academic performance.

Having encountered methodological and statistical problems in the key studies Dweck refers to, it seems necessary to examine the relation between mindset and academic performance more closely.

Worth mentioning at this point is the study by Paunesku et al. (2015) on the relation between mindset and academic achievement. Inspired by previous research (Aronson, Fried & Good, 2002; Blackwell et al., 2007; Good, Aronson & Inzlicht, 2003), which had shown the efficacy of academic-mindset interventions on a small scale, the team around Paunesku (including Dweck herself) wanted to find out whether a growth-mindset intervention could be a practical way to raise school achievement on a large scale. They set out to deliver a growth-mindset intervention through online modules to over 1,500 students. Their primary research question concerned the efficacy of mindset interventions in general, but they also focused on underperforming students in U.S. high schools: about one-third of the total sample met *high-yield* indicators of risk of dropping out of high school. They delivered the mindset intervention to all students and tested how their performance, measured by grade point average (GPA), differed from pre-intervention to post-intervention. In addition, they included a *sense-of-purpose* intervention. The authors’ main findings were that both interventions are beneficial for students, and most beneficial for poorly performing ones. They claim that each intervention significantly raises students’ academic performance.

However, reading through the methodology and results, doubts arise regarding these claims. According to Paunesku et al. (2015), linear regression revealed that the predicted *Risk* x *intervention* interaction was significant for each intervention condition: for the growth-mindset intervention (*p* = .048) and for the *sense-of-purpose* intervention (*p* = .021). Further, when both intervention conditions were combined into a single intervention, the researchers report the interaction as “marginally significant” (*p* = .071).

When dividing the sample into *at-risk* and *not at-risk* students, the results change: regression analysis revealed a significant interaction effect for *at-risk* x *combined intervention* (*p* = .011), but no effect among the *not at-risk* students (*t* < 1).

At this point, at least two of the reported *p*-values deserve closer attention. As with the study by Blackwell et al. (2007), interpreting an interaction with a *p*-value of .071 as (marginally) significant is not correct; it rather suggests that there is no significant effect, and yet Paunesku et al. (2015) interpret it as one. As before, a *p*-value of .071 is a sign that the effect is moving away from, rather than toward, significance. The other *p*-value worth mentioning is the interaction effect for the growth-mindset intervention alone, interpreted as significant and reported at *p* = .048. Strictly speaking, the researchers interpret this value correctly as significant, since it meets the conventional threshold of *p* < .05. However, concluding that this interaction effect is certainly real seems hasty. Stumbling upon a *p*-value of .048 surrounded by other weak *p*-values raises the possibility of *p*-hacking, a practice that has gained more and more attention within psychology in recent years: reanalysing data in many ways until it yields a targeted result (Simmons, Nelson & Simonsohn, 2011). In the study by Paunesku et al. (2015), statistical significance is defined as a *p*-value below .05, meaning that a difference this large between the two groups would be expected by chance alone less than 1 time in 20 if there were no true effect. But what if the effect had been tested 21 times and turned out *significant* (*p* = .048) only once? If only this one *significant* *p*-value is reported, we might think there is an effect when it is in fact much more likely that there is none.
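This multiple-testing intuition can be made concrete with a two-line calculation: under the null hypothesis, the probability of obtaining at least one *p* < .05 result grows quickly with the number of independent tests run.

```python
# Familywise error rate: probability of at least one p < .05 result
# across k independent tests when no true effect exists.
alpha = 0.05
for k in (1, 5, 21):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> P(at least one false positive) = {fwer:.2f}")
    # 1 test  -> 0.05, 5 tests -> 0.23, 21 tests -> 0.66
```

With 21 unreported tests, a single *p* = .048 is thus more likely a false positive than evidence of a real effect, which is exactly the concern raised above.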

Looking more closely at the study and its reported statistics, the main conclusion to draw is that Paunesku et al. (2015) found that a growth-mindset intervention alone failed to significantly raise grades. Only after excluding adequately performing students (those not at risk of dropping out of school) and merging the growth-mindset intervention with the *sense-of-purpose* intervention did they find a significant impact. In contrast to the authors’ confident interpretation of their findings, as seen in their abstract and discussion, the evidence here appears quite weak. The borderline *p*-values in particular, and their misinterpretation, are good reason to doubt the scientific practice behind these studies.

**[...]**

Fabio Zander (2017). *Doubting the efficacy of the growth mindset. A literature review.* Munich: GRIN Verlag. https://www.grin.com/document/366433
