
Sarah von der Mühlen and her colleagues presented the 20 undergrads and 20 psychologists with two passages of text, each approximately 400 words long, about smoking and addiction, and each containing a mix of plausible and implausible arguments (note that the surface meaning and grammar of the implausible arguments were not at fault).
The task had several elements. All the participants were asked to identify the different components of the arguments and to judge the arguments' plausibility. They were specifically told to evaluate the arguments on their internal consistency and quality, not on their own prior knowledge or opinions. The participants were also interviewed afterwards about what they'd thought of the task, the strategies they'd used to evaluate the arguments, and whether the arguments contained any of a list of fallacies, such as circularity. For one of the texts, the participants were asked to think aloud as they evaluated the arguments, giving the researchers more immediate insight into their evaluation strategies.
As you might expect, the psychologists were better than the students at judging the plausibility of the arguments (achieving roughly 80 per cent accuracy vs. 70 per cent). The psychologists' advantage was greatest when it came to spotting weak or implausible arguments (they caught nearly 80 per cent of these vs. 60 per cent for the students). The psychologists, who took more time to judge plausibility, were also better at breaking down the structure of the arguments, especially at recognising what's known as the argument "warrant" – the link made between a claim and the evidence cited to support it.

Psychologists and other scientists aren't usually given formal training in argument logic and analysis, but the researchers think they probably pick up a lot of relevant analytical skills through their training and the social aspects of being a scientist. Further analysis suggested that a greater awareness of the formal structure of arguments (check out the Toulmin model of argumentation for more on this) and of the range of argument fallacies helped the psychologists better evaluate the arguments used in this study. However, we need to be aware that the study was cross-sectional, so we can't say that this knowledge caused the better performance – perhaps, for example, being the kind of person who takes on post-doctoral science studies makes you better at judging arguments, and/or the psychologists were simply more motivated to excel at the task and follow the instructions.
Another limitation of this research is that the students and psychologists were assessing arguments in a context at least partly related to their domain of expertise or study (though note that no prior knowledge was required to judge the plausibility of the arguments). It would be interesting to know how well the psychologists' argument-evaluation skills would extend to other topics. For now, though, what this research reveals is that when it comes to evaluating arguments, people find it very difficult to put aside their gut instincts and their prior opinions and knowledge, and to judge arguments logically, based on their actual quality and coherence. Although we think of scientists as highly knowledgeable experts, their greater skill at evaluating arguments actually seems to come from their ability to set aside what they know and judge an argument on its merits.
_________________________________
SOURCE:
von der Mühlen, S., Richter, T., Schmid, S., Schmidt, E., & Berthold, K. (2015). Judging the plausibility of arguments in scientific texts: A student–scientist comparison. Thinking & Reasoning, 1–29. DOI: 10.1080/13546783.2015.1127289 (accessed 1.2.16)