Some time ago I came across a paper from Stephen Gorard called ‘Warranting research claims from non-experimental evidence’. This paper makes some important points about warrants for research conclusions in educational research. Gorard states:
many of the high-profile criticisms of educational research are not, on reflection, about the nature of the evidence produced but about the way in which it is presented as unwarranted conclusions (generalising from nonrepresentative samples and so on). A piece of evidence cannot be either good or bad as long as it is presented with its appropriate caveats. It is when the researcher, or others, seek to go beyond what that evidence entails that problems occur (e.g. when there is overclaiming) (p.2).
He goes on:
it is alarming how often passive researchers attempt to make comparisons over time and place on the basis of one observation (p.4).
A warrant, Gorard argues, is ‘the crucial link between the findings and the conclusions ostensibly drawn from them’ (p.5).
As an educational researcher, I have found that this paper provides me with another frame through which to critically review the work of other researchers. Importantly, though, it is also a framework through which established ideas and taken-for-granted concepts in education can be viewed.
A good example of this is Roland Tormey’s 2014 paper, in which he critiques the ‘deep/surface approach to learning’ and argues that it is an oversimplified conceptual framework with significant empirical weaknesses. Tormey identifies that this ‘deep/surface’ approach to learning has become quite firmly entrenched within higher education:
Despite the widespread use and appeal of the model there has been a relative lack of critique (Haggis 2003; Case 2008), and Haggis (2009, 377) has concluded that ‘research into student learning in higher education is still frequently either based on these ideas [deep/surface learning], or takes them for granted’ (see also Case and Marshall 2004, 60) (p.2).
However, he goes on to critique this ‘approaches to learning framework’ in the following ways:
1. It is based upon an inadequate system of classification that has only two approaches to learning (deep/surface), which are seen to be in opposition (deep vs. surface) (p.4).
…In short, the evidence on teaching and learning suggests that the relationship between memorisation, understanding and teaching approaches is more complicated than what the approaches to learning framework may have led academic practitioners to believe (p.5).
2. It does not have empirical predictive validity (p.4).
Tormey reviews a number of studies here and argues that the data present a far more nuanced situation than one in which ‘deep approaches to learning’ alone result in high-quality learning achievements. Other factors also play a role, such as relevant prior learning, access to resources, and the nature of motivation.
Tormey also questions Entwistle’s (2009, 3) supposedly ‘unequivocal finding from student learning research over the last 20 years’ (p.6) that deep learning in higher education can be encouraged by changing teaching approaches:
the data here are, at best, mixed. While there are some data to support the contention (e.g. Prosser and Trigwell 1999), Tormey and Henchy (2008) found considerable diversity in how students approached learning in the same course, while a recent review of available evidence by Baeten et al. (2010, 246) found that the student-centred teaching methods that were expected to give rise to deep learning approaches often did not do so and that numerous studies have found students in such contexts either adopting more surface approaches or showing no significant change in their approaches (p.6).
3. It has not changed sufficiently over time and not adapted to changes in psychology and learning theory over the last 40 years, particularly those which focus on power and on the agency of the learner (p.4).
Webb (1997) has highlighted that the dominance of the approaches to learning framework has marginalised other perspectives on learning and has made invisible any idea that falls outside or challenges the foundations of the paradigm (see also Haggis 2003) (p.6).
Haggis (2003, 2004) has argued that developments that have been ignored include those in academic literacies (2003), in situated cognition and in complexity theory (2004) (p.7).
Ultimately, Tormey argues that ‘the deep/surface metaphor and the extravagant claims made for it can no longer be seen as being unproblematically derived from the ongoing body of available empirical evidence’ (p.9).
In other words, the ‘extravagant claims’ are not warranted by the ‘available empirical evidence’.
Obviously I have just picked out key snippets from this paper, so I would encourage people to read the whole piece to see in full the arguments Tormey is making and his supporting evidence (I would also encourage a complete reading of Gorard’s paper).
The key messages from these papers are ones that I am taking into my own research on learning outcomes. It could be argued that a learning outcomes approach in higher education is becoming a taken-for-granted idea. The abundance of (often institutional) information and resources on the internet about how to write learning outcomes for, and apply them to, modules and courses would support this argument, particularly as it is mostly presented with no discussion about the value or evidence-base of such an approach.
Indeed, it is because a learning outcomes approach does appear to have become so taken-for-granted and, from a ‘common-sense’ perspective, would seem to be a positive move, that we at the ‘Learning Outcomes Project’ are committed to interrogating the existing empirical base concerning the use of learning outcomes in HE and further adding to this base the results of our own empirical investigations (see, for example, our paper on students’ perspectives about learning outcomes).
The arguments about unwarranted research conclusions are also helping me to further review and critique ideas that have increasingly become embedded within a learning outcomes approach. A good example of this is Biggs’ notion of ‘constructive alignment’ (CA).
Whilst I am not taking issue here with the notion of CA itself, I do query whether Biggs’ conclusion that the principle of CA can be generalised from the specific context of in-service teacher education is warranted. Biggs developed the principle of CA when teaching one particular unit to one particular group of in-service primary and secondary teachers (82 students). From this one particular learning situation, a principle has developed that is now most often unquestioningly incorporated into a learning outcomes approach (or unquestioningly advised as something that should be part of one).
Saying that his conclusion is unwarranted is not saying the principle of CA is wrong (or flawed, etc.). But it is saying that further investigations in other teaching contexts and disciplines, and with other student groups, should perhaps have been carried out before a model was offered with such unequivocal conclusions that ‘although [arising] in a professional programme, it can be implemented in virtually any course at any level of university teaching’ (p.104).
My reason for this blog post is not to attack Biggs’ notions. Rather, it is to share arguments and ideas that I have found to offer powerful guidance in the way I approach the research findings I make and, particularly, those that are made by others. I think it might actually be quite frightening when we start to realise the extent of changes and/or initiatives occurring in education that are based on unwarranted conclusions.