More on research uptake and multiple forms of knowing - a conversation at a railway station
The meeting at Patisserie Valerie at Kings Cross Station, with an old friend from academia, was very different to the meeting in the park that was the subject of my last blog. She is finding this emphasis on
impact and research uptake quite positive because it is giving her “applied
research” a lot more credence and validity in academic circles. We had an interesting conversation on “impact
and uptake”, mainly recognising that judging impact merely by “uptake” could imply a level of arrogance that is not in the spirit of independent research, and is more an outcome of advocacy than of knowledge creation. What we need to show as impact is that decision-makers have taken our research into account when making their decisions, even though other factors may sway those decisions in a different direction. My friend seemed to think that it is a matter
of time: that piling up a sufficient body of evidence and communicating it to
the decision-makers will eventually wear them down. My thought is that if we do want decision-makers to act in the way we think they ought to, our research programmes need to incorporate activities that go beyond building up evidence to informing and influencing the people who make the decisions, and this requires a different set of tactics, not to mention skills.
I
suppose one thing we must realise is that the knowledge generated by research
exists in competition with other forms of knowing. Policy-makers bring to the decision-making table the knowledge they have developed as individuals: tacit or explicit knowledge that is shaped by their social and physical setting, is unique to each of them, is part of their identity, and as such can be fiercely guarded. They may bring perspectives gained from being part of a particular community; they may have professional or specialised knowledge that values certain ways of knowing above others (the quantitative over the qualitative, for example); and they may have organisational objectives that shape their experience into more strategic, goal-oriented ways of knowing. We recognise this multiplicity of knowledge
when we triangulate our data collection; we also need to recognise it when we seek to
communicate our research outcomes, and this means investing in getting to know
our policy-makers better.
This takes us to a fundamental problem about how to judge the effectiveness of research organisations. The focus on "uptake" has a sickly underbelly, because it can incentivise researchers to push their findings to policymakers, when we know the world is a complex place and no single piece of research can give all the answers (even on a small research question, whatever the RCT lobby group says...). So there are (or at least can be) moral implications in how strenuously we push our findings out there - we can call it arrogance, foolhardiness... or perhaps even ignorance. IMHO, research needs to be judged by how well it can trigger and sustain an active policy focus and effective policy dialogue on a given issue. The outcome is irrelevant; what we need to be judged by is whether or not we made people think.