Archive for the 'Research' Category

Research and busy practitioners: won’t get fooled again?

by Karl Wilding

Originally posted at http://www.ncvo-vol.org.uk/networking-discussions/blogs/209/12/10/23/research-busy-practitioners-wont-get-fooled-again on 23 October 2012.

Last week I had the good fortune to speak at the launch of a new book, Helen Kara’s Research and evaluation for busy practitioners: A time saving guide. Whilst the event – and the book – wasn’t aimed specifically at a voluntary sector audience, my task was to think the unthinkable (or should that be research the unresearchable?) and ask just where the voluntary sector is at the moment when it comes to research and evaluation.

I reckon it’s a pretty mixed bag. My guess – i.e. not evidence based – is that once we put aside the charities whose primary purpose is research or public policy, there are parts of the sector that excel in being evidence driven: I’m thinking of the large national children’s charities, for example, organisations with dedicated research and evaluation staff and, hopefully, a culture of learning. But after that, it is still the case that there is a long tail of organisations where research and evaluation play a much smaller role in developing policy and practice. The reasons are likely to vary: lack of resources, or how they are prioritised, will figure highly. In many organisations, research and evaluation will be part of somebody’s job role, a slice of their daily time budget.

But is it still the case that there are organisations out there with a more difficult relationship with research and evaluation? What you might call the research refuseniks, the gut instinct operators, and in some cases the research abusers. Whether by accident or design, it strikes me that we need to help this long tail get better at using research and evaluation. The evidence hurdle gets higher – and harder to jump over – every year. The ‘do we make a difference’ challenge now means that organisations with small numbers of staff are sometimes expected to undertake complex needs analyses, cost benefit assessments, summative evaluations, SROI calculations. And so on: my point is, this is often skilled stuff. Given that there are probably few full time researchers, what should we do?

I’m conflicted on this. My desire to train practitioners and managers is tempered by the question I often ask myself: if the central heating broke down at home, would I try and fix it myself? The answer being a definite no. So I think the answer is one we’ve tried on the Cass Charity MSc, where I teach a module called Research Methods for Managers. Let’s think about practitioners in the voluntary sector who have to deal with research and evaluation as being creators, curators, commissioners and consumers. Let me explain.

The research creators are, as the name suggests, undertaking primary research. I think we need to put this group in contact with each other (more peer support, such as via VSSN) and provide them with guidance that supports without leading them into any bear traps – ARVAC’s guide to getting started in community research is a good example. We should also help to network these people with the academic community, where a surfeit of initiatives are trying to build community-university partnerships. But I’m not sure how realistic it is to aim for more creators – and I know I am possibly going against the research co-production grain here.

More numerous are likely to be the curators: those who are trying to pull together ‘state of the art’ reviews, ideally, in my experience, in 2-4 pages. For this group I reckon we should be trying to make use of tools such as Rapid Evidence Assessments or, even better, getting our academic colleagues to produce more such briefings. One wonders whether the research councils should hand out far more brownie points for such reviews.

The commissioners are increasingly commonplace as voluntary organisations attempt to become more evidence-based. I’ve heard many a comment that much commissioning in our sector is, ahem, wasted. Bad practices – such as the idea that any commission should cost no more than £10,000, or that primary research is always necessary – can again be addressed by better dissemination of some good guidance. I think this is increasingly a critical group that we should support, whether in terms of skills or knowledge of where to buy from.

And finally, the research consumers. As we’ve moved to a knowledge economy we are faced with a deluge of data, information, knowledge, intelligence, insight… but what’s valuable? How do we sort the wheat from the chaff? This is a problem for the commissioners too. I wonder if it’s where we should concentrate our effort – helping busy practitioners and managers to commission and consume research and evaluation by empowering them to know what good research and evaluation looks like. There are frameworks available to help public sector policy makers judge research quality, for example. Hence, at the launch of Research and evaluation for busy practitioners I argued that if the book was a song, it would be ‘Won’t Get Fooled Again’.

I reckon the advice in the book for practitioners is part of the answer to the sector’s uneven progress towards being ‘research ready’. But it’s only a part. We also need to focus on other parts of the system, for example the funders and commissioners who are asking for evidence. And it’s not enough for voluntary organisations to be research ready: researchers need to be sector ready. As such, I’m glad that NCVO is supporting the Alliance for Useful Evidence. We also need to think about issues at a sectoral, or sub-sectoral, level, such as standards and principles. This issue came up at the launch of Inspiring Impact.

The launch covered many other issues – including how we better translate research into policy and practice (avoiding the sudden reveal, what the excellent Jane Lewis called the ‘Ta Da!’ approach to research findings), and how as voluntary organisations we deal with research that shows we aren’t as good as we think or hope we are. I’ll blog about these another time if someone asks! In the meantime, this is a good book worth buying.

Research and evaluation for busy practitioners by Helen Kara is available to buy at a 20% discount here.

The covert censorship of Gold Open Access

by Dr Helen Kara, author of Research and Evaluation for busy practitioners

Helen Kara has been an independent social researcher and writer since 1999, and is also Associate Research Fellow at the Third Sector Research Centre, Birmingham University. Her background is in social care and the third sector, and she works with third sector organisations and social care and health partnerships. Here she writes about the debate over Open Access to journal articles, which has been growing over the last few years.

I support the principle of Open Access, i.e. that reports of research funded with public money should be available for any taxpayer to read. But I am worried that the planned implementation of this in the UK may lead to unintended censorship.

For those who may not be fully up to date with the progress of the Open Access movement in the UK, let me recap briefly. A group chaired by Professor Dame Janet Finch, of Manchester University, was asked to make recommendations on how access could be broadened.  The Finch Group reported in June of this year, recommending that the UK work towards ‘gold’ Open Access, where authors rather than readers pay for publication. In mid-July the Government accepted the Group’s recommendations, and is now working on their implementation.

People often conflate censorship with redaction, where parts of a publication are blacked out or removed. This is one overt form of censorship, but there are also covert forms, which are more insidious because they’re less obvious.  I believe that Gold Open Access will lead to at least three different forms of covert censorship.

Gold Open Access will save institutions money because they won’t have to pay for expensive journal subscriptions.  However, in these days of cuts and squeezes, there are no guarantees that money saved will be used to cover the costs of staff who want to publish their research.  The existing cuts and squeezes are already causing some forms of censorship.  That could become much more widespread because, as a result of Gold Open Access, there is likely to be fierce competition for publication funds within academic institutions.  This is the first form of covert censorship, because any academic who loses such a competition will be unable to publish, regardless of the merit of their work.

Under the Gold Open Access approach, the cost of publishing an article is expected to be around £1,500, which is a significant sum even for institutions with sizeable research budgets.  And it is completely prohibitive for most individuals.  Therefore the retired academic, the unemployed academic, the postgraduate student, the practitioner-researcher, the independent researcher, will all be unable to publish their work in academic journals – which is a second form of covert censorship.

Researchers from outside academia can bring valuable perspectives.  Of course I would say that – I’m an independent researcher – but the academics who choose to work with me seem to agree.  So do journal editors, as it appears that around one in three authors of articles in academic journals are retired, unemployed, students, practitioner-researchers or independent researchers.  Therefore the move to Gold Open Access could also see some journals disappearing, as their submissions dry up from both academic and non-academic sources.  And that’s a third form of covert censorship.

I’m sure censorship was not at all what the Finch Group intended.  And let me restate my own support for the principle of Open Access.  But even the UK Open Access Implementation Group has acknowledged that the transition to Open Access will not be straightforward.  I think care must be taken to make access truly open, for writers, editors and publishers, as well as for readers.

What is the impact of evaluation research on public policy?

Evaluation research findings should be a key element of the policy-making process, yet in reality they are often disregarded. In this blog post, Colin Palfrey, one of the authors of Evaluation for the real world, looks at the history and impact of evaluation:

“The formal evaluation of public services has a history of little more than 50 years. Discovering what impact various social policies, programmes and projects have had on the intended beneficiaries makes political and economic sense. Why, one might ask, has evaluation had such a relatively limited pedigree?

Part of the explanation perhaps lies in the response from several medical practitioners in the 1970s and 1980s, who considered the movement towards evidence-based medicine an unwarranted assault on their professional wisdom and integrity.

Nevertheless, in spite of initial opposition from some quarters, evidence-based medicine, with its emphasis on the randomised controlled trial as the primary, if not the sole method of producing cogent evidence, became widely accepted as the ‘gold standard’ on which to base professional practice.

Although academic articles and books began appearing in some numbers in the USA during the 1960s, there was little academic or political interest in formal evaluation in the UK until two decades later. It would appear that in the UK, for example, the formulation of a policy, particularly when enshrined in legislation, was deemed sufficient to ensure its full implementation and, once implemented, its intended effect.

However, it is highly probable that the movement towards evidence-based medicine impinged on the world of civil servants and politicians. Certainly with the Thatcher government in the 1980s questioning the value of the public sector in terms of its efficiency, major projects and initiatives – notably the National Health Service – came under close scrutiny. Government spending on public sector services now had to prove its cost-effectiveness.

In the UK this concern with efficiency and cost-effectiveness spawned a number of government documents directed at policy advisers. Politicians now needed to know ‘what works’ and at what cost. This emerging culture of evidence-based policy prompts the question of how evaluation research commissioned by governments influenced or even shaped central policy.

It is on this question that our book focuses. Given the plethora of learned articles and books on the subject of evaluation over the past 50 years or so, what evidence is there that evaluation research in its many manifestations – commissioned project evaluation, policy evaluation, theory-driven evaluation – has had an impact on public policy at central and more local levels? In short, how cost-effective has evaluation research been?

The book looks at the possible reasons why academics, in particular, appear somewhat sceptical, if not despondent, about the outcome of their research-based findings. Those who make decisions about allocating taxpayers’ money to a range of policies and their embodiment in programmes and projects are not bound by any contractual arrangements to act on the results of evaluation research – whether this has been designed and delivered by academics or by research-oriented private companies.

We contend that the exploration of the impact of evaluation research on public policy is long overdue.”

Evaluation for the real world: the impact of evidence in policy making, by Colin Palfrey, Paul Thomas and Ceri Phillips, was published on 13 June 2012 by The Policy Press. You can order a copy at a 20% discount here.

Hedgehogs, Foxes and Sociologists*

Dr. Nasar Meer and Dr. Katherine Smith write:

The late Isaiah Berlin once distinguished between two types of political animal: the first was a prickly hedgehog (who views the world through the lens of a single defining idea), and the other a cunning fox (for whom the world cannot be boiled down to a single idea). Sociologists, motivated less by ‘normative’ positions and arguments, have traditionally aspired to be neither. It is therefore with some bemusement that many of us have encountered Aditya Chakrabortty’s recent admonishments.

Like Bill Jordan (see previous blog post), we agree that the study of economics has been found wanting, and that Chakrabortty certainly catches something of a deeper conversation amongst academics, with the important proviso that Chakrabortty’s piece on occasion conflates those who study markets with those who feverishly endorse them. True, economics has in places been stripped of its critical and holistic features, but there are political economists who continue, often persuasively, to take a more direct route (see for example David Harvey’s RSA lecture on the financial crisis). We do not wish to intrude on private grief, however, and so will leave economists to speak for themselves and focus instead on those who have disappointed Chakrabortty most.

A prevailing strand of sociological inquiry in Britain has long sought to make our social world more knowable through a methodology of verstehen, a term employed by the German sociologist Max Weber (1864-1920). While this can incorporate quantitative and comparative perspectives, Weber’s task was to ‘empathetically understand’ the ways in which the actions of people and groups in society are inscribed with ‘meaning’. Through the study of this meaning, he maintained, we could observe intentional or unintentional social outcomes, as shown in his study of early capitalism in Northern Europe, and specifically the role of a Calvinist-Protestant work ethic in encouraging capital accumulation and investment.

Much has changed in sociology, and we have passed many ‘post-’s, but these approaches remain familiar to students and teachers of the discipline whose research spans the seemingly banal to the most contested, the most intimate to the most innocuous topics. That is to say that there is perhaps a consensus that whatever else sociological inquiry resembles, it must necessarily be motivated by a concern with something greater than political debate. It is here that Chakrabortty’s lament that a ‘Foucauldian lens’ or studying ‘the holistic massage industry’ is a distraction from what really matters comes up short, not least because he repeats the error he is critiquing by giving primacy to all that is seemingly ‘economic’. Another way of putting this is to say that economics is not the only sphere of the social world and, to reverse the problem, it is short-sighted to uncouple economics from the study of culture, gender, ethnicity, and so forth, and so miss the intersectionalities of social phenomena.

This means it is not for sociologists to ‘defeat’ economists but to engage in sociologically valid inquiry that incorporates more than economics. This does not mean ignoring the economic crisis, but rather taking it in the round. Hence the core theme of the 2010 British Sociological Association (BSA) conference was ‘Inequalities and Social Justice’, while ‘Sociology in the Age of Austerity’ was the core theme for our 2012 meeting. Each of these showcased important arguments that are yet to find their way into print, partly because the rigours of peer review can entail a lag of around eighteen months between article submission and publication (we have elsewhere discussed what the implications of increased auditing of scholarship might entail: http://www.timeshighereducation.co.uk/story.asp?storycode=419128). Nonetheless, there is a diverse range of sociological scholarship on the economic crisis that offers more than the sum of its parts and so deals with the big questions too (http://tinyurl.com/6wy6jrb).

In many ways Chakrabortty’s concern strikes at the heart of what has been debated widely – indeed on the pages of the journals he says ignore the economic crisis – as Public Sociology. An important point here is that there is more than one ‘public’. So when sociologists engage in the mass media, as is easily observed in the mediatised letters and campaigns against the NHS and Social Care Bill or the hike in tuition fees, Michael Gove’s ‘free’ schools, or the Government’s targeting of the most vulnerable, this is just one kind of public. Sociologists also engage with other ‘publics’, many of which may be less visible to journalists, such as local communities, prisons, virtual communities, and students (of various kinds, both inside and outside universities), as well as conventional academic publics. These too are sociological terrains of political economy.

It may be easy for Chakrabortty to dismiss a few (purposively selected) niche research topics as irrelevant, but it is equally important to ensure that those with a public voice do not presume to know what is, and what is not, of interest to different kinds of publics. In the context of the economic crisis and its fallout, debates that take place between broadsheet commentators, academics and policymakers are just one kind of conversation (and, if we are honest, a rather elite and limited kind).

*Dr. Nasar Meer is a Senior Lecturer in Sociology at Northumbria University www.nasarmeer.com, and author of The impact of European Equality Directives upon British Anti-Discrimination Legislation, Policy & Politics, 38(2). Dr. Katherine Smith is a Lecturer in the Global Public Health Unit at Edinburgh University http://www.sps.ed.ac.uk/staff/social_policy/katherine_smith


