This was the final LIS DREaM (Developing Research Methods and Excellence) workshop, back at Craighouse Campus in Edinburgh. I’ve been using these blog posts as an opportunity to point towards resources on the talks from this event, and also reflect on how I can use these methods within my own work, and this post will do more of the same. Materials from this workshop are available online.
Horizon Scanning – Dr Harry Woodroof
Horizon scanning is the process of trying to spot incoming threats and opportunities for the future. Harry pointed us towards a couple of robust techniques which he recommends for this. The first was DSTL's approach: scanning scientific websites and journals to identify future technology trends and using sentiment analysis to pick out innovations which experts had spoken about in glowing terms. The second was the Horizon Scanning Centre's approach: conducting a 'scan of scans', using the Sigma Scan database to identify existing horizon scans and provide a meta-review of their contents.
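The actual DSTL tooling wasn't described in detail, but the sentiment-filtering idea can be sketched with a toy lexicon approach. Everything here (the word lists, the snippets, the threshold) is illustrative rather than anything from the talk:

```python
import re

# Tiny illustrative sentiment lexicons; real systems use much larger
# resources or trained classifiers.
POSITIVE = {"breakthrough", "promising", "transformative", "exciting"}
NEGATIVE = {"overhyped", "stalled", "disappointing", "risky"}

def sentiment_score(text: str) -> int:
    """Count positive words minus negative words in a snippet."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def glowing(snippets, threshold=1):
    """Keep only snippets where experts speak in 'glowing terms'."""
    return [s for s in snippets if sentiment_score(s) >= threshold]

snippets = [
    "Graphene batteries look like a promising, transformative breakthrough",
    "Cold fusion remains overhyped and research has stalled",
]
print(glowing(snippets))  # only the first snippet passes the filter
```

A real pipeline would crawl the journal sources first and feed each abstract through a scorer like this; the filter is just the last step.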
I think all of us could benefit from a little horizon scanning, and the resources and techniques Harry recommended could be applied on a smaller scale (e.g. using the Sigma Scan database to identify a few of the most relevant scans for a particular question and looking for common themes). As I’ve just become a subject librarian for Politics and Public Policy I’m also interested in tracking down the book he mentioned on experts and the limitations of their predictions in horizon scanning (set in the political sphere).
Repertory Grids – Dr Phil Turner
Phil introduced us to repertory grids as a way of recording personal understandings of a particular domain, and then talked us through the practicalities and some applied examples. The technique works by asking your interviewee to identify some examples of a particular category, asking them to comment on how two of these examples are similar to each other but different from a third, and then seeing how all the examples score on these 'constructs'. This gives an idea of the core concepts associated with the category, and of how the examples cluster together or differ from each other according to these constructs.
Phil’s example looked across several interviews to identify shared constructs, and then tested how further examples (in his case, treasured objects) were rated on these dimensions. I could see myself applying this approach in the work with learning spaces I’ve been discussing (across De Montfort and Northampton Universities) where we’re particularly keen to look at the meaning of spaces for students. One I’ll mull over and discuss with the rest of the research team…
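To make the scoring step concrete, here's a hypothetical sketch of what a completed grid might look like for learning spaces, with elements rated 1–5 on a few elicited constructs and compared by Euclidean distance to see which cluster together. The elements, constructs, and ratings are all invented for illustration:

```python
import math

# Hypothetical grid: each element (example) rated 1-5 on each elicited
# construct. In practice the constructs come from the interviewee's own
# similar/different comparisons, not from the researcher.
ratings = {
    "library":     {"quiet-noisy": 1, "formal-informal": 2, "social-solitary": 4},
    "coffee shop": {"quiet-noisy": 4, "formal-informal": 5, "social-solitary": 2},
    "study room":  {"quiet-noisy": 2, "formal-informal": 2, "social-solitary": 4},
}

def distance(a: str, b: str) -> float:
    """Euclidean distance between two elements across all constructs."""
    return math.sqrt(sum((ratings[a][c] - ratings[b][c]) ** 2 for c in ratings[a]))

# Rank every pair of elements from most to least similar.
pairs = [(x, y) for x in ratings for y in ratings if x < y]
for x, y in sorted(pairs, key=lambda p: distance(*p)):
    print(f"{x} vs {y}: {distance(x, y):.2f}")
```

With these made-up ratings, the library and study room come out as the closest pair, which is the kind of clustering the method is meant to surface.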
Data Mining – Kevin Swingler
The final research method discussed was data mining. This term covers a whole stable of methods for building models from data; these models can then be used to make predictions or to classify new data. Kevin talked us through the careful checks that need to be applied to data to ensure it is appropriate for the creation of such models, and the potential pitfalls and ways of avoiding them.
The techniques covered weren't a million miles away from the statistical methods I've used in Psychology (my original academic discipline), but slightly more focused on use of the model, rather than modelling the underlying processes. The talk inspired some spirited discussion of how much data we have sitting around in libraries that we never try to do anything with (for example, predicting demand based on existing usage figures). Maybe it's something that could be explored more fully across multiple library services, using a similar approach to the Library Impact Data project.
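As a flavour of what "predicting demand from existing usage figures" might look like at its simplest, here's a sketch that fits a least-squares trend line to some made-up monthly loan counts and projects the next month. The figures are entirely invented, and any real exercise would need the kind of data checks Kevin described:

```python
# Fit y = slope * x + intercept by ordinary least squares, from scratch.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

months = [1, 2, 3, 4, 5, 6]
loans = [120, 135, 128, 150, 160, 158]  # made-up monthly loan counts

slope, intercept = fit_line(months, loans)
print(f"Predicted loans for month 7: {slope * 7 + intercept:.0f}")
```

A straight line is obviously a crude model for library usage (no seasonality, no term dates), but it shows the basic shape of the idea: a model trained on past figures, then used to anticipate demand.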
Impact of Research on Practice – Professor Hazel Hall
Our last session of the day was facilitated by Hazel, and was designed to get us thinking about the links between research and practice in a more active way. We were split into 6 groups, 3 of researchers and 3 of practitioners (broadly specified). Each group focused on an aspect of the link between research and practice and what could be done to improve it. Then the groups were paired up (one researcher group with each practitioner group) and asked to come up with 3 ways of improving application of research to practice based on our shared discussions.
We had quite similar ideas across the groups, implying that we were all reasonably aware of and agreed upon the issues, and even potential solutions. Therefore, for me, the most interesting part of the discussion was about incentives: how do we ensure that researchers are incentivised to fully include practitioners in their research and dissemination strategies, and how do we incentivise practitioners to draw upon research in their practice? I think a big factor is the infrastructure in which both groups work, and it’s made me think about impact (particularly in professionally-focused research) from a whole new angle!
The next and last event in the DREaM series will be a conference. Unfortunately I'm not able to attend in person, but having seen the quality of information sharing at the previous DREaM events, I'm quite looking forward to participating remotely. It's certainly been a fascinating set of events to participate in, although, as I said on the feedback forms, one where I think that appraising its true impact is going to require a long-term perspective.