While I was waiting for confirmation that I could submit my project proposal to the ethics committee, I started thinking about SurveyMonkey versus a paper questionnaire – and it did feel as though SurveyMonkey had much to recommend it. It’s more streamlined, and it saves anyone having to email completed questionnaires to me.
Since the questionnaire already existed, I reasoned, what could be easier than to turn it into a Survey Monkey one?
They don’t tell you about the ten-question limit until you’ve input ten questions. And then … well, let’s say this was annoying. Fortunately, I didn’t have to pay for a personal upgrade (at several hundred pounds per year, that’s a relief!), so once I’d made my arrangements, I began all over again, until I finally had a whole questionnaire ready to go.
Now I have to submit all the “paperwork” (documentation) to be signed off and submitted for ethical approval. I’ll try to do so tomorrow, depending on whether I get a clear break at lunchtime.
Thinking about my PGCert project, I am constantly reminded that educational research is very different from musicological or historical research.
Nonetheless, this is one of the paths I must tread for the next few months. So here’s my challenge: to establish how effective our e-resource training has been and to gain some insights into how much more effective it might be.
I’m hoping to use three kinds of data:

1. Data from library surveys, 2009–2016
2. Data from a SurveyMonkey questionnaire, yet to be written, which will be distributed amongst my PGCert/MEd peers
3. Transcriptions of a few short interviews
Since (1) is historical data, not gathered for the purposes of the present project, I realise that it may not offer much depth of detail. I hope to establish if more people, generally, have used our growing e-resource offering, year on year, and to see what insights I can gain about what specifically is being used or attracting more attention. Some of the surveys asked which resources the respondents would like to know more about. I might be able to infer which kinds of resources have tended to be most popular.
The disadvantage of such data is that I have to work from summary reports, so I won’t be able to get much granularity about different categories of students, whether by subject or by course level.
Specifically, I won’t necessarily be able to tell whether postgrads have been heavier users – or more confident users – of electronic resources. Neither will I be able to tell whether the age of first years (on any degree or diploma), or how long they have been out of full-time education, affects their confidence levels. I can certainly ask questions to unearth these factors in my own SurveyMonkey questionnaire (2) and the subsequent interviews (3), though.
To date, I’ve got my copies of all the library summary reports from 2009-2015, and raw data (not yet summarised) for 2016. One of my first tasks, after I’ve finished the documentation surrounding the project proposal, will be to see what I can glean from this data.
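As a rough sketch of the kind of year-on-year tabulation I have in mind for the raw data, here is a minimal Python example. The records are entirely made up for illustration – the field shapes and values are my assumptions, not the library’s actual survey data:

```python
from collections import Counter

# Hypothetical survey records: (year, reported_using_e_resources).
# These values are invented purely to illustrate the tabulation.
responses = [
    (2015, True), (2015, False), (2015, True),
    (2016, True), (2016, True), (2016, False), (2016, True),
]

# Count total respondents, and e-resource users, per year.
totals = Counter(year for year, _ in responses)
users = Counter(year for year, used in responses if used)

# Proportion of respondents reporting e-resource use, year on year.
usage_rate = {year: users[year] / totals[year] for year in sorted(totals)}
print(usage_rate)
```

With real data, the same grouping-and-counting approach could be extended to break respondents down by year of study or subject, where the raw (unsummarised) data records those fields.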