De Wever et al. (2006) reviewed fifteen content analysis schemes used to analyse the transcripts of online asynchronous discussion groups. Their paper focuses mainly on issues of validity, reliability and unit of analysis, and concludes that standards are not yet in place to ensure sufficient quality, coherence and comparability. Although rather technical, it is a good starting point for my thinking about which coding scheme I could use to analyse the data that I shall collect in the next eTwinning Learning Event (LE) on web 2.0 tools.
Content analysis aims to 'reveal information that is not situated at the surface of the transcript' (De Wever et al., 2006, p.7). Transcripts are analysed using a 'research methodology that builds upon procedures to make valid inferences from text' (Anderson et al., 2001, cited in De Wever et al., 2006, p.8). The authors assert that content analysis should be 'accurate, precise, objective, reliable, replicable and valid' (p.8); that is, it should be free of bias, have sufficient granularity between categories, avoid subjectivity, be coherent in the way it is applied and be repeatable. They emphasise the importance of content analysis schemes being underpinned by an appropriate theoretical basis, but indicate that this is not always the case. They stress the importance of an appropriate choice for the unit of analysis (the level at which coding is performed). They also place great importance on researchers declaring the reliability of their work, for example by having individual researchers' coding cross-checked and by reporting the degree of agreement between researchers working in a team (termed inter-rater reliability). Sadly, they note, this data is often not provided.
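As an aside, inter-rater reliability is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement two coders would reach by chance. The sketch below is my own illustration, not from the paper; the category labels and message units are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of units both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned to ten message units by two researchers.
a = ["social", "cognitive", "cognitive", "social", "meta",
     "social", "cognitive", "meta", "social", "cognitive"]
b = ["social", "cognitive", "social", "social", "meta",
     "social", "cognitive", "meta", "cognitive", "cognitive"]
print(round(cohens_kappa(a, b), 2))  # 0.69: 80% raw agreement, corrected for chance
```

A kappa around 0.7 is often read as substantial agreement, though the paper's point stands: whatever the statistic, it should be reported.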
Henri's (1992) work was pioneering and her scheme has been used in many research projects, either directly or as a basis for further coding schemes. It is based upon a cognitivist approach to learning, which recognises cooperative learning, collective knowledge and interactivity. As such, it addresses both the social interactivity of a group of learners and the cognitive development of individuals. The unit of analysis is the unit of meaning, leaving it up to the researcher to define whether this is a sentence, a paragraph or a whole message. As a well-rounded and thoroughly tested scheme, this may well be a good choice for my analysis, leaving sufficient scope for me ultimately to place my focus on the social, cognitive or meta-cognitive elements.
Newman et al.'s (1995) scheme focuses more on group learning, deep learning and critical thinking. It builds upon Henri's model and Garrison's five-stage model for critical thinking. It uses indicators that represent both positive and negative contributions to a measure of critical thinking. The unit of analysis is again the unit of meaning, though only relevant text is coded (this must be difficult to manage in practice, as it sounds very subjective). A drawback would appear to be the authors' suggestion that some indicators can only be coded by experts in the domain. Nevertheless, this scheme may be useful if I decide to focus on meta-cognition.
Gunawardena is known for her pioneering work on social presence, and the model that she has jointly proposed (Gunawardena et al., 1997) focuses on the social construction of knowledge in computer-mediated conferencing. It looks at the phases of a discussion and tries to measure the knowledge constructed. As such, it would appear to be less appropriate for my analysis, unless I decide to change my focus.
The other schemes of interest to me are the three relating to the Community of Inquiry model, which has so far inspired my thinking for the next LE. Rourke et al. (1999) propose a model for analysing social presence, in which the unit of analysis is the thematic unit. Garrison et al. (2001) propose a scheme for analysing cognitive presence, in which the unit of analysis is the entire message. And Anderson et al. (2001) propose a scheme for analysing teaching presence, which looks at the message or sub-message level. Together, these three schemes would link well to the theoretical model of the Community of Inquiry. However, coding would be a challenge with three different schemes being applied in parallel, each using a different unit of analysis. I could decide, for example, to use only two of the schemes – for social presence and cognitive presence – relating to my two fundamental research questions. However, this would leave out the important dimension of teaching presence and, in particular, the influence of the moderators – of which I will be one.
This paper has made me realise that the choice of coding scheme is not going to be easy, yet it is fundamental to my research. Clearly I have a lot more reading to do around the subject before I take a decision.
PS: There is a nice table summarising the differences between the fifteen coding schemes, however I've decided not to include this in my posting for copyright reasons.
Anderson, T., Rourke, L., Garrison, D. & Archer, W. (2001) 'Assessing teaching presence in a computer conferencing context'. Journal of Asynchronous Learning Networks, 5 (2), pp.1-17.
De Wever, B., Schellens, T., Valcke, M. & Van Keer, H. (2006) 'Content analysis schemes to analyze transcripts of online asynchronous discussion groups: A review'. Computers & Education, 46 (1), pp.6-28.
Garrison, D. R., Anderson, T. & Archer, W. (2001) 'Critical thinking, cognitive presence, and computer conferencing in distance education'. American Journal of Distance Education, 15 (1), pp.7-23.
Gunawardena, C. N., Lowe, C. A. & Anderson, T. (1997) 'Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing'. Journal of Educational Computing Research, 17 (4), pp.397-431.
Henri, F. (1992) 'Computer conferencing and content analysis'. In Kaye, A. R. (ed.) Collaborative Learning Through Computer Conferencing: The Najaden Papers. London: Springer-Verlag, pp.117-136.
Newman, D. R., Webb, B. & Cochrane, C. (1995) 'A content analysis method to measure critical thinking in face-to-face and computer supported group learning'. Interpersonal Computing and Technology, 3 (2), pp.56-77.
Rourke, L., Anderson, T., Garrison, D. & Archer, W. (1999) 'Assessing social presence in asynchronous text-based computer conferencing'. Journal of Distance Education, 14 (2), pp.50-71.
About my research
My research was set in the context of the European Commission’s eTwinning initiative and it looked specifically at the use of eTwinning Learning Events (non-formal learning). It examined how the community influences the development of teachers’ competence in online collaboration and discourse, and it considered the contribution of social aspects and online moderation.
I am very grateful to my supervisor, Dr. Julie-Ann Sime from Lancaster University, and to my eTwinning soulmate, Tiina Sarisalmi, for their invaluable support. And to my examiners, Prof. Marilyn Leask from the University of Bedfordshire and Dr. Don Passey from Lancaster University, for their valuable advice.