
(Community 101 Series)- Understanding the Conversations

Posted Apr 03 2011 11:21am

Without a doubt, some of my best personal learning lately has been done in the company of two Australians: Sofia Pardo and Richard Olsen of ideasLab in Melbourne.

Over the past three years I have spent countless hours online and in person with these two brilliant minds, thinking about the knowledge construction that happens in connected spaces. Our journey together began with a small research project: a four-month joint initiative between ideasLab, Powerful Learning Practice, and the Department of Education in Victoria, in which 100 educators from across Victoria came together as a PLP cohort.

Most recently, we have embarked on a year-long adventure we are calling PLPConnectU, which includes 70 tech-savvy teacher leaders who will be looking closely at using inquiry-driven approaches in the classroom.

In preparation for the analysis of knowledge construction that will occur in this year-long, job-embedded experience, we have tweaked our research methodology to align with what we learned from our first go-round. We are now calling our approach a Collective Knowledge Construction Model.

We have made some significant changes to the coding book based on what we learned from the pilot. We will be recreating the content codes entirely and have added some new ideas to the function codes based on knowledge construction as connected learners. For example, we generated these questions related to the lurking role.

How do we define it?
Is there a skill set (better lurkers than others)?
What is best practice and appropriateness of lurking behavior?
Is it a novice behavior?
Does diversity of the lurker’s behavior result in better understanding?
How does the quality of conversation affect lurker behavior?
What are the hovering patterns of lurkers and is that related to quality?
Can we optimize the lurking patterns to get them to go where we want them to go?
How do lurkers measure value?
Do lurkers help to create quality by what they view and make more popular?
Do lurkers apply or generalize what they learn?
Does more lurking produce a better outcome?

Sofia and I will be presenting the PLPNetbooks research at ISTE in June in Philly. Together we constructed an abbreviated paper which served as an initial report for ideasLab. We will be submitting a full paper for the conference proceedings at ISTE.

PLPNetbooks Project
In 2009, we developed a content analysis coding strategy based on an a priori design to look at the computer-mediated conversations that were taking place in a PLP online community we developed in Australia called PLPNetbooks. The analysis sought to describe the nature of the conversations taking place in the community and to afford a deeper understanding of the dynamics, benefits and challenges online communities of practice can foster, and more specifically the knowledge building and professional growth that can occur when educators are given time and space to have professional conversations.


The increasing number of online communities and other computer-mediated communication spaces highlights the ease with which people are able to connect, share and learn together. Despite educators' generally favorable view that online communities add value, the positive impact on teaching and learning practices is not as widely understood or endorsed. Our research project on the PLPNetbooks trial sought to provide insight into the nature of the professional conversations housed in this online community, with a twofold objective: first, to deepen our understanding of 21st century connected learning, and second, to influence practices around the design and use of online learning communities.

Research Methodology
We decided on a quantitative content analysis approach, using an a priori design that looked for predetermined functions in the roles of the participants and certain predetermined topics of discussion.

Research Questions

What is the nature of professional conversations among educators in an asynchronous, team-based, online community?

R.Q.1 What is the flow (i.e., direction) and frequency of the posts among differing roles within the learning community?

R.Q.2 What is the content (i.e., topics) and frequency of the posts among differing roles within the learning community?

R.Q.3 What is the function (i.e., purpose) and frequency of the posts among differing roles within the learning community?
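To make the three questions concrete, each coded comment can be thought of as a record carrying a flow, content, and function code alongside the poster's role, which can then be tallied per role. This is a minimal sketch; the roles match the study, but the specific code values below are invented placeholders, not the study's actual code book:

```python
from collections import Counter

# Hypothetical coded comments (the flow, content and function values
# here are illustrative placeholders, not the real coding scheme).
comments = [
    {"role": "member", "flow": "broadcast", "content": "resources",
     "function": "sharing information"},
    {"role": "member", "flow": "to fellow", "content": "professional learning",
     "function": "sharing a point of view"},
    {"role": "team leader", "flow": "to member", "content": "learning and teaching",
     "function": "mentoring"},
]

def frequency_by_role(records, dimension):
    """Tally one coded dimension per role: "flow" answers R.Q.1,
    "content" answers R.Q.2, and "function" answers R.Q.3."""
    return Counter((r["role"], r[dimension]) for r in records)

flow_counts = frequency_by_role(comments, "flow")
```

Swapping the `dimension` argument yields the frequency tables behind each of the three research questions from the same set of coded records.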

Sample Selection

There were two types of spaces within the online community where participants could post comments: (1) public spaces, which included the discussion forum and the blog, and (2) group spaces, which included a blog and a wall of comments. The comments posted in the public spaces were coded in their entirety, while from the group spaces a random sample representing 51% of the groups was selected.

A final sample of 1215 comments (76%) was coded out of a total of 1636 comments. The remaining 24% of comments posted within the groups were used for coder training and were excluded from the sample.
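The group-level sampling can be sketched as follows. This is a hypothetical illustration, since the exact sampling procedure is not specified here; the group names and sizes are invented:

```python
import random

def sample_groups(group_sizes, fraction=0.51, seed=2009):
    """Randomly pick `fraction` of the groups; every comment inside a
    picked group is then coded in full, preserving thread integrity.
    group_sizes maps a group id to its number of comments."""
    rng = random.Random(seed)  # seeded for a reproducible sample
    ids = sorted(group_sizes)
    k = round(len(ids) * fraction)
    chosen = rng.sample(ids, k)
    coverage = sum(group_sizes[g] for g in chosen) / sum(group_sizes.values())
    return chosen, coverage

# Hypothetical example: ten equally sized groups of 10 comments each.
groups = {f"group-{i}": 10 for i in range(10)}
chosen, coverage = sample_groups(groups, fraction=0.5)
```

Sampling whole groups rather than individual comments keeps conversation threads intact, which matters when the unit of analysis is a comment read in context.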

Code Book

The code book was developed from many long and interesting conversations between Richard, Sofia, Lani Ritter-Hall and myself. The function codes were influenced by Bonk and Kim (1998), Gunawardena et al. (1997) and the work I had previously done with Dr. Chris Gareis on electronic mentoring (Gareis & Nussbaum-Beach, 2008). The content codes were drawn from the ePotential Teacher ICT Capabilities Survey (DEECD, 2008), which highlights the typical topics that professional conversations fit into in Australia.

PULSE, the science of collaboration

PULSE is an embedded online content analysis tool that facilitates the analysis of the comments posted on the PLPNetbooks online community (or any other computer-mediated conversations). The tool consists of a browser plug-in and a website that together allow coding and aggregation of any content anywhere on the web. Created by Richard and sponsored through ideasLab, it was actually an offshoot of the coding needs of the project itself. If you are interested in knowing more about the tool, send ideasLab an email.


The comments in the sample were divided amongst three coders. This division maintained the integrity of the threads and ensured all coders had some comments from group and public spaces. Only one of the coders was heavily involved in the online community while the other two had no involvement. Two of the coders were USA-based and one was based in Australia. The coding of the sample was carried out over one week.

Inter-coder reliability rating

Coders were trained over six meetings to ensure a high inter-coder reliability rating. The training involved discussion and revision of the coding book, several rounds of coding, and follow-up discussion to clarify and adjust misinterpretations. The ReCal3 online reliability calculator (Freelon, 2010) was chosen to generate the rating, as it is suited to three or more coders and provides not only the percentage of agreement between coders but also a number of statistical coefficients that make the results more robust. Table 2 shows the ratings obtained for flow, content and function, and the average.

           Average % Agreement    Average Cohen's Kappa
Flow       95.8%                  0.941
Content    95.8%                  0.921
Function   81.25%                 0.606
AVERAGE    90.97%                 0.82

Table 2: Inter-coder reliability rating
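As a rough illustration of the figures in Table 2, here is a minimal sketch of percent agreement and Cohen's kappa for a pair of coders, averaged over all coder pairs. This is not the ReCal3 implementation, which reports several additional coefficients; it just shows what the two column headings measure:

```python
from itertools import combinations

def percent_agreement(a, b):
    """Share of units the two coders assigned the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance, for nominal (categorical) codes."""
    n = len(a)
    po = percent_agreement(a, b)                  # observed agreement
    cats = set(a) | set(b)
    pe = sum((a.count(c) / n) * (b.count(c) / n)  # chance agreement
             for c in cats)
    return (po - pe) / (1 - pe)

def average_pairwise_kappa(codings):
    """Average Cohen's kappa over every pair of coders -- a simple
    proxy for a 3+ coder reliability statistic."""
    pairs = list(combinations(codings, 2))
    return sum(cohens_kappa(a, b) for a, b in pairs) / len(pairs)
```

Because kappa discounts chance agreement, it is always lower than raw percent agreement; this is why the function codes' 81.25% agreement translates to a noticeably more modest kappa of 0.606 in the table above.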

Research Findings

The findings presented in this section are organized around the three research questions stated above.


R.Q.1 What is the flow (i.e., direction) and frequency of the posts among differing roles within the learning community?

Five different roles were delineated within the community from the outset: members, team leaders (who played more of a logistics role), fellows, community leaders, and experienced voices (international experts invited to join the conversation).

The majority (62%) of comments posted in the community were broadcast (directed to everyone rather than to any one person) (Graph 1), with a slightly higher number of broadcast comments posted in the public discussion forum and blog (384 comments) than in the group spaces (358 comments). Community leaders, team leaders and members received a larger number of comments in the public spaces than in the groups. Community leaders received three times more comments in public (53 comments) than in groups (17 comments), and members received almost double the number of comments in public (101) as in the group spaces (58 comments). Fellows and experienced voices were the only two roles that deviated from the overall trend, with fellows being addressed more often at the group level (64 comments) than in public (38 comments), and experienced voices being addressed exclusively in the groups (32 comments).

Graph 1: Direction of comments

Graph 2: Comments posted by roles

Graph 3: Comment direction in public vs group spaces

Graph 4: Comments posted in public vs group spaces
Members in the community were slightly more inclined to broadcast comments in the groups (154 comments) than in the public spaces (125 comments). In line with this tendency, members addressed team leaders and fellows in the groups more often, perhaaps because these roles were clearly associated with supporting the work taking place within the groups. On the other hand, members' comments to other members and to community leaders were mostly found in the public discussion forum and blog. In addition, members addressed other members (40 comments) and fellows (42 comments) two and a half times more often than community leaders (16 comments), suggesting a predominance of peer interaction. Only 6 comments were directed by members to experienced voices.

Overall, team leaders broadcast more comments in the groups (89 comments) than in the public spaces (78 comments). However, their comments to members and community leaders were mostly found in the public discussion forum and blog. It is interesting to note the opposite trend between team leaders and members: team leaders addressed members, in most cases, in public (15 comments), while members addressed team leaders, in most cases, in the groups (21 comments). Despite fellows' longer presence in the community compared with experienced voices, team leaders directed a nearly equal number of comments to fellows (10 comments) and experienced voices (9 comments).

In a similar way to members and team leaders, fellows' broadcast comments were mostly located in the groups, which also housed most of fellows' posts to members and all of their posts to experienced voices. This predominance of posting in groups may be the result of fellows and team leaders being the only two roles associated with particular groups. In addition, the flow of comments between fellows and team leaders was asymmetrical, with fellows addressing team leaders three times more often than team leaders addressed fellows. Fellows' comments to team leaders, other fellows and community leaders were mostly found in the public spaces.

Experienced voices' flow of comments adhered to the overall trend, with most of their comments broadcast to the whole community. However, the number of comments posted by this group was significantly smaller than that of the other roles, perhaps due to the short period of time they were part of the community. Their posting mostly took place within the groups, with a very small number of comments posted on the public discussion forum and blog. Experienced voices were in turn addressed exclusively in the groups, with no comment directed to them in the public spaces.

Lastly, most (53.4%) of community leaders' comments were broadcast and largely posted in the public spaces. The second largest number of comments by community leaders was addressed to members in the public spaces. The flow of comments from community leaders to the other two leadership roles, fellows and team leaders, was very similar in number and only slightly higher than the number of comments community leaders received from fellows and team leaders. In sum, the number of comments flowing amongst community leaders, team leaders and fellows was fairly similar, with the exception of the flow between fellows and team leaders, where the former addressed the latter three times more often.


R.Q.2 What is the content (i.e., topics) and frequency of the posts among differing roles within the learning community?

The content codes were drawn from the e.potential ICT teacher survey to provide insight into the topic of the conversations held within the community. These topics were learning and teaching, assessment and reporting, classroom organization and management, ICT ethics, resources, and leadership.

In terms of content, the highest concentration of members' comments was around professional learning (181), resources (115), and learning and teaching (80). The predominance of comments around professional learning and learning and teaching is encouraging, as these were the key focus of the community. However, the high incidence of comments around resources suggests this was an important issue for members, despite community leaders' and fellows' attempts to steer away from a resource-driven discourse.

Conversely, very few comments occurred around assessment and reporting (0) and classroom organization and management (1). Leaders (team leaders, community leaders and fellows) also spent most of their time in the community discussing professional learning, resources, and learning and teaching, at approximately the same frequency. Likewise, they spent the least amount of time in discussions around assessment and reporting and classroom organization and management. The low number of comments around classroom organization and management may be due to the community being made up of seasoned and experienced teachers. The absence of assessment-related comments is, on the other hand, less expected, since this is one of the most challenging issues around 21st century learning and teaching.

Some discussions around leadership were carried out, for the most part by fellows and community leaders, and all participants had discussions around ICT ethics at about the same frequency.

While members posted mostly in group spaces, the topic of learning and teaching was discussed mostly in the community's public spaces. The opposite happened with the topic of professional learning, which was discussed for the most part in the group spaces. This may suggest that participants, with the exception of the community leaders, felt more comfortable discussing and reflecting on their professional development in the smaller, more intimate setting of the groups.

Members talked about resources in both group and public spaces, while fellows discussed the topic more in the groups, perhaps in response to the initial briefing they received, in which a strong focus on resources was discouraged.


R.Q.3 What is the function (i.e., purpose) and frequency of the posts among differing roles within the learning community?

The three most utilized levels of knowledge building were sharing information, sharing a point of view, and sharing/contrasting experiences, all of which are key for building trust and a sense of community, as participants show a willingness to open up and share with others. However, a very small number of comments involved contrasting points of view, which suggests that the benefits gained from identifying trends, similarities and dissonances amongst different viewpoints were limited. The content participants were sharing, contrasting, or giving a point of view on aligned nicely with the top content areas mentioned before: professional learning, resources, and learning and teaching.

The least used knowledge functions were the highest-order skills: negotiation of meaning and professional growth, with negotiation of meaning occurring only at the community leader level. The content in these higher-order skills revolved around

The function of mentoring was almost solely displayed by community leaders and some fellows, with no peer mentoring taking place amongst members and team leaders. This mentoring concerned professional learning, leadership, and learning and teaching.

Research Findings Highlights

  • Out of 130 community members, 20% chose not to post and instead just observed (lurked).
  • Most comments across all roles were broadcasted to the whole community.
  • The experienced voices invited to the community were an underutilized resource, as indicated by the small number of comments they posted and received and by the fact that their interactions took place only at the group level.
  • While the community was designed with loose governance, the higher the perceived leadership role, the more specifically directed the comments became.
  • Members posted mostly in group spaces yet they were addressed more often in public spaces.
  • Leaders posted mostly in public spaces.
  • Resources were the second most frequently addressed topic in the community, with the first one being professional learning and third being learning and teaching.
  • Sharing was strong within the community, yet analytical discourse around the identification of dissonances and similarities amongst people's points of view and experiences was remarkably lacking. We feel this was due to the abbreviated time frame (4 months): there simply wasn't enough time to build trust and community to a level that would allow such deep dialog to take place.
  • Members in the community did not engage in negotiation of meaning; community leaders did. Again, we feel the abbreviated time frame was the cause.


Bonk, C. J., & Kim, K. A. (1998). Extending sociocultural theory to adult learning. In M. C. Smith, & T. Pourchot (Eds.) Adult learning & development: Perspectives from educational psychology. Mahwah, NJ, USA: Erlbaum Associates.

Freelon, D.G. (2010) ReCal: Intercoder Reliability Calculation as a web service. International Journal of Internet Science 5 (1), 20-33.

Gareis, C. & Nussbaum-Beach, S.L. (2008). Electronically Mentoring to Develop Accomplished Professional Teachers. Journal of Personnel Evaluation in Education, 20(3-4), 227-246.

Gunawardena, C., Lowe, C. & Anderson, T. (1997). Analysis of global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 397-431.

DEECD (2008). ePotential Teacher ICT Capabilities Survey; Powerful Learning Enabled by ICT.

