Corpus Analysis of Interactional Response Words as Indicators of Shared Understanding in Medical Problem-Based Learning Tutorials
Article Information
Olukayode Matthew Tokode1,2*, Reg Dennick1
1University of Nottingham Medical School, Nottingham, United Kingdom
2College of Health Sciences, Osun State University, Osun State, Nigeria
*Corresponding Author: Olukayode Matthew Tokode, University of Nottingham Medical School, Nottingham, United Kingdom.
Received: 28 November 2024; Accepted: 05 December 2024; Published: 19 December 2024
Citation: Olukayode Matthew Tokode, Reg Dennick. Corpus Analysis of Interactional Response Words as Indicators of Shared Understanding in Medical Problem-Based Learning Tutorials. Journal of Psychiatry and Psychiatric Disorders. 8 (2024): 303-311.
Abstract
Background: Shared understanding is essential to effective collaborative learning. The interactive processes occurring in problem-based learning (PBL) tutorials have been explored to determine their cognitive and social advantages, but shared understanding remains a relatively under-researched social process of PBL. The objective of this study is to describe how medical students share understanding in medical problem-based learning tutorials.
Method: We recruited participants from first-year medical students in a single institution's graduate entry problem-based learning curriculum. Transcripts from full cycles of eight tutorial groups were compiled to form the study corpus. Small interactional response words, as indicators of shared understanding, were measured using the Wmatrix 3 programme, and concordance lines were analysed manually to determine word functions.
Results: Interactional response words were most prevalent in session 1 and least prevalent in session 2 of the PBL cycle. Interactional response words were used to mark unexpanded, simple, and complex content expansion functions. While affirmation and reactive content expansion functions were more prevalent in sessions 1 and 3, negation content expansion functions were more frequent in session 2. The frequency of interactional response words and their functions appears to align with the focus of each PBL tutorial session.
Conclusions: Demonstrating the feasibility of corpus linguistics methodology for PBL concept analysis, this study showed that students in PBL tutorials attained sophisticated levels of shared understanding. We discuss the implications of the results for interprofessional teamworking and patient-doctor communication.
Keywords
Problem-Based Learning; Wmatrix3; Shared Understanding
1. Introduction
1.1 Background
Research on problem-based learning (PBL) curricula has evolved into microgenetic analyses of why and how PBL works [1]. Process-focused research has studied several aspects of PBL processes including learning issue generation [2], knowledge construction [3, 4], biomedical reasoning [5], and conceptual change [6, 7]. Shared understanding processes in PBL tutorial conversation (talk) have been insufficiently studied, even though students collaborate in PBL tutorials [8] and shared understanding is essential to collaboration [9]. These processes result in shared understanding marked by evidence of conceptual convergence such as moving to the next topic, simple affirmative acknowledgement or recitation, and mutual elaboration or concept completion [10].
The technologies so far applied to talk analysis do not profile verbal data into grammatical categories, which makes the analytic process difficult [11]. The Wmatrix 3 software has proved useful for measuring linguistic categories for further manual analysis [12]. We applied corpus linguistic methodology to analyse graduate entry medical students' shared understanding during PBL tutorial talk. We used Wmatrix 3 to measure small interactional response words as indicators of shared understanding and to answer the following questions:
- What is the frequency of common interactional response words in PBL tutorial transcripts?
- What are the common functions of small interactional response words in PBL tutorial transcripts?
- What level of shared understanding is evident in PBL tutorial conversation?
- How is the evidence of shared understanding related to PBL discourse content across tutorial sessions?
1.2 Shared understanding
All knowledge is bound up in social, cultural, and physical activity [13]. Academic talk has been viewed as the display, confirmation, and repair of knowledge within active situations [10]. Shared understanding refers to overlapping understandings among group members, brought about through collaborative negotiation and the acceptance of individual contributions [14]. Shared meaning is a group achievement attained when discourse participants engage in collaborative social activity [15]. Shared understanding involves refining ambiguous and partial meanings through cycles of display, confirmation, clarification, questioning, and repairing of shared meaning [16]. The current emphasis on interprofessional teamwork [17] and patient-doctor shared decision-making [18] indicates that medical students need to learn to share understanding in addition to acquiring content knowledge [19].
2. Method
2.1 Study design
Our previous study [12] demonstrated that corpus methodology allows transcripts of PBL tutorial discussions to be analysed more systematically and with less bias. The present study extends the application of corpus analysis methodology to PBL talk to assess evidence of shared understanding.
2.2 Setting
Graduate entry PBL at the University of Nottingham Medical School in Derby is a hybrid curriculum. Students and facilitators meet for 4–5 hours weekly, divided into three sessions (PBL 1, 2, and 3). The first session concerns problem analysis and learning issue generation for self-study, with the results of the self-study being presented in the second session. Students then devise a management plan and reflect on a specific case in the third session.
2.3 Participants
Participant recruitment occurred through the provision of verbal and written information. We invited the 2009 and 2010 student cohorts to participate in the study. Participation was voluntary. Of the twelve tutorial groups in each cohort, six of the 2009 cohort and five from the 2010 cohort participated in the research. Inclusion criteria were willingness to participate in the research and completion of consent forms for audio and video recordings. Exclusion criteria were unwillingness to participate in the study, refusal to consent to audio and/or video recording, and being a temporary facilitator. Recruitment into the study took place after the students had acquired three months’ experience with the PBL curriculum.
2.4 Data collection
The students audio- and video-recorded the tutorial discussions using an Olympus DS-2500 dictation machine and a Sony HD camcorder, respectively. An external professional transcriber transcribed the audio recordings verbatim.
2.5 Corpus formation
We removed irrelevant conversation from the transcripts. The first author used video footage to assign transcript statements to the tutorial participants, and unique codes were assigned to the participants for identification. The study corpus consisted of transcripts from eight tutorial groups; of the eleven consenting groups, transcripts from three were excluded because of poor transcription due to inaudibility and multiple incomplete recordings caused by equipment failure. Transcripts were compiled by PBL session. The students' contributions formed the students' file, and the whole corpus file contained the contributions of the students and the facilitators. The transcript files were converted to plain text and uploaded to the Wmatrix 3 online software. The students' file was used for measuring interactional word frequency, while the whole corpus file was used for concordance analysis. The study corpus consisted of 253,145 words: PBL 1 = 86,414, PBL 2 = 108,655, and PBL 3 = 58,076. Further information on Wmatrix 3 is available on the Lancaster University website.
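For illustration only, the following minimal sketch shows how per-session plain-text corpus files of the kind described above could be compiled before upload. The directory layout and file names (e.g., 'group3_pbl1.txt') are hypothetical assumptions, not the authors' actual pipeline; Wmatrix 3 performs the tagging once plain-text files are uploaded.

```python
import glob
from pathlib import Path

# Hypothetical layout: one plain-text transcript per group and session,
# e.g. "transcripts/group3_pbl1.txt". The only preprocessing sketched here
# is concatenation of the transcripts for one session into a single file.
def compile_session_corpus(transcript_dir: str, session: int, out_file: str) -> int:
    """Concatenate all transcripts for one PBL session and return a crude word count."""
    texts = []
    for path in sorted(glob.glob(f"{transcript_dir}/*_pbl{session}.txt")):
        texts.append(Path(path).read_text(encoding="utf-8"))
    corpus = "\n".join(texts)
    Path(out_file).write_text(corpus, encoding="utf-8")
    return len(corpus.split())  # whitespace token count, for a rough check only

if __name__ == "__main__":
    for s in (1, 2, 3):
        n = compile_session_corpus("transcripts", s, f"pbl{s}_corpus.txt")
        print(f"PBL {s}: {n} words")
```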
2.6 Shared interactional response words
The following small interactional response words have been considered to mark shared meaning during interactional conversation [20]:
- Acknowledgement responses such as continuers (e.g., ‘uh’, ‘yeah’)
- Assessment tokens (e.g., ‘gosh’, ‘really’)
- Repair tokens (e.g., ‘I mean’)
- Attention check tokens (e.g., ‘you know’)
- Agreement tokens (e.g., ‘that’s right’, ‘exactly’, ‘I see’)
- Appreciation (e.g., ‘thank you’, ‘well done’)
- Negation tokens (e.g., ‘No’).
This study adopted these response words in its analysis.
2.7 Data analysis
The Wmatrix 3 programme was used to retrieve the interactional response words from the interjection (UH) part-of-speech category. The five most frequently used words were retrieved, further inspected, and analysed. Concordance lines were exported to an Excel spreadsheet file. Manual analysis was carried out to disambiguate the words, remove repeated words, and determine the functions and evidence levels of the interactional words. The raw and normalised frequencies of the words were then calculated. Coding of the word functions followed a directed content analysis procedure [21].
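The concordance disambiguation and function coding described above were manual steps; only the frequency counting lends itself to a simple illustration. The sketch below is not part of the Wmatrix 3 pipeline and uses a crude regex tokenisation rather than the CLAWS tagging underlying Wmatrix; it merely shows how raw and normalised frequencies (per 100 tokens) of the five target words could be computed from a plain-text session file.

```python
import re
from collections import Counter

# The five target interactional response words examined in this study.
RESPONSE_WORDS = {"yeah", "yes", "no", "oh", "ah"}

def response_word_frequencies(text: str):
    """Return {word: (raw count, normalised frequency per 100 tokens)} and the token total."""
    tokens = re.findall(r"[a-z']+", text.lower())  # rough tokenisation, illustration only
    counts = Counter(t for t in tokens if t in RESPONSE_WORDS)
    total = len(tokens)
    freqs = {w: (c, round(100 * c / total, 2)) for w, c in counts.items()}
    return freqs, total

if __name__ == "__main__":
    sample = "Yeah, I think so. Oh, really? No, wait. Yes, exactly."
    print(response_word_frequencies(sample))
```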
2.8 Statistical analysis
The Log-Likelihood (LL) calculator was used to calculate normalised frequencies (per 100 tokens) and to compare normalised frequencies between sessions. A p-value of less than 0.05, corresponding to an LL value greater than 3.84, was considered significant.
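The log-likelihood statistic used here is the standard corpus-comparison measure implemented in the UCREL calculator. As an illustration (a sketch of the usual formula, not the authors' code), the function below computes LL from the raw frequency of a word in two corpora and the corpora's token totals; plugging in the 'yeah/yes' figures from Table 1 for PBL 1 versus PBL 2 gives a value of roughly 74, in line with the +73.99 reported.

```python
from math import log

def log_likelihood(a: int, b: int, c: int, d: int) -> float:
    """
    Corpus-comparison log-likelihood:
    a, b = frequency of the word in corpus 1 and corpus 2;
    c, d = total tokens in corpus 1 and corpus 2.
    """
    e1 = c * (a + b) / (c + d)   # expected frequency in corpus 1
    e2 = d * (a + b) / (c + d)   # expected frequency in corpus 2
    ll = 0.0
    if a > 0:
        ll += a * log(a / e1)
    if b > 0:
        ll += b * log(b / e2)
    return 2 * ll

# 'yeah/yes' in PBL 1 vs PBL 2 (Table 1): a=1169, b=1017, c=86414, d=108655.
# This gives roughly 74, consistent with the reported +73.99; values above
# the critical value of 3.84 correspond to p < 0.05.
print(round(log_likelihood(1169, 1017, 86414, 108655), 2))
```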
3. Results
3.1 Frequency of interactional response words
The five most frequently used small interactional response words accounted for 4,213 occurrences in the whole corpus, as follows: the non-lexical affirmation word 'yeah', 1,722 (40.87%); the lexical affirmation word 'yes', 1,164 (27.63%); the lexical negation word 'no', 1,002 (23.78%); and the reactive words 'oh'/'ah', 325 (7.72%).
Overall, the students used more than 1.0 interactional word per 100 tokens to mark their discourse across PBL sessions, but the interactional words were less frequent in PBL 2 (1.40 per 100 tokens) than in either PBL 1 (1.96 per 100 tokens) or PBL 3 (1.79 per 100 tokens). Participants used more than 1.0 affirmation word (‘yeah/yes’) per 100 tokens in PBL 1 and PBL 3 (1.35 and 1.21 per 100 tokens, respectively), but less than 1.0 per 100 tokens in PBL 2 (0.94 per 100 tokens). The variation of affirmation words across the PBL sessions was statistically significant (Table 1 below).
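As a quick arithmetic check of how the normalised frequencies in Table 1 (below) are obtained (a worked example using the figures above, not part of the original analysis):

```python
# Normalised frequency per 100 tokens = raw frequency / corpus tokens * 100.
# 'yeah/yes' in PBL 1: 1,169 occurrences in 86,414 tokens.
print(round(1169 / 86414 * 100, 2))  # 1.35, matching Table 1
```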
Table 1: Identifying shared understanding: Raw and normalised frequencies per 100 tokens of occurrence of the five top small interaction response tokens indicating shared understanding and their log-likelihood values in each problem-based learning session
| Word | PBL 1 RF | PBL 1 NF | PBL 2 RF | PBL 2 NF | PBL 3 RF | PBL 3 NF | LL 1 vs 2 | LL 2 vs 3 | LL 1 vs 3 | Combined RF | Combined NF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Yeah/yes | 1,169 | 1.35 | 1,017 | 0.94 | 700 | 1.21 | +73.99* | -26.04* | +5.89* | 2,886 | 1.14 |
| No | 350 | 0.41 | 395 | 0.36 | 257 | 0.44 | +2.16 NS | -5.93* | -1.16 NS | 1,002 | 0.40 |
| Oh/Ah | 137 | 0.16 | 107 | 0.10 | 81 | 0.14 | +13.76* | -5.47* | +0.84 NS | 325 | 0.13 |
| Total | 1,656 | 1.96 | 1,519 | 1.40 | 1,038 | 1.79 | +90.95* | -36.56* | +5.57* | 4,213 | 1.66 |

The number of small interactional response words indicating shared understanding was measured using the Wmatrix 3 tag 'interjections' (UH); PBL = problem-based learning; 1, 2 and 3 = sessions 1, 2 and 3; NS = not significant; RF = raw frequency; NF = normalised frequency per 100 tokens; LL = log-likelihood; *P < 0.05; critical value ≥ 3.84; vs = versus.
Although the students generally used fewer than 1.0 negation words per 100 tokens across the PBL sessions, the lowest prevalence was noted in PBL 2 (0.36 per 100 tokens). There was no significant difference in the negation frequency between PBL 1 and PBL 3 (0.41 vs 0.44 per 100 tokens, LL – 1.16). However, the negation words were significantly overused in PBL 3 compared to PBL 2 (0.44 vs 0.36 per 100 tokens, LL + 5.93). Likewise, fewer than 1.0 reactive words per 100 tokens were used across the PBL sessions, the reactive words having about the same prevalence in PBL 1 and 3 (0.16 vs 0.14 per 100 tokens, LL + 0.84) but being least used in PBL 2 (0.10 per 100 tokens). The results suggest that PBL 1 was the most interactive session, and PBL 2 the least.
Functions of small interactional response words: The analysis of interactional response words revealed various functions. However, owing to limitations of space, only functions that occurred ten or more times in each tutorial session are reported. These accounted for 3,509 (83.29%) of the total 4,213 words: PBL 1 = 1,387, PBL 2 = 1,288, and PBL 3 = 834 (Table 2 below).
The figures in Table 2 (below) reflect a distinctive feature of PBL session 1 (PBL 1): affirmation functions (viz., acknowledgement, confirming, restatement, addition, commenting, contrasting and question preface) were more prevalent there than in either PBL 2 or PBL 3. Also, the negation function of addition and the reactive functions of recall and information orientation were more frequent in PBL 1 than in PBL 2 or 3.
However, the addition type of affirmation function was more frequent in PBL 2 (0.21 per 100 tokens) than in either PBL 1 (0.15 per 100 tokens) or PBL 3 (0.18 per 100 tokens). While the addition function was significantly overused in PBL 2, there was no significant difference in its prevalence between PBL 1 and PBL 3 (0.15 vs 0.18 per 100 tokens, LL – 1.90). Although the students used about 0.1 affirmation words per 100 tokens each for acknowledgement (0.11 per 100 tokens), confirming (0.12 per 100 tokens) and restating (0.10 per 100 tokens) in PBL 2, these figures were significantly lower than those recorded in PBL 1 and PBL 3 (Table 2 below). The students used fewer than 0.1 affirmation words per 100 tokens to mark the other affirmation functions. In PBL 2, fewer than 0.1 negation words per 100 tokens were used to mark negation functions, although the prevalence of the simple negation (0.09 per 100 tokens) and disagreement (0.09 per 100 tokens) functions approached 0.1 per 100 tokens. While there was no significant variation in the prevalence of the simple negation function across PBL sessions, the disagreement function was significantly more prevalent in PBL 2 (0.09 per 100 tokens) than in PBL 1 (0.06 per 100 tokens, LL + 6.38) and PBL 3 (0.04 per 100 tokens, LL + 13.39). Similarly, PBL discourse participants used fewer than 0.1 reactive interactional words per 100 tokens to mark the orientation (0.04 per 100 tokens) and recall (0.01 per 100 tokens) functions in PBL 2. The prevalence of these functions was significantly lower than in PBL 1 (0.08 vs 0.04, LL + 10.91; 0.04 vs 0.01, LL + 15.15). The difference between PBL 2 and PBL 3 was not significant for the orientation function (0.06 vs 0.04, LL + 1.31) but was significant for the recall function (0.02 vs 0.01, LL + 6.02).
The students used more than 0.1 interactional response words per 100 tokens to mark the acknowledgement (0.19 per 100 tokens), agreement (0.17 per 100 tokens), confirming (0.12 per 100 tokens) and addition (0.18 per 100 tokens) functions in PBL 3. However, the frequency of the acknowledgement and confirming functions in PBL 3 was significantly lower than in PBL 1 (LL + 8.50 and LL + 45.92 respectively), whereas there was no significant difference in the frequency of the addition function between PBL 1 and PBL 3 (LL – 1.90). PBL 3 was, however, distinctive because, unlike in PBL 1 and PBL 2, the students overused affirmation words to sequence (0.08 per 100 tokens) and specify (0.04 per 100 tokens) talk and to preface agreement (0.17 per 100 tokens), cause-effect (0.07 per 100 tokens) and question (0.05 per 100 tokens) functions (Table 2). Furthermore, the students overused negation interactional words to mark the correction (0.03 per 100 tokens) and cause-effect (0.02 per 100 tokens) functions in PBL 3 compared to PBL 1 and PBL 2 (Table 2). The students used reactive interactional words to mark the idea orientation (0.06 per 100 tokens) and recall (0.02 per 100 tokens) functions in PBL 3. While there was no significant difference in the prevalence of the idea orientation function between PBL 1 and PBL 3 (0.08 vs 0.06 per 100 tokens, LL + 1.31), the students used reactive interactional words to mark the idea recall function significantly more in PBL 1 than in PBL 3 (0.04 vs 0.02 per 100 tokens, LL + 6.02). Generally, the results suggest that the interactional response functions were most frequent in PBL 1, less frequent in PBL 3 and least prevalent in PBL 2.
Table 2: Patterns of shared understanding codes: Normalised frequencies per 100 tokens and Log Likelihood values for frequent expansions of small interactional response words indicating shared understanding
| Word | Function | PBL 1 NF | PBL 2 NF | PBL 3 NF | LL 1 vs 2 | LL 2 vs 3 | LL 1 vs 3 | Combined NF |
|---|---|---|---|---|---|---|---|---|
| Affirmation (Yeah/Yes) | Acknowledgement | 0.27 | 0.11 | 0.19 | +69.93* | -19.38* | +8.50* | 0.10 |
| | Talk sequence | 0.03 | 0.04 | 0.08 | -0.35 NS | -12.01* | -14.44* | 0.05 |
| | Agreement | 0.01 | 0.09 | 0.17 | -61.25* | -16.69* | -113.58* | 0.08 |
| | Confirming | 0.28 | 0.12 | 0.12 | +66.48* | -0.00 NS | +45.92* | 0.17 |
| | Comment | 0.18 | 0.04 | 0.06 | +89.35* | -1.42 NS | +45.49* | 0.09 |
| | Restate | 0.15 | 0.10 | 0.09 | +10.19* | +0.09 NS | +8.75* | 0.12 |
| | Specify | 0.01 | 0.01 | 0.04 | +0.12 NS | -16.38* | -12.57* | 0.02 |
| | Addition | 0.15 | 0.21 | 0.18 | -7.28* | +0.93 NS | -1.90 NS | 0.18 |
| | Contrast | 0.08 | 0.05 | 0.07 | +5.11* | -1.27 NS | +0.71 NS | 0.06 |
| | Cause-effect | 0.06 | 0.06 | 0.07 | -0.09 NS | -0.41 NS | -0.77 NS | 0.06 |
| | Preface question | 0.04 | 0.03 | 0.02 | +1.07 NS | +0.56 NS | +2.49 NS | 0.03 |
| | Question token | 0.02 | 0.03 | 0.05 | -0.29 NS | -4.70* | -6.38* | 0.03 |
| Negation (No) | Simple negation | 0.08 | 0.09 | 0.09 | -0.72 NS | -0.06 NS | -0.92 NS | 0.08 |
| | Addition | 0.04 | 0.03 | 0.04 | +1.17 NS | -0.44 NS | +0.04 NS | 0.04 |
| | Correction | 0.01 | 0.02 | 0.03 | -3.66 NS | -0.31 NS | -4.80* | 0.02 |
| | Disagreement | 0.06 | 0.09 | 0.04 | -6.38* | +13.39* | +2.02 NS | 0.07 |
| | Cause-effect | 0.01 | 0.01 | 0.02 | -0.14 NS | -0.15 NS | -0.47 NS | 0.01 |
| Reactive (Oh/Ah) | Orientation | 0.08 | 0.04 | 0.06 | +10.91* | -3.02 NS | +1.31 NS | 0.06 |
| | Recall | 0.04 | 0.01 | 0.02 | +15.15* | -0.73 NS | +6.02* | 0.02 |

The number of small interactional response words indicating shared understanding was measured using the Wmatrix 3 tag 'interjections' (UH); PBL = problem-based learning; 1, 2 and 3 = sessions 1, 2 and 3; NS = not significant; NF = normalised frequency per 100 tokens; LL = log-likelihood; *P < 0.05; critical value ≥ 3.84; vs = versus.
3.2 Shared understanding evidence
The affirmation interactional functions: The figures in Table 3 (below) show that, overall, more affirmation words (0.40 per 100 tokens) were used for interactional responses without content expansion than for interactional responses with complex (0.33 per 100 tokens) and simple (0.29 per 100 tokens) expansions, and the fewest were used to mark questions (0.06 per 100 tokens). The interactional responses without content expansion (acknowledgement and continuative) and the simple content expansion responses (agree, restate, comment and confirm) were significantly more frequent in PBL 1 (0.58 and 0.34 per 100 tokens respectively) and less prevalent in PBL 2 (0.26 and 0.24 per 100 tokens respectively). While there was no significant difference in the prevalence of simple content expansion between PBL 1 and PBL 3, the variation in the prevalence of unexpanded content responses across the PBL sessions was significant. Interactional responses with complex expansions (addition, contrast, specify and cause-effect) and questioning responses were more prevalent in PBL 3 (0.36 and 0.07 per 100 tokens, respectively) than in PBL 1 and 2. While the variation of the questioning function across the PBL sessions was not significant, complex content expansion functions were significantly more prevalent in PBL 3.
Table 3: Commonly occurring standards of shared understanding: Normalised frequencies per 100 tokens and Log Likelihood values for frequent degrees of interactional responses
| Word type | Degree of response | PBL 1 NF | PBL 2 NF | PBL 3 NF | LL 1 vs 2 | LL 2 vs 3 | LL 1 vs 3 | Combined NF |
|---|---|---|---|---|---|---|---|---|
| Affirmation | Without content expansion | 0.58 | 0.26 | 0.39 | +119.75* | -19.06* | +25.66* | 0.40 |
| | With simple content expansion | 0.34 | 0.24 | 0.32 | +20.06* | -9.58* | +0.32 NS | 0.29 |
| | With complex content expansion | 0.30 | 0.33 | 0.36 | -1.19 NS | -1.15 NS | -3.84* | 0.33 |
| | Mark question | 0.06 | 0.05 | 0.07 | +0.19 NS | -1.32 NS | -0.53 NS | 0.06 |
| Negation | Without content expansion | 0.08 | 0.09 | 0.09 | -0.72 NS | -0.06 NS | -0.92 NS | 0.08 |
| | With complex content expansion | 0.13 | 0.16 | 0.13 | -4.31* | +3.23 NS | -0.00 NS | 0.14 |
| Reactive | Without content expansion | 0.08 | 0.04 | 0.06 | +10.91* | -3.02 NS | +1.31 NS | 0.08 |
| | With simple content expansion | 0.04 | 0.01 | 0.02 | +15.15* | -0.73 NS | +6.02* | 0.04 |

The number of small interactional response words indicating shared understanding was measured using the Wmatrix 3 tag 'interjections' (UH); PBL = problem-based learning; 1, 2 and 3 = sessions 1, 2 and 3; NS = not significant; NF = normalised frequency per 100 tokens; LL = log-likelihood; *P < 0.05; critical value ≥ 3.84; vs = versus.
3.3 Negation interactional functions
Generally, the negation words were more frequently used for complex content expansion functions (0.14 per 100 tokens) than for unexpanded content functions (0.08 per 100 tokens). Unexpanded content (simple negation) functions had equal prevalence in PBL 2 (0.09 per 100 tokens) and PBL 3 (0.09 per 100 tokens) and were slightly less prevalent in PBL 1 (0.08 per 100 tokens); however, there was no significant difference in the prevalence of unexpanded functions across the PBL sessions. The complex content expansion functions (viz., addition, contradiction and correction) were significantly more prevalent in PBL 2 (0.16 per 100 tokens) than in either PBL 1 (0.13 per 100 tokens) or PBL 3 (0.13 per 100 tokens).
3.4 Reaction interactional functions
The reactive interactional words were used more for unexpanded functions (0.08 per 100 tokens) than for simple content expansion functions (0.04 per 100 tokens). Both reactive interactional functions were most frequent in PBL 1 (0.08 and 0.04 per 100 tokens), less prevalent in PBL 3 (0.06 and 0.02 per 100 tokens) and occurred least in PBL 2 (0.04 and 0.01 per 100 tokens). The unexpanded function was significantly more prevalent in PBL 1 than in PBL 2 (LL + 10.91), but there was no significant difference in prevalence between PBL 1 and PBL 3 (LL + 1.31). The simple content expansion function was significantly more prevalent in PBL 1 than in PBL 2 (LL + 15.15) or PBL 3 (LL + 6.02).
3.5 Discussion content
Generally, the students used about 1.5 interactional response words per 100 tokens to mark knowledge talk, about 0.1 per 100 tokens to preface task plan talk, and fewer than 0.1 per 100 tokens to mark physical action, humour and reflection talk (Table 4 below). Also, while there was no significant difference in the overall frequency of the interactional discourse contents in PBL 1 and PBL 3 (1.92 vs 1.79 per 100 tokens, LL + 3.12), the interactional discourse contents were less prevalent in PBL 2 when compared to PBL 1 (1.40 vs 1.92, LL – 78.83) and PBL 3 (1.40 vs 1.79, LL – 36.56). The Table 4 figures show that the students generally used more than 1.0 interactional response word per 100 tokens to mark knowledge discourse across the tutorial sessions. Knowledge discourse had the highest prevalence in PBL 3 (1.65 per 100 tokens), but there was no significant difference in the prevalence of knowledge talk marked with interactional response words between PBL 1 and PBL 3 (1.57 vs 1.65 per 100 tokens, LL – 1.43), implying that knowledge talk was least prefaced with interactional response words in PBL 2. The interactional response words were significantly overused to mark task plan talk in PBL 1 (0.21 per 100 tokens), whereas task plan talk had an equal frequency in PBL 2 and 3 (0.03 per 100 tokens each, LL – 0.63). The physical activity talk marked with interactional response words varied significantly across tutorial sessions, being most prevalent in PBL 1 (0.11 per 100 tokens) and least frequent in PBL 2 (0.01 per 100 tokens). Also, interpersonal humour marked with interactional response words was most prevalent in PBL 1 (0.02 per 100 tokens), whereas reflection marked with interactional response words was limited to PBL 3.
Table 4: Discourse content: Raw and normalised frequency per 100 tokens and Log Likelihood value for the commonly occurring discourse content in each problem-based learning session
| Content | PBL 1 RF | PBL 1 NF | PBL 2 RF | PBL 2 NF | PBL 3 RF | PBL 3 NF | LL 1 vs 2 | LL 2 vs 3 | LL 1 vs 3 | Combined RF | Combined NF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Knowledge | 1,358 | 1.57 | 1,474 | 1.36 | 960 | 1.65 | +15.24* | -22.36* | -1.43 NS | 3,792 | 1.50 |
| Task plan | 184 | 0.21 | 28 | 0.03 | 19 | 0.03 | +166.90* | -0.63 NS | +97.63* | 231 | 0.09 |
| Physical action | 99 | 0.11 | 9 | 0.01 | 14 | 0.02 | +109.79* | -6.45* | +42.64* | 122 | 0.05 |
| Interpersonal humour | 15 | 0.02 | 8 | 0.01 | 1 | 0.00 | +4.07* | +2.68 NS | +9.76* | 24 | 0.01 |
| Reflection | 0 | 0.00 | 0 | 0.00 | 44 | 0.08 | +0.00 NS | -92.81* | -80.21* | 44 | 0.02 |
| Total | 1,656 | 1.92 | 1,519 | 1.40 | 1,038 | 1.79 | +78.83* | -36.56* | +3.12 NS | 4,213 | 1.66 |

PBL = problem-based learning; 1, 2 and 3 = sessions 1, 2 and 3; NS = not significant; RF = raw frequency; NF = normalised frequency per 100 tokens; LL = log-likelihood; *P < 0.05; critical value ≥ 3.84; vs = versus.
4. Discussion
In this study, we applied corpus linguistic methodology to explore graduate entry medical students’ shared understanding in medical PBL tutorial conversations.
The interactional response words indicating shared understanding showed certain noteworthy features. The low prevalence of the non-lexical 'oh/ah' response words, overall and in each PBL session, suggests that the students engaged in active interactive conversations, as one would expect in a PBL situation, and invested more in collaborative efforts, with less emphasis on non-lexical reactions. Shared understanding is attained through a collaborative process that requires effort from discourse participants [22, 23]. The affirmation interactional response words contained a mixture of 'yeah' and 'yes'. The prevalence of 'yeah' could suggest that the students pronounced the lexical form 'yes' informally, which points to a more relaxed, collegial discourse among participants than generally characterises a tutor-led classroom.
In general, data analysis results suggest that interactional response words were most frequent in PBL 1, and least prevalent in PBL 2. This finding suggests that most interactional-response-word-prefaced conversation may have occurred in PBL 1, perhaps meaning that the students were engaged in trying to understand what was required and the perspectives involved concerning the new case problem. The participants in this study were mature learners with rich educational, work, and life experiences that they could bring to bear collectively. Any ensuing conflicts, which perhaps needed resolving to attain a shared understanding, may have resulted in the overuse of interactional response words to preface their discourse. Moreover, the low prevalence of interactional response words in PBL 2 seems reasonable; the students might have resolved conflicts due to contrasting understandings in PBL 1, and PBL 2 could have been devoted to long stretches of discourse as they presented the results of their self-directed learning. The prevalence of interactional response words in PBL 3 followed the PBL 1 pattern. This finding suggests that the students marked their discourse with interactional response words as they negotiated the pros and cons of management plans and expressed individual views about the case scenario, their input, and how the PBL cycle had been conducted.
Shared meaning in interactive talk progresses and accumulates incrementally through processes of refinement and monitoring [10, 24]. The various affirmation interactional response functions in this study suggest that the students appear to have engaged in interactional responses in which they agreed explicitly with peers’ contributions, checked and monitored mutual understanding, as well as confirmed, reasserted, repaired, and expanded peers’ ideas and information to achieve shared meaning. The prevalence of negation interactional response functions, overall and across the PBL sessions, suggests that the students engaged in discussions involving contending views in relation to their knowledge and ideas, and provided sophisticated evidence for their disagreements in the form of corrections, additions, and cause-effect relationships. Reactive interactional response words were used to orientate students to peers’ contributions and recall previous knowledge and ideas. Orientation to information and information recall are considered to be associated with the creation of shared knowledge [25].
Further analysis of interactional response words provided various levels of evidence of how the students shared understanding in their tutorial conversation. Affirmation interactional response words were used for a mixture of unexpanded, simple and complex content expansion talk. Although interactional response functions without content expansion, such as acknowledgement and talk sequencing (in instalments and continuatively), may constitute lower-order evidence of shared meaning, they are essential in that they indicate the attention and mutual support that students give to each other during talk-in-interaction [26]. These functions were more prevalent in PBL 1, where the focus was on hypothesis generation with limited criticism, than in the other PBL sessions. Interactional response functions with content expansion could be simple or complex. Simple content expansion provided more developed evidence of shared meaning through confirming, restatement, paraphrasing, and commenting on ideas and perspectives. More sophisticated and complex forms of content expansion were also evident in the students' conversation, as the students extended the contribution of a prior speaker through the addition of further information, the contrasting of ideas, the development of specificity through refining previous contributions, and cause-effect enhancement [10]. This process of shared understanding aligned with integration-oriented consensus building as described by Weinberger and Fischer [27].
Conflict is a potent stimulus for knowledge development and attainment of shared understanding, in that it can generate explanation, justification, and reflection [26]. While students engaged in simple negation responses in all tutorial groups in this study, they were also involved in content expansion conflict-oriented talk. There was disagreement about ideas and correction of perspectives with the potential for conceptual change and shared meaning. This finding suggests the presence of conflict-oriented consensus building talk [27] in the tutorial groups, primarily in PBL 2. Overall, disagreement functions were more prevalent in PBL 2, and this finding concurs with the focus of the session, where students were expected to challenge each other’s ideas and critically scrutinise the credibility and sources of the knowledge emanating from self-study.
The presence of reactive interactional response information orientation and recall functions is also noteworthy. Heritage [28] and Goffman [29] have observed that information orientation and recall evoked by peers’ contributions lead to understanding convergence through aligning a listener’s understanding with that of the speaker. Schiffrin [25] also observed that orientation to information and information recall are associated with shared knowledge.
The interactional processes in this study were mainly knowledge-based. Physical action and task coordination were more prevalent in PBL 1 than in either PBL 2 or PBL 3. This finding was expected, because the students planned tasks and engaged in writing on the blackboard in PBL 1. It is also not surprising that interactional response words were used to mark reflection talk in PBL 3, since this type of discourse activity was confined to this session.
5. Conclusions
This study was process-focused, conducted in a natural educational setting, used a systematic corpus analysis methodology to analyse transcripts involving full cycles of eight PBL tutorial groups, and explored a fundamental concept of PBL, namely, shared understanding. Moreover, through the methodology, we were able to detect statistically significant differences in relative frequencies between PBL sessions, thus enabling us to relate the differences to the learning focus of the sessions.
However, the study has some limitations. First, shared understanding lies in the minds of discourse participants. Since it is impossible to examine human minds directly to establish whether an understanding is shared, discourse content has been used as a surrogate for this. The discourse participants in this study may have used discourse tokens of shared understanding as face-saving tactics without necessarily agreeing with peers. Second, the results of the study may not be readily generalisable to other institutions, since PBL transcripts from only one institution have been analysed. Third, the results of the study may also not be generalised to an undergraduate PBL curriculum, as the study participants were graduate students. However, the goal of the study was to enable generalisability in relation to PBL theory and not regarding the participants [30]. Fourth, participation in this study was voluntary. It is impossible to know whether students who did not participate had a similar pattern of shared understanding in the tutorial discourse. Fifth, we only investigated frequent small interactional response words as indicators of shared understanding. Shared understanding could be attained through many other linguistic tokens [20, 31] that were not investigated in this study. Shared understanding could also be achieved through nonverbal gestures. However, analysis of nonverbal gestures was not the focus of this research. Finally, the reliability of coding of interactional response word functions has not been assessed in this study for practical reasons, although other researchers have applied a similar procedure to the corpus analysis methodology [32, 33].
Many questions remain to be answered in future research. More work is needed to explore how other linguistic tokens and nonverbal gestures are used to achieve shared meaning in PBL tutorial discourse. Future research could also explore the effect of group composition on the evidence levels of shared understanding. The findings of this study have numerous theoretical and practical implications. Theoretically, this study explored shared understanding as an essential PBL concept, and practically, it provided insight into how medical students’ shared understanding developed at different phases of the PBL cycle by describing and analysing linguistic tokens of shared understanding.
This study showed that the extent of verbalisation influences the quality of shared understanding: interactional response tokens enriched with expanded content provided more sophisticated evidence of shared understanding than unexpanded tokens. Practically, suggestions relevant to educational practice can be derived from this finding. Facilitators need to encourage students to expand their interactional responses and urge quieter students to verbalise their ideas. Medical educators also need to train students in how to communicate with understanding, as this is very important for interprofessional practice and an effective patient-doctor decision-making process.
Abbreviations
PBL: Problem-based learning
OMT: Olukayode Matthew Tokode
RG: Reg Dennick
NF: Normalised Frequency
RF: Raw Frequency
LL: Log Likelihood
NS: Not Significant
VS: Versus
Ethical approval and consent to participate
The study was approved by the University of Nottingham Ethics Committee (ethics approval reference D/9/2008). The study participants signed informed consent to participate in the study.
Consent for publication
The authors have reviewed the manuscript and agree on its content for publication.
Availability of data and material
The anonymized data that support the findings of this study are available from the authors upon request.
Competing interests
The authors report no declarations of interest.
Funding
No funding was received for this project.
Authors' contributions
OMT initiated the project and processed the data, and RD collected the data. Both authors drafted the paper and approved the final version for publication.
Acknowledgements
We acknowledge the assistance of the students and staff of the University of Nottingham Medical School in Derby.
References
- Moving beyond “it worked”: The ongoing evolution of research on problem-based learning in medical education. Educ Psychol Rev 19 (2007): 49-61.
- Hurk MM, Dolmans DH, Wolfhagen IH, et al., Quality of student-generated learning issues in a problem-based curriculum. Med Teach 23 (2001): 567-571.
- Hmelo-Silver CE, Barrows HS. Facilitating collaborative knowledge building. Cognition and Instruction 26 (2008): 48-94.
- Imafuku R, Kataoka R, Mayahara M, et al., based learning: A discourse analysis of group interaction. Interdisciplinary Journal of Problem-Based Learning 8 (2014):1-18.
- Diemers AD, van de Wiel MWJ, Scherpbier AJJA, et al., Diagnostic reasoning and underlying knowledge of students with preclinical patient contacts in PBL. Med Educ 49 (2015):1229-1238.
- De Grave WS, Boshuizen HPA, Schmidt HG. Problem based learning: Cognitive and metacognitive processes during problem analysis. Instructional Science 24 (1996): 321-341.
- De Grave WS, Schmidt HG, Boshuizen HPA. Effects of problem-based discussion on studying a subsequent text: A randomized trial among first year medical students. Instructional Science 29 (2001): 33-44.
- Yew EHJ, Schmidt HG. Evidence for constructive, self-regulatory, and collaborative processes in problem- based learning. Advances in Health Sciences Education 14 (2009): 251-273.
- Stahl G. A model of collaborative knowledge-building. In: Fourth international conference of the learning sciences: 2000; Mahwah, NJ: Erlbaum (2000): 70-77.
- Roschelle J. Learning by collaborating: Convergent conceptual change. Journal of the Learning Sciences 2 (1992): 235-276.
- Koschmann T, MacWhinney B. Opening the black box: Why we need a PBL TalkBank database. Teach Learn Med 13 (2001): 145-147.
- Da Silva AL, Dennick R. Corpus analysis of problem-based learning transcripts: an exploratory study. Med Educ 44 (2010): 280-288.
- Vygotsky LS. Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press (1978).
- Mulder I, Swaak J, Kessels J. Journal of Educational Technology & Society 5 (2002): 35-47.
- Schegloff EA. Conversation analysis and socially shared cognition. In: Perspectives on socially shared cognition. Edited by Resnick LB, Levine JM, Teasley SD. American Psychological Association (1991): 150-171.
- Oliveira AW, Sadler TD. Interactive patterns and conceptual convergence during student collaborations in science. Journal of Research in Science Teaching 45 (2008): 634-658.
- Vyt A. Interprofessional and transdisciplinary teamwork in health care. Diabetes Metab Res Rev 24 (2008): S106-S09.
- Teutsch C. Patient–doctor communication. Med Clin 87 (2003):1115-1145.
- Gilbert J, Camp II R, Cole C, et al., Preparing students for interprofessional teamwork in health care. Journal of Interprofessional care 14 (2000): 223-235.
- McCarthy M. Talking back: "Small" interactional response tokens in everyday conversation. Research on Language and Social Interaction 36 (2003): 33-63.
- Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res 15 (2005): 1277-1288.
- Baker M, Hansen T, Joiner R, et al. The role of grounding in collaborative learning tasks. In: Collaborative learning: Cognitive and computational approaches. Edited by Dillenbourg P. Oxford: Elsevier Sciences Ltd (1999): 31-63.
- Clark HH, Wilkes-Gibbs D. Referring as a collaborative process. Cognition 22 (1986): 1-39.
- Sacks H, Schegloff EA, Jefferson G. A simplest systematics for the organization of turn-taking for conversation. Language 50 (1974): 696-735.
- Schiffrin D. Discourse markers. Cambridge: Cambridge University Press (1987).
- Van Boxtel C, Van der Linden J, Kanselaar G. Collaborative learning tasks and the elaboration of conceptual knowledge. Learning and Instruction 10 (2000): 311-330.
- Weinberger A, Fischer F. A framework to analyze argumentative knowledge construction in computer-supported collaborative learning. Computers & Education 46 (2006): 71-95.
- Heritage J. A change-of-state token and aspects of its sequential placement. In: Structures of social action: Studies in conversation analysis. Edited by Atkinson JM, Heritage J. Cambridge: Cambridge University Press (1984): 299-345.
- Goffman E. Forms of talk. Philadelphia, PA: University of Pennsylvania Press (1981).
- Yin RK. Case Study Research and Applications: Design and Methods, 6th edn. California, London, New Delhi and Singapore: SAGE Publications, Inc (2018).
- Duncan S Jr, Niederehe G. On signalling that it's your turn to speak. J Exp Soc Psychol 10 (1974): 234-247.
- Carbonell-Olivares M. A corpus-based analysis of the meaning and function of although. International Journal of English Studies 9 (2009): 191-208.
- Demmen J, Semino E, Demjén Z, et al., A computer-assisted study of the use of violence metaphors for cancer and end of life by patients, family carers and health professionals. International Journal of Corpus Linguistics 20 (2015): 205-231.