Multi-Center Development and Testing of a Simulation-Based Cardiovascular Assessment Essay

The paper “Multi-Center Development and Testing of a Simulation-Based Cardiovascular Assessment” by P. R. Jeffries is a great example of an article on nursing. The essay aims to address a two-fold objective, to wit: (1) to select and read a peer-reviewed cardiovascular or pulmonary advanced-practice assessment article featuring an assessment technique; and (2) to write an article critique with an initial thread of 400-500 words.

Article Critique

The study by Jeffries et al. (2011), entitled “Multi-Center Development and Testing of a Simulation-Based Cardiovascular Assessment Curriculum for Advanced Practice Nurses,” aimed to develop, implement, and evaluate the outcomes of a cardiovascular assessment curriculum for advanced practice nurses at four institutions (p. 316). The purpose is clear and relevant to nursing practice, as cardiovascular assessment skills are essential to the provision of cardiac life support and to minimizing errors during the delivery of patient care. There is a need for the study because, according to Jeffries et al. (2011), advanced practice nursing (APN) students and providers are deficient in cardiovascular assessment skills, including auscultation (p. 316). The study will improve nursing practice in terms of cardiovascular assessment and will add to the body of nursing knowledge regarding a cardiovascular assessment curriculum for advanced practice nurses.

The study utilized a one-group pre-to-post-intervention design, SPSS for statistical analysis, and established content validity of the instrument via a group of six APN experts from the Miami Simulation Group (Jeffries et al., 2011, pp. 316-319). The sample consisted of 36 participants from four institutions. The research design decisions are adequately justified and explained in detail, and the design used is appropriate for observing the effect of the educational intervention (both the independent learner review and the instructor-led sessions) against performance measured prior to the treatment.

SPSS was an appropriate statistical tool; however, the small sample size is a weakness of this research, because the findings cannot be generalized until a larger sample is studied. In addition, determining content validity via expert review may introduce bias, given the experts' possible involvement in a faculty member's high-fidelity simulations. The discussion of the problem examined the effect of a simulation-based curriculum and a deliberate-practice-grounded curriculum on cardiovascular assessment, particularly auscultation.

The study found that both strategies were effective, but the deliberate practice model further increased skills in cardiovascular bedside assessment, diagnostic reasoning, and the acquisition and mastery of important clinical skills in cardiovascular assessment and auscultation (Jeffries et al., 2011, p. 321). The discussion of the problem was supported by literature and statistics as evidence. The literature is comprehensive and integrated into each section of the study. Although the literature has not been published within the last five years, the majority of it was published between 2002 and 2009. Benchmark publications included the deliberate-practice Harvey curriculum, which substantiated the need for cardiovascular knowledge and skills among advanced practice nurses.

The majority of the sources were primary, but the review lacks organization and a section for a model/theory. As stated earlier, the study needs a follow-up with a larger sample size, because the small number of participants limits the generalizability of the findings. Implications for nursing include an emphasis on the future use of the deliberate practice model and a simulation-based curriculum, and the development of a non-threatening, interactive, and self-paced learning environment in which advanced practice nurses can acquire mastery of cardiovascular assessment and diagnostic reasoning skills.

The current review examines the effectiveness of simulation-based medical education (SBME) for training health professionals in cardiac physical examination and assesses the relative effectiveness of key instructional design features.

METHODS

Data sources included a comprehensive, systematic search of MEDLINE, EMBASE, CINAHL, PsychINFO, ERIC, Web of Science, and Scopus through May 2011. Included studies investigated SBME to teach health profession learners cardiac physical examination skills using outcomes of knowledge or skill. We carried out duplicate assessment of study quality and data abstraction and pooled effect sizes using random effects.

RESULTS

We identified 18 articles for inclusion. Thirteen compared SBME to no-intervention (either single group pre-post comparisons or SBME added to other instruction common to all learners, such as traditional bedside teaching), three compared SBME to other educational interventions, and two compared two SBME interventions. Meta-analysis of the 13 no-intervention comparison studies demonstrated that simulation-based instruction in cardiac auscultation was effective, with pooled effect sizes of 1.10 (95 % CI 0.49–1.72; p < 0.001; I2 = 92.4 %) for knowledge outcomes and 0.87 (95 % CI 0.52–1.22; p < 0.001; I2 = 91.5 %) for skills. In sub-group analysis, hands-on practice with the simulator appeared to be an important teaching technique. Narrative review of the comparative effectiveness studies suggests that SBME may be of similar effectiveness to other active educational interventions, but more studies are required.

LIMITATIONS

The quantity of published evidence and the relative lack of comparative effectiveness studies limit this review.

CONCLUSIONS

SBME is an effective educational strategy for teaching cardiac auscultation. Future studies should focus on comparing key instructional design features and establishing SBME’s relative effectiveness compared to other educational interventions.

Electronic supplementary material

The online version of this article (doi:10.1007/s11606-012-2198-y) contains supplementary material, which is available to authorized users.

KEY WORDS: simulation, medical education, cardiac physical examination

Competence in physical examination of the cardiovascular system is a key clinical skill. Clinicians use the cardiovascular physical exam as a non-invasive tool to directly diagnose and assess severity of disease and guide further evaluation.1 However, multiple studies have documented the weak cardiac physical examination skills of trainees and clinicians.2,3

The optimal approach to teaching cardiac physical examination remains unknown. To become competent in cardiac physical examination, a clinician must examine patients with a wide variety of cardiac conditions and have encountered such diagnoses on a repetitive basis.4 The availability and accessibility of patients with cardiac pathology constitute barriers to the acquisition of cardiac physical exam skills. Rare but important pathology presents infrequently in clinical encounters. In academic centers, trainees must often compete to examine an ever-dwindling number of suitable patients.

Although many of these obstacles are not unique to the teaching of the cardiovascular physical exam, cardiology has led the way in developing alternative teaching formats to overcome these limitations, including recorded audio files, multimedia CD-ROMs, and mannequin-based cardiopulmonary simulators (CPS). Simulation-based medical education (SBME) physically engages the learner in educational experiences that mimic a real patient encounter,5 as in the learning experience provided by the CPS. Medical schools frequently use simulation to teach physical diagnosis, with a reported prevalence of 84–94 % from first to fourth year among medical schools responding to a 2010 AAMC survey.6 Conceptually, SBME has great potential as a tool to enhance the education of trainees’ physical examination skills by facilitating repetitive and deliberately planned exposure to key auscultatory abnormalities without the constraints of patient and pathology availability, and by providing an environment in which questions and errors can be discussed without negative clinical consequences.4 By embedding the cardiac findings within a mannequin, SBME could improve the transfer of clinical skills from the teaching setting to real patients.

Despite these potential advantages, the relative benefit of SBME compared to other instructional modalities for cardiac physical examination training remains unclear, and the issue of how best to use SBME in this context is unresolved. Many individual studies have evaluated educational modalities for teaching cardiac auscultation skills, and a comprehensive synthesis of this evidence would help educators and clinicians use these tools effectively. We could not identify a systematic review synthesizing the evidence for any of these modalities. Rather, reviews to this point have focused on SBME generally, including two recent reviews showing that SBME is effective across multiple educational domains compared to no educational intervention or in addition to traditional clinical education.5,7 To address this gap, we conducted a systematic review and meta-analysis of the literature.

METHODS

The general methods for this systematic review have been published previously and are presented here in abbreviated format.7 This review was planned, conducted, and reported in adherence to PRISMA standards of quality for reporting meta-analyses.8 IRB approval was not required.

Questions

We sought to examine (1) the effectiveness of SBME for training health professionals in cardiac physical examination skills and (2) the instructional design features that enhance the effectiveness of SBME for teaching cardiac auscultation. We defined SBME as an educational tool or device with which the learner physically interacts to mimic an aspect of clinical care. We excluded studies that evaluated only audio-recordings, CD-ROMs, computer-based virtual patients, phonocardiosimulators, or standardized patients.

Study Eligibility

We included studies published in any language that investigated the use of SBME to teach health profession learners cardiac physical examination skills at any stage in training or practice using outcomes of learning, behaviors with patients in clinical practice, or patient outcomes. We included single-group pretest-posttest studies, two-group randomized and nonrandomized studies, and studies in which simulation was added to other instruction common to all learners, such as a traditional clerkship rotation, bedside teaching session, or classroom instruction.

Study Identification

We searched MEDLINE, EMBASE, CINAHL, PsychINFO, ERIC, Web of Science, and Scopus using search terms designated by an experienced research librarian, focused on the intervention (e.g., simulator, simulation, manikin, Harvey), topic (e.g., physical examination skills), and learners (e.g., education medical, education nursing, education professional, student health occupations, internship, and residency). No beginning date was used, and the last date of the search was May 11, 2011. We searched for additional studies in the reference lists of all included articles.7

Study Selection

We screened all titles and abstracts independently and in duplicate for inclusion. In the event of disagreement or insufficient information in the abstract, we independently and in duplicate reviewed the full text of potential articles.7 The inter-rater agreement for study inclusion, as assessed using an intra-class correlation coefficient, was 0.69.7 Conflicts were resolved by consensus discussion between the two reviewers.

Data Extraction

We extracted data independently and in duplicate for all variables and resolved conflicts by consensus.

The information we extracted included the training level of learners, clinical topic, training location, study design, method of group assignment, outcomes, and methodological quality. We graded the methodological quality of the studies using the Medical Education Research Study Quality Instrument (MERSQI).9 We further coded the presence of simulation features identified in a review of simulation: mastery learning, distributed practice (whether learners trained on 1 or >1 day), feedback (low = rare and unstructured; medium = brief, sporadic and unstructured from 1 to 2 sources; high = substantive, structured and intensive, from at least 2 sources), curriculum integration (the simulation intervention as a required element of the curriculum), and group vs. solo learning.10 We coded the time engaged in SBME and whether or not hands-on practice on the simulator occurred (defined as direct contact with the simulator as opposed to listening with group stethoscopes).
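To make the coding scheme concrete, the sketch below shows one way the extracted variables could be represented. This is an illustrative structure only: the field names are hypothetical, and the example values for Gordon 1980 (other than the simulator time and MERSQI score reported in Table 1) are placeholders, not the authors' actual abstraction form.

```python
# Illustrative coding record for one study; field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudyCoding:
    author_year: str
    mersqi_score: float           # methodological quality (max 19)
    mastery_learning: bool
    distributed_practice: bool    # trained on more than 1 day
    feedback: str                 # "low", "medium", or "high"
    curriculum_integration: bool  # required element of the curriculum
    group_learning: bool          # group vs. solo learning
    hands_on_practice: bool       # direct contact with the simulator
    sbme_minutes: Optional[int]   # time engaged in SBME, if reported

# Example: Gordon 1980 (Harvey CPS, 8 h, MERSQI 8); the boolean values
# below are placeholders, not data taken from the review.
example = StudyCoding(
    author_year="Gordon 1980", mersqi_score=8.0,
    mastery_learning=False, distributed_practice=False,
    feedback="low", curriculum_integration=False,
    group_learning=False, hands_on_practice=True, sbme_minutes=480,
)
```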

We abstracted information separately for all outcomes. We defined knowledge outcomes as assessments of factual recall, conceptual understanding, or application of knowledge. Skill outcomes were assessments of the learners’ cardiac physical examination proficiency in a simulated environment. Skill outcomes included the use of standardized patients or real patients brought in for assessment purposes only (e.g., not part of usual clinical practice). No studies examined learners in clinical practice. If multiple measures of a skill outcome were reported, we selected a single outcome using the following order of priority: (1) outcomes assessed in a different setting (e.g., different simulator or standardized patients) over those assessed in the simulator used for training, (2) the author-defined primary outcome, or (3) a global or summary measure of effect.
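Because the selection rule is procedural, a short sketch may help. It assumes each candidate outcome is tagged with boolean flags mirroring the three criteria; this is a hypothetical representation, not the authors' code, and the final fallback is our own assumption.

```python
# Pick one skill outcome per study using the pre-specified priority order.
def select_skill_outcome(outcomes):
    """outcomes: list of dicts with a boolean flag per criterion."""
    priority = ("assessed_in_different_setting",  # e.g., other simulator/SPs
                "author_defined_primary",
                "global_or_summary_measure")
    for criterion in priority:
        matches = [o for o in outcomes if o.get(criterion)]
        if matches:
            return matches[0]
    return outcomes[0]  # fallback (assumption: first reported outcome)
```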

Data Synthesis

We classified studies as no-intervention comparison if they were a single group pre-post design or a two-group study where simulation was added to other instruction common to all learners. We classified all other two-group studies as other-comparison (comparison to another type of educational intervention) or simulation-comparison (comparison of two simulation-based interventions). We planned to quantitatively synthesize the results of all classifications with more than three studies.

As we have described previously,7 for each reported outcome we calculated the standardized mean difference (Hedges’ g effect size) using accepted techniques.11–13 We contacted primary authors directly if additional outcome information was needed.
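The paper does not spell the formula out; for reference, a standard formulation of Hedges’ g for a two-group comparison, with the small-sample correction factor J, is:

$$
g = J\,\frac{\bar{X}_1-\bar{X}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1-1)s_1^2+(n_2-1)s_2^2}{n_1+n_2-2}},
\qquad
J = 1-\frac{3}{4(n_1+n_2-2)-1}
$$

For the single-group pre-post studies, the numerator is the pre-to-post change in means, with the standardizing deviation typically taken from the pretest (or pooled pre/post) scores.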

We quantified the inconsistency (heterogeneity) across studies using the I2 statistic.14 I2 estimates the percentage of variability across studies above that expected by chance, and values >50 % indicate large inconsistency. Due to large inconsistency in our analyses, we used random effects models to pool weighted effect sizes.
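The authors pooled with SAS; the sketch below, in Python, shows the standard DerSimonian-Laird random-effects computation of the pooled effect, its 95 % confidence interval, and I2. It is our illustration of the generic method, not the paper's code.

```python
# Minimal DerSimonian-Laird random-effects pooling sketch.
import numpy as np

def pool_random_effects(g, se):
    """Pool per-study effect sizes g with standard errors se."""
    g, se = np.asarray(g, float), np.asarray(se, float)
    w = 1.0 / se**2                        # fixed-effect weights
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed)**2)       # Cochran's Q
    df = len(g) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_re = 1.0 / (se**2 + tau2)            # random-effects weights
    g_re = np.sum(w_re * g) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    ci = (g_re - 1.96 * se_re, g_re + 1.96 * se_re)
    return g_re, ci, i2

# e.g., pool_random_effects([1.2, 0.4, 0.9], [0.3, 0.2, 0.25])
```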

We conducted planned subgroup analyses based on selected instructional design features (presence of curricular integration, feedback, and individual hands-on practice), type of simulator (Harvey® and other simulators considered separately), and study design (1 vs. 2 group designs). We performed sensitivity analyses excluding (1) nonrandomized studies, (2) studies with a low total quality score (MERSQI < 12), and (3) studies that used p value upper limits or imputed standard deviations to estimate the effect size. We calculated the Pearson correlation coefficient to examine the relationship between minutes of practice and effectiveness of SBME.

We used funnel plots and the Egger asymmetry test to explore possible publication bias.15 We used trim and fill (random effects) to estimate revised pooled effect sizes, although this method has noted limitations in the presence of high inconsistency.16
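As an illustration of the asymmetry test: the standard Egger regression (assumed here, since the authors' implementation is not given) regresses the standardized effect g/SE on precision 1/SE and tests whether the intercept differs from zero.

```python
# Egger's regression test for funnel-plot asymmetry (standard formulation).
import numpy as np
from scipy import stats

def egger_test(g, se):
    """An intercept far from zero suggests small-study asymmetry."""
    g, se = np.asarray(g, float), np.asarray(se, float)
    y, x = g / se, 1.0 / se
    slope, intercept, _r, _p_slope, _stderr = stats.linregress(x, y)
    n = len(g)
    resid = y - (intercept + slope * x)
    s_yx = np.sqrt(np.sum(resid**2) / (n - 2))     # residual SD
    se_int = s_yx * np.sqrt(1.0 / n + x.mean()**2 / np.sum((x - x.mean())**2))
    t = intercept / se_int                          # H0: intercept == 0
    p = 2 * stats.t.sf(abs(t), n - 2)
    return intercept, p
```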

We used SAS 9.2 (SAS Institute, Cary, NC) for all analyses. Statistical significance was defined by a two-sided alpha of 0.05. Determinations of clinical significance emphasized Cohen’s effect size classifications (0.2–0.49 = small; 0.5–0.8 = moderate; >0.8 = large).17
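A trivial helper makes the classification rule explicit; the boundaries follow the bands quoted above.

```python
# Label an effect size using Cohen's bands as applied in the paper.
def cohen_label(es):
    es = abs(es)
    if es > 0.8:
        return "large"
    if es >= 0.5:
        return "moderate"
    if es >= 0.2:
        return "small"
    return "negligible"

print(cohen_label(1.10))  # pooled knowledge ES -> "large"
print(cohen_label(0.87))  # pooled skills ES -> "large"
```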

RESULTS

Trial Flow

Using our search strategy, we identified 10,297 articles with an additional 607 identified from our review of reference lists and journal indices. From these we identified 18 articles for inclusion (Fig. 1, Table 1).

[Figure 1. Study flow.]

Table 1

Description of Included Studies

Author, year | No. and level of participants | Intervention and comparison | Assessment* | MERSQI score†

No-intervention comparison studies: randomized
Penta 1973 [18] | 30 sophomore medical students | CPS (4.5 h) + a traditional course compared to a traditional course | Diagnostic accuracy on 6 CPS sounds | 11.5
Oddone 1993 [21] | 56 internal medicine and family medicine PGY-1 residents | Harvey® CPS (8 h) + a 4-week cardiology rotation compared to a 4-week cardiology rotation | Diagnostic accuracy on 3 real patients | 12.5
Tiffen 2011 [28] | 29 pre-practicum nursing students | VitalSim Kelly CPS (1 h) + ‘usual teaching’ compared to ‘usual teaching’ | Written test and self-reported confidence in cardiac physical examination skills | 14.5

No-intervention comparison studies: cohort
Butter 2010 [27] | 118 3rd- and 4th-year medical students | Harvey® CPS and UMedic multimedia computer system (2 h) compared to a no-intervention control | Written test and diagnostic accuracy on 4-5 real patients | 13.5
Kern 2011 [29] | 405 3rd-year medical students | Harvey® CPS (0.5 h) and standardized patient compared to an historical standardized-patient intervention | OSCE performance on a single station with standardized patients without real findings | 10.5

No-intervention comparison studies: single group
Gordon 1980 [4] | 23 4th-year medical students | Harvey® CPS (8 h) | MCQ and diagnostic accuracy on 2 CPS disorders | 8
Woolliscroft 1987 [19] | 224 pre-clinical medical students | Harvey® CPS (4 h) | Diagnostic accuracy on 1 of 2 CPS disorders | 12
Harrell 1990 [20] | 49 critical care nurses | Heart Sim II CPS (1 h) | Diagnostic accuracy on 20 CPS heart sounds | 13
Takashina 1997 [22] | 21 primary-care physicians and 37 nurses | CPS (9 h) | Diagnostic accuracy on 5 CPS disorders | 10
Issenberg 2000 [23] | 53 physician assistants | Harvey® CPS (2 h) | Written test | 12
Issenberg 2002 [24] | 67 internal medicine PGY-2 and PGY-3 residents | Harvey® CPS and UMedic multimedia computer system (5 h); included an historical cohort | MCQ | 13
Issenberg 2003 [25] | 43 osteopathic internists, 14 osteopathic residents, and 7 osteopathic medical students | Harvey® CPS (2 h) | Written test | 13
Fraser 2009 [26] | 152 first-year medical students | Harvey® CPS (2 h) + traditional clinical training; a second arm taught respiratory disorders | Diagnostic accuracy on 2 CPS scenarios | 11

Other intervention comparison studies: randomized
de Giovanni 2009 [30] | 37 3rd-year medical students | Harvey® CPS (3 h) compared to a CD of recorded sounds | Diagnostic accuracy on 5 real patients | 14.5

Other intervention comparison studies: cohort
Ewy 1987 [31] | 208 4th-year medical students | Harvey® CPS (10.5 h) + real patients compared to a larger volume of real patients | MCQ and checklist on 2 real patients | 13.5
Waugh 1995 [32] | 182 senior medical students | Harvey® CPS (time not specified) + UMedic multimedia computer system compared to video and UMedic system | MCQ | 9.5

Simulation comparison studies: randomized
Champagne 1989 [34] | 37 masters-level nursing students | Heart Sim II heart sounds (0.5 h) with and without palpatory cues | Diagnostic accuracy on 20 CPS heart sounds | 15.5
Fraser 2011 [33] | 86 first-year medical students | Harvey® CPS (2 h) mitral regurgitation compared to Harvey® CPS (2 h) aortic stenosis | Diagnostic accuracy on a real patient with mitral regurgitation | 11.5

*Assessment method(s) included in the meta-analysis. For studies with more than one skill assessment, one assessment was selected for the meta-analysis using pre-defined criteria.

†MERSQI score is an assessment of study quality.9 Maximum score 19, low quality ≤ 12

Study Characteristics

We identified 13 single group pre-post studies or two group studies with simulation added to common instruction (classified as no-intervention studies),4,1829 3 studies comparing SBME to another instructional modality,3032 and 2 studies comparing simulation to another simulation modality33,34 (Table 1). The majority of learners were medical students, but residents, attending physicians, nursing students, nurses, and osteopathic internists (attendings, residents and students) were also among the study participants. A single simulator (Harvey® Laerdal Medical, Miami, FL, USA) was used in the majority of studies for both outcomes. The skill outcomes were assessed with simulated heart sounds in 14 of the 18 studies. Five studies assessed auscultatory skill using patients with real cardiac findings, and 1 study assessed cardiac physical exam technique using a standardized patient without cardiac findings.29

Examining instructional design features of the 18 studies, 8 studies used a curriculum distributed over more than 1 day, 7 studies embedded the intervention within their curriculum, 1 study employed high feedback, 1 study employed mastery learning, and in 8 studies the learner engaged in hands-on practice (e.g., direct contact) with the simulator.

Study Quality

Of the 13 no-intervention comparison studies, 3 were randomized comparative studies. Two of the other-comparison studies were randomized, as were the simulation-simulation comparison studies. Blinded assessment of outcomes was done in 11 of 18 studies. Six studies lost more than 25 % of participants prior to outcome evaluation or failed to report follow-up. One study used a self-reported skill; all other outcomes were determined objectively. The mean study quality as assessed by MERSQI was 11.8 ± 1.9; MERSQI scores for each study are provided in Table 1.

Quantitative Meta-Analysis of No-Intervention Comparison Studies

We pooled data using meta-analysis for the 13 single-group pre-post studies or two-group studies with simulation added to common instruction. Six of these studies assessed knowledge outcomes in 343 learners (Fig. 2), and 10 studies assessed skill outcomes in 1,074 learners (Fig. 3). Meta-analysis of these studies demonstrated that simulation-based instruction in cardiac auscultation was effective, with large pooled effect sizes of 1.10 (95 % CI 0.49–1.72; p < 0.001; I2 = 92.4 %) for knowledge outcomes and 0.87 (95 % CI 0.52–1.22; p < 0.001; I2 = 91.5 %) for skill outcomes (Figs. 2 and 3, respectively).

[Figure 2. Knowledge outcomes. Effect sizes for simulation compared with no-intervention. Positive numbers favor simulation. For pooled effect size, p < 0.001; I2 = 92.4 %.]

[Figure 3. Skill outcomes. Effect sizes for simulation compared with no-intervention. Positive numbers favor simulation. For pooled effect size, p < 0.001; I2 = 91.5 %.]

Funnel plots and Egger tests suggested the possibility of publication bias (on-line Appendices A and B). Trim and fill analysis yielded an adjusted effect size for the knowledge outcome of 0.78 (95 % CI, 0.03 to 1.53; p = 0.04) and for the skills outcome of 0.57 (95 % CI, 0.12 to 1.02; p = 0.01).

We display the effect sizes for the pre-planned subgroup and sensitivity analyses in Table 2. Two-group study designs and higher-quality studies (as assessed by MERSQI) yielded significantly smaller effect sizes. Among the instructional design features, studies incorporating hands-on practice (direct contact with the simulator) were associated with consistently larger effect sizes for both knowledge and skill compared to studies without hands-on practice, although this difference was not statistically significant. Curricular integration did not have a consistent association with effect size. As only one study had high feedback, this planned subgroup analysis was uninformative.

Table 2

Subgroup and Sensitivity Analyses of No-Intervention Comparison Studies

Characteristic | Sub-group | Knowledge (N = 6): No. studies | Knowledge: ES (95 % CI) | Skills (N = 10): No. studies | Skills: ES* (95 % CI)
Randomized† | Yes | 1 | 0.71 (−0.05, 1.48) | 3 | 0.33 (0.02, 0.64)
Randomized† | No | 1 | 1.30 (0.85, 1.75) | 2 | 0.40 (0.02, 0.64)
Number of study arms | 2 groups | 2 | 1.09 (0.54, 1.65) | 5 | 0.35 (0.19, 0.51)‡
Number of study arms | 1 group | 4 | 1.15 (0.32, 1.98) | 5 | 1.41 (0.78, 2.04)
Exact effect size | Yes | 4 | 1.54 (0.98, 2.11)‡ | 8 | 1.00 (0.54, 1.46)‡
Quality: MERSQI | High (≥ 12) | 5 | 0.93 (0.31, 1.56)‡ | 5 | 0.59 (0.33, 0.86)
Curricular integration | Yes | 2 | 2.01 (1.63, 2.38) | 6 | 0.53 (0.27, 0.79)
Curricular integration | No | 4 | 0.64 (0.22, 1.07) | 4 | 1.46 (0.33, 2.59)
Feedback | High | 6 | 1.10 (0.49, 1.72) | 1 | 0.33 (0.16, 0.5)‡
Feedback | Mod or low | 0 | — | 7 | 1.11 (0.44, 1.77)
Hands-on practice | Yes | 2 | 2.01 (1.63, 2.38) | 2 | 1.32 (0.76, 1.88)
Hands-on practice | No | 4 | 0.64 (0.22, 1.07) | 8 | 0.75 (0.38, 1.13)
Simulator | Harvey | 4 | 1.42 (0.56, 2.29)‡ | 6 | 0.53 (0.28, 0.77)
Simulator | Other | 2 | 0.34 (0.03, 0.66) | 4 | 1.51 (0.24, 2.79)

*ES = effect size

†Randomized vs. non-randomized is only applicable to studies with a two-group design; thus, the number of studies for this subgroup analysis is less than the total number of studies for each assessment modality

‡p < 0.05 for comparison of ES between subgroups

In correlation analyses, the association between time on task and effect size was r = 0.76 (p = 0.08; N = 6) for knowledge outcomes and r = 0.59 (p = 0.07; N = 10) for skills outcomes.

Narrative Review of Comparative Effectiveness Studies

When simulation was compared to another instructional modality in three studies, there was little to no relative benefit to knowledge and skill acquisition (Fig. 4). One of these randomized trials compared a CPS to a CD of recorded heart sounds and demonstrated no significant difference in skills, although the CD group was exposed to more examples of each murmur compared to the CPS group.30

[Figure 4. Simulation comparison studies. Effect sizes for simulation compared with other instructional modalities or compared with another simulation modality for knowledge and skill outcomes.]

One of the non-randomized studies compared SBME with bedside instruction and demonstrated a statistically significant benefit of SBME for both knowledge and skills outcomes.31 The other non-randomized study showed no difference in knowledge between students who used a multimedia computer system with a CPS and students who used the same computer program with videos.32

Two of these three studies were of high methodological quality,30,31 and two assessed skill using patients with real cardiac findings.30,31

There were inconsistent benefits of simulation on skill acquisition in the simulation-comparison studies (Fig. 4). One randomized trial of high methodological quality compared listening to simulated heart sounds with and without palpatory cues. The results demonstrated improvement in auscultatory skills from pre-test to post-test but no difference in skills between the two groups.34 The other study randomized medical students to learn mitral regurgitation, aortic stenosis, or right ventricular strain without a murmur using a CPS.33 When tested on a real patient with mitral regurgitation, the group who learned this disorder on the CPS identified more clinical features and had higher diagnostic accuracy than the other two groups.

DISCUSSION

This systematic review and meta-analysis suggest that SBME is an effective instructional approach for teaching cardiac auscultation. The pooled results for 13 single-group pre-post studies or two-group studies with simulation added to common instruction indicate a positive effect, and this finding was consistent in subgroup and sensitivity analyses and for both knowledge and skills outcomes. Among the three studies comparing SBME to another instructional modality, SBME showed little or no benefit to knowledge and skill acquisition relative to the comparison modalities, such as recorded heart sounds or real patients. The two simulation-comparison studies demonstrated inconsistent benefits. In one of these studies, the results demonstrated transfer of skills for the same murmur presented on the simulator and a real patient, but a lack of transfer from simulated murmurs that were different from the real patient’s diagnosis.33 We conclude that strong evidence supports SBME as an effective instructional approach to cardiac physical exam teaching and that limited evidence suggests it is comparable to other available modalities.

Of importance to undergraduate MD course directors and post-graduate program directors is how to use SBME effectively. Our data yield a few practical suggestions. First, direct contact (hands-on practice) with the simulator appears to increase the effectiveness of cardiac skills acquisition. This likely relates to increasing the opportunity for repetitive practice35 and deliberate practice,5 both of which are known to have positive educational impacts.

Second, curricular integration of the SBME cardiac physical examination course does not appear to be an important instructional design feature, and it can be implemented as a stand-alone intervention. This contrasts with other authors who suggest that integration is key.10 This unexpected finding may be due to the fact that skill in cardiac auscultation is a competency that crosses many disciplines and content domains. Its specific placement within a curriculum may be less relevant than the importance of including it at all.

Third, more time engaged in learning simulation-based cardiac physical examination may lead to better learning outcomes. Consistent with a previous review of repetitive practice,36 and views generally accepted among medical educators, time on task had a moderate correlation with learning outcomes (explaining 58 % and 35 % of the variance in knowledge and skill outcomes, respectively). The lack of statistical significance likely reflects the relatively small sample of studies available for statistical pooling.
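The variance figures follow directly from squaring the correlations reported above: r^2 = 0.76^2 ≈ 0.58 for knowledge outcomes and r^2 = 0.59^2 ≈ 0.35 for skills outcomes.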

Limitations and Strengths

The strengths of this review include the comprehensive search strategy, rigorous data extraction, and subgroup analyses addressing focused, pre-specified between-study differences.

Our review is limited primarily by the quantity of published evidence, which reduced the statistical power of subgroup analyses. Although comparisons with active interventions (e.g., comparative effectiveness studies) would yield the strongest recommendations for educational best practices, there were only five studies of this type, with variable results. We also found high inconsistency between individual study results. Such inconsistency may plausibly arise from differences in the intensity or effectiveness of the intervention, the measurement of the outcome, or the study design. In exploring these inconsistencies we found that studies with stronger methods (higher MERSQI scores and two-group designs) had lower effect sizes, and those with more precise estimation of effect size had higher effect sizes, suggesting that these methodological differences could account for some of this inconsistency. More importantly, we note that this inconsistency varied in the magnitude but not the direction of benefit, suggesting that SBME is consistently effective but that some interventions are more effective than others.

Funnel plots suggested the possibility of publication bias among the studies, and adjustments attempting to compensate yielded smaller effect sizes than those suggested in the overall meta-analysis. However, methods for detecting and adjusting for publication bias are imprecise.16

Integration with Prior Research

To our knowledge, there are no previous systematic reviews focused solely on teaching cardiac auscultation skills. Our results are consistent with previous reviews that show large benefits from SBME with a preponderance of comparisons with no intervention.5,7

The subgroup analyses we chose to explore were based on a previous narrative review of effective SBME design features.10 Although subgroup analyses should be interpreted cautiously, as they reflect between-group rather than within-group analyses and are thus susceptible to confounding,37 it appears as though direct contact (hands-on practice) with the simulator is an important teaching technique and that the teaching of cardiac auscultation can be situated anywhere within the curriculum.

Unfortunately, only one study provided learners with high levels of feedback, which we defined as a substantial component of the educational intervention with feedback coming from more than one source. This result came as a surprise, since feedback is generally considered a key element of effective SBME instruction.10 More broadly within medical education, the presence of feedback has an independent positive effect on clinical performance,38 and based on this literature educators might consider how to augment the feedback that learners receive during cardiac auscultation training. This instructional design feature warrants further study in SBME. Other instructional design features, such as mastery learning, were present in too few studies to be analyzed.
