Direct observation of medical students and residents provides educators with evaluative information about the learners' knowledge base, clinical competency, and behavioral practices. This JAMA article is a systematic review of direct observation tools and their validity. Over 10,000 studies published during 1965-2009 were assessed.
If you are trying to develop your own direct observation tool, take a look at this article. It may help you find a template that you can adapt to your own practice.
Total # of tools identified = 55
21 focused on medical students, 32 focused on residents/fellows, 2 focused on both
Only 11 had evidence of internal validity and validity based on relationship to other variables
Strongest validity evidence has been established for the Mini Clinical Evaluation Exercise (mini-CEX)
The authors assessed 5 areas of validity evidence for each of these tools:
Content validity - did the instrument measure the intended domain of content?
Response process - were the raters properly trained?
Internal structure (reliability) - consistency of the results using the same instrument
Relationship to other variables - did scores correlate with other measures of competence?
Consequences - what impact did the assessment have on learners and programs?
Bottom line: Very few tools were thoroughly evaluated and tested for validity. Most studies assessed the learners' and observers' experiences with the tool rather than learning or clinical outcomes.
Furthermore, only a few studies adequately performed and documented the observer training process. This is just as important as the structure and content of the evaluation tool. If you aren't accurately capturing the data that you intended, your results are useless. Garbage in, garbage out.
Author comments: This article was presented at the Education Journal Club at the recent 2010 CORD Academic Assembly, which Dr. Sorabh Khandelwal (The Ohio State Univ) and I co-ran. Sorabh informally contacted the author, Dr. Jennifer Kogan (Univ Penn), to solicit post-publication insights and comments. Dr. Kogan graciously shared some of her thoughts:
This area of educational research, developing direct observation tools, may be "stuck" at this time.
There is no perfect direct observation tool.
Faculty development needs to include training on how to use a direct observation tool. This is critical. The training process needs to be a drip rather than a bolus approach. Frequent, periodic re-calibration feedback should be provided to faculty members to minimize inter-rater variability.
Reference: Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA. 2009;302(12):1316-1326.