Assessing Doctors at Work — Progress and Challenges

Daniel Klass, M.D., C.M.
The New England Journal of Medicine
http://content.nejm.org/

The assessment of medical students’ competence before they enter practice has recently received considerable scrutiny. In this issue of the Journal, Epstein provides a timely summary of advances in this arena.1 In contrast, little attention has been paid to the assessment of doctors who are already in practice. As Epstein points out, far from being a fixed attribute or trait, competence comprises multidimensional sets of behaviors that depend on both environmental and individual factors.2,3,4 As a result, the assessment of competence must go beyond the identification of who practitioners are, on the basis of evidence of their personal attributes or dated credentials, to capture what they actually do in the context of contemporary practice.5

Four main frames of assessment of physicians’ competence can be distinguished. The first frame comprises the familiar assessments undertaken before actual practice: achievement tests and simulations, including practice under supervision, which permit evaluators to predict a trainee’s future competence. The second frame infers competence in practice from participation in continuing medical education and training programs or related achievement tests. The third frame encompasses measures that examine physicians’ work processes — for example, peer reviews of medical records,6 surveys of coworkers and colleagues about a physician’s communication skills and collaborative practices,7 and assessments that use data from standardized patients, diaries, or portfolios to add contextual detail about work activities.8,9 In Ontario and other Canadian provinces, regulatory authorities have implemented systematic, peer-based assessments using many of these tools in order to improve practice quality.10

The final frame includes assessments of the outcomes of doctors’ work, including patient-satisfaction surveys, complaints or malpractice claims, specific markers of patients’ outcomes or wellness, and data on mortality and morbidity. Examples of this form of assessment can be found on the Web pages of some state regulatory authorities and in hospital or health system “report cards.” Ensuring that the current hodgepodge of data from these four frames of assessment is made both meaningful and consistent for public and professional purposes represents a major challenge to the medical profession.

Physicians who practice for many years should have their competence reassessed and reaffirmed periodically, although views differ on appropriate objectives and tools for such reexamination. The most contentious question is not whether such reassessments of practice are needed but rather how they should be linked to licensure, certification, or employment.11 In the United Kingdom, where the National Health Service looms large in the everyday life of doctors, employment-based assessment predominates. In Canada, a major role is being established for assessments by regulatory authorities and specialty societies. In the United States, combinations of assessments by specialty societies, state medical boards, and provider organizations or payers are evolving. Whereas some view reassessment primarily as a filter to protect the public from doctors who perform poorly, others argue that valid practice assessments serve predominantly educational needs and should be integrated into individualized programs of continuing professional development.

In a global marketplace, there is a need for assessments of migrating doctors whose credentials may be outdated, difficult to interpret, or not aligned with local standards. Currently, most tools used to assess doctors who are beginning to practice in a new location are derived from examinations for initial entry to practice and do little justice to the abilities that seasoned practitioners have gained through experience. Reentry to practice after a career interruption also justifies a fresh assessment of competence, as does any major change in the scope of a physician’s practice. Some regulatory authorities are instituting policies to ensure the competence of doctors in these circumstances.

Any valid assessment of competence in practice must be relevant to the actual scope of practice. Generalizations made on the basis of individual characteristics alone are insufficient, since they are difficult to anchor in the realities of practice. Therefore, less attention should be paid to the nature of doctors themselves, and more to the nature of their work. Given the particularity of individual practices, however, it is difficult to aggregate the main elements of practice into coherent types and thus to tailor tests and educational programs to fit different practices. We need to have a better understanding of the relationships between the processes and outcomes of practice in order to develop assessments based on accurate models of practice.12

Given the actual nature of doctors’ work, it is clear that what is often considered individual competence depends critically on multiple working relationships — including the primary ones with patients, but also those with coworkers, colleagues, consultants, and others. Health care outcomes reflect the processes followed at the individual, organizational, and system levels. The logical starting point for the effective assessment of an individual doctor’s performance, therefore, is the recognition that the era of the doctor as a “lone ranger” is over.13 Assessments must acknowledge the importance of the context of health care delivery, including the crucial element of teamwork.

Because a strong case can be made that only someone with knowledge of a similar scope of practice is qualified to judge competence, doctors reasonably expect to be assessed by their peers. Peer assessments are accepted components of the quality-management programs of Canadian regulatory authorities,6 which may serve as models for other jurisdictions that are considering the broader use of professional peer assessment. In planning assessment and review programs, it is critical to distinguish summative or forensic assessments, such as those triggered by patients’ complaints or previously identified problems in a physician’s practice, from those that serve educational purposes, such as randomly organized quality-improvement reviews.

Finally, as performance assessments are used more frequently to determine competence and in high-stakes situations (e.g., as a basis for pay-for-performance schemes or determinations of employment status), realistic and defensible performance standards must be set. The need to resolve individual complaints and specific malpractice claims has led to the construction of an adversarial framework for the adjudication of matters of clinical performance. As a result, general standards of competence for particular scopes of practice are not well established, and we have little experience in the use of such standards for either educational or disciplinary purposes.

North American medicine has been well served by the formal linkages among accredited education, standardized assessment, and licensure on entry to practice. A sharper focus on the quality of performance in practice clarifies the challenge to develop a system of ongoing practice assessment, harmonized with continuing professional education and linked to continuing licensure.11 In constructing the links of accountability and education, there is a special need for peer-based assessments that target actual performance profiles and meaningful practice outcomes and that focus as much on systems-based quality as on personal professional achievement. Addressing this need may bridge the gap between the traditional professional preserve of competence and the societal concern with performance and its outcomes.

No potential conflict of interest relevant to this article was reported.

Source Information

From the College of Physicians and Surgeons of Ontario and the Department of Medicine, University of Toronto, Toronto.

References

1. Epstein RM. Assessment in medical education. N Engl J Med 2007;356:387-396.
2. Wenghofer E, Williams P, Klass D, Faulkner D. Physician-patient encounters: the structure of performance in family and general office practice. J Contin Educ Health Prof 2006;26:285-294.
3. LaDuca A. The structure of competence in health professions. Eval Health Prof 1980;3:253-288.
4. Kerr EA, McGlynn EA, Adams J, Keesey J, Asch SM. Profiling the quality of care in twelve communities: results from the CQI study. Health Aff (Millwood) 2004;23:247-256.
5. Norcini JJ. Current perspectives in assessment: the assessment of performance at work. Med Educ 2005;39:880-889.
6. Norton PG, Faulkner D. A longitudinal study of performance of physicians’ office practices: data from the Peer Assessment Program in Ontario, Canada. Jt Comm J Qual Improv 1999;25:252-258.
7. Lockyer J. Multisource feedback in the assessment of physician competencies. J Contin Educ Health Prof 2003;23:4-12.
8. Luck J, Peabody JW. Using standardised patients to measure physicians’ practice: validation study using audio recordings. BMJ 2002;325:679.
9. Campbell M, Parboosingh J, Fox R, Gondocz T. Use of a diary to record physician self-directed learning activities. J Contin Educ Health Prof 1995;15:209-216.
10. Goulet F, Jacques A, Gagnon R. An innovative approach to remedial continuing medical education, 1992-2002. Acad Med 2005;80:533-540.
11. Handfield-Jones RS, Mann KV, Challis ME, et al. Linking assessment to learning: a new route to quality assurance in medical practice. Med Educ 2002;36:949-958.
12. Melnick DE, Asch DA, Blackmore DE, Klass DJ, Norcini JJ. Conceptual challenges in tailoring physician performance assessment to individual practice. Med Educ 2002;36:931-935.
13. Shine KI. Health care quality and how to achieve it. Acad Med 2002;77:91-99.