Anyone working in family medicine graduate medical education knows that the bar is being raised by the ACGME, as well as by society in general, to demonstrate that our residency graduates truly have “sufficient competence to enter practice without direct supervision.” This is not a new standard for graduation, but the rigor with which we must prove residents’ ability keeps increasing.
Some of us (like me) were in the field when Dr David Leach first announced the six competencies in the late 1990s. We were warned that requirements would progressively increase and that proving competence would require more than verifying time and clinical exposure, or faculty sitting around a table venturing abstract opinions based on recall and the group dynamics of the moment.
However, there is good news in all of this. We are being challenged to prove we know what we are talking about when we say a resident is ready to graduate. Being challenged to defend one’s beliefs is nearly always a good experience, since it requires reassessing assumptions and asking ourselves why we believe what we assert. And to build on that good news—assessing competency is not as tough to do as one might think. I am now a year into chairmanship of the Residency Competency Measurement Task Force, chartered by the Council of Academic Family Medicine and administered by the Society of Teachers of Family Medicine. I came to the role with a lot of leadership experience but not much competency in competency measurement and tools. Fortunately, I have learned a lot from the other task force members and read tons of books and articles. I have decided that this can be done.
So, what is it going to take?
We have created a web-based Resident Competency Assessment Toolkit. I think it does a good job of walking faculty and directors through the tools available for competency assessment and how to use them.
A couple of general observations:
- The tools can be simple to use.
- One tool can be used to measure more than one competency. For example, I can use direct observation (watching a resident care for a patient) to assess medical knowledge, patient care, communication, and professionalism in one sitting.
- You can decide how many tools you want to use to measure any given competency.
- When multiple faculty watch a single resident provide care to different patients at different times, they accumulate a body of information on competency that is both valid and reliable, especially if forms are used to record findings and faculty have been trained together to establish common standards.
- Feedback is valuable when received from sources outside the faculty: staff, patients, peers, and students.
- Learners love feedback, and giving more of it more often “normalizes” the process and takes away the stress.
- There is no perfect form; do not spend time searching for “the holy grail” of perfect forms. Find one already in use elsewhere, modify as you must, and then start using it—frequently and by many assessors.
- Faculty need to work together to reach common definitions of competency. Otherwise, faculty assessing the same clinical events will reach very different conclusions.
GO FOR IT!!