May 29, 2019

Setting the CIP v2019 Passing Score

How time flies! As I noted just about three years ago, one of the final steps in the development of any certification is setting the passing score. There is a widespread misconception that the passing score "should be" a certain number, such as 70% - 75%. This is akin to setting the retention period for some or all of your records at 7 years: nobody really knows where the number came from, and it's not defensible, but everyone else is doing it, so it must be OK.

In order for a passing score to be defensible, it needs to be criterion-referenced. This is typically done through some sort of standard-setting study. There are a number of ways to do this; a common one for certification exams is the modified Angoff method. This is the approach we've always used to set the CIP passing score.

The way Angoff scoring works is that subject matter experts, who are themselves representative of the target audience, take the exam in an unproctored, untimed, and unscored setting. As they go through the exam, they rate the likelihood of a candidate like them getting each question correct. The harder a question is perceived to be, the lower that percentage will be. For example, a super-easy question might be given a 95% rating (because people still pick B accidentally instead of A), while the lowest rating, 25%, represents a pure guess on a question with four possible answers.
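To make the arithmetic concrete, here is a minimal sketch in Python of how individual Angoff ratings roll up into a provisional cut score: average each item's ratings across the judges, then average those item means across the whole exam. All of the ratings below are made up for illustration; none of this is real CIP data.

```python
# Hypothetical Angoff ratings: each row is one SME judge, each column one item.
# A rating is the judge's estimate of the probability that a just-qualified
# candidate answers that item correctly (0.25 = pure guess on a 4-option item).
ratings = [
    [0.95, 0.60, 0.25, 0.80],  # judge 1
    [0.90, 0.55, 0.30, 0.75],  # judge 2
    [0.85, 0.65, 0.25, 0.70],  # judge 3
]

num_items = len(ratings[0])

# Item difficulty: mean rating for each item, across all judges.
item_means = [
    sum(judge[i] for judge in ratings) / len(ratings)
    for i in range(num_items)
]

# Provisional cut score: mean of the item means, expressed as a percentage.
cut_score = sum(item_means) / num_items * 100
print(f"Item means: {[round(m, 2) for m in item_means]}")
print(f"Provisional cut score: {cut_score:.1f}%")
```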

Once the SMEs finished their ratings, we had a call to discuss them. Each item had a range of ratings, and we discussed the individual ratings of the items with large ranges. We looked at the complexity of each item, how the beta testers answered it, and how well it discriminated - in other words, did strong scorers tend to get it right, while weak scorers tended to get it wrong? SMEs were allowed to change their ratings after the discussion, and many did so, on many items. The end result was a difficulty rating for each item, along with a measure of statistical validity associated with that rating.
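That discrimination check is commonly computed as a point-biserial correlation between getting an item right and a candidate's total score: a strongly positive value means high scorers tend to answer the item correctly. Here is a rough sketch of the general technique in Python, with fabricated beta-test numbers; it illustrates the statistic, not AIIM's actual analysis.

```python
import math

def point_biserial(item_correct: list[int], total_scores: list[float]) -> float:
    """Point-biserial correlation between one item (0/1) and total scores."""
    n = len(total_scores)
    mean = sum(total_scores) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in total_scores) / n)
    p = sum(item_correct) / n          # proportion who got the item right
    q = 1 - p
    if std == 0 or p in (0, 1):
        return 0.0                     # undefined; treat as no discrimination
    # Mean total score of the candidates who answered the item correctly.
    mean_correct = (
        sum(s for c, s in zip(item_correct, total_scores) if c)
        / sum(item_correct)
    )
    return (mean_correct - mean) / std * math.sqrt(p / q)

# Fabricated beta results: 1 = answered the item correctly.
item = [1, 1, 1, 0, 1, 0, 0, 0]
totals = [88, 85, 79, 74, 72, 65, 60, 55]
print(f"Discrimination: {point_biserial(item, totals):.2f}")  # about 0.80
```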

We then aggregated all the individual item ratings and arrived at a final passing score range of 56% - 64%. We set the passing score right in the middle of that mathematically determined, defensible range: (56% + 64%) / 2 = 60%. So the final passing score for the updated CIP exam remains 60.00%.

But a passing score of 60% seems quite low, right? It's exactly the opposite: a 60% passing score reflects the fact that the exam is actually quite challenging. Had we set the passing score at, say, 70%, only about half the beta candidates would have passed - many of whom have experience well beyond the 5+ years that the updated CIP exam targets.

I hope this information underscores my, and AIIM's, continuing commitment to doing the CIP the right way, not simply throwing together a bunch of questions and setting an arbitrary passing score. Questions or comments? Ping me at jwilkins@aiim.org. 
