July 6, 2016

The CIP 2016 Passing Score

Over the long weekend we notified all the CIP 2016 beta candidates of their total scores, their individual domain scores, and whether they passed or failed. I heard from a few beta testers who felt that a 60% passing score seems quite low: why are we making the test easier, and won't that compromise the overall perception and quality of the CIP?

One of the key steps in the development of any certification is setting the passing score. There is a widespread misconception that the passing score "should be" a certain score such as 70-75%. This is akin to setting the retention for some or all of your records at 7 years: Nobody really knows how they got there, and it's not defensible, but everyone else is doing it so it must be OK.

In order for a passing score to be defensible, it needs to be criterion-based. This is typically done through some sort of standard-setting study. There are a number of ways to do this; a common one for certification exams is modified Angoff scoring.

The way modified Angoff scoring works is that subject matter experts, who are themselves representative of the target audience, take the exam in an unproctored, untimed, and unscored setting. As they go through the exam, they rate the likelihood of a candidate like them getting each question correct. The harder a question is perceived to be, the lower that percentage: a super-easy question might be given a 95% rating (because people still pick B accidentally instead of A), while the lowest rating, 25%, represents a pure guess on a question with four possible answers.
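To make the arithmetic concrete, here is a minimal sketch (in Python) of how Angoff ratings roll up into a cut score. The ratings below are invented for illustration and are not actual CIP data.

    # Each row is one exam item; each column is one SME's estimate of the
    # probability that a borderline-qualified candidate answers it correctly.
    sme_ratings = [
        [0.95, 0.90, 0.95],  # a very easy item
        [0.60, 0.55, 0.65],  # a moderately difficult item
        [0.30, 0.25, 0.25],  # a hard item, near the 25% guessing floor
    ]

    # An item's difficulty rating is the mean of its SME estimates.
    item_ratings = [sum(r) / len(r) for r in sme_ratings]

    # The Angoff cut score is the sum of the item ratings: the number of
    # questions a borderline candidate is expected to answer correctly.
    cut_score = sum(item_ratings)
    print(f"Expected correct answers: {cut_score:.1f} of {len(item_ratings)}")
    print(f"Passing percentage: {cut_score / len(item_ratings):.0%}")

On a real exam the same sum is simply taken over all the items, and the resulting expected-correct count becomes the basis for the passing score.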

This was the approach we used to set the CIP passing score. Once the SMEs finished their ratings, we held a call to discuss them. Each item had a range of ratings, and we discussed the individual ratings for the items with large ranges. We looked at the complexity of each item, how the beta testers answered it, and how well it discriminated (good scorers tended to get it right, poor scorers tended to get it wrong). SMEs were allowed to change their ratings after the discussion, and many did on many items. This left each item with a difficulty rating and a measure of statistical validity for that rating. We then aggregated the individual item ratings into a final passing score range of 47-51 items, which equates to a 55-60% passing score, and set the passing score at the top of that mathematically determined, defensible range.
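For the curious, one common way to check how well an item discriminates is the upper-lower index: compare how often the top-scoring and bottom-scoring groups got the item right. Below is a sketch in Python using invented beta-test data, not AIIM's actual analysis.

    def discrimination_index(responses, item):
        # responses: list of (total_score, answers), where answers[item] is 1/0.
        ranked = sorted(responses, key=lambda r: r[0], reverse=True)
        n = max(1, len(ranked) * 27 // 100)   # conventional 27% groups
        upper = sum(r[1][item] for r in ranked[:n]) / n
        lower = sum(r[1][item] for r in ranked[-n:]) / n
        return upper - lower                  # positive = discriminates well

    # Invented data: (total score, correctness on three items).
    beta = [(78, [1, 1, 0]), (74, [1, 1, 1]), (70, [1, 0, 1]),
            (65, [1, 1, 0]), (58, [0, 1, 0]), (52, [1, 0, 1]),
            (47, [0, 0, 1]), (40, [0, 0, 1])]

    for item in range(3):
        print(f"Item {item + 1}: {discrimination_index(beta, item):+.2f}")

A negative index, like the third item's in this made-up data, is a red flag: weaker candidates got it right more often than stronger ones, which is exactly the kind of item the rating discussions would revisit.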

So back to that 60% passing score: 60% seems quite low, right? But it's exactly the opposite: a 60% passing score reflects that the exam is actually harder than the previous CIP. Had we set the passing score at 70%, only about half the beta candidates would have passed, and many of those who fell short are stronger than the 3-5 year candidate the CIP has targeted since its inception.

And in part because the exam is more challenging, we've already developed an in-depth CIP study guide and an instructor-led classroom prep workshop to help candidates prepare to succeed on the exam. The study guide is free for AIIM Professional members and $60 for non-members. The revised CIP is also more closely aligned to existing AIIM training programs; taking one of them will also help prepare candidates for the relevant portion of the CIP.

We will definitely monitor the performance of the CIP, and if the passing score needs to be tweaked in 6 months or a year, we have a process for doing that as well. But I hope this information underscores my, and AIIM's, commitment to doing the CIP the right way, not simply throwing together a bunch of questions and setting an arbitrary passing score.
