For a passing score to be defensible, it needs to be criterion-based, which is typically accomplished through a standard-setting study. There are a number of ways to do this; a common approach for certification exams is the modified Angoff method. This is the approach we've always used to set the CIP passing score.
Once the SMEs finished their ratings, we had a call to discuss them. Each item had a range of ratings, and we discussed the individual ratings for those items with large ranges. We looked at the complexity of each item, how the beta testers answered it, and how well it discriminated - in other words, did good scorers tend to get it right, while poor scorers tended to get it wrong? SMEs were allowed to change their ratings after the discussion, and many did so for a number of items. The result was that each item had a difficulty rating and a measure of statistical validity associated with that rating.
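To make the discrimination check concrete, here is a minimal sketch (not AIIM's actual tooling, and with made-up beta responses) of how an item's discrimination can be estimated as a point-biserial correlation between whether a candidate got the item right and how that candidate scored on the rest of the exam:

```python
import numpy as np

# Hypothetical beta-test data: rows are candidates, columns are items
# (1 = answered correctly, 0 = answered incorrectly).
responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 1, 0],
])

total_scores = responses.sum(axis=1)

def item_discrimination(item_index: int) -> float:
    """Point-biserial correlation between an item and the total score,
    with the item itself removed so it doesn't inflate the correlation."""
    item = responses[:, item_index]
    rest_of_exam = total_scores - item
    return float(np.corrcoef(item, rest_of_exam)[0, 1])

# A clearly positive value means strong candidates tended to get the item
# right and weak candidates tended to get it wrong.
for i in range(responses.shape[1]):
    print(f"Item {i + 1}: discrimination = {item_discrimination(i):+.2f}")
```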
We then aggregated the individual item ratings into a final passing score range of 56% - 64% and set the passing score right in the middle of that mathematically determined, defensible range. So the final passing score for the updated CIP exam remains 60.00%.
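As a rough illustration of how per-item ratings roll up into a cut score (a generic sketch of the modified Angoff arithmetic, not AIIM's actual data), each SME estimates the probability that a minimally qualified candidate answers each item correctly; averaging those estimates per item, and then across items, yields the recommended passing score:

```python
# Hypothetical modified Angoff ratings: each row is an SME, each column an item.
# A rating is the estimated probability that a minimally qualified candidate
# answers that item correctly.
ratings = [
    [0.55, 0.70, 0.45, 0.80, 0.60],  # SME 1
    [0.50, 0.65, 0.50, 0.75, 0.55],  # SME 2
    [0.60, 0.75, 0.40, 0.85, 0.65],  # SME 3
]

num_items = len(ratings[0])

# Average the SMEs' ratings for each item to get a per-item difficulty estimate.
item_means = [sum(sme[i] for sme in ratings) / len(ratings) for i in range(num_items)]

# The recommended cut score is the mean of the per-item estimates,
# expressed as a percentage of the total exam.
cut_score = sum(item_means) / num_items * 100
print(f"Recommended passing score: {cut_score:.1f}%")
```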
But a passing score of 60% seems quite low, right? Actually, it's the opposite: a 60% passing score reflects that the exam is quite challenging. Had we set the passing score at, say, 70%, only about half of the beta candidates would have passed, even though many of them are stronger than the 5+ year candidate the updated CIP exam is targeting.
I hope this information underscores my, and AIIM's, continuing commitment to doing the CIP the right way, not simply throwing together a bunch of questions and setting an arbitrary passing score. Questions or comments? Ping me at jwilkins@aiim.org.