Myers-Briggs is debunked. So are all “personality” tests bunk?
About once a year, articles are published debunking Myers-Briggs. By now the arguments are familiar:
- The authors had no formal training in psychology and there’s no science behind their questionnaire
- The system has built-in “fudge factors” to compensate for gender differences where the results should be gender-neutral
- Half the people who retake Myers-Briggs get different results even a few weeks later
After each round of these articles, I typically receive an email from a client or two asking, “Can we trust the STM assessments?”
The answer is yes, and here’s why…
We don’t use Myers-Briggs. We use a DISC assessment, which is similar, but more importantly, the DISC assessment is only one of three assessments we employ. We use two additional assessments, which yield key pieces of information, and make the STM system more accurate and useful for hiring selection.
Behavior assessments, the category to which both Myers-Briggs and DISC belong, are very effective for determining how one is perceived by others. It’s fun to do this with a group, and both are quite accurate, as far as they go. The problem is that companies apply behavior-based assessments to the serious business of hiring selection, a function for which they are essentially worthless.
Let me explain.
Think of a job that you know inside out. Say, an accountant. I bet you can think of a really good accountant who is quiet, serious, and introverted (to use a term familiar to both Myers-Briggs and DISC users). Now, can you also think of at least one very good accountant who is extroverted and outgoing? No matter the position, people agree that candidates from opposite ends of the behavior spectrum can be good at the job. While Myers-Briggs and DISC may describe an introvert and an extrovert very well, a behavior assessment alone cannot predict who will be the better accountant…or salesperson, COO, account manager, and so on. This is why behavior assessments are only 40-50% accurate at predicting hiring success.
And this is why we use three assessments from three completely different systems. They are not related in any way, they measure very different factors, and, critically, they meet all of the EEOC requirements for validity, lack of bias, and so on.
Our second assessment is a motivators assessment, which measures factors highly relevant to predicting success in a position. Top performers in any position will typically have two or three motivators (or values) in common. Using our accountant example, we typically see that they are motivated by return on investment, knowledge and discovery (learning), guiding principles (doing things right), and sometimes helping others (being supportive and team-oriented). In our experience, a motivators assessment combined with a DISC assessment can be 80-90% accurate.
Our third assessment is a competency profile, based on a clinical instrument, and it’s very effective at measuring attitude, talent, energy, and drive. When we add this to the mix, our accuracy climbs past 90%, but the major contribution is information. Before we added the competency profile, we had only the behavior and motivators instruments, for a total of 22 unique factors. I often heard from clients that the assessments were repetitive, and of course they were. Our data was limited.
Today, we often get the opposite feedback. We provide 97 unique factors from three different assessment systems, as opposed to 16 Myers-Briggs types. The raw assessment can run over 90 pages, which is why we create a shorter summary report that interprets key data and provides our clients with specific, actionable information.
Myers-Briggs has its uses, but hiring selection shouldn’t be one of them. If you want accurate, actionable, and useful information about a job candidate or employee, you need to dig much deeper.