Simulations and Selection Science: Interview with Mike Hudy, Ph.D. Part One

Mike Hudy is an Industrial/Organizational (I/O) psychologist and principal of Shaker International. He began designing custom simulations for pre-employment testing in 1997. His work is marked by innovation in developing high-fidelity, online work samples and interactive evaluation experiences that expand the science and art of the profession.

In what ways have simulations for pre-employment assessments changed the way I/O psychologists think about measurement science for the hiring process?

Psychologists have to apply traditional psychometrics to a more complex playing field. In developing a simulation, you have to capture the core elements of the job in a manner that is not overly complex yet still accounts for traditional psychometric principles. Now I/O psychologists have an opportunity and challenge to be better at balancing art with selection science.

Tell me more about "the art."

The art is the process through which we gain an understanding of a job and devise a way to represent or recreate aspects of the position in an internet-delivered simulation. Simulations collect a work sample through an informative and interactive candidate experience. This method captures a level of data a traditional Likert scale or multiple-choice assessment can never achieve. The art is to obtain some of the complexity without making it overly intricate. The candidate needs to be able to proceed with minimal instruction to complete the exercises. Moreover, the task needs to be job relevant.

Is that where the power of face validity comes into play?

Yes. The goal is to invite the candidate to step into the role and perform elements of the job that measure attributes critical for success. We create and deliver the candidate evaluation in a way that the individual does not feel like they are being tested. They know what is going on; however, the link to the job is strong and clear. The feedback we get from candidates strongly suggests they appreciate being afforded the opportunity to complete the Virtual Job Tryout. They come away with a better understanding of the career opportunity they are considering. Exposure to the role through a well-balanced realistic job preview and concrete elements of job demands puts the candidate in a better position to decide if the job is right for them. When we accomplish that, we know the art has achieved its purpose.

The psychometric challenge is to get a good, reliable measurement of the construct you are trying to tap into without introducing too much noise into the exercise. What I mean by that is that simulations can add many more moving parts to the measurement experience. With that comes the risk that those moving parts, or elements of the simulation, could have an unintended impact on what you are actually trying to measure.

Can you give me an example of this?

A good example: we developed simulations for two different call center jobs. One of them more closely resembled the actual problem solving on the job. It simulated searching for, finding, and using information to solve problems by navigating a multi-layered database.

The second problem-solving simulation was much more straightforward. It eliminated the need to search for and find information and dealt exclusively with the ability to use technical information to address customer issues and resolve problems.

While the first simulation more closely resembled the actual job, we achieved better results predicting on-the-job performance with the more straightforward, second simulation.

The searching and identification task became a distractor, and it limited our precision in assessing actual problem-solving ability.

How does that difference in complexity impact the way the candidate responds?

Candidates appreciate engaging, exciting, and interactive exercises, but not all applicants appreciate increased complexity in their candidate experience, and they let us know about it in their feedback.

So, how do you determine the level of complexity that is appropriate?

That is the intersection of art and science. The key is to continually take off your I/O hat and view the exercise from the candidate's perspective, through the test taker's eyes. At Shaker, we do this through defined roles on our project teams: peer review, end-user advocate review, and then a significant population of incumbents during the validation phase. We learn more from each perspective and refine the exercises. In developing a Virtual Job Tryout, at least four I/O psychologists critically evaluate the experience through the eyes of the candidate. Our programming team has over 20 years of experience designing graphically rich user interfaces and technology-based training. Each layer of feedback shapes the design. Ultimately, the data from our HR analytics tell us whether we have it right.

In what ways do simulations increase the power of the selection science?

Human behavior is complex, and what defines success in any given job is multifaceted. Simulations allow us to measure a range of capabilities that do not lend themselves to traditional evaluation tools. For example, let's consider multitasking: the ability to split attention among numerous competing tasks.

Measures such as personality, cognitive ability, and biodata cannot accurately assess this construct, so we developed a multitasking simulation that places candidates in situations where they must divide their attention among a variety of tasks that simultaneously compete for it. Individuals who perform well in this exercise perform better in environments that truly demand those skills. Among call center agents, proficiency in this construct correlates with more efficient after-call work and better handle times.

With simulations, we can capture more robust work samples, such as speed, accuracy, latency of response, navigation accuracy, and learning from repetition, all in a single exercise. Traditional, static measures such as personality and critical thinking simply cannot zero in on the subtle complexities of certain job-performance domains.