Published 26 April 2022 | 12 min read
I am biased. You are biased. All humans are biased. Not buying it yet? Consider the research of Daniel Kahneman, a psychologist who was awarded the Nobel Prize for his groundbreaking work applying psychological insights to economics. In his research, Kahneman demonstrated one simple truth: the vast majority of human decisions are based on biases, beliefs, and intuition, not facts or logic.
This is part of why people tend to bring bias into the performance review process, even when they’re going into it with the best of intentions. Bias is an error in judgment that happens when a person allows their conscious or unconscious prejudices to affect their evaluation of another person. And when it comes to performance reviews, biases have a huge impact.
Biases can lead to the inflation or deflation of employee ratings, which can have serious implications in high-stakes situations directly affected by performance assessments - such as promotion, compensation, hiring, or even firing decisions. Given the weight of these decisions, it’s critical to ensure that performance reviews are as fair and objective as possible.
So, what can you and your organization do to ensure that performance review processes are as bias-free as possible? Incorporate bias blockers into each step of the process. Once you’re aware that these biases exist, you can use various strategies (and a good dose of self-awareness) to minimize their effects.
Top 10 biases affecting performance reviews
1. Recency bias
Definition. Recency bias is the tendency to focus on the most recent time period instead of the total time period.
We also call this the “what have you done for me lately?” bias. If someone recently rocked a presentation or flubbed a deal, that recent performance is going to loom larger in a manager’s mind. Why? Because it’s easier to remember things that happened recently.
Example of recency bias. Imagine there is an employee named Jamie. At the beginning of the year, she landed a huge deal for the company and received a ton of recognition as a result. But in the last two months, her performance has slipped. Unfortunately, Jamie’s manager focused only on the recent events of the past few months during Jamie’s performance review and didn’t acknowledge Jamie’s incredibly valuable contributions from earlier in the year.
Prevention strategy. To limit the impact of recency bias on your performance data, develop a habit of collecting feedback on employees at different points in time throughout the year. Did someone just complete a 3-month project? Great, send their peers a request for feedback so you can get some data on how well they did. Did someone just complete internal training? Awesome, request feedback from the instructor about their participation. This way, you have more frequent data points from throughout the entire time period at the end of the year.
2. Primacy bias
Definition. Primacy bias is the tendency to emphasize information learned early on over information encountered later.
In performance reviews, managers often fall for primacy bias when they let a first impression affect their overall assessment of an employee.
Example of primacy bias. Dr. Heidi Grant Halvorson of Columbia Business School describes the following scenario:
If I’m a jerk to you when we first meet, and I buy you a coffee the next day to make up for it, you are going to see that nice gesture as some sort of manipulative tactic and think, “This jerk thinks he can buy me off with a coffee.” However, if I make a great first impression, and buy you a coffee the next day, then you’re likely to see it as an act of goodwill and think to yourself, “Wow, that Kevin really is a nice guy.”
Prevention strategy. Preventing primacy bias is similar to preventing recency bias. By putting together a dossier of performance snapshots that include feedback from multiple points in time, you can dampen managers’ tendency to weigh their first impressions too heavily.
3. Halo/horns effect bias
Definition. The halo/horns effect bias is the tendency to allow one good or bad trait to overshadow others (e.g., letting an employee’s congenial sense of humour overshadow their poor communication skills).
After all, we all have our own pet peeves and preferences. Sometimes those quirks and inclinations can overshadow our ability to assess people as a whole. For example, this bias is why attractive people are much more likely to be rated as trustworthy - and why, when they fail to meet those inflated expectations, they are penalized more harshly than others would be.
Example of halo/horns effect bias. For example, a particular manager may have a soft spot for proactive, outspoken individuals. If one of their direct reports tends to be quiet and withdrawn during meetings, the manager may end up giving that person a lower score, even if they offer other valuable qualities and contributions.
Prevention strategy. To dampen the halo/horns effect, evaluate performance along multiple, clearly defined dimensions instead of leaving the assessment open to interpretation. Are you rating individual achievement but failing to look at the way people contribute to the success of others? Does this person have a particular set of highly sought-after technical skills but fail to finish their work on time? To get a holistic view, make sure to assess at least 2-3 different aspects of performance so that one awesome or awful trait or skill doesn’t overshadow everything else.
4. Centrality/central tendency bias
Definition. Centrality bias is the tendency to rate most items in the middle of a rating scale.
While moderation is great in most things, high-stakes situations like performance reviews often require taking a stand. When everyone receives the same rating, it’s difficult to distinguish the low-performing employees from the top-performing employees.
Example of centrality bias. A manager hands in his annual performance evaluations, but almost everyone on his team scored in the middle of the scale. If the company is working on a 5-point rating scale, that means most employees received a 3. This is a common occurrence, as many managers don’t like being extreme and trend moderate in their reviews.
Prevention strategy. Centrality bias can be overcome by taking a flexible approach to the way scales are designed. The simplest way is to eliminate the neutral option from the rating scale, such as switching from a 5-point scale to a 4-point scale. This way, evaluators have to make a choice one way or the other.
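If your review data already lives in a spreadsheet or HR system, a quick distribution check can also surface raters who cluster at the midpoint before reviews are finalized. The sketch below is a minimal illustration in Python, assuming ratings are a simple list of integers on a 1-5 scale; the 60% threshold and the function name are placeholder assumptions, not validated cut-offs.

```python
from collections import Counter

def check_rating_spread(ratings, scale_max=5, midpoint_share_warn=0.6):
    """Flag a rater whose scores cluster at the scale midpoint.

    ratings: list of integer ratings on a 1..scale_max scale.
    midpoint_share_warn: illustrative threshold, not a validated cut-off.
    """
    if not ratings:
        return "no ratings to check"
    counts = Counter(ratings)
    midpoint = (scale_max + 1) // 2          # e.g. 3 on a 5-point scale
    midpoint_share = counts[midpoint] / len(ratings)
    distinct_scores = len(counts)
    if midpoint_share >= midpoint_share_warn or distinct_scores <= 2:
        return (f"possible centrality bias: {midpoint_share:.0%} of ratings sit at "
                f"{midpoint}, and only {distinct_scores} distinct scores were used")
    return "rating spread looks reasonable"

# Example: a manager who gave almost everyone a 3
print(check_rating_spread([3, 3, 3, 4, 3, 3, 2, 3]))
```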
5. Leniency bias
Definition. Leniency bias occurs when managers give favourable ratings to employees who have notable room for improvement.
Like many biases, leniency bias can weaken the objectivity of the data. The truth is that some employees do outperform others. Giving everyone a 4 on a 5-point rating scale makes it challenging to distinguish who the truly top-performing employees are. On top of that, it becomes difficult to identify who deserves a promotion or raise, and it can leave your top talent feeling disgruntled.
Example of leniency bias. Alex and Jamie are both managed by the same person. Alex consistently produces average-quality work. While it’s not bad, he rarely goes above and beyond what’s asked of him. Jamie, on the other hand, is consistently one of the top-performing employees. She goes the extra mile on her projects, always raises her hand to take on more responsibilities, and delivers outstanding outcomes. Despite these differences, their manager gives them the same rating of “above average” on their performance review to avoid hurting anyone’s feelings. As a result, Alex and Jamie stay on similar career trajectories - leaving Jamie to wonder if anyone notices her hard work.
Prevention strategy. Instead of making “above average” the top possible rating, try using a rating scale that reflects the way people actually talk about and think about their team members. If you want to create more spread to identify your top people, build that spread into the rating labels.
For example, a standard scale might top out at “above average.” However, if you want to give managers more opportunities to identify stellar performers, you could instead create a scale with “above average” as the middle rating and “top performer” as the top rating.
6. Similar-to-me bias
Definition. Similar-to-me bias is the inclination to give a higher rating to people with similar interests, skills, and backgrounds as the person doing the rating.
Simply put, we tend to like people who are like us. In addition to making performance reviews tricky, similar-to-me bias can make your workplace feel less inclusive and may even affect how diverse the overall makeup of the organization is.
Example of similar-to-me bias. Imagine a manager who attended, and loved, a top-ranked school. When conducting a performance review for someone who went to the same school, the manager may rate them higher because of their inflated impression of the school and the people who graduated from it.
Prevention strategy. Reduce the effect of similar-to-me bias by requiring specificity in managers’ assessments. In three separate studies, Yale researchers found that when evaluators commit to the assessment criteria before making the evaluation, they are less likely to rely on stereotypes, and their assessments are less biased.
7. Idiosyncratic rater bias
Definition. Idiosyncratic rater bias occurs when managers rate others highly on skills they themselves lack - and, conversely, rate others lower on skills they themselves excel at. In other words, managers’ evaluations are skewed toward their own personal idiosyncrasies.
In fact, one of the largest studies on feedback found that more than half of the variance associated with ratings had more to do with the quirks of the person giving the rating than the person being rated. Rater bias was the biggest predictor, holding more weight than actual performance, the performance dimension being rated, the rater’s perspective, and even measurement error.
For that reason, idiosyncratic rater bias presents a huge problem in performance data because the score given often tells us more about the rater than the person being rated.
Example of idiosyncratic rater bias. Let’s say there’s a manager who excels at project management but knows very little about computer programming. As a result, she unknowingly gives higher marks to those who are good at computer programming and lower marks to those who are good at project management or other skills that are similar to her own.
Why? Because the manager is good at project management, she’s more likely to have higher standards for this skill and compare the employee to herself. On the other hand, since she’s unfamiliar with computer programming, she’s less familiar with the standards for performance and is more likely to be lenient. In other words, her feedback reflects more on her own skills than her employee’s.
Prevention strategy. It’s not easy for people to rate others on things like “lateral and strategic thinking” (whatever that means). But, as one researcher put it: “People might not be reliable raters of others, but they are reliable raters of their own intentions.” So consider rewriting some of your performance questions to ask raters about their own intentions and decisions regarding the person, as in the examples below.
Here are some examples from the Culture Amp platform:
- I would always want this person on my team
- I would award this person the highest possible compensation increase and bonus
- I would hire this person again
- This person is ready for promotion today
- If this person resigned, I would do anything to retain him/her/them
8. Confirmation bias
Definition. Confirmation bias is the tendency to search for or interpret new information in a way that confirms a person’s preexisting beliefs. It is similar to primacy bias but tends to run much deeper.
On one level, confirmation bias makes it easier to believe people who align with you on specific facts, beliefs, or stances. It’s also why you’re more likely to be skeptical of people who disagree with you. While this is a normal human tendency, it can skew the interpretation of valuable performance data.
Have you ever had a question about something and gone to the internet to search for the answer? If you’re like most people, your search terms will probably pull up web pages that confirm your existing beliefs. For instance, if you love beans and want to prevent cancer, you might Google “beans help fight cancer.” But, on the other hand, if you can’t stand beans, you might search for “beans cause cancer.” Sure enough, you will find millions of results for both searches. Similarly, if you initially think someone might be a bad apple, you are much more likely to seek out (and find) information that confirms your initial suspicion.
Example of confirmation bias. Imagine an employee who is highly productive, technically skilled, and a pleasure to work with. The manager of that employee may receive feedback that supports these beliefs, which they’re going to believe. However, when that manager receives feedback contrary to their beliefs about the employee, they may discount or even ignore that valuable information.
Prevention strategy. To overcome confirmation bias, think like a scientist. When researchers ask questions, they form hypotheses and then try to disconfirm rather than confirm their initial beliefs. Every time you have an impression about someone, go out and seek evidence that they are the opposite of, or entirely different from, what you suspect. When collecting feedback from others, pay close attention to the feedback that goes against your beliefs.
9. Gender bias
Definition. When giving feedback, individuals tend to focus more on the personality and attitudes of women and feminine-presenting individuals. Conversely, they focus more on the behaviours and accomplishments of men and masculine-presenting individuals.
Priya Sundararajan, Culture Amp’s Senior Data Scientist, reviewed 25,000 peer feedback statements across a performance cycle of nearly 1,500 employees. She discovered the following:
For male employees, peer feedback provided by both male and female reviewers tends to focus equally on work- and personality-phrasing (for example, “Nick should gain more technical expertise in nonparametric ML models”).
Female employees, on the other hand, are nearly 1.4x more likely to receive personality phrases from male reviewers (such as “Sue is a great team player and very easy to work with”) and less likely to receive work-related phrases.
Gender biases like these widen the gap in growth and promotion opportunities and contribute to the gender pay gap.
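To make the finding above more concrete, here is a toy sketch of how an analyst might start tagging feedback statements as personality-focused or work-focused. The keyword lists and function name are illustrative assumptions only - the study described above would have used a far more rigorous method than keyword matching.

```python
# Hypothetical keyword lists - a real analysis would use a much richer language model
PERSONALITY_TERMS = {"team player", "easy to work with", "attitude", "friendly", "difficult"}
WORK_TERMS = {"technical", "delivered", "project", "deadline", "expertise", "results"}

def tag_feedback(statement):
    """Roughly tag one feedback statement as personality-focused, work-focused, or mixed."""
    text = statement.lower()
    personality_hits = sum(term in text for term in PERSONALITY_TERMS)
    work_hits = sum(term in text for term in WORK_TERMS)
    if personality_hits > work_hits:
        return "personality"
    if work_hits > personality_hits:
        return "work"
    return "mixed"

# The two example statements quoted above
print(tag_feedback("Nick should gain more technical expertise in nonparametric ML models"))  # work
print(tag_feedback("Sue is a great team player and very easy to work with"))                 # personality
```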
Example of gender bias. Imagine there are two employees - Nick and Susan - who are up for promotions. They’re both highly qualified, have similar years of experience, and received many positive accolades. They also received constructive feedback from their managers that needs to be taken into account for the promotion:
“Nick could work on his technical expertise.”
“Susan is challenging to work with.”
As you can see, Nick’s feedback is based on his skill set - which can easily be improved with the right guidance and training. On the other hand, Susan’s feedback is based on her work style. This not only raises doubts about her personality but also seems like something that can’t be “fixed.” As a result of this feedback, Nick gets the promotion, and Susan doesn’t. This situation is all too common and contributes to the gender pay gap and unequal growth opportunities experienced by women.
Prevention strategy. Sometimes, unstructured feedback allows bias to creep in. Without some set criteria, people will likely reshape the criteria for success in their own image. As Stanford researchers have put it, the big takeaway is that open boxes on feedback forms make feedback open to bias. That’s why it helps to take a “mad libs” approach to feedback.
Help raters by giving them a format and then allowing them to fill in the blanks. Additionally, nudge managers into specifically talking about situations, behaviors, and impacts rather than personality or style.
Quick but important reminder: Gender biases can also have a huge impact on the experiences and assessments of nonbinary and/or transgender folks. Although these biases may manifest in slightly different ways, it’s just as important to stay alert to them.
10. Law of small numbers bias
Definition. The law of small numbers bias is the incorrect belief that a small sample closely shares the properties of the underlying population.
Example of the law of small numbers bias. Imagine a stellar team full of top performers, with one person doing the work of four others. Naturally, you rate that person higher than the rest and the others a bit lower. However, it turns out that even the lowest performer on your team is among the best in the whole company. So, when it comes time to look at performance company-wide, it appears as if your team is about average, even though they’re all exceptional.
Prevention strategy. Talent calibrations are key to overcoming this bias. Calibration is when all reviews and ratings are looked at holistically to ensure that your definition of “above average” is similar to everyone else’s definition of “above average.” This helps guarantee that people at the organization speak the same language and use the same nomenclature when conducting performance reviews.
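Calibration is ultimately a conversation between reviewers, but a simple numeric aid can help start it. The sketch below is a minimal illustration, assuming raw ratings are stored per manager in a dictionary; re-expressing each manager’s scores as z-scores within that manager puts lenient and strict raters on a roughly comparable footing. The function name and the example data are assumptions for illustration, not a prescribed method.

```python
from statistics import mean, stdev

def normalize_by_rater(ratings_by_manager):
    """Re-express each manager's ratings as z-scores within that manager.

    ratings_by_manager: dict of manager name -> {employee: raw rating}.
    A positive z-score means "above that manager's own average", which makes
    lenient and strict raters easier to compare side by side.
    """
    calibrated = {}
    for manager, ratings in ratings_by_manager.items():
        values = list(ratings.values())
        mu = mean(values)
        sigma = stdev(values) if len(values) > 1 else 1.0
        sigma = sigma or 1.0  # guard against a manager who gave identical scores
        calibrated[manager] = {
            employee: round((score - mu) / sigma, 2)
            for employee, score in ratings.items()
        }
    return calibrated

# A lenient and a strict manager whose raw scores aren't directly comparable
ratings = {
    "lenient_manager": {"Alex": 4, "Jamie": 5, "Sam": 4},
    "strict_manager": {"Priya": 2, "Nick": 3, "Sue": 2},
}
print(normalize_by_rater(ratings))
```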
Recognizing and overcoming our biases
Unfortunately, we’re not that good at knowing our own biases. In fact, research has suggested that the more help you need in this area, the harder it is to recognize that you need help. People underestimate their own bias, and the most biased among us underestimate it the most.
So, one step is to check yourself through some unbiased means. One method that researchers at the University of Washington, University of Virginia, Harvard University, and Yale University have used is the Implicit Association Test, which is freely available to everyone. Fair warning, though: you might not be comfortable with the results or agree with them, but that’s probably just your bias talking.
Next, give yourself permission to be human and recognize the limits of your own understanding. Just being aware of your biases will not, in and of itself, enable you to overcome them. This doesn’t mean that we should ignore our biases or give in to them. Instead, we need to set up systems, processes, procedures, and even technology that enable us to make better decisions. Ask for help. Get feedback from others. Set firm criteria and be consistent. Most of all, keep an open mind.