Selecting Raters

As noted above, research consistently indicates that the source of the feedback is often the most important factor in determining whether the recipient accepts or rejects it. The critical characteristic of a reliable source is credibility, which rests on two key attributes—expertise and trustworthiness. The recipient must believe the rater is familiar enough with his or her role, work, and performance to make an accurate judgment. The recipient must also trust the source’s motives—is the feedback intended to be constructive, or does the rater have an ax to grind? Are the recipient and the rater in some sense competitors? In general, the more a recipient believes in raters’ credibility, the more likely it is that he or she will accept the feedback and use it to change.1
For that reason, involving people in the decision about who provides feedback and spending time helping them make the best choices will have a tremendous payoff later on. As one manager said after he looked at his feedback report, “If I had known how useful this was going to be, I would have been much more careful about choosing people to complete the questionnaires.”

Guidelines for Selecting Raters

To aid participants in making the best possible choice of raters, explain that they should focus on potential raters’ history and experience with them and offer the following guidelines:
• Has this person worked with you long enough to have observed you in a variety of situations?
• Do you depend on this person to get work done now?
• Will you feel comfortable discussing your key learnings from the feedback with this person—will he or she be willing to engage in honest, reflective conversation about it?
• Does this person understand the nature of your work and the challenges and opportunities you face?
 
Suggest that they narrow the pool of raters by selecting people
• From various rater groups with a variety of perspectives (for example, colleagues, boss, direct reports, internal customers, external customers). They should choose no fewer than three (except for their boss) from each group to ensure anonymity.
• With whom they have a range of relationships: some with whom they get along well and some with whom they don’t get along so well.
 
While many organizations allow feedback recipients to choose their own raters, some prefer to pre-select respondents to ensure an unbiased and representative distribution. These organizations are skeptical of self-selection, suspecting that people might manipulate their profiles by choosing raters they know are favorably disposed toward them, and they question whether the resulting ratings would be consistent with supervisors’ views of those people.
Pre-selection of respondents can, however, make participants feel that they have less control over the process, which decreases their commitment and the likelihood that they will accept their feedback and feel motivated to change. While allowing people to choose their own respondents can sometimes mean that participants will distribute questionnaires only to their friends, if the feedback is confidential, even friends will usually provide honest responses that indicate where improvement is needed.
As it turns out, there is evidence that supports this point of view. During the 1980s, a consulting firm conducted a series of validity studies for Disney that attempted to determine the fairness, accuracy, rater bias, and popularity of the 360-degree feedback process that had been used at such Disney properties as Disneyland, Disney World, Corporate, Disney University, EPCOT, WED, and Disney Studios.
At the time that the viability and fairness studies were being conducted at EPCOT, the general manager there was a Ph.D. statistician. He was unconvinced by others’ perceptions of the fairness of the process and argued that its validity had to be compromised by cast members’ involvement in selecting their own evaluators.
Based on this concern, the consulting firm conducted another multi-source assessment project at EPCOT, specifically designed to test the general manager’s hypothesis. A total of twenty-two cast members selected two different evaluation teams, one composed of their friends and the other of “grumpies.” The idea was to see how much the two sets of raters varied in their evaluations.
To the surprise of the skeptics, the variations in the two sets of profiles that emerged were remarkably small. Only two profiles showed more than a 7 percent difference between one evaluation team and the other. The clear indication was that the 360 process was sufficiently robust to allow feedback recipients to be involved in respondent selection without skewing the results.2

How Many Sources of Feedback Are Enough?

Regardless of who selects the raters, we recommend setting minimum and maximum limits for how many are chosen. These limits depend on the importance of rater anonymity to the organization and on the kind of time and resources it is willing to devote to administering the process. In most companies, a minimum of three people from any single rater group (such as direct reports or colleagues) and a maximum of ten feedback givers is ideal. With this minimum, individual raters’ responses will be more difficult to identify than if only one or two raters were allowed; it also ensures an adequate sample size. Setting limits on the maximum number of raters decreases the likelihood that any one individual will be hit with multiple requests for feedback, which can result in low commitment and less accurate data.

Who Should Provide Feedback?

In addition to the individual’s self-ratings, 360-degree feedback usually includes data from a manager’s boss, peers, and direct reports, with customers inside and outside the business sometimes asked to participate. A general rule is that data should be collected from people on whom we depend to get our work done or people who depend on us to get their work done. For our 2008 survey, when we asked HR professionals “from whom do participants collect data” and to check all that apply, 69 percent checked “self,” 74 percent checked “direct reports,” 66 percent checked “colleagues/peers,” and 55 percent checked “boss.” Where “enhanced” 360-degree feedback is the chosen approach, data are gathered not only from these sources but also from the person’s family members and friends, psychological profiles, early work history, and childhood experiences.3
Customer feedback is widely regarded as extremely valuable and powerful, since the customer’s perceptions and expectations are of key importance for many organizations. However, only 33 percent of respondents in our survey indicated that “external customers and partners” were included as raters in their 360 feedback process. This is not too surprising given that eliciting customer feedback can lead to a number of administrative problems. For example, since the behavior an external customer has a chance to observe may be different and more limited than that seen by the person’s immediate co-workers, different questions are needed, and the resulting data have to be compiled and presented separately. There is also a risk that the customer will perceive requests for feedback as onerous, time-consuming, and disruptive to his or her business, especially if the request is not made tactfully, or the benefits to the customer are not made clear. And involving the customer requires the feedback recipient to be diligent about following up on next steps and relevant joint actions.4
We were once asked to design a data-gathering process for a senior manager that included bosses, direct reports, peers, and customers. We used a similar tool for the bosses, direct reports, and peers, but completely customized the data-gathering format for customers. Because the overall purpose was to identify ways for the manager to improve his strategic impact on the business, we were able to focus on his strategy development and team leadership internally and on how he communicated and executed that strategy externally. In addition to clarifying what he was doing effectively and what required change inside the business, he was able to work in partnership with his customers to improve relationships and establish stronger ties between their respective organizations. One customer described the process as the best form of customer engagement he had ever experienced.
Because raters will be providing feedback on what they have observed about a person’s behavior, their data will be more accurate and appropriate if they have worked with that person for a reasonably long time. Therefore, it is a good idea to require that each rater have worked with the recipient for at least four to five months.

Rater Anonymity

Organizations approach the issue of rater anonymity differently. Most work very hard to protect the anonymity of the people who provide feedback. In organizations with advanced, self-directed, team-based cultures, recipients of feedback may know which individuals have provided specific comments and ratings, but a guideline still prevents any negative feedback from making it into an appraisal unless the rater has previously given that feedback to the recipient directly. This type of open system, however, requires a high degree of maturity and mutual trust on the part of all participants and is likely to succeed only in the most advanced team-based organizations. We do not recommend it to organizations using 360-degree feedback for the first time.