Approach I—Creating Awareness

The first type of workshop includes three components: presenting the feedback, coaching group activities, and preparing for a sharing and clarifying meeting.

Presenting the Feedback

This segment of the workshop begins with a brief overview of the research and model upon which the questionnaire is based. Participants learn that they will receive information on how important each practice is to the effective performance of their jobs (as reported by their bosses), how frequently they currently use each practice (as reported by direct reports, colleagues, and their bosses), and how frequently people would like them to use the practice.
Before participants receive their individual feedback reports, they are given a few key pointers for getting the most out of their feedback. These include:
 
Pay Attention to Your First Impression of the Data. We typically have an immediate reaction to the data we see. They either make us feel great (“Hey, I didn’t know I was doing that!”) or terrible (“I can’t believe people see me that way!”). There is no real reason to fight the feeling, but we do need to move through it in order to see the data for what they represent.
 
Focus on the Messages, Not Just the Measures. Recipients must look at the data in the context of their jobs—the nature of their work, the goals they are trying to achieve, and the skill and experience of their team members. High ratings are not always good, and low ratings are not always bad. For example, low frequency ratings in “monitoring” may not be bad news for someone working with an experienced team in a reasonably stable environment. However, high frequency ratings in the same situation may raise concerns about micromanaging or a lack of confidence in others’ abilities. Also, rather than stopping at the averages or the numbers per se, recipients should look at the relative highs and lows, as well as patterns that appear either within or between the scales.
 
Appreciate the Perceptions of Others. Recipients may believe that other people’s perceptions of them are incorrect—that their raters understand neither their jobs nor the demands and constraints they work under—and they may even be right. Unfortunately, in this case, being right is not worth much. Ratings reflect the manager’s effect on others, not his or her intent. The questions recipients need to ask themselves, therefore, are, “What am I doing that causes people to see me differently than I see myself?” or “What is going on in the organization that could affect people’s perceptions of me?”
 
Importance Feedback. The feedback is presented in blocks of information, which allows people time to process and assimilate the data. The first block of data is the importance feedback (see Exhibits 7.2 and 7.3). Ideally, a report should provide two views of this information: how the recipient and the boss rated each practice on a scale of 1 (not important) to 5 (absolutely essential), and the four practices the recipient and the boss identified as most essential for effective performance on the job.
As people review this part of the report, we ask them to consider the following questions to guide them in their analysis and interpretation:
• Which practices do you and your boss agree are the most important to the effective performance of your job? On which practices do you disagree?
• What issues need to be clarified or discussed with your boss?
• Based on this information, which practices would you say are most critical for the effective performance of your job?
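The self-versus-boss comparison these questions call for can be sketched as a short Python routine. The practice names, ratings, and the two-point gap used to flag items for discussion are illustrative assumptions, not part of the instrument.

```python
# Hypothetical importance ratings on the 1 (not important) to
# 5 (absolutely essential) scale, from the recipient and the boss.
self_ratings = {"coaching": 5, "monitoring": 2, "delegating": 4, "recognizing": 3}
boss_ratings = {"coaching": 5, "monitoring": 4, "delegating": 4, "recognizing": 2}

# Practices on which the two views agree exactly.
agree = sorted(p for p in self_ratings if self_ratings[p] == boss_ratings[p])

# Practices where the views diverge enough (assumed gap of 2 or more points)
# to warrant clarification with the boss.
discuss = sorted(p for p in self_ratings
                 if abs(self_ratings[p] - boss_ratings[p]) >= 2)

print("agreement:", agree)
print("to clarify with boss:", discuss)
```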
 
Feedback on Frequency of Use. When recipients analyze the data on how frequently raters perceive that they are using specific practices and behaviors, they also have an opportunity to compare their own self-ratings with the ratings of their evaluators. The report (see Exhibit 7.4) should display the data using average scores, frequency distributions, and comparisons to national or industry norms (percentile scores).5
People are then asked to review this information, using the following questions as guides:
• Whom did you ask to provide you with feedback? Are they in a position to observe and evaluate your performance? To what extent do you depend on them to get work done? How important are these relationships?
Exhibit 7.2 Importance Ratings
Exhibit 7.3 Importance Ratings—Most Important Practices
• How consistent are the responses across rater groups (boss, colleagues, direct reports)? How consistent are the responses within each rater group?
Exhibit 7.4 Scale Scores and Frequency Distribution
• How consistent should they be? Are you trying to treat each rater the same, or have you been working with each rater differently based on his or her needs and the situation?
• What patterns emerge within each scale? Are there any patterns across the scales?
• To what extent do you agree with the opinions of those who completed the questionnaire on you?
• How do you compare with the database?
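The three summaries the report derives from raw frequency ratings—average scores, frequency distributions, and a percentile comparison to a norm database—can be sketched as follows. The ratings, rater groups, and norm values here are hypothetical.

```python
from statistics import mean
from bisect import bisect_left

# Hypothetical frequency-of-use ratings (1 = rarely ... 5 = very frequently)
# for one practice, grouped by rater type.
ratings = {
    "boss": [4],
    "colleagues": [3, 4, 2],
    "direct_reports": [5, 4, 4, 3],
}

# Hypothetical sorted norm database of average scores for the same practice.
norms = sorted([2.1, 2.8, 3.0, 3.2, 3.4, 3.5, 3.7, 3.9, 4.1, 4.4])

all_scores = [s for group in ratings.values() for s in group]
avg = mean(all_scores)

# Frequency distribution: how many raters chose each point on the scale.
distribution = {point: all_scores.count(point) for point in range(1, 6)}

# Percentile: share of the norm database falling below this average.
percentile = 100 * bisect_left(norms, avg) / len(norms)

print(f"average: {avg:.2f}")
print(f"distribution: {distribution}")
print(f"percentile vs. database: {percentile:.0f}")
```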
Recommendations. The third kind of information that should be provided consists of recommendations: how frequently people feel the recipient should use each practice in order to be most effective (Exhibit 7.5).6 Here, managers have the opportunity to learn how many of their raters want them to use a particular practice less, more, or as often as they currently do. Coupled with the information on the importance and the frequency of use of the practice, this helps them zero in on strengths and weaknesses.
Exhibit 7.5 Recommendations
The following questions are provided as guidelines for interpretation:
• What are your strengths (high frequency of use, top quartile compared to database, the majority of raters recommending using the practice as much as you now do)?
• What areas require further development (low frequency of use, bottom quartile compared to database, the majority of raters recommending using it more or less)?
• How does this feedback fit with feedback you’ve received before? What surprises did you receive? What was confirmed?
• What have people recommended you do more of, do less of, or keep doing as you do now to improve your effectiveness?
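The rules of thumb in the questions above can be sketched as a small classification routine. The practice scores, the quartile coding, and the numeric cutoffs for “high” and “low” frequency of use are illustrative assumptions, not part of the questionnaire.

```python
# Illustrative per-practice summary: average frequency score, quartile versus
# the norm database (1 = bottom ... 4 = top), and the majority rater
# recommendation ("more", "same", or "less"). All values are hypothetical.
practices = {
    "coaching":   {"avg": 4.4, "quartile": 4, "recommendation": "same"},
    "monitoring": {"avg": 2.1, "quartile": 1, "recommendation": "more"},
    "delegating": {"avg": 3.2, "quartile": 2, "recommendation": "less"},
}

HIGH, LOW = 4.0, 2.5  # assumed cutoffs for "high" and "low" frequency of use

def classify(p):
    """Apply the workshop's rules of thumb for strengths and development areas."""
    # Strength: high use, top quartile, and most raters say "keep doing it."
    if p["avg"] >= HIGH and p["quartile"] == 4 and p["recommendation"] == "same":
        return "strength"
    # Development area: low use, bottom quartile, or most raters want a change.
    if p["avg"] <= LOW or p["quartile"] == 1 or p["recommendation"] in ("more", "less"):
        return "development area"
    return "needs clarification"

for name, data in practices.items():
    print(f"{name}: {classify(data)}")
```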

Coaching Group Activities

After recipients have taken an initial pass at analyzing their feedback, the coaching group exercise provides an opportunity to discuss specific skills with other workshop participants. In our experience, the most effective format is one that includes a structured, small-group discussion and a development guide for each practice—an easy-to-read supplementary set of materials that includes additional information about the practice, suggestions for when to use it more or less, and tips and pointers for using it more effectively on the job.

Preparing for a Sharing and Clarifying Meeting

The last component of the workshop we designed provides people with tools and techniques for finalizing development targets. This segment, although begun during the workshop, provides the foundation on which to build follow-up activities that help clarify feedback messages and ensure meaningful action back on the job. During the consolidation process, people are asked to isolate key strengths, weaknesses, and areas that need clarification.
As noted before, it is crucial to focus on strengths as well as weaknesses during this activity. The consolidated data become the basis for the sharing and clarifying meeting. We recommend that, before development targets are finalized, recipients meet with their raters to confirm their findings, clarify messages that were confusing, and hear suggestions for actions that would improve effectiveness. Pointers on holding effective meetings with raters should be offered during the workshop. (Advice on consolidating data and on meeting with feedback providers is offered in the next chapter.)