Ask An Expert
Q: Our leadership development consulting firm is part of a year-long initiative working with about 100 of our client’s top leaders. The kick-off included The Leadership Challenge® Workshop and completion of the LPI®: Leadership Practices Inventory®, with the plan to retake the LPI at the 12-month mark. We have conducted the initial LPI debrief and coaching based on goals established up-front by the client. We also will be providing coaching after the follow-up assessment and want to be sure we effectively manage expectations of both the leaders and the broader organization regarding progress, as we’ve heard that ratings from the second assessment sometimes actually stay flat or even decline. If this phenomenon is something you see, can you help us understand why this might be the case and how we might go about explaining this to our client and its leaders?
A: It’s true that my co-author Jim Kouzes and I often caution coaches and others working with the LPI that leaders may, in fact, see their ratings stay the same or even dip slightly when comparing results from their first assessment to the next. We often find that over time leaders don’t recognize their behavioral changes to the same extent that others observe them, or that changes seem more incremental to oneself than to others. Proving significant change, empirically, is nearly impossible on an individual basis because LPI test-retest scores come from only a single person, while the observer test-retest scores come from numerous individuals. A basic psychometric principle is that the more data you have, the more reliable the measurement. In other words, it is difficult to describe a phenomenon with only a single data point, which is open to multiple interpretations, while it is easier to "see" a pattern when there are more data points.
Scores over time from observers are often of most interest, especially considering that the group of observers 12 months after the first administration is typically not made up of the exact same people. This suggests that the observers are not attempting to compare their responses to what they reported at time #1 but are simply describing what is happening at time #2, which results in a more objective (vs. subjective) perspective. Let me add one further consideration. In making these comparisons (time #1 vs. time #2) we don't often have a control group. That is, a group of similar respondents (leaders and/or observers) who have not had the "treatment" (some leadership development activity like the workshop and/or feedback on the LPI along with coaching). As a result, we don't have any empirical basis on which to say that the treatment made a difference (think here about how pharmaceutical companies test the effectiveness of a prescription drug vs. a placebo). Over time there can be many reasons for changes in behavior, and we want to know whether the treatment group changed more than the control group. Again, this is impossible to do at an individual level (one person vs. a group of people).
Here’s what this might look like, from a study we conducted about the impact of a year-long leadership course for business school students at one university. These students were compared with non-business school students at the university both before the course began and three years later. At time #1 there were no statistically significant differences between the two groups in their use of The Five Practices. However, at time #2, while both groups indicated using The Five Practices more frequently than they did before starting college, the average scores from the business group (treatment) were significantly higher than those reported by the non-business student group (control).
In my own experience with pre- and post-administrations of the LPI for developmental purposes, I've found that the treatment (leadership development experiences), in a classic medical sense, "did no harm" to the individuals. That is, leaders’ scores did not decrease and were generally the same or slightly higher in frequency. But what did change, often dramatically, were ratings from observers. Most of the observers at time #2 reported more frequent use of The Five Practices than the observers did at time #1 (and, again, no scores went down so dramatically as to suggest that folks were "harmed" by the treatment).
At an individual level, and as a coach, I don't use the LPI so much in a research or empirical fashion, as in asking, “Were there changes from time #1 to time #2 and how significant were they?” Rather, I suggest that the follow-up be more personal. For example: "You said you were going to do x more frequently with your direct reports at the next two staff meetings. Do that and I'll check in with them to see if they have noticed any change in your use of x. Then we'll talk about what they experienced and how you felt about this new behavior, and see where we need to go next."
Getting a physical exam annually is good practice, as is getting the oil changed in your car every 3,000–5,000 miles, regardless of how you felt a year ago or how many total miles your car has. In the same way, I believe it is always a good idea to check periodically (perhaps annually, or more often as appropriate) on how one is being perceived as engaging in these most important leadership behaviors—knowing that it is generally the case that the more frequently one engages in The Five Practices, the more effective (productive, successful, etc.) that individual will be.
Barry Posner is the Accolti Endowed Professor of Leadership at the Leavey School of Business, Santa Clara University, where he served as Dean for 12 years. Together with Jim Kouzes, he is the author of The Leadership Challenge—now in its fifth edition—and more than thirty other books and workbooks on leadership and leadership development.