SEC Performance Ratings Reflect Management View that SEC Managers Significantly Outperform Frontline Staff

02/11/2013

2/12/13: This week the SEC released its FY 2012 performance management system results. The numbers reveal a sharp disparity between how SEC managers rate their own performance and how they rate the performance of their subordinates. The union has fielded a number of inquiries from staff across the nation about this odd and unexpected trend. In a credible system, achieving a particular rating should be no more difficult for one group of employees than for any other. However, the SEC's own data suggest that under its so-called "evidence based" performance management system, this is not the case. If you are a non-management employee, it is much more difficult to earn a high rating.

Under the rating system at the SEC, employees can receive one of five possible ratings:

1: Unacceptable

2: Needs Improvement

3: Meets Expectations

4: Exceeds Expectations

5: Greatly Exceeds Expectations

When the performance management system is ultimately linked to pay, only employees who receive a 3, 4 or 5 rating will receive merit raises. These 3, 4 and 5 ratings roughly correspond to grades of C, B or A, respectively. If an employee receives a 2, it is the functional equivalent of getting a D.

According to the numbers, SEC managers gave themselves high marks. Overall, 72% received a B or an A, and only 25% received a C. In some offices the numbers were more striking. In the ARO, for example, 94% of managers received a B or an A. In the BRO, 96% of managers received a B or an A. Approximately 80% of the managers in OCIE, Enforcement, Corp Fin, Trading and Markets, Chicago, Denver, Fort Worth, New York, Los Angeles, Philadelphia, San Francisco, OEC, OFM, OIA and OS received a B or an A. And, as previously reported, this year SEC managers gave themselves merit pay based on these high ratings, over the objection of the union.

At the same time, however, these same SEC managers used the same rating system to give non-managers much lower scores. Overall, less than half of the SEC's frontline staff received a B or an A. Shockingly, the majority, 54%, received a C or a D. Non-managers were almost three times less likely than their managers to receive an A, and twice as likely to receive a C.

In some offices, the disparities were particularly odd. In Atlanta, for example, 44% of the managers received an A, but only 3% of the frontline staff received an A. In Boston, managers rated 92% of their fellow managers as a B, but only 51% of the frontline staff who work for them received a B. Similarly, in New York, 77% of the managers were rated as doing B-level work, but only 36% of the non-managers received a B. Similar trends appear in almost every office and division of the agency.

Such wide disparities in results raise further questions about the fairness and credibility of SEC management’s so-called “evidence based” performance management system. Managers hire the frontline employees, they assign them their work, and they are in charge of their training and professional advancement opportunities. Most importantly, the managers manage the actual work of the frontline staff – the Enforcement investigations, the OCIE examinations, the Corp Fin reviews, etc., that the frontline staff perform every day. They are responsible for getting the most out of their staff. How is it that the managers seem to be excelling at “managing” all of this work, and presumably also at ensuring that it is top-quality work deserving a B or an A, yet somehow the majority of the employees they manage are C-level performers at best?

Furthermore, how is it that so many managers can be so highly rated, and yet the SEC’s Federal Employee Viewpoint Survey results are among the lowest of any federal agency in the nation? Doesn’t it seem that there should be considerable room for improvement among SEC managers, when the levels of engagement, motivation and job satisfaction among the employees they manage are among the lowest in the federal sector? It seems illogical that an office such as the LARO, for example, which ranks near the bottom of the SEC in FEVS scores, could nevertheless reward over 80% of its managers with a B or an A. The lack of accountability reflected in these results calls into question the SEC's basic approach to managing its workforce.

These disparate results reflect the highly subjective, discretionary nature of the SEC’s performance management system. With such a subjective evaluation tool, one would expect the ratings to reflect the biases of the raters, and that appears to be happening.

Several weeks ago, the union requested demographic information from the SEC about the FY 2012 ratings, including race, gender, age, position and grade. We will publish that data when it is provided to us.