Friday, June 29, 2007

Evaluating a CAO's performance?

Since JBU is starting a new faculty evaluation system predicated in large part on what the faculty said was important in their work, I've been thinking about what's important in my own work as the CAO of a small, Christian, liberal arts university. Attending a recent department chair conference at Gordon College, reading through a bunch of books on organizations, and preparing for an assessment conference have me contemplating similar ideas.

Most of the material I've been exposed to emphasizes outcomes over inputs, responsibilities over job descriptions, real performance over rhetoric, and measuring as much as possible. To my mind, that means it should matter less what percentage of our faculty hold terminal degrees, what our faculty-student ratio is, what our average ACT scores are, what our mission statements say, how our annual reports are structured, and so on. What counts is whether any of these things actually affects performance.

So what are some of the performance areas for which a CAO should feel responsibility? I've listed these in my own priority order, though I'd welcome feedback from others.

1) Student learning in the classroom, most likely as measured by our overall IDEA student evaluation index. Our current undergraduate index incorporates the overall rating, the difficulty factor, and the integration factor for all of the courses we evaluate each year using the IDEA forms. Under the new system, we'll be able to produce an index for Grad and Professional Studies as well, and all of these indexes will be more accurate because more courses will be evaluated. Since this combined rating balances all three aspects of our teaching evals (overall, difficulty, and integration), it's probably the best single indicator on which I should be evaluated. Good teaching is, after all, our main "product."
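
To make the idea concrete, here's a toy sketch of how a combined index like this might be computed. The equal weights, the scale, and the sample numbers are all placeholders I've made up for illustration; the actual IDEA index may blend the three factors quite differently.

```python
# Toy sketch of a combined course-evaluation index. The 1-5 scale, the
# equal weights, and the sample numbers are assumptions for illustration,
# not the actual IDEA methodology.

# Per-course IDEA results: (overall rating, difficulty factor, integration factor)
courses = [
    (4.2, 3.1, 4.0),
    (3.8, 3.5, 3.6),
    (4.5, 2.9, 4.3),
]

# Assumed weights -- equal here, but any scheme that "balances all three
# aspects" would fit the description above.
W_OVERALL = W_DIFFICULTY = W_INTEGRATION = 1 / 3

def course_index(overall, difficulty, integration):
    """Weighted blend of the three IDEA factors for a single course."""
    return (W_OVERALL * overall
            + W_DIFFICULTY * difficulty
            + W_INTEGRATION * integration)

# Institutional index: the average across all evaluated courses. Evaluating
# more courses (as planned) makes this average less noisy.
overall_index = sum(course_index(*c) for c in courses) / len(courses)
print(f"Combined IDEA index: {overall_index:.2f}")
```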

2) Development of knowledge, critical thinking, maturity, etc., as demonstrated by results on the CLA (if we started using that instrument), the SRA, various pre-post tests (if we wanted to do more with that mechanism), various exit/entrance exams (MFAT, GMAT, GRE, etc.), success in various competitions, admission to grad schools, and so on. This area probably needs the most work (something Rob Norwood has also noted), but it's probably one of the most important in terms of "outcomes." If I had to pick just one index for this area, from what I know of it, I'd probably pick the CLA, so perhaps we need to investigate further on moving to that system at JBU. But the CLA doesn't apply to G&PS. A pre-post test system would work better in those contexts, but there aren't any ready-made ones, so I've asked Rick Ostrander to pilot a pre-post test concept in Gateway as a possible way to start addressing this area. I'm not quite sure how we'd use the SRA, but since it's in-house, perhaps we should explore that one a bit as well.

3) Completion of programs as measured by actual graduation rates compared to expected graduation rates. My understanding is that we could develop something along these lines (Washington Monthly uses exactly this calculation as one of the main indicators in its rankings system), but I've deferred to others on the details. My other problem is that I'm not sure whether this topic falls within my bailiwick or Student Development's. Maybe it's a number we "co-own"?
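
As a sketch of how the actual-versus-expected comparison works: an "expected" graduation rate is predicted from the incoming-class profile, and the performance measure is the gap between the actual rate and that prediction. The toy model below is entirely invented; Washington Monthly fits its prediction to real peer-institution data, whereas the predictors and coefficients here are placeholders.

```python
# Toy version of the actual-vs-expected graduation-rate comparison.
# The linear model and all numbers are invented for illustration; a real
# version would be fit to data from comparable institutions.

def expected_grad_rate(avg_act, pct_pell):
    """Toy linear model: predicted six-year graduation rate (0-100)."""
    return 10.0 + 2.5 * avg_act - 0.3 * pct_pell

actual_rate = 62.0  # hypothetical six-year graduation rate
predicted = expected_grad_rate(avg_act=24.0, pct_pell=30.0)

# A positive gap means we graduate more students than the incoming-class
# profile predicts -- that gap is the performance measure.
gap = actual_rate - predicted
print(f"Expected {predicted:.1f}%, actual {actual_rate:.1f}%, gap {gap:+.1f} points")
```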

4) Extraordinary faculty achievement, most likely measured via our overall scholarship index for "scholarship" and via the service component of the faculty evaluation system for "service." There are lots of weaknesses in these data, especially since there isn't any external validation of excellence. There are citation indexes that R-1 institutions use to measure the relative value of scholarship, but from what I've heard, I don't see how those instruments could be used in our context. It might be worth some exploration, however.

5) Constituent satisfaction, as measured by the faculty climate survey for faculty; by the NSSE, the SSI, and alumni surveys for students; and probably by some kind of personnel evaluation for anyone else. If I had to pick just one, I would pick the faculty climate survey: student satisfaction with academics is mostly covered by the IDEA evals, the alumni survey is not very reliable, and the SSI and NSSE deal with a lot of issues besides academic ones. Furthermore, the faculty are my main constituency after the President (and cabinet). But if "faculty climate" isn't enough of an "outcome," I could be persuaded to pick just the NSSE, or maybe some key questions within it, as the way to track constituent satisfaction with academics at JBU. We'd need something similar for G&PS, which we're apparently considering.

That's about as far as I've gotten at this point. I've asked our assessment and IR people to help put together reports on #1, #3, and #4, and Rick Froman to put together a report on the faculty climate survey. Once I see that data, I can start creating my own CAO "performance weighting" along the lines of what we now have for faculty in the evaluation system and for divisions in the ancillary budget process. That would give us a "balanced scorecard" (to use business lingo) for all academic areas except the academic staff.
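
For what it's worth, here's a minimal sketch of what that "performance weighting" might look like once the reports come in: weight each of the five areas by priority, normalize each measure to a common scale, and roll everything up into one scorecard number. The weights and scores below are placeholders, not actual JBU figures.

```python
# Minimal "balanced scorecard" sketch. All weights and scores are
# placeholders; the real weighting would come out of the process above.

weights = {
    "student learning (IDEA index)":      0.30,
    "knowledge development (CLA etc.)":   0.25,
    "program completion (grad rates)":    0.20,
    "faculty achievement (scholarship)":  0.15,
    "constituent satisfaction (climate)": 0.10,
}

scores = {  # each area's measure, already normalized to a 0-1 scale
    "student learning (IDEA index)":      0.78,
    "knowledge development (CLA etc.)":   0.65,
    "program completion (grad rates)":    0.72,
    "faculty achievement (scholarship)":  0.60,
    "constituent satisfaction (climate)": 0.81,
}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights should total 100%

scorecard = sum(w * scores[area] for area, w in weights.items())
print(f"Overall CAO scorecard: {scorecard:.2f}")
```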