Tuesday, August 31, 2010

Review of "Who: The 'A' Method for Hiring"

The basic argument of the book is the typical social science critique, the one that shows up in particular in Dan Ariely’s “Predictably Irrational,” that we are much less rational and much less capable than we typically think. When it comes to hiring, for example, we say that “personnel is the key,” but in practice we mostly ignore that advice: we wait until there’s an opening, put out a “standard” advertisement that attracts only a few decent candidates, and then run those candidates through a mostly hit-or-miss screening and interview process predicated on a few people’s personal judgments. Because we’ve run a “process,” and because we trust our own personal judgments much more than we should, we fail in the hiring process more often than we would like to admit.

To give a JBU example, I personally would like to think that we have a fairly rigorous screening and evaluation process for faculty candidates at JBU (more so than we do for academic staff, for example), and I would like to say that we’ve done a pretty good job in hiring people in the 8 years I’ve been in this role. And it turns out that we are indeed following some (much?) of the advice in this book. Nevertheless, our success rate (defined as faculty who stay in their roles for more than 3 years and who score above our teaching and OCR goals) is at best 50%. That’s pretty much what all of the studies on hiring would indicate: the typical hiring process is roughly as good as flipping a coin (50% success). Sigh.

So what can we do better here at JBU? Here’s the advice from this book (with accommodations for the “business speak” which almost killed the book for me).

1) “Who” matters more than “what.” In theory, this is just another example of the “personnel is the key” argument, but in practice, very few of us really believe this (we talk about “mission,” for example, way more than we do “people”), and it’s taken me 20 years in higher education and 7 years in this role to take this idea really seriously and to begin to study the issues in more depth. The authors suggest that you should shoot for candidates “who have a 90% chance of achieving a set of outcomes that only the top 10% of possible candidates could achieve.” My bar has been lower than that, and I know that my bar has typically been higher than that of the faculty on the search committees (primarily because of the usual pressures to hire “someone” before the year begins).

2) So if we really wanted to aim higher in our hiring process (especially our faculty hiring process), what might we do differently? The authors suggest a 4-step approach: scorecard, source, select, and sell. I’ll take them each in turn, but to get away from the usual approach of waiting to “source” until the search has already begun, I’m reversing the order of the first two items and putting “source” first.

3) “Source” just means creating and maintaining networks. For example, the Bible and Business divisions probably do this better than the rest of us, and those two divisions have scored the best on our various measures over the last five years in part as a consequence. Again, we all know the theory here, but few of us probably do the kinds of things that this book suggests.
- How many of us keep a list of possible people we’d like to consider hiring someday and then actively keep in touch with those people?
- How many of us ask our colleagues at other institutions “who are the best people in your organization” and then keep notes for later reference?
- How many of us encourage our subordinates (with pay bonuses, for instance) to feed us the names of the people in their fields who we should keep an eye on for potential hires at some point?
- How many of us actively try to develop those who work in our areas (which also means saying “no” to some others) instead of just allowing the seniority system to take its normal course?
- How many of us actually reward (financially or otherwise) those employees or those colleagues at JBU or at other institutions who’ve given us a referral that has eventually panned out?
- How many of our units have created advisory boards, one of whose main purposes is to create connections (to students, to alums, AND to possible future employees)?
- How many of us know the “connectors” in our fields who “know everyone” and how many of us have been tapping those people for possible referrals?
In short, how many of us actually do the kinds of things that we know we should if we really thought that “personnel is the key”? My own personal take-away is that I need to create a better tracking system for potential recruits to JBU and to follow up more diligently in looking for these individuals. Whenever I hear a presenter I like or see someone interesting on a list of people who’ve attended a CCCU/CIC leadership event, I need to get that person on my list and follow up with them whenever possible. For example, there were a couple of people at the CCCU Forum last spring whom, in retrospect, I really should have said hello to and gotten to know better.

4) “Scorecards describe the mission for the position, outcomes that must be accomplished, and competencies that fit with both the culture of the company and the role.” As a mission driven institution with a fairly detailed evaluation system, we probably have some of these pieces in place, at least on the faculty end of things. We even have an actual scorecard that I fill out for each faculty candidate with various groups feeding into this system along the same lines as what we do with our formal evaluation system. That’s a relatively new effort on our part, but it’s very consistent with what this book suggests, i.e. to create a detailed set of quantifiable outcomes for each position that everyone has agreed on before the search begins and that each candidate will be judged on. I’ll be looking to see over the next few years whether our judgments during the search process turn out to be on the mark or not when people finish up their 3-year evaluations. From two years of data, I can already tell that our judgments during the search process tend to be higher than our judgments after someone’s been here awhile. With this book’s conclusions in mind, we might tighten up this faculty “scorecard” practice even more and, once we have a more consistent staff evaluation process in place, perhaps apply some of this thinking to this area as well. We’ll see.

5) “Select” basically refers to “winnowing the candidates that you have found through your sourcing process.” This is the section of the book that I learned the most from. The authors suggest a four-part “structured” interview process that seems to comport well with the arguments I’ve read in other books and articles on hiring.

- “Screening” interviews are short phone calls using a standard list of questions. The authors give lots of suggestions for what to look for in these conversations and how to follow up on points of potential interest with “what, how, and tell me more” questions. The screening process is designed to winnow down the field to no more than 3 and maybe just to 1 or 2 candidates.
1) What are your career goals?
2) What are you really good at professionally?
3) What are you not good at or not interested in professionally?
4) Who were your last five bosses, and how will they each rate your performance on a 1-10 scale when we talk to them?

- “Topgrading” interviews (pardon the abysmal business speak) are the main interviews of 2-3 hours. Again, these are very structured interviews with a small group (perhaps our search committee?) in which you look for patterns of behavior instead of just trying to “get a feel” for an individual (which is what most of our interviews do). It’s essentially a chronological walk-through of a person’s career in which, for each job (or “chunk” of jobs if the person has moved around a lot), you ask the following 5 questions. And again, the authors offer lots of suggestions for how to maneuver through each question, including a sample script for the beginning of the interview.
1) What were you hired to do?
2) What accomplishments are you most proud of?
3) What were some low points during that job?
4) Who were the people you worked with? Specifically . . .
a) What was your boss’s name, and how do you spell that? What was it like working with him or her? What will he or she tell me were your biggest strengths and areas for improvement?
b) How would you rate the team you inherited on an A, B, C scale? What changes did you make? Did you hire anybody? Fire anybody? How would you rate the team when you left it on an A, B, C scale?
5) Why did you leave that job?

- “Focused” interviews are chances for wider input on specific elements of the scorecard, especially regarding cultural fit. We already do much of this by having the candidate meet with the faculty status committee (regarding the “spiritual modeling/cultural fit” outcome), a class (regarding the “teaching” outcome), and the president, VPAA, and Dean (regarding multiple outcomes but again primarily regarding “fit”). I’m not sure if we want to modify our system much to get even more “focused” feedback, but here’s what the authors might suggest if we were to consider heading more in this direction. We’d again have a structured list of questions, as follows, that a few small groups would each ask (for up to an hour), with each group covering one of the outcomes and/or a couple of competencies on the scorecard.
1) The purpose of this interview is to talk about (fill in the blank regarding the outcome and/or competencies).
2) What are your biggest accomplishments in this area during your career?
3) What are your insights into your biggest mistakes and lessons learned in this area?

- “Reference” interviews are, interestingly, done after the main interview and not before. After comparing notes among the team based on the interview day (how well does this person match the scorecard), you may decide to continue with this candidate. At that point, you need to pick the references, especially those not given to you initially by the candidate but instead those that came up during the interview process. Then you need to ask the candidate to set up the reference checks (apparently for legal and “transparency” reasons). Finally, you need to call 4-7 of these references—three past bosses, two peers, and two “subordinates” (assuming the person is in an administrative capacity). You can divide these checks between the small group running the search. Again, there’s a structured list of questions to ask. I was particularly intrigued by their tips for the “code” to look for in risky candidates because I’ve said or heard many of these things (“if . . . then” statements in particular) and hadn’t always recognized the “code.”
1) In what context did you work with the person?
2) What were the person’s biggest strengths?
3) What were the person’s biggest areas for improvement back then?
4) How would you rate his/her overall performance in that job on a 1-10 scale? What about his or her performance causes you to give that rating?
5) The person mentioned that he/she struggled with (fill in the blank) in that job. Can you tell me more about that?

6) “Selling” is something I think we have a fairly good sense of, living in rural Arkansas as we do, but we might learn something from the authors’ breakdown of the things to talk about. That list includes fit, family, freedom, fortune, and fun. Okay, we can’t sell “fortune,” but the others we can. Again, there’s lots of follow-up here, but notice that “fit” is the first item, even in a profit-driven context. The other thing to note is that “selling” takes place throughout the entire process, not just at the end. Basically, persistence pays off.

So, what does all of this mean for how we might function here at JBU? I’m sure some of you can see potential applications in your areas, but I’ll just focus on how we might run our faculty candidate process a bit differently in the future as a consequence of these ideas.

1) I’m going to be more proactive about networking and keeping notes about prospects.

2) I may refine the faculty scorecard process, perhaps by making that scorecard more transparent to all parties, including the candidates.

3) I’ll be more involved in the screening process than I’ve been in the past, including participating in the screening phone calls, which I’ve only done on occasion up to this point. And I’ll probably use this list of questions for those screening interviews, perhaps along with a couple JBU-specific questions.

4) For the interview day, I’d like to combine the search committee and VPAA interviews into a 2-hour “chronological review” interview more along the lines of what this book is suggesting, again by using the list of questions included here. Rob (or Dick) will probably still do a short “welcome and review the day” meeting, perhaps over breakfast, and the rest of the interviews will probably stay pretty much the same.

5) I’ll probably move the reference checks to after the interview day and follow more of the format noted here.

6) “Persistence” has never been a problem for me, so I’m not sure how much I’ll change here. 

7) I do want to follow up on some of these ideas for a possible staff hiring and evaluation process that Darrin and I are likely going to work on this year.

8) I would encourage those on the academic side of the house to consider adopting some of these ideas, as appropriate, when hiring adjuncts and staff members in your areas.

If you’re still reading, thanks for hanging in there. As usual, if you have any thoughts, feel free to send them along.

Thursday, August 19, 2010

It's the teacher, not the school, that matters

I keep seeing articles along these lines.

http://www.latimes.com/news/local/la-me-teachers-value-20100815,0,258862,full.story

Teach for America has made the biggest splash using this argument, but others seem to be reaching similar conclusions. Here’s a synopsis of the new TFA book. Perhaps something to talk about in our faculty development and education groups?

http://www.teachforamerica.org/the-corps-experience/becoming-an-exceptional-teacher/

Here's a link to an earlier post I made on this topic.

http://triple-e-education.blogspot.com/2010/01/teaching-tips-from-teach-for-america.html

Wednesday, July 28, 2010

iPad goes to college

We'll know a lot more in a year or two about the possibilities of using this new technology in an academic context, but it certainly has more promise than the Kindle DX (to which I can personally attest).

http://www.cnn.com/2010/TECH/mobile/07/26/ipad.university.ars/index.html?hpt=Sbin

Thursday, July 22, 2010

Review of "Checklist Manifesto"

As usual when I read something that might have some practical implications for how we do our jobs, I pass along some of the basic ideas. This time, the book was “Checklist Manifesto.” It’s written by a doctor, so it’s mostly about taking the “checklist” systems prevalent in the construction and aviation industries and applying those concepts to the medical world. But there are implications for any organization.

The basic concept is that “the volume and complexity of what we know has exceeded our individual ability to deliver its benefits correctly, safely, or reliably.” “Checklists seem to provide protection against such failures. They remind us of the minimum necessary steps and make them explicit. They not only offer the possibility of verification but also instill a kind of discipline of higher performance.” “The philosophy is that you push the power of decision making out to the periphery and away from the center. You give people the room to adapt, based on their experience and expertise. All that you ask is that they talk to one another and take responsibility. That is what works.” “Under conditions of true complexity, efforts to dictate every step from the center will fail. Yet they cannot succeed as isolated individuals, either.” “Under conditions of complexity, not only are checklists a help, they are required for success. There must always be room for judgment, but judgment aided—and even enhanced—by procedure.”

“Good checklists are precise. They are efficient, to the point, and easy to use even in the most difficult situations. They do not try to spell out everything. Instead, they provide reminders of only the most critical and important steps—the ones that even the highly skilled professional using them could miss. Good checklists are, above all, practical.” “The checklist cannot be lengthy. A rule of thumb some use is to keep it between five and nine items, which is the limit of working memory.” “The wording should be simple and exact and use the familiar language of the profession. Even the look of the checklist matters. Ideally, it should fit on one page. It should be free of clutter and unnecessary colors,” and so on.

“Discipline is hard. We are by nature flawed and inconstant creatures. We are built for novelty and excitement, not for careful attention to detail.” “We’re obsessed with great components, but pay little attention to how to make them fit together.” “We don’t study routine failures in teaching . . . or elsewhere.” “But we could, and that is the ultimate point.”

While it’s clearer how this theory might apply to the world of medicine, where teams of people have to get on the same page quickly in order to make highly complex decisions with very few errors, I think there might be some application to our JBU context as well. Agendas for committees are one example of a checklist. Lesson plans are another. I’ll note a few other areas as food for thought. And in many (all?) of these cases, we at JBU probably already have some type of checklist process in place, so I’m mostly just suggesting refinements and/or making explicit the kinds of things academic institutions already do to some extent.

1) Our emergency preparedness committee could revise their procedures with more of this “checklist” mindset. I’m one of the people who is supposed to be “in the know” in some of these emergency situations, but I’m fairly clueless about what I’m supposed to do or where I’m supposed to get that information. That’s probably somewhat my fault, but if, as with airline pilots, I had both an electronic and paper back-up manual with these types of short, “key steps” checklists, that might help should a real emergency occur.

2) Our advising process, especially with the new ERP in place, could probably use some updating along these lines. Could we have, for instance, a list of the key 5 things that each advisor needs to check off with each student, perhaps with a physical (or virtual) “check” beside each box, before a student can actually register?

3) With more emphasis in our new evaluation process on having lots of documents put together (and put together well), could we develop a clearer checklist for people going through the process (what needs to be done when, by whom, how, etc.)? It might be a one-page document attached to the faculty evaluation document and distributed to everyone as a reminder when they go through the process.

4) Along similar lines, could we have a “key components” syllabus checklist created by people like Holly and Mandy that we ask all faculty to follow to make sure that whenever courses are created at JBU, they meet some basic institutional and pedagogical standards? That might help people with their PERC process as well.

5) What about with any student needing special support? I had a situation last year, for example, in which a parent had requested that we develop and implement a more detailed checklist for her child. There were some coordination issues between various offices and faculty members that such a system would have helped with.

6) So too with our retention efforts in general, perhaps in combination with points #2 and #5?

7) Going even broader, any of our “comparative decision making” systems might be improved by such efforts, such as in hiring and budgeting.

As I mentioned earlier, I think JBU already has something like a checklist for many of these areas, but those checklists could probably be improved and made more explicit. The point is that just having a “mental” checklist often isn’t enough. And just having the information buried in handbooks and manuals isn’t enough. You need to have short, explicit checklists for the key components of any process in order to make sure that, amid the blizzard of information, people focus on the most important things.

Tuesday, July 20, 2010

CIC CAOs and the presidency

http://www.insidehighered.com/news/2010/07/20/cao

http://chronicle.com/article/Why-Do-Few-Provosts-Want-to-Be/123614/?sid=at&utm_source=at&utm_medium=en

http://www.cic.edu/projects_services/infoservices/CICCAOSurvey.pdf

Compared to other CAOs, CIC CAOs are younger (average age 57), more satisfied with their jobs, and less interested in being a president (primarily because they see the work of a president as being unsatisfying). A bit confusingly, however, they also stay in the job for less time (an average of only four years), typically going back into the faculty or moving to another institution in a similar role.

Monday, July 12, 2010

Recessionary Psychology?

We've been talking a lot about the recessionary psychology as a possible explanation for why our current crop of TUG applicants is the largest and has the most ability to pay ever, yet our enrollment numbers are still a bit soft and we have even more pressure on our discount rate than is typical. In short, why are these "rich" people so much more focused on finances than just a few years ago?

This article by Newsweek economist Samuelson, quoting from a Pew study, fleshes out the argument that we've all been making about the effects of the "recessionary psychology." Namely, this recession hit the upper classes in ways that past recessions typically did not. So while the brunt of the recession has been felt by the young and the lower socio-economic groups, those groups are actually more optimistic about the future than the richer, whiter, older, and more conservative types (i.e. our typical constituents). It's the kinds of people sending their kids to our school who are particularly pessimistic and cautious at this point in time and who are therefore being especially frugal.

http://www.realclearpolitics.com/articles/2010/07/12/the_great_stranglehold_106258.html

Friday, July 9, 2010

Study about student study habits

For those of you who haven’t yet seen one of the recent stories about this study, here’s a short summary. Basically, and no surprise, students study a lot less than they used to. The drop-off happened mostly in the 60s and 70s, but the slide has continued since then. As to why and what to do about it, there appears to be much less agreement.

http://www.boston.com/bostonglobe/ideas/articles/2010/07/04/what_happened_to_studying/?page=full