Last year we had some problems with our annual election, enough that we clearly needed to take a deeper look at our ‘process’ to see what was working and what wasn’t. We ended up fielding an Election Review Committee (ERC) that had its first meeting at the Summit last year and continued meeting by phone through May of this year. Bill Graziano has posted a good summary of the results of that effort on his blog.
I served on the committee and it was close to grueling. It’s a challenge to meet for an hour by phone every week or every other week, keep track of where you left off, and put some time into thinking about things that aren’t clear. It was a good group, with diverse opinions, willing to listen and willing to think, and most members made most of the calls; not bad at all for a six-month effort. Some notes about the process:
- The biggest mistake we made was not having someone from HQ or a temp on each call to take notes. It’s important to capture the notes and get them out in a timely manner; we missed this a couple of times and it hurt. Especially over a long period, good notes (minutes) are important.
- This wasn’t something that could be done in a weekend. I think a good weekend session early on would have been helpful, but some of the stuff we discussed needed time to bake. The same goes for the phone calls. I think 1.5 hours might have left a little more wiggle room, but anything longer than that would have been unproductive. Reducing the scope would have been difficult; it’s a holistic process, and you have to think about how x affects or mitigates y.
- We didn’t get a lot of public feedback on the stuff we posted. We had a diverse group, but part of the goal of any group is to drive toward a shared view, and there’s no guarantee that you’re not down in the weeds. Maybe no feedback indicates we’re on track, maybe it indicates a lack of interest, and we definitely could have made it more visible. One thing I’d like to try for efforts like this is a mailing list where members opt in and get occasional surveys asking for feedback on various posts.
- Joe Webb did a nice job steering. It was a good group to work with, actually very good, but with any phone meeting you have to work to stay on track and Joe helped with that.
In the end I think we have some decent recommendations. I went into it wanting a few things: a write-in option that would bypass the nomcom, a clear definition of the values we were looking for, and a way to make the process of picking the final candidates less subjective.
- Write-in. I finally gave up on this for a couple of reasons. One is that the logistics of another vote were tough; another is that other changes (weighting the nomcom to be community heavy and changing how we picked it, for example) made it less important. I’m ok with not having it this year; it’s something we’ll monitor to see if more needs to be done.
- Values. I think we made some progress on this. People who lead PASS have to be vested in PASS, not just in the SQL community. That’s not unreasonable. Yet it’s hard to qualify and quantify, so we’ll see how it goes.
- Less subjective. Sort of. If you’ve ever interviewed candidates for a position you know that hiring is subjective. Imagine being locked into a score sheet that forced you to hire the candidate with the top score even if you just knew they would be a bad fit (or that someone further down would be a better fit). Unless you get the scoring system perfect, it’s a recipe for bad hires. What we did was put a bit less weight on the oral interview and change how the final list of candidates was picked. Without having tested it, I like the approach and feel it satisfies the heart of what I wanted to accomplish.
In the past the nomcom rated candidates in about 10 areas (education, leadership, etc.) on a scale of, I think, 1-5, then used that to do the interview and rated them again in various areas. Then we totaled the scores and drew an arbitrary line between who was qualified to be on the slate and who wasn’t. Very, very subjective, because we had no guidance on what counted as a 1 and what counted as a 5 for education (and it’s a different conversation whether a PhD would score better than a high school grad).
What we changed to was a ranking system. No longer can nomcom members just write in an arbitrary score; now they have to look at the whole pack of applications, score them, and use that to help decide on their final ranking. It’s not just picking who should be interviewed; it’s identifying right then who appears to be #1, #2, and so on based just on their application. That’s important because we use it to put our values into the process while still leaving some room for subjectivity. It’s also important because we’re using those rankings to decide who gets interviewed. Logistics are always a concern, and we know it’s not possible to interview everyone who applies – imagine we get 50 applications, for example. We basically repeat the process for the oral interviews.
I won’t go through the entire process here, and I’ll grant you that when you look at it, it seems complex, but I think it’s actually pretty easy to apply, and the upside of the revised approach is that we focus on comparing candidates to find the best candidates. It’s a process I’ve used before: score resumes and pick the top x, score interviews and pick the top x, and then a very subjective second interview of the top couple of candidates to make the decision.
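If it helps to see the shape of that rank-and-cut idea, here’s a minimal sketch in code. The names, scores, and cutoffs are all made up for illustration; they aren’t the actual PASS numbers or rules.

```python
# A sketch of the multi-stage ranking idea: score everyone against the
# whole pool, keep the top x, then repeat at the next stage.
# All candidates, scores, and cutoffs below are hypothetical.

def rank(candidates, score_fn, top_n):
    """Rank candidates by score_fn (highest first) and keep the top_n."""
    ranked = sorted(candidates, key=score_fn, reverse=True)
    return ranked[:top_n]

# Each candidate is (name, application_score, interview_score).
pool = [
    ("A", 8, 7),
    ("B", 9, 5),
    ("C", 6, 9),
    ("D", 7, 8),
    ("E", 5, 6),
]

# Stage 1: rank on applications alone to decide who gets interviewed.
interviewees = rank(pool, lambda c: c[1], top_n=4)

# Stage 2: re-rank the interviewees on interview performance to pick
# the short list for the final, more subjective discussion.
short_list = rank(interviewees, lambda c: c[2], top_n=2)

print([c[0] for c in short_list])  # → ['C', 'D']
```

The point of the structure is that the cuts are comparative (top x out of the pool), not a fixed score threshold, which is what makes the final subjective round manageable.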
There is one other change that I think is important, the composition and selection of the nomcom. In all years past the Immediate Past President selected the entire committee. I don’t think that was a horrible plan, but it did mean that the nomcom had a tendency to reflect the views of the Board and the IPP. This year we’ll have nomcom members that are picked by PASS members, giving the members a deeper (and a majority) voice on the committee during the most crucial phase of the election.
Doing all of this gave me a new appreciation for how hard governing is. The tendency is to make a rule to fix every problem, a bullet point for every exception, and to root out subjectivity as an evil thing. It’s tough to know which things need to be rules and which should be left to the committee. Too many rules and it’s just an algorithm; too few rules and it’s grounds for an argument, or worse, every time. Not easy stuff.
Will it all work? We’re hopeful. We tried hard, and we tried for a sane and realistic strategy. We spent a lot of time looking at edge cases, trying to prevent the process from being abused or running amok. I believe we should reconvene the original ERC post-election to see what worked and what didn’t, and based on that, we may need to form a brand new ERC to dig into specific areas.
My thanks to the volunteers who gave up a lot of time to help build the recommendations: Lori Edwards, Wendy Pastrick, Brian Kelley, Allen White, Bill Graziano, and Joe Webb.