The previous post had some initial, basic how-to information on assessing people and teams more effectively. The starting point was simple: track and note behaviors. This post continues that theme and adds a second element – if you’re assessing, be clear about what skills you’re seeking to assess and how they are demonstrated.
We all get asked to make assessments – a more polite word for judgments – and frankly most of us aren’t very good at it. Typically the descriptive words we use are vague at best, get easily lost in translation from one person to another, and rest on some assumed sense of common underlying qualities. For day-to-day life this mostly works – until it doesn’t – and the bump can be large and noticeable (e.g. “banking credit problem”).
As an easy example of why it’s helpful to create a defined vocabulary, consider the following story. One piece of work in my past took me to Albertville, Alabama, and like many travelers I default, when in doubt in strange places, to familiar foods – pizza in my case. When I stopped at the local pizzeria and ordered the usual dog’s breakfast of toppings, I paused to ask what the mushrooms were like. “Real fresh” came the response, to which I added, “tell me a little more.” “Well, I opened up a new can just this week,” replied the counter person. Fresh is as fresh does, and though I passed on the mushrooms, it was clear that my idea of fresh (blame it on the local San Francisco foodie culture) was different from the pizza maker’s.
It should be the same deal with making assessments: until you have a common definition with others, be a little wary and recognize that you are not calibrated. Until that point it’s best to think of assessments in two parts: what’s the topical 10,000-foot reaction (“good”) AND what’s the behavior or characteristic that forms the basis of the definition.
My first job out of graduate school was working on staff and faculty at the University of Southern California in the Student Affairs area. Part of the work was restructuring a staff selection process that took too long (six-plus months), was not efficient (scheduling and execution were largely manual), did not leverage staff (there was little room for senior staff to pass on their knowledge to junior staff except through a series of one-on-two exposures), and produced a staff that was overall pretty mediocre (subjective), not-so-diverse (the staff did not look like the campus, which was highly diverse), and otherwise not very remarkable.
As we embarked on changing the selection process, I remember joining my boss Jane Higa in an observation room on the backside of a two-way mirror to watch some students helping out as guinea pigs on simulations we were constructing as part of our revamped process. Part of the change was to collapse the six months’ worth of one-on-one or two-on-one interviews into two weekends of intensive screening. The first step we had figured out was to clearly identify the key behaviors at play in someone who is successful in the role, and accordingly know what behaviors we wanted to screen in and what behaviors we wanted to screen out. In this simulation we focused on leadership behaviors, which at a summary level we defined as “taking an idea or concept and moving it through to group acceptance.”
As we watched a group of six students work on the simulation we tracked behaviors – and saw a young, quiet Chinese-American female student float an idea, help frame the discussion, support others as her idea got tossed around, and quietly and softly nudge the group along to accept that idea. Not the common stereotypical leadership behavior, but it was clearly effective, and she clearly had great leadership skills. Worse – in the old selection system of one-on-one interviews we would never have seen her as a leader and likely would not have selected her.
Well-done assessment exercises – apart from scrimmages on sporting teams, which do some but not all of the same things – are few and far between. Yet opportunities to use this type of assessment method, such as selecting an employee or even hiring someone to build a fence in your backyard, abound if you have the smarts to recognize them.
I’ll detail the ways to surface information outside of an assessment-center-type setting in subsequent posts. Note for now that it’s not so difficult once you have these first two concepts down in your assessment tool kit.
So the second step in making better assessments is to know what you’re looking for: what are the most important qualities that will drive success for a piece of work (as in the case of a contractor doing your backyard) or a job (in the case of an employee applying for a role), and how are those qualities / characteristics demonstrated or "evidenced."
Sound simple? It is. So simple that people can miss it.
When I ran staffing for the US at Barclays Global Investors, the biggest focus that hiring managers had – beyond the fear of getting saddled with a candidate from hell – was knowing “the right question.” I can think of any number of 30 Rock episodes that could be built on that premise, but the reality is that most people – like the hiring managers at BGI – are too often thinking of the questions to ask rather than the important behaviors they can either observe or surface through conversation and interview.
So here are the first two basics in assessment: 1) watch and note behaviors (what did people do, and what happened as a result when they did those behaviors), and 2) when assessing something, be clear about what it is you’re looking to see demonstrated or evidenced.
More to come.