Case Study: Transforming Review Processes
Insofar as there’s been a single most important influence in my thinking over the last several years about what leading generously might mean in the academic context, I’m lucky enough to be able to say that it’s my own dean. Chris Long and his team in the College of Arts & Letters at MSU have focused much of their recent work around the idea that developing the faculty and staff’s abilities to tell rich, textured stories about our goals, as well as our abilities to read and interpret such stories, could enable the college to develop better processes of evaluation as well as better paths for career development for all of us. Establishing those paths and processes, however, isn’t necessarily intuitive — especially not in an environment conditioned by what a colleague of mine has referred to as “the debilitating mathematics of prestige.”1 Our entrenched systems for personnel reviews suffer from that mathematics, as does everyone subject to them.
Part of the problem with the traditional measures we use in our review processes is their very mathematical nature. Reducing what counts in our evaluative processes to what we can count — as if those were identical usages of the same word rather than two distinct meanings — eliminates many of the most important ways that the impact of academic work can be imagined and take shape in the world. Our reliance on counting has come about at least in part for some very good reasons: we strive to be as objective as we can in our evaluation processes, and so we want to minimize the effects of bias by restricting our attention to things for which we can present empirical evidence. Somehow we’ve decided, though, that the most neutral form of empirical evidence is numerical, a sphere in which the ordering and comparing of things becomes devastatingly natural.
But Long and his team have recognized that underlying this aspect of our broken processes of evaluation is a more fundamental inversion of ends and means: rather than examining the goals that we have for our work and our progress toward them, we’re instead laser-focused on a few specific means we have of working toward them — the books, the grants, the journal articles, and so forth — with the result that these means have become ends in themselves.
Ensuring that we’re reviewing the right things, for the right reasons, and in the right ways has been an ongoing project that Long has undertaken along with the college’s associate and assistant deans, Cara Cilano, Sonja Fritzsche, Bill Hart-Davidson, and Scott Schopieray. Full disclosure requires me to note again that these are my deans, the folks who hired me, who support my work, and who review it regularly. My reviews have gone well, so it’s possible that I’m suffering from a form of confirmation bias; it’s not unusual to find that you approve of people and processes that have positive outcomes for you personally. But it’s also important to know that I pursued the job that I’m in, reporting to and collaborating with this team of deans, in large part because of the work they have been doing to rethink the categories and purposes of evaluation.
Their project has come to be known within the college as CPIL, or Charting Pathways of Intellectual Leadership.2 CPIL grew out of several different initiatives that were gradually connected. Chris Long, prior to joining MSU as dean, founded the Public Philosophy Journal, an experimental publication focused not only on doing philosophy in and with the public, but also on transforming peer review from a summative, anonymous process into a formative, collaborative engagement between reviewers and authors intended to help the work take its strongest possible shape. Similarly, Bill Hart-Davidson, Associate Dean for Research & Graduate Education, and Scott Schopieray, Assistant Dean for Academic and Research Technology, collaborated on a series of workshops and initiatives designed to support members of the college in developing their digital presence and in imagining shapes for their scholarly outputs other than the book and the journal article.
What these precursors to CPIL share is the desire to create new means for members of the college to understand their goals and to shape their work with those goals in mind. Goal-setting in the CPIL model asks everyone — not just the tenure-stream faculty, but all faculty and staff — to begin by considering what success looks like for them, what they’d most want to be remembered for at the conclusion of their careers. Those longest-term goals are described as one’s horizon, the thing toward which one could spend a lifetime working, the thing that keeps that lifetime’s work oriented. Along the way to that horizon, however, there are major milestones that need to be reached; those milestones — things like promotions — tend to be outside of our direct control, but they can be prepared for through the stepping stones of immediate work, the projects whose progress we can control.
Thinking about our intellectual pathways through this model reveals the extent to which our evaluation and review processes have become completely bound up in accounting for the stepping stones. We value publications, for instance, as if they were the goal, rather than recognizing that they are merely a step along the way toward some larger horizon. As a result, we tend to count publications as an indicator of performance and pay little or no attention to why these particular stepping stones are the ones we’ve chosen, or whether they are genuinely leading us toward the horizons we imagine.
Worse, because we have settled upon one particular indicator like the publication as a standardized stepping stone, we too often overlook the fact that we don’t all share the same horizons. Where your vision of a successful career might include making a lasting contribution to the discourse in your field, mine might focus on building the systems and platforms through which others can make such contributions. And someone else’s vision might center the desire to mentor the next generation of graduate students, or the ability to lift up a community through creative collaborations. Each of these can produce a compelling horizon, but each requires different stepping stones to create the path. When we restrict the forms that those stepping stones can take, we privilege the means rather than the ends, and we risk interrupting the progress that others make toward their own goals.
In fact, the very structure of our processes of faculty evaluation — to start there, but as we’ll see, the model can apply to everyone in the higher education context — concretizes the ends/means category error. The structures we use to describe academic work, and the forms we use to account for it, focus on the three traditional areas of research, teaching, and service, with each weighted differently based on specific institutional goals. This tight focus on the means toward our larger ends produces real challenges for faculty whose ends might look a bit different from the usual expectations of their fields. As Long describes the situation,
We were regularly encountering junior faculty in the tenure system coming to us with real concerns about how they were going to bring forward their research, teaching, and service work in a holistic way that didn’t require them to disentangle it and pull it apart into what felt like silos for them. A lot of this happened in conversations around, well, “our senior faculty colleagues are asking us to parse this work into… service and teaching. But you know our work is community based, and our teaching is bound up with the community work that we’re doing. And so it doesn’t even really make sense to pull them apart in that way.” … That’s not the conversation we wanted to be having, about what does this count as, what category does this fit into? We wanted to have a conversation about what’s the purpose of the work? What’s driving it, what’s animating it, what are you trying to do with your academic life?3
This realization encouraged the deans to begin thinking about the ways they might describe the ends, the goals, rather than the means by which they’ve traditionally been accomplished. They settled on three broad areas of academic endeavor, each of which reflects the overlaps among research, teaching, and service:
- Sharing knowledge
- Expanding opportunity
- Mentorship and stewardship
These areas, moreover, are conditioned by the values that the college has named as defining the qualities it strives for in all its work, including equity, reciprocity, transparency, and creativity. Together, these areas and the values that animate them not only better correspond to the variety of horizons that anyone working in the college might establish for their work, but they also produce significant openness in the forms that the stepping stones along the way to milestones like annual review or tenure and promotion might take.
For a model like this to produce the kinds of significant culture change that the deans are seeking, however, has required a lot of education and encouragement. Sonja Fritzsche, Associate Dean of Academic Personnel and Administration, notes that most people who work in the academy, when handed a rubric for assessment — even an expansive, values-enacted rubric — are conditioned to understand that rubric as a list of requirements. As a result, she found herself having conversations with directors of various units within the college who were struggling to understand how the work done by their teams fit the model: “They just weren’t feeling like they could have the freedom to think outside of this rubric or make it their own, because that has never been the culture.” With deanly encouragement, however, to “take this and adapt it, take it and make it your own,” everyone within the college — faculty and staff alike, in every department or center or office — has been encouraged to think about the deep purposes of their work, the goals they’re striving to reach, and how their day-to-day activities can both better support them and be more supportively assessed.
Such adaptability, you might begin to suspect, threatens to transform what is now a uniformly applied process of evaluation and review into a highly individualized process, in which no two candidates can be assessed according to the same formula. That, as my friends in software development might say, is not a bug, but a feature: no two candidates have precisely the same goals for their work, or precisely the same methods of working, and no standardized system of review categories and credits can adequately account for the full range of their merits and accomplishments. As a result, the college’s evaluation processes have shifted significantly in their center of gravity. Each member of the faculty and staff is asked each year to develop a narrative that articulates their vision for themselves and their careers — the kinds of intellectual leadership that they would most like to embody — and then describes their short-term projects in light of these goals. This textured story enables supervisors and review committees and department chairs to understand more of the how and the why of an individual candidate’s work rather than simply focusing on its quantity. They are asked to treat the process as an opportunity for mentoring, focusing on the needs and goals of the person being reviewed, rather than on a standardized set of boxes to be checked. This process opens up room for a faculty member to make the case that their horizons are better served by publishing in public venues rather than traditional ones, or by participating in unexpected collaborations. It makes it possible for a staff member to describe their desires to obtain further professional development. It also encourages evaluators to keep an eye on the ways that they can support that growth.
Cara Cilano, Associate Dean for Undergraduate Education, whose first encounters with the CPIL model came when she was chair of the English department, notes that this individualized model transforms the review process from a hurdle to get past into “a punctuation mark in an ongoing conversation,” one that allows evaluation “to be much more relational than transactional.”
That relationality rests on the bedrock of values that the college has collectively articulated and continues to re-articulate for its work, and it asks everyone to think about the process of review as a personal engagement with their own deepest values and goals. Implementing this model hasn’t been simple, given everyone’s enculturation within more conventional academic processes, and the deans recognize that there’s still a lot of work in front of them. Not least in that category is the “mentoring up” required to convince university-level administrators that they should approach evaluation of the college using the same model. As Hart-Davidson notes, he’s recently received pointed questions about why the college isn’t producing the same number of books and articles that it has historically. His answer has been to point to the college’s marked increase in other forms of work — award-winning documentary films, major exhibitions, successful digital projects, and significant grant funding — work that produces greater impact of the kind that a university espousing its commitments to its public mission should value.
Once the university recognizes and begins to reward the kinds of values-enacted work being done within the college, however, there’s another level of transformation yet to be sought: revising the criteria for assessment and ranking of institutions done by organizations such as the AAU. But that transformation begins at this local level: the college has asked the folks who work for it to move from a bean-counting evaluation process into one in which they are asked to tell a more textured story about the work they are doing and its significance for the life of the college. The college is likewise trying to tell its own textured story to the university administration. It follows, then, that universities will need to find ways to tell better stories about their work and its cultural impact, such that we can all begin to escape from the debilitating mathematics of prestige.
Notes

1. I owe this phrase to Chuck Henry, president of CLIR, who used it during a discussion about university ranking systems; but the mathematics involved, and the category mistakes it relies on, are fractal, strangling higher education from the institutional to the individual, and at every level in between.
2. Fritzsche, Hart-Davidson, and Long, “Charting Pathways of Intellectual Leadership.”
3. This and all quotations that follow in this case study derive from Long et al., Interview.