Values
We live and work in a world that is deeply invested in assessment. That world creates an insatiable need to know how we’re doing, both individually and at an institutional level: whether we’re working adequately toward our goals, how our work compares both with our own expectations and with that of those around us. In the business realm, such assessment often takes place with reference to a set of KPIs, or key performance indicators, which provide the metrics that a company or a unit within it has decided are relevant in thinking about effectiveness and productivity. We have our own KPIs in the academy, of course. In a library, the indicators used to evaluate units and services might include numbers of patrons served, numbers of books checked out, numbers of articles retrieved, numbers of searches of the catalog, numbers of unfulfilled requests. In a college or department, the indicators might include numbers of course sections that fill, numbers of students per section, numbers of students on waiting lists, numbers of majors, percentages of students who graduate within five years, and so on. These metrics allow a unit to assess its performance, to determine how well it’s serving its purpose.
Say the word “assessment” to a group of faculty members, however, and you’re likely to encounter at least one who has a profoundly allergic reaction to the concept, becoming itchy and irritable at the very idea of being asked to apply such metrics to their performance. Of course, we do it all the time: every year we go through the bean-counting exercise of the annual review, reporting on our publications, citations, presentations, course evaluations, and the like. But every year we suffer through this assessment process in much the same way we suffer through the clouds of pollen that choke us in the spring.
In part, this allergic response derives from all those numbers; assessment is in many places tied to quantification, a deployment of Taylorist strategies for defining and rationalizing something as ineffable and interpersonal as teaching, or learning, or the development of new knowledge. Boiling such a complex cluster of human processes and interactions down to a set of metrics and indexes that are used to compare us to one another, and that are aggregated to compare our units and our institutions to one another, manages to steam away the non-numerical sense of purpose in our work. Why, in the larger picture of what we’re trying to accomplish, do these particular figures and measures matter?
Quantitative measures can sometimes help us set goals: if we want to expand the impact of a community-oriented project, for instance, figuring out how many people we’ve reached with that project and how many we’d like to reach in the coming year can create a framework for our work. Assessing our progress within that framework can tell us something about the effectiveness of our outreach methods and, if we can drill down further into the data, we might be able to learn something about which outreach methods have been most effective.
But there are a lot of things that we can’t learn from standard, quantitative metrics. We can’t really begin to understand why members of the communities we want to work with are engaging with us. And we certainly can’t understand why they aren’t. We can’t understand what the purpose of building engagement is, and whether we’re serving that purpose or merely growing a number. We can’t fully account for the good that we’re doing based on metrics.
In such an environment, it’s easy to understand why quantitative forms of assessment might generate allergic responses. Metrics run the risk of distracting us from asking about our less readily measurable goals, their significance, and how we’re working toward them. (Of course, one might inquire here about our attachment to applying quantitative assessment to our students in the form of grades, and whether they similarly run the risk of distracting our students from their real goals, but I digress.) The issue isn’t that those deeper goals and our progress toward them can’t be assessed. In fact, assessment can support all of the ways that we work within the academy if we take the time to create assessment practices that serve our purposes. To do so, we need to keep the fullness of our goals in view, as well as the purposes and missions of our institutions. We need to focus on the nature of the things we’re assessing and what it means to understand and support their growth. As Beth Bouloukos described her work to keep Amherst Press’s goals oriented toward what was most important, she told me about their need to develop a different framework for assessing their success. “Instead of this many books,” she noted, “we’re thinking about how this creates equity in a system that is terribly inequitable.”1 Meaningful transformation requires assessment practices that are grounded within our values.
My own first experience of the assessment allergy came when I was a faculty member at a small liberal arts college in southern California, where the administration and the faculty were preparing for a visit from our regional accrediting body. The vast majority of the institutions over which that body had authority were large public universities, which were as different from our campus as could be imagined. As a result, the kinds of questions and instructions that emerged from the accreditors were in many ways antithetical to the ways that we worked, the ways that we taught, the ways that we related to one another. Asked by our administration to think about how we might assess student learning on campus, several influential senior faculty members dug their heels in, resulting in the faculty saying no: it was not the Way Things Were Done.
This turned out to be a serious strategic error. It’s true, of course, that assessment as we were being asked to implement it wouldn’t have been useful to us, and it may in fact have been harmful to the practices of teaching and learning that we valued. But the response from that accrediting agency to our allergic reaction was, effectively, “too bad.” The college, perennially ranked in the top 10 small liberal arts colleges nationally, was threatened with having its accreditation withheld unless we complied — and complied in precisely the forms that we had been handed.
Insofar as this cautionary tale has a moral, it begins by sounding more fatalistic than I intend: Resistance is futile. But that doesn’t mean that there’s no way out. The trick is not to resist assessment, which in our case only resulted in having inappropriate methodologies forced upon us, but instead to get out in front of it and take control of its shape. If the faculty had taken the opportunity to detail the Way Things Were Done, and most importantly, Why They Were Done That Way, and the Important Outcomes Resulting from Them, we might have been able to build a form of assessment that mattered for our own purposes, and then to persuade the accreditors that we were both taking their requests seriously and, more importantly, taking our own goals and values as an institution seriously.2
All of that, it should probably go without saying, is a lot of work: articulating both our goals and the means for knowing whether we’re meeting them takes time and requires some difficult thinking and negotiation among colleagues. As a result, it’s easy to feel as though the assessment interferes with the actual ability to get the job done. But taking that time for reflection is a key part of the job, if we want to ensure we’re following through on the commitments that our stated values create. It’s not a coincidence, after all, that the root of “evaluation” is “value.” Reflecting on the role that our values play in the goals we set and the ways we mark our progress toward them can help us refocus our work, and our assessment practices for that work, not on an abstracted set of KPIs but rather on the things that matter most to us. This is true of the many different forms of assessment in which academics engage every day, including grading, peer review, and a wide range of personnel processes from hiring, to annual review, to tenure and promotion. Our busyness can lead us to seek out easily identifiable metrics and measurements despite their misalignment with our deeper scholarly values. Worse, our belief that we know “excellence” when we see it can lead us to make judgments derived more from assumptions and affinities than from a real engagement with the work in front of us. By pausing to articulate how we know when our students are learning, or when a piece of scholarship is important, or when the work a colleague is doing is making a significant intervention in a field, we define what it is that matters for us, and how what matters can and should be observed. In so doing, we can create assessment practices that not only work to improve the objects of the assessment — the learning, the scholarship, the career path — but that also serve to build stronger relationships between those being assessed and those doing the assessing.
The first step — obvious, perhaps, but not easy — is to begin by articulating the values that we bring to the work we do. Part of the challenge in this process lies in the pluralness of that “we.” It’s often easy to assume, especially when we’re working in collective contexts, that our values are shared and that our terminology is as well. This is particularly an issue for those who occupy positions of privilege, who have not been marginalized by the dominant culture, as even well-meaning, progressive white people often fall into the trap of taking their values to be universal. Surfacing and discussing those values is a necessary part of the process of their articulation, as is thinking deeply with the varied experiences and perspectives that all members of our institutions bring to their work. Those experiences and perspectives form the heart of our values, and this is one of the reasons why dominant institutions often exclude those values from decision-making processes; values are subjective and personal, when we’re supposed to be striving for objectivity and neutrality. But as Iris Marion Young reminds us, the entrance of “substantive personal values” into decision-making processes isn’t a problem to be eliminated; rather, “the entrance of particular substantive values into decisions is inevitably and properly part of what decisionmaking is about.”3 Excluding the personal and the subjective and the differences they’re based in is itself a value. As the HuMetrics HSS team notes in Walking the Talk, “[t]he danger is not simply that unexamined prejudice will inform our decision, but also that a naive understanding of objectivity will prevent us from recognizing the biases that condition all judgment.”4
The HuMetrics HSS initiative, a collaboration working to develop “humane metrics” for conducting various kinds of assessment in the humanities and social sciences, initially came together around the problems created by that naive sense of objectivity. As they note, they began their project “by asking what on its surface seemed a simple question: What would it look like to start to measure what we value, rather than valuing only what we can readily measure?”5 As they worked, however, that question opened up several more thorny ones. The team could sketch out their sense of “scholarly values,” but there were significant problems with doing so.
Could we presume these values were universal? (We could not.) How might we craft a framework that allowed for adaptability if not universality? (Certainly not by drawing solely on the experiences of the core team.) Could statements of values serve as markers of aspiration, rather than traps that limit scholarly invention? (That is the plan.) What potential indicators and evaluation practices could exist if we started from a set of values, rather than starting simply from what we could measure? (If nothing else, practices that better represented the work in the humanities and social sciences).6
As a result, the HuMetrics team spent several years conducting workshops and discussions leading them to a values-oriented framework, which then formed the basis for an extensive interview-based research project.7
For Walking the Talk, the team conducted 123 interviews across the institutions that make up the Big Ten Academic Alliance, speaking with administrators, faculty members, librarians, and other personnel involved in various aspects of tracking, measuring, and assessing impact and productivity in the context of promotion and tenure reviews. Their research indicated that “evaluation policies and the cultural practices that surround them are not only misaligned with work scholars find personally meaningful, they are also out of joint with the very values many institutions of higher education identify as core to their mission.”8 Even worse, their interviews uncovered a sense of futility around attempts to change these processes and policies:
Whether it is because of willful ignorance about how tenure and promotion processes are determined, unacknowledged investment in the idea that merit equates to success in a hierarchical system, a feeling of being overcome by the enormity of a decades-long problem, or a trepidation to poke an already irascible bear, it seems that no one feels that they have sufficient agency, authority, or energy to change the system, although there is broad recognition that the system is broken.9
Working through these anxieties and disavowals requires something more than any individual — even an individual invested with significant positional authority — can accomplish. The HuMetrics report makes a significant number of recommendations for steps that can begin to transform tenure and promotion processes for the better. Some of them — such as using job letters or other hiring documents to align faculty assignments with both institutional values and faculty aspirations — do require the participation of deans and other academic administrators. Others, including participating in values-based workshops at the unit level, and revising unit-level governing documents, must involve collective action among the faculty, but those efforts have the potential to transform not merely a specific evaluative process but rather an entire faculty culture.
In fact, these processes of articulating the values that we bring to the work we do have the potential to transform many assessment-related aspects of our work. Grading, for instance: many instructors are exploring forms of contract-based grading, or ungrading, that free them — and most importantly, their students — to focus on the parts of the learning experience that matter most, rather than obsessing over the mathematics of rating and ranking.10 Similarly, peer review processes that allow for open, honest discussion among colleagues might better allow us to support one another in fostering better work across our fields, rather than the always tense and often competitive exercises in gatekeeping that we traditionally employ.11
A word of caution, however. The values that we articulate as the basis for such reimagined assessment practices cannot simply be stated once and assumed thenceforward. Rather, these values themselves require ongoing assessment and re-articulation, both to ensure that they’re guiding our work in the ways we want and to account for the ways that our thinking will of necessity continue to evolve. Trevor Owens, director of digital services at the Library of Congress, described to me the process through which his team considers, as part of their project close-out meetings, how their values are instantiated in their work, ensuring that those values remain focal. If collaboration is a stated team value, for instance, they might reflect on the ways that the project supported and encouraged collaboration, and how the next project might do so even better.12
It’s not unlikely that the team, on reflection, might decide that the value as they’d named it doesn’t fully get at what they want it to mean, and thus that the values statement itself requires revision. The process of articulating a collective set of values is of necessity a recursive one, which will likely never reach a fully finalized state. But connecting the naming and defining of values with the development of methods of evaluation is a necessary part of building the assessment systems that can support those values rather than working at cross purposes with them. This is especially true when the object of our assessment is people rather than programs: ensuring that we’re evaluating the right things requires us to think long and hard about what we value and why, and then to develop means of focusing in on those things that we value.
Questions for reflection and discussion:
- What are the highest values you hold for yourself and your work? How would you want to be assessed according to those values?
- What might a process of annual review based on the articulation of values, goals, and plans for reaching them look like?
- How are members of your institution held accountable for upholding the values that the community has established? What processes are in place for reinforcing and upholding those values?
1. Bouloukos, Interview.
2. Embedded in this narrative is another lesson, however, about the power of accrediting bodies and whether it is always appropriately wielded, and to what end. But that’s an argument for another book.
3. Young, Justice and the Politics of Difference, 79.
4. HuMetrics HSS, “Walking the Talk: Toward a Values-Aligned Academy,” 8.
5. HuMetrics HSS, 6.
6. Rhody, “On ‘The Value of Values’ Workshop.”
7. HuMetrics HSS, “HuMetricsHSS.”
8. HuMetrics HSS, 7.
9. HuMetrics HSS, “Walking the Talk: Toward a Values-Aligned Academy,” 11.
10. See, for instance, Nilson, Specifications Grading; Blum, Ungrading.
11. See Fitzpatrick, Planned Obsolescence, for much, much more on rethinking peer review.
12. Owens, Interview.