Is there a more ubiquitous category mistake (Gilbert Ryle) today than the one involved in the use of the term "innovation"? Categories are fundamental concepts that do not name things but instead express different modes of understanding reality. "Tree" names a type of plant. There is an actual tree in my backyard, and the seed it produces is a potential tree (if it takes root, it will become an actual tree). If we confused the potential tree with the actual tree, we would be making a category mistake: we might understand what a tree is, but not the difference between potentiality and actuality.
In the case of "innovation," the term is merely descriptive but is constantly used in a normative sense, which makes no sense unless further qualified. Descriptive terms simply assert the way things are or name the things of the world. Take, for example, the statement: "The internal combustion engine was an innovation in transportation." The term "innovation" refers to a novel feature of reality, typically created by human thought and action. As a descriptive term it says nothing about whether the innovation was good or bad, only that at time t the innovation did not exist and at time t1 it did. However, if we look carefully at the way the term is used by the media, government officials, and business leaders, it becomes clear that normative content is smuggled in: the change in question is assumed to be good just because it is a change, when its goodness or badness is in fact still an open question. The normative content is illegitimate because a change is not necessarily good just because it introduces novelty. A moment's reflection makes it clear that the new and the good are conceptually and ontologically distinct (that x is new does not entail that x is good). Hence to argue as if everything "innovative" is good, i.e., better than the thing it changes or replaces, is to commit a category mistake.
Let us take two obvious examples to illustrate the point before coming back to the real social implications of the confusion. Plutonium is among the most toxic substances in the known universe. One could imagine scientists devising an innovative method for vaporizing it and disseminating it throughout the entire atmosphere, thereby poisoning everything that breathes. That would be an innovation, but it is hard to see it as in any way good. Perhaps one objects that the example is too hyperbolic in its negative implications. Granted. Let us take a more mundane example: the size of a smartphone. Having run out of qualitatively new technical capacities for the time being, smartphone manufacturers have been reduced to touting merely quantitative alterations as "innovations" worth opening your wallet to acquire. But is a marginally bigger or smaller phone really better in some important way? The answer depends upon information that the term "innovation" alone cannot capture. We have to know what the device is for before we can decide whether a given innovation is good. An innovation is good only to the extent that (a) it enables a thing better to accomplish its purpose, and (b) that purpose is itself essential to the health, well-being, and meaningful life-activity of human beings.
The problem should now be clear. When a descriptive term is confused with a normative term, its uncritical adoption commits people to accepting the merely different as good. When we accept something as good, we validate it as a goal to which we should aspire. So, when politicians and business people talk about the need for innovation, they are asserting that whatever changes governments or businesses introduce that can be sold as innovative are good, and that we should not only accept them but think of ourselves as "change agents" whose goal in life should be to "innovate" as well, in all spheres of life, just because contemporary socio-economic dynamics demand it. However, without critical reflection on the purpose of the processes and things we aim to change, and especially on our own (human) purposes and on which sorts of social institutions support them and which undermine them, we can in no sense ensure that we are making things better just because we are making things different.
Let us take a concrete example to better explain my concern. In a recent series of articles in The Toronto Star, Don Tapscott argued that Ontario's universities need to innovate in order to stay relevant to a new generation of students: "If there is one institution due for innovation, it's the university. It's time for a deep debate on how universities function in a networked society. The centuries-old model of learning still offered by many big universities doesn't work any more, especially for students who have grown up digital." I will come back to the substantive claims he makes about teaching methods and students in a moment. First, notice the category mistake. Tapscott clearly means that universities cannot fulfill their function unless they change (innovate). Innovation is identified with the better and stasis with the worse. But before we can accept that equation we must know what universities are for, what it is they are actually doing, and where, in what they are actually doing, they are failing (and where succeeding) to fulfill the purposes they serve. There may indeed be changes that need to be made in some areas of university life, while others may be perfectly fine. But blanket statements of the form "universities need to innovate" clearly confuse mere change with "better fulfillment of the function," because "innovate" is being used in a normative sense to imply that change as such is good.
To better understand the specific and the general social problem involved in this confusion, let us examine Tapscott's argument in more detail. He argues that universities fail to take advantage of the full possibilities that digital communication technologies provide for collaborative learning, that they remain wedded to hierarchical pedagogical styles (especially the lecture), and that their insistence on testing the knowledge of students treated as abstract individuals is in tension with the collaborative learning today's students have grown up with on social media.
On empirical grounds much of what Tapscott argues is simply false. No area of university life (save buildings and administrative positions) has received as much funding as teaching and learning centres. For the past decade North American universities have dedicated themselves to trying to understand better what makes for an effective learning environment, what best pedagogical practices are, how to assess effective teaching, and how to help professors value and improve their teaching capacities. Moreover, there have been massive investments in technology (smartrooms, campus-wide WiFi, software platforms for student interaction…), on-line course delivery, digitization of libraries and archives, open source journals, and more overt collaboration between the campus and the community. If anything is archaic, it is Tapscott’s understanding of teaching and learning in the twenty-first century university.
The more important question remains to be asked: has any of this investment improved the teaching mission of the university, and is technological change (innovation) identical (as Tapscott implies) to effective learning? The answers here are "of course" and "of course not." Tapscott complains that professors are still lecturing, some even (heaven forbid!) reading notes, instead of taking advantage of technological possibilities for collaborative learning that better fit students' experience of interaction through social media. The implied disjunction, either "traditional" lectures or on-line collaboration, is false. The use of lectures for one purpose does not rule out the use of new media for others. Beyond the fallacious false dichotomy lies the absurd implication that human beings interacting in shared physical space (the lecture) are reduced to passivity, while only virtual interaction in cyberspace counts as active learning.
Lectures, good lectures in any case, are not one-way transmissions of information to a passive audience. To be effective they must be interactive. For the interaction to be effective, however, students must develop an understanding not only of the meanings of the ideas at issue, but also of the historical context of their emergence and the purposes to which they were put. These are not just facts that can be gleaned from a book or website: proper explanation requires expertise, and that is the reason the professor is there. An effective lecture is a dialectic in the original sense, a dialogue that develops through opposed perspectives on a shared subject matter. The effective lecturer does not transmit information but explains so as to engage the interest and critical capacities of the students, such that they become the main drivers of the subsequent development of the conversation. The shared co-presence is essential: the tension and challenge of face-to-face interaction is essential for learning, understood as the development of cognitive capacities to more comprehensive scope and not just information acquisition.
The point: "old" techniques like lectures are not worse because they are old, and new technologies like on-line networks are not better because they are new. Good and bad, better and worse in education, as in all fundamentally important social practices and institutions, are determined by whether and to what extent the technique and the technology satisfy the human needs that bring people together in the institution in the first place. It would be as contrary to the realization of essential human purposes to forbid old techniques that have proven effective for millennia as it would be to ban new technologies that open up new forms of satisfying those needs.
In order to have a rational conversation about how best to satisfy human needs, it is necessary to avoid the category mistake of confusing the novel with the good. The novel might be good, but it might also be bad, while an old practice or technique might be good and its elimination bad. But the category mistake is no mere logical error. Behind the conceptual confusion lies social and economic interest. The supporters of innovations always have something to sell: the innovation. In order to cure the conceptual problem, the self-interest behind the sales pitch needs to be exposed in every case.