Theme 2: How do knowledge infrastructures reinforce or redistribute authority, influence, and power?
What new forms of organization and community are emerging? What power relations do they rely on, create, or destroy? Who wins and who loses as knowledge infrastructures change?
New knowledge infrastructures hold great promise, and they may help address key issues of public import. But knowledge infrastructures also face limits, create tensions, and raise concerns. These must be addressed early and often. Many of these limits and concerns are simply the flip side of the potentials sketched above. Systems that develop and elevate new forms of knowledge may demote or undermine others. The growth of new ways of knowing may come at the expense of old or alternative ones. Efforts to expand access to knowledge for some groups may curtail or limit the effective access of others.
A classic example of this dynamic can be found in the “two systems” problem noted by early digital government researchers. In the 1990s and early 2000s, as governments moved to implement new digital record systems and online interfaces with the public, they were faced with the need to maintain and staff a parallel world of paper forms and public kiosks for the sizeable component of the population that remained offline — who, in many social service contexts, were in fact the chief clients of the records and services in question. As a result, early digital government investments usually functioned as add-ons rather than substitutes for existing services, and usually ended up costing more than older paper-based systems — even though justified initially on cost reduction grounds (Chongthammakun & Jackson 2012). Similar phenomena have been observed in the context of electronic medical records (Monteiro et al. 2012). More recently, governments have begun doing away with these parallel paper-based systems, leaving those without meaningful access to digital media even worse off than before.
In the world of science, new forms of knowledge infrastructure may disadvantage and devalue older forms of knowledge production, even while producing genuinely exciting advances. For example, new sensor networks are replacing ecological fieldwork, while visions of “instrumenting the ocean” are supplanting traditions of the oceanographic cruise (Borgman et al. 2006; Borgman et al. 2012; Borgman et al. 2007; Jackson & Barbrow 2013; Mayernik et al. 2013; Wallis et al. 2007). Along the way, these new infrastructures are disrupting social and material relationships that formerly sustained human interest and commitment to particular modes of scientific life.
As these two quick examples suggest, knowledge infrastructures – past, present, and future – must be grasped in their full range of effects and dimensions. Analysts must recognize that valuable gains in some registers often entail losses, costs, and adjustments in others. This requires thinking beyond the instrumental languages of utility and function, which tend to cast knowledge infrastructures as neutral instruments whose positive and negative effects lie solely in the way they are operationalized and used. Like technology in general, knowledge infrastructures “are neither good nor bad, nor are they neutral” (Kranzberg 1986). As Palfrey and Gasser (2012) put it, even as societies have constructed vast information and communications infrastructures to “enhance connectivity and enable the flow of information,” they have failed to build a corresponding normative theory of interconnectivity that would help define the social purposes and values we intend our infrastructures to serve.
The following point united workshop participants across multiple areas and backgrounds: despite their frequently significant technological components, knowledge infrastructures must be understood in their entirety, as hybrids that join and rely on elements too often separated under the (bogus) headings of “technical” and “social.” Programmatic efforts to improve science and other knowledge infrastructures have frequently prioritized investments in technical systems over research on how to effectuate equally crucial cultural, social, and organizational transformations. The imbalance has been great enough to require periodic efforts to “bring the people back in” — for example, work by Lee et al. (2006) emphasizing the “human infrastructure of cyberinfrastructure,” and this group’s 2007 workshop emphasizing the irreducibly “sociotechnical” character of scientific cyberinfrastructure (Edwards et al. 2007; Jackson et al. 2007; Lee et al. 2006). Similarly, Pipek and Wulf (2009) developed the concept of “infrastructuring” in reference to the ongoing co-design of infrastructures-in-use that takes place as new systems are adapted to interface with existing ones through combinations of improvisation, work practices, and continuing innovation by both designers and users. Monteiro et al., following Pollock & Williams, Karasti, and others, have argued that the very concept of “design” as a local, punctual activity of system developers needs to be rethought, at least in the context of large-scale enterprise software infrastructures (Monteiro et al. 2012; Pollock & Williams 2010; Pollock et al. 2009).
Thinking the social and technical together helps in the analysis of costs and benefits attending new infrastructure development. It may also open up the effective range of action available to infrastructure designers and developers, in particular in cases where solely technical programs of development run up against immediate and insuperable ‘social’ difficulties, and vice versa. It may also help to define and characterize the success of many of those forms of infrastructure that we find most compelling and sometimes surprising, and which we sometimes hold out as models or examples of change. Transformative infrastructures cannot be merely technical; they must engage fundamental changes in our social institutions, practices, norms and beliefs as well. For that reason, many scholars have dropped the dualistic vocabulary of “technical” and “social” altogether as anything other than a first order approximation, replacing those terms with concepts such as collectives (Latour 2005), assemblages (Ong & Collier 2005), or configurations (Suchman 2007) in which “technical” and “social” figure as no more than moments or elements within what are at their core heterogeneous and uneven but ultimately unified wholes. The empirical veracity of this approach becomes apparent when we apply it to any of the examples cited above: try to tell yourself a “technology only” or “social only” story of knowledge infrastructures, and the real-world impossibility of the old conceptual split becomes immediately apparent. Brunton’s Spam: A Shadow History of the Internet (2013) provides a fascinating case study in the co-evolution of technical systems, communities, and social norms.
Once the world of knowledge infrastructures is reconnected in this way, it becomes possible to tease out some of the limits, tensions, and problems associated with infrastructural development in a more direct and fruitful way. To begin with the most obvious point, knowledge infrastructures often carry significant distributional consequences, advancing the interests of some and actively damaging the prospects of others. For example, moves toward expanded data sharing may simultaneously devalue or commodify certain kinds of data production, such as the labor-intensive collecting practices and site-specific expertise required to produce ‘raw’ data in field ecology (Jackson & Barbrow 2013; Jackson et al. 2013; Ribes & Jackson 2013) or the intimate familiarity with specific sensors and robots needed to gather reliable data in interplanetary space exploration (Vertesi & Dourish 2011). Similarly, attempts to open access to education by posting teaching materials and recorded lectures may devalue the work and expertise that goes into the thoughtful production of lectures, syllabi, and other teaching materials. The celebration of and move toward digital modes of production may undermine the practice and recognition of long-standing craft traditions, from artistic production to fishing to biological research (Pierce et al. 2011; Rosner 2012; Rosner et al. 2008). Finally, more efficient forms of information exchange have redistributed labor and eliminated whole categories of workers, from telegraph messenger boys (G. Downey 2001, 2002, 2003, 2008) to the once-vast secretarial ranks of large organizations (Hedstrom 1991).
Our point is not that these are always bad things, nor that we should abandon the language of sharing, access, or any of the other real and positive potentials of new knowledge infrastructure development. Rather, we argue that the consequences of change are rarely socially, culturally, or economically neutral. Neglecting this point has obvious and troubling normative implications, including the callousness towards distributional outcomes sometimes associated with revolutionary or reformist programs. Lack of attention to the redistribution of labor also has immediate practical implications, in the form of opposition, resistance, work-arounds, non-adoption, and a thousand other effects that act as brakes or challenges to new infrastructure development. The study and practice of knowledge infrastructures therefore require new languages of distributive justice that can map change to consequence in more ethical and effective ways.
Beyond their effects on individual actors and interests, knowledge infrastructures exert effects on the shape and possibility of knowledge in general – and once again, these effects are neither neutral nor uniformly positive. One of the most powerful and exciting effects of new knowledge infrastructures is their ability not only to answer existing questions, but to make new questions thinkable. Indeed, modern experimental science owes its very existence to the infrastructure of laboratories, scientific societies, and journals developed in the 17th century, described by Shapin and Schaffer as a set of “technologies” for “virtual witnessing” (Shapin & Schaffer 1985). Other examples include the subdisciplines of cognitive psychology and artificial intelligence, which developed directly from encounters with computers in the 1950s (Edwards 1996), and the transformations that occurred in geography, cartography, meteorology, and many other fields when global satellite observing systems became available to them. It is this shift in imagination and possibility, more than anything else, that marks the great transitions in the history of science and human knowledge, whether conceptualized after the manner of Kuhn’s (1962) “paradigms,” Lakatos’ (1970) “research programmes,” or Foucault’s (1971) “epistemes”. At such moments, we see the relationship between knowledge and the infrastructures that support it as both intimate and co-productive (Edwards 2010; Jasanoff 2004). Knowledge infrastructures do not only provide new maps to known territories – they reshape the geography itself.
This too, however, is not a neutral feature. As knowledge infrastructures shape, generate and distribute knowledge, they do so differentially, often in ways that encode and reinforce existing interests and relations of power. Evidence for this may be found in the long-standing differential attention of medical research to women’s vs. men’s cancers, or diseases of the rich vs. diseases of the poor (Epstein 2008). As Bowker (2000) points out, it also accounts for the relative overrepresentation of research on “charismatic megafauna” (e.g. cuddly pandas or expressive, human-like chimps) vs. other biota (e.g. blue-green algae) that are arguably more central to ecological processes, but fare poorly as mascots for conservationism. At scale, the effect of these choices may be an aggregate imbalance in the structure and distribution of our knowledge.
This effect goes beyond the lack of development of some areas relative to others — a point which, taken on its own, fits comfortably within what we might call the “dark continent fallacy” of knowledge and ignorance. Under this fallacy, ignorance (or non-knowledge) is simply the absence of knowledge: a site, phenomenon, or set of questions that we haven’t yet been able or thought to investigate, as darkness is the absence of light. But as recent history and sociology of science has begun to explore, the relationship between knowledge and non-knowledge may not be as simple or innocent as that. New ways of knowing and new forms of knowledge infrastructure may do more than add to existing stocks of knowledge: they may rework them, reorder our sense of value and structure in the world, write new ontologies over old ones. In the process, whole classes of questions, phenomena and forms of knowledge may be lost or rendered unthinkable. Any effective sociology of knowledge (or knowledge infrastructure) ought therefore to provide some account of its opposite: the accidental and systematic means by which non-knowledge is produced and maintained. A group of science studies scholars led by Robert Proctor has christened this effort “agnotology,” i.e., “the systematic study of ignorance” (Proctor & Schiebinger 2008). From a different direction, neurobiologist Stuart Firestein has called on science to reorient its conversations and public presentation from knowledge to ignorance (Firestein 2012).
Two examples will serve to flesh out these broad claims in specific cases. First, new modes of Internet-supported citizen science — headlined by leading initiatives such as GalaxyZoo, Zooniverse, FoldIt, Project Budburst, and the Cornell Laboratory of Ornithology’s eBird program — are often cited as harbingers of new strategies for practicing science at scale, while engaging, educating, and energizing a science-friendly public (Kelling et al. 2012). But they also raise questions and tensions around the nature of participation, expertise, and the relative authority of credentialed experts vs. lay publics in the production of scientific data and knowledge. While new Internet-supported collection, reporting, and analysis activities may help with sourcing and processing more scientific data, they may raise problems of quality control and “bad” data. For their part, lay participants may find their opportunities for input, learning, and progression beyond simple manual processing too limited, and long for opportunities to engage the research process in a more substantive way. To keep the new fountain of citizen assistance flowing, scientists will need to find effective ways of responding to these aspirations. This may mean reworking long-standing traditions of public engagement in their fields — some of which began decades or centuries ago with professional bids to separate the expertise of credentialed science from the contributions of amateur publics.
Second, the recent phenomenon of Massive Open Online Courses (MOOCs) is extending the reach of university teaching to remote participants numbering from the hundreds to the tens of thousands. In some cases, these activities are promoted by established institutions of higher education, sometimes building on other open education initiatives, such as MIT’s Open Courseware. But in others, MOOCs are offered by independent businesses, which piggyback on the prior experience, institutional reputation and/or day jobs of university professors (as in the case of the Coursera initiative, started as a business by former Stanford professors in April 2012). These efforts have attracted well-deserved attention: at their best, MOOCs might help to mitigate the exclusivity and expense of high-quality post-secondary education. At the same time, it is at best unclear how well MOOCs or similar forms can communicate knowledge to students in the absence of the face-to-face group experiences and systematic educational approaches universities have traditionally supplied. In the meantime, researchers, educators, and university administrators are scrambling to figure out their positions with respect to this emerging element of knowledge infrastructure, which raises major new questions and tensions. How should faculty-produced teaching tools be credited reputationally and rewarded financially? How should adjunct and part-time instructors working in MOOCs be compensated? What happens when universities and for-profit businesses compete for the same pool of remote students, untethered from classrooms located in a particular place? How can we evaluate student work effectively and accurately at enormous scales? Is it possible to safeguard against cheating and misappropriation of others’ work? All of these questions tie into future judgments about the value and limits of distinct institutional brands, including how official credit and degrees can be meaningfully conferred.
Facing the redistributions of authority, influence, and power that come with changing knowledge infrastructures requires new approaches on many fronts and scales. We close this section with three possibilities that seemed particularly promising to workshop participants:
Approach this problem as a design opportunity. The issue of sharing knowledge across different social worlds and conflicting conceptual frameworks has been extensively treated in the history and sociology of science and technology, leading to such widely used ideas as “boundary objects,” “trading zones,” and “actor networks,” as well as to a well-developed understanding of how such sharing works in practice (Callon & Latour 1981; Galison 1996, 1997; Latour 1983, 2004, 2005; Latour & Woolgar 1979; Star & Griesemer 1989; Star & Ruhleder 1996). Similarly, the history and sociology of standards has produced considerable insight into how large communities of heterogeneous stakeholders come to (often rough) agreement on shared norms, practices, and technical systems (Bartky 1989; Blanchette 2012; Busch 2011; Egyedi 2001; Fujimura 1992; Hanseth et al. 2006; Russell 2006; Sundberg 2011). A design community versed in these literatures might transform such concepts into design principles and practices — but doing so would require deliberate efforts to “scale up” the generally lower-level focus of design thinking (Le Dantec & DiSalvo 2013; Monteiro et al. 2012; Pipek & Wulf 2009).
Create a professional cadre at the interface between scientists and software developers. Too often, neither side understands the vocabulary, work practices, or products of the other well enough to communicate effectively. Scientific software is too frequently a byproduct of “real” scientific work, with code written by scientists who often do not even know about, let alone apply, best practices for software work (Clune & Rood 2011; Easterbrook & Johns 2009; Howison & Herbsleb 2011; Ribes et al. 2012). Meanwhile, software developers are too quick to construct “ontologies” — a term of art in their social world — that can rapidly diverge from scientific usage. Professionals who understand domain science, software best practices, and the issues of social interfacing would be enormously beneficial in almost any area of knowledge. One possible source of such a cadre is the growing iSchool movement, where curricula and research on “data science” are rapidly emerging.
Use the participatory design movement as a model for work practices. Participatory design began when Scandinavian countries required that new IT introduced into the workplace be developed in consultation with the workers who would use it. Workers could not always effectively express their IT needs and concerns, while software developers often failed to fully understand workers’ processes. A flourishing design community has developed that sits between the two (Kensing & Blomberg 1998; Muller 2007). We need the same function for the development of knowledge infrastructures (Shilton et al. 2008).