Automated techniques have driven new approaches to visualising and acting upon planetary crises. Leveraging the proliferation of Earth sensor systems such as satellites and drones, the rise of cloud computation capable of processing huge amounts of complex data, and the rapid advancement of neural network machine learning techniques, machine visions of the planetary promise an unprecedented capacity to transform Earth into actionable and intelligible computational artifacts and operations. Big platform projects include Amazon’s partnership with start-up Overstory (formerly 20tree.ai) to create a digital twin of Earth via automated analysis of remote imaging data, and Microsoft’s Planetary Computer, which applies its Azure cloud computing and machine learning capabilities to remote imagery and environmental data. These projects also extend to a broader, more distributed platform uptake of planetary and environmental system “goals,” such as using big data collection and analysis to achieve net zero carbon emissions, exemplified in Palantir’s “data-driven decarbonization” challenge (Palantir 2021). Large-scale platform efforts are matched by research efforts such as the US National Academies of Sciences, Engineering, and Medicine’s report A Vision for NSF Earth Sciences 2020-2030 (2020), which sets a decade-long agenda for resolving complex and critical Earth science questions by creating integrated and automated computational, modeling, data curatorial, and predictive planetary infrastructure. The scale of commercial data, infrastructural, and financial resources amassed by corporations such as Microsoft, Amazon, and Palantir has facilitated a scaling up of platform dominion, reach, and imaginary that both captures and engenders the planet as a computable “object.”
This computable planet finds an academic counterpart in the emergence of efforts to conceptualise the “planetary” as an object of theoretical inquiry. These engagements with the planetary range in (inter)disciplinary origin and critical orientation, from Bruno Latour’s (2017) science studies–inflected return to James Lovelock’s Gaia thesis, to Dipesh Chakrabarty’s (2021) historiographic effort to challenge the shibboleths of humanism, to Yuk Hui’s (2020) pursuit of the metaphysics of thinking the planetary. Without dismissing out of hand the value of such endeavours or erasing the important differences between them, they all share to varying degrees a desire to refigure both the ground and the object of theoretical inquiry toward a reconceived “planetary” whose energies, networks, ecologies, and systems seem simultaneously to resist existing frameworks yet smoothly become objects manipulable and manageable by the theorist. Amongst these thinkers, Benjamin Bratton’s work most overtly draws together the planet as both computable and theoretical object. Bratton argues that projects like Overstory or Microsoft’s Planetary Computer might be understood as the “earth layer” of “the Stack,” forming a global skin of computation and turning the planet into a materialisation of its own envisioning (2015, 88). This turn to the “planetary,” then, is not confined to corporate or state enterprises, but has become a site of theoretical formulation and contestation. As Gayatri Spivak (2015) has pointed out in an influential essay on the subject, enclosure within theory (re)exerts control over spaces, places, events, and processes that the theorist cannot otherwise tame. If the age of the atomic bomb engendered a new era in which both theory and war made the world their target (Chow 2006), then it seems all too possible that climate catastrophe will pull us further into the age of the planetary target in theory, war, computation, and capital alike.
Indeed, as Orit Halpern (2021) argues, the ideological entrenchment of “smartness” within capital and state extends to research efforts such as the Event Horizon Telescope’s global black hole data collection and analysis, which turns the planet into a medium for computation while simultaneously embedding a trajectory toward the necessity of automated “planetary intelligence.” Perhaps unsurprisingly, planetary intelligence is itself now a subject of Bratton’s (2021) focus.
These forms of “planetary” machine envisioning (and to a lesser extent their critical theorisation) tend toward an assumption that scaling “up” to the planetary also means that all planetary systems, events, and situations possess “scalability.” Further, the data- and machine learning-driven approaches deployed by the cloud computing of commercial platforms to find solutions to planetary crises and events are lauded as enablers of such scalability. Yet exactly what is meant by “scalability” in computation is both opaque and mutable; claims about the capacity to meet planetary computational crises, goals, and demands trade heavily on the term. Microsoft’s Azure, the cloud computing platform enabling projects such as the Planetary Computer, has built-in automation for scaling up and down:
Scalability is the ability of a system to handle increased load. Services covered by Azure Autoscale can scale automatically to match demand to accommodate workload. These services scale out to ensure capacity during workload peaks and return to normal automatically when the peak drops. (Microsoft Azure 2022)
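The threshold logic of this definition can be sketched in a few lines. The function below is an illustrative reconstruction, not Azure Autoscale’s actual implementation; all names and thresholds are our own assumptions:

```python
def autoscale(instances, cpu_load, min_n=2, max_n=10,
              scale_out_at=0.75, scale_in_at=0.25):
    """Illustrative threshold rule of the kind Azure Autoscale automates:
    add capacity when load peaks, shed it when the peak drops."""
    if cpu_load > scale_out_at and instances < max_n:
        return instances + 1   # scale out to ensure capacity during peaks
    if cpu_load < scale_in_at and instances > min_n:
        return instances - 1   # return toward normal when the peak drops
    return instances           # steady state
```

Even in this toy form, the rule presumes that load is measurable on a single axis and that capacity is fungible, assumptions whose planetary analogues this article goes on to question.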
Cloud computing services and operating systems such as Azure, upon which the Planetary Computer is being built, deploy an idea of scalability that assumes programs, operating systems, infrastructure, and optics are seamlessly interoperable. However, computer science has a contested history and definition of scalability, one that tracks in tandem with sociotechnical and infrastructural developments in the ongoing transformation of its technics. These include shifts from sequential computing (algorithms executed sequentially) to parallel computing (algorithms executed in tandem); from small intranetworked computers to internet computing; and from networked services to cloud computing. In this article, we pay attention to definitions of scalability in computer science and their sociotechnical application to see how “planetary” computation is supported by a claim to seamlessly “scale.” Scalability—of data, IT resources, and especially machine vision models and assemblages—then supports the promise of “oversight” at planetary scale by computer vision.
The “planetary,” we argue, has become a mode and platform for seeing Earth and beyond via computer vision seamlessly conjoined to predictive imaging of where the planet’s systems and environment are heading. This conjunction has become mutually dependent and self-reinforcing. In this article, we argue that planetary systems, crises, and events are not so easily “scaled” and that Earth is currently producing and presenting us with environments and situations incommensurate with claims for “scalability.” We ask: who else, and from where, offers diverging and incommensurably scaled images and indeed imaginaries for the planetary? While for Spivak (2015), “planetarity” is necessarily an alterity (outside our grasp, yet inhabited by us) and beyond computerization, our approach here calls instead for a pluralism of the planetary, one open to the generative potential of (certain forms of) computation.
We argue for a multiple and nonhuman conception of and for Earth images and imaginaries that supports a pluralising of the planetary. This pluralising must be multiscalar—generative of multiplicitous and diffractive ways of seeing, situating, and making sense of planetary phenomena and ecologies. Here, we think pluralism in conjunction with William James’s “pluralistic universe” (1909). His is not a relativistic pluralism, in which all perspectives can be compared and evaluated against each other and, ultimately, simply coexist on their own terms. Relativistic pluralism begins by assuming perspectives or entities as implicitly given or formed; the conceptual gesture of relativising then brings them into relation. Instead, James begins with the “mess” of relation prior to its individuation into perspectives or entities. Here, there are no clear outlines and no “pictorial nobility,” just a “turbid, muddy gothic sort of affair” (1909, 45), of relations undergoing continuous transition and change. Crucially, the pluralism of this universe is dynamic and coherent, given consistency by qualities of relations, which can be observed, felt, and made known. But it is also incomplete because it is in perpetual process: “the actual world, instead of being eternally complete…may be eternally incomplete, and at all times subject to addition or liable to loss” (James 1909, 166). We propose, then, that there is no planetary “object” that can be fully computationally observed since images of and imaginaries for the planetary are always already radically incomplete.
As a technoscientific enterprise, planetary computational images and imaginaries cannot avoid what Paul N. Edwards calls “data friction” (2010, 84). Here, the materialities of both media and Earth impede the seamless movement and exchange of data longed for in most current models of and projects for planetary computation. Instead, disjunctive syntheses are constantly generated, yet must somehow be reintegrated into the knowledge systems, datasets, and computational techniques marshalled by planetary computation initiatives. In her examination of the becoming-environmental of computation, Jennifer Gabrys shows how the distribution and integration of environmental sensing systems aim to make Earth “programmable” such that it “activates the planet and its entities as an operation space” (2016, 14), even if in doing so they produce multiple planets via their multiple datasets and contingent integrations. Our argument here builds on Gabrys by addressing the processes and consequences of scalability, which her work only gestures toward.
In what follows, we read Microsoft’s Planetary Computer and Amazon’s partnership with Overstory as exemplary of the promise of planetary optics, or the subsuming of the planetary into computational vision and machine learning analysis. From there, we turn to theoretical interventions that seek to reckon with the interconnections between visions of the planetary, computational capacities and infrastructures, and remote sensors and other digital media techniques for generating and analysing data. By way of contrast to the planetary promise of the visioneers, we then look at eccentric modes of configuring the planetary via the artwork of Tega Brain, who deploys disjunctive and nonscalable relations of climate and environment in her use of data and AI imaging techniques. We briefly gesture toward other artists, such as Rebecca Najdowski, who are also using machine learning to find ways to generate a more situated approach to planetary imaging. In spite of considerable financial, cognitive, and affective investment to entangle Earth with machine vision, we propose instead that imaging and imagining the planetary is a radically incomplete project. Drawing on Indigenous approaches to AI development via Country Centered Design (Abdilla et al. 2021) and the process philosophy of William James (1909) and others, we propose that planetary “vision” operates within a pluralistic universe of seeing, in which ongoing and radical incompleteness is core to its imaging.
Computational seeing at planetary scale now requires more than infrastructure, processing power, and data storage capacity; it requires “scalability.” Microsoft’s Planetary Computer initiative, launched in April 2020, promises “scalable sustainability” (Augspurger et al. 2021). Amazon Web Services (AWS) partnering with climate tech start-ups will “help put their various custom trained models into production and easily deploy them in a scalable way” (Amazon Science 2020). But what exactly is being promised when “scalability” is conjured for planetary computer vision and its operationalisation? As a number of computer scientists have noted, “scalability” is a mysterious quality attributed to technical systems, often invoked to impart positive attributes to them (see Hill 1990; Duboc, Rosenblum, and Wicks 2006). On the one hand, it suggests the capacity of computational systems to “scale up” to increased requirements for more computation. Clearly, dealing with monitoring systems across an entire planet might entail such scalability. On the other hand, the promise of scalability occludes the computational and human work performed across heterogeneous operations, data, and infrastructure incurred by such systems. Scalability assumes that all technical components are or can be made fully interoperable. In doing so, it elides crucial distinctions that emerge with or result from shifts in scale from, say, the localized climate system in one region of a state to Earth’s climate as sets of entangled ecologies. With respect to the vision of planetary computation delivered via the machine processing and learning of multifarious modes of imaging and sensing from satellite imaging to generative simulations, scalability comes to promise an infinitely elastic power to make sense across different objects, phenomena, systems, and activities.
Within computing, however, the term “scalability” has itself changed alongside the wider technosocial transformations of the last thirty years. Research and debate in computer science on scalable computing began during the early 1990s (see McColl 1995) when the potential for widespread parallel computing—running and executing code and tasks in tandem rather than sequentially—also seemed realisable. During this period, scalable computing meant distributed, multiprocessor, and networked memory capacity. Hence scaling computation was understood largely as a hardware standardisation and connectivity issue that would lead to software-related processes being simultaneously executed. This might take place through multiprocessing on a personal computer or via the Internet Protocol system of addressing that was taking off at a global scale during the same period. Scalable computing, in line with discourses and political economies of globalization, was aimed at global (read: US-, UK-, and European-centric infrastructural initiatives) computational reach and resourcing: “architectural convergence […] brings with it the hope that we can, over the next few years, establish scalable parallel computing as the normal form of computing, and begin to see the growth of a large and diverse global software industry” (McColl 1995, 49).
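The shift from sequential to parallel execution described above can be illustrated with a minimal sketch (in Python, whose standard library exposes a simple worker pool; note that for CPU-bound work CPython threads interleave rather than truly parallelise, so this is a schematic of the idea, not a performance claim):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# Sequential computing: each task completes before the next begins.
sequential = [square(x) for x in range(8)]

# Parallel computing: tasks are dispatched "in tandem" across workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, range(8)))

# The results agree; what "scalability" names is the hope that the
# parallel version keeps pace as the workload grows.
assert sequential == parallel
```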
Yet the advent of a global address system of resource locators (URLs), rather than simple individual computer addresses, meant that global computational scaling became intrinsically caught up with information as a key resource. In 1994, with the launch of a browsable World Wide Web, URLs became “sites” for the location, storage, and browsing of information at global scale. At the same time, the question of scale was being examined within complexity science and topological theory in work by Albert-László Barabási and Réka Albert (1999). For Barabási and Albert, large-scale networks—from the web to disease epidemics—were comparable because they shared generic and universal mechanisms for growth. For this then burgeoning domain of network science, networks developed by adding nodes to other nodes or hubs, which were already well or strongly connected. The concept of the “scale-free” network became both a way of imaging information networks such as the internet and a technique for further mining and capitalising on information. Anna Tsing warns of the transduction of scale into a verb: “scale has become a verb that requires precision; to scale well is to develop the quality called scalability, that is, the ability to expand—and expand, and expand—without rethinking basic elements” (2012, 505). Holding this tension between scalability and its basic elements, we now turn to how promissory tendencies—especially of financially and technically scaled-up platforms and corporations—inform claims to a fully scalable planetary computer vision.
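Barabási and Albert’s growth mechanism, preferential attachment, is simple enough to sketch. The following is an illustrative implementation (the function name and parameters are our own, not drawn from their paper):

```python
import random

def barabasi_albert(n, m=2, seed=0):
    """Grow a network of n nodes: each new node links to m existing
    nodes chosen with probability proportional to their degree, so
    well-connected hubs keep attracting new connections."""
    rng = random.Random(seed)
    edges = [(0, 1)]   # seed network of two linked nodes
    stubs = [0, 1]     # each node appears once per edge endpoint it holds
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choice(stubs))   # degree-proportional choice
        for t in targets:
            edges.append((new, t))
            stubs.extend([new, t])
    return edges
```

Running this yields the heavy-tailed degree distribution of a “scale-free” network: most nodes have few links while a handful of hubs accumulate many, which is precisely what made such networks attractive both for imaging the internet and for mining it.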
As scale has transformed from a descriptor of computational processing and globalised information networks to become a verb that “actions” computation itself, what “scalability” entails has become more opaque. This is especially true in cloud computing and machine learning contexts. Thus, Microsoft’s Planetary Computer promises “scaling environmental sustainability” by using its platform that harnesses “the power of the cloud” (Microsoft Planetary Computer 2023). Overstory’s splash page marketing statement promises “Realtime vegetation intelligence at scale” (2023). But what or which scale is being evoked here? We witness instead the conjunction of a machine learning-driven AI supported by cloud computing in which any scale whatsoever is deemed feasible. As Louise Amoore has argued, however, the cloud is more obfuscating than enlightening, evading human vision and its capacities: “the idea of the cloud is once more describing the advent of processes at scales that appear to transcend the observational paradigm, and exceed our capacities to see and to understand” (2020, 30). The scalability of the cloud thus becomes not simply a platform response to IT and resource management but rather exactly “the advent of processes” that enable planetary oversight unobservable by humans.
Microsoft’s Planetary Computer was launched with the promise to put “global-scale environmental monitoring capabilities in the hands of scientists, developers, and policy makers, enabling data-driven decision making” (Microsoft Innovation, n.d.). According to its corporate website, Planetary Computer will combine petabytes of global environmental data, intuitive APIs, and access to machine learning algorithms and other analytic tools backed by Microsoft’s proprietary Azure cloud computing platform (Microsoft Innovation, n.d.). An offshoot of the company’s AI for Earth program and its $1 billion commitment to be carbon negative by 2030, Planetary Computer is the brainchild of chief environmental officer Lucas Joppa. “A planetary computer will borrow from the approach of today’s internet search engines,” writes Joppa in Scientific American, “and extend beyond them in the form of a geospatial decision engine that supports queries about the environmental status of the planet, programmed with algorithms to optimize its health” (2019). Joppa’s vision proposes taking the “ubiquity of data, advances in algorithms, and access to scalable computing infrastructure” and applying them to the “natural world.” Transposed into a promotional video staged in an old-growth forest on a sparse yellow set and complete with orchestral soundtrack, Joppa’s vision is presented as a paean to the world-saving potential of computation, datafication, and algorithmic transformation at scale. Planetary Computer promises an unprecedented witnessing of complexity, rendering elusive, endlessly mutable environments not only knowable but actionable. Planetary Computer marks the convergence of big data, big tech, and big optics with climate crisis—a computational enclosure reimagined as liberation, rather than ever more complete capture.
In his classic study of the formation of the climate monitoring system, Edwards shows how the twinned dynamics of making data global (data from one place that could be analyzed in relation to data from elsewhere) and making global data (data about climatic change on a global scale) entailed significant labor, which he calls “data friction” (2010, 84). Encompassing everything from adjusting recordings based on known instrument eccentricities to the sheer effort of transcribing handwritten records into a digital spreadsheet, the challenge of data friction erodes the promise of bringing immense computational power to bear on climate and environmental analysis. Planetary Computer promises to do away with much of that friction, presenting its computational architecture and the Azure platform’s AI functions as precisely that which smoothly generates continuity and relationality via “scalability” where previously significant labor was required. In this, its promise conforms to the aesthetic smoothness of big tech, which associates seamlessness with an absence of politics and the depoliticization of technology. Yet this seamlessness is a deception: algorithmic technologies are aesthetic-political, seeking to sense and make sense of entire social, cultural, economic, and, increasingly, ecological fields.
Operating within the messianic mode of contemporary tech branding, Joppa and Microsoft’s old-growth forest video reveals the discursive framing of the Planetary Computer. To face contemporary environmental crises, “planetary scale innovation” is needed to “convert what used to be considered inconceivable amounts of data about Earth’s natural systems into actionable insights and information.” As the corporate video states, Planetary Computer will not be a “crystal ball” but rather “a global portfolio of applications connecting trillions of data points to computing power and machine learning capable of converting that all into contextualized information.” Machine “vision” here claims not to see the future per se but to “oversee” the infrastructure that allows complete connectivity across variable datasets and indeed variable modalities of the visual itself. Scalability is in fact synecdochal for the planetary, even while the necessary computational resources for such massive computation accrue and consolidate only to certain corporate and platform agents.
While the claims of corporate visionaries must of course be tempered by a healthy dose of skepticism, their ambitions and imaginaries can also be traced in the operational capacity, constituent datasets, and practical applications of the platform itself. At the time of writing, Planetary Computer is in “preview,” available on request only, but the datasets listed on its website and accompanying “Application” case studies are revealing. Twenty-two datasets are available through the platform’s API, the majority of which are multispectral satellite imagery datasets, including Landsat 8, Sentinel-2, ASTER L1T, NAIP (National Agriculture Imagery Program), Copernicus DEM, and ALOS World 3D-30m. Planetary Computer provides these at high resolution directly via its API, along with a range of other cartographic datasets about energy usage, biomass, land cover, biodiversity, and weather. Every dataset frames its information from the same top-down atmospheric perspective, but with different spectral bands, instrumentation, revisit times, wavelengths, layering, filtering, normalization, and so on. As Chris Russill points out, Earth imaging systems “relativize human vision as a mere slice of the broader EM spectrum,” with Earth imaging depending “on light recorded from sites that are uninhabitable or inaccessible to humans, at wavelengths we cannot perceive directly, traveling at speeds and in quantities we cannot handle” (2015, 231). By both augmenting and exceeding human ocular capacity, these imaging systems strengthen the “scopic mastery” through which seeing from above produces a belief in control through aerial vision (Kaplan 2018). The presentation of other datasets through similar top-down schema signals the normative authority of this scopic mastery.
Privileging Earth imaging and cartographic datasets within the Planetary Computer system potentially amplifies their already considerable capacity to shape Earth monitoring, scientific understanding, and policy response. These datasets are already witnessing apparatuses; their inclusion within a computational engine for relational abstraction opens the possibility of bringing such modes of imaging and knowing into conjunction with one another, and with other datasets, in unexpected ways. While the application of AI/ML to Landsat and other such datasets is not in itself novel, Planetary Computer uses those techniques to link Earth imaging into a range of other preexisting scientific datasets, ranging from the Global Biodiversity Information Facility to settlement-level High Resolution Electricity Access imagery to labeled flora and fauna image sets. Producing machinic relations between these datasets enables the generation of new informational assemblages that can be harnessed to various apps, projects, and purposes. It is here that the ongoing computational work of scalability is both embedded and implied but rarely seen as such outside the platform itself. Planetary Computer imagines a supercharged scientific vision, a transformative mode of seeing relations that operationalises action on the planet. But its datasets are US-centric, revealing and reinscribing the delimitations and inequities of global knowledge. While Microsoft no doubt plans for more expansive applications, the “planetary” here is weighted heavily toward the United States. For Microsoft, planetary media is an American form, realized through an aerial scopic regime and blind to the situated knowledges that resist grand claims via their material insistence on opacity and uncertainty.
In projecting its own transcendence of the messy pluralism of Earth’s ecologies and worlds, Planetary Computer runs counter to pluralistic propositions such as the “Indigenous Protocols for AI” developed by Angie Abdilla, Megan Kelleher, Rick Shaw, and Tyson Yunkaporta (2021). Their protocols for Country Centered Design of AI systems “represent a clear commitment to systemic change in a time of flux and transition, a phase shift towards a way of life that is not transhumanist or utopian, but ingeniously re-embedded in the Law of the land to ensure the future survival of our living biosphere” (Abdilla et al. 2021, 3). For Aboriginal and Torres Strait Islander people in Australia (the settler colonial context of both authors of this article), Country refers to place but also ways of living, knowing, and communicating within networks of kinship, ritual, territories, and peoples that are inseparable from land, sky, and sea. Respect for and responsibility to the relational ecologies of Country is vital to culturally appropriate knowledge making (Tynan 2021). Centring Country in the development of AI systems—including those for environmental monitoring and intervention—means much more than consulting with First Nations people. For Abdilla and coauthors, it means situating technological development within culture from the beginning, then building cultural engagement and reflexivity into an iterative approach to development: “Indigenous protocols in AI might be enacted by a continuous process of engagement, challenge, innovation and response embedded in our obligation to care for Country, and every layer of the digital stack that is built upon it” (Abdilla et al. 2021, 16). By contrast, Planetary Computer’s constitutive assumption is that data sets are inherently scalable and transferable: geospatial imagery and local biosphere data can simply be combined in iterative, mutable forms without regard for how data and AI computation might ignore or erase vital knowledges.
Planetary Computer begins with the availability and computability of datasets, rather than with relations between data and Country or place.
In the Australian context, for example, beginning with Country and Law would necessarily undo the very smooth scalability that animates the promise of Planetary Computer. While the laws of computation are bound to code (just as state law is codified), Laws in an Australian Indigenous context are inseparable from Country, from history, sovereignty, and relational kinships. Indigenous data, for instance, requires Indigenous methods of storage and analysis, not solely of collection (Yunkaporta and Moodie 2021). Because Laws are dynamic, relational, and contextual, their enactment in computational systems requires very different practices of coding and system design. For instance, Planetary Computer generating relations between Landsat geospatial imagery and data on bushfire impacts on flora and fauna would elide Laws that might relate to distinct Country and the relations that flow from it. As Abdilla et al. observe, “In Country Centred Design, you can never stand outside a system and observe or intervene—you must embrace the fact that you are part of that system” (2021, 9). Yet this exteriority is precisely the relation proposed by Planetary Computer, necessitated by its constitutive faith in scalability.
Another platform paradigm for planetary overseeing is also emerging through temporary offers to extend computer vision infrastructure to climate tech start-ups via incubator programs. One such example is Amazon Web Services’ (AWS) Startup program, in which the start-up Overstory (formerly 20tree.ai) participated in order to access AWS’s vast data object and cloud storage infrastructure for “vegetation intelligence” (Amazon Science 2020). For Amazon, this builds climate monitoring credibility alongside boosting claims to the wholesale simulation of a computer vision-enabled digital twin of Earth. For Overstory, on the other hand, total planetary oversight falls somewhat short of this grand vision: its business model has devolved mainly into assisting electrical power line companies to monitor edge vegetation that threatens power line damage or fire (Overstory 2023). As Eric Nost and Emma Colven (2022) have shown in their studies of Microsoft’s and Palantir’s engagement with climate, platforms extract value from environmental data’s scale, and use it to optimise and commodify models and datasets. In Amazon’s partnership with Overstory, a clear venture capital scenario emerges, overlaid with a concern for envisioning environmental tech.
Amazon and Overstory invest both capital and rhetoric into a planetary level of suturing plural forms of satellite imaging (hyperspectral, multispectral, and synthetic aperture radar, or SAR) using machine learning to “then turn those actionable insights into trimming recommendations for their customers to help them prevent outages and wildfires” (Pale Blue Dot 2022). Interestingly, the imaging of Earth’s terrain being produced here turns toward a mix of both optical and nonoptical satellite imaging forms. SAR uses the principles of radar detection in which a sensor or an antenna on an orbiting satellite generates electromagnetic waves to interact with the planetary surface; the amount of that same energy reflected back from Earth then provides surface terrain recordings. However, to produce high-resolution signals of, for example, vegetation around a power line, which may be quite dense and impenetrable, the sensor/antenna needs to be very large, making it difficult to mount on an orbiting satellite. The aperture or “lens” through which the antenna detects and sees in SAR is instead synthetic: “a sequence of acquisitions from a shorter antenna are combined to simulate a much larger antenna, thus providing higher resolution data” (Earth Data n.d.). Not only, then, are the various modes of satellite imaging radically different across the spectrum; one form, SAR, is neither optical nor indexical but wholly simulated.
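The gain from synthesising an aperture can be made concrete with textbook figures (the values below are illustrative assumptions for a generic C-band satellite, not Overstory’s or Amazon’s specifications). A real aperture’s along-track resolution is roughly λR/D, degrading with range R, whereas a synthesised aperture achieves roughly D/2, independent of range:

```python
# Illustrative, approximate C-band figures (assumed for this sketch).
wavelength = 0.055       # metres (C-band, ~5.5 cm)
slant_range = 700_000.0  # metres (low Earth orbit)
antenna = 12.0           # metres (physical antenna length D)

# Real-aperture azimuth resolution: lambda * R / D
real_aperture_res = wavelength * slant_range / antenna   # ~3.2 km

# Synthetic-aperture azimuth resolution: roughly D / 2,
# achieved by combining acquisitions along the flight path
synthetic_res = antenna / 2.0                            # 6 m
```

A roughly three-kilometre smear becomes a six-metre pixel: it is this wholly simulated “lens” that underwrites claims to see individual trees beside a power line from orbit.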
The materialities of the different forms of “images” used to support both the climate tech start-up Overstory and Amazon’s claims to the digital twinning capacities of its planetary computer vision are literally unseen at the machine learning stage, which operates to learn only “features” in (any) data. Rather than using a customised AI model or algorithm to learn data-specific patterns from each of these different modes of spectral imaging and simulation, these heterogeneous imaging types are instead “seen” using Amazon’s SageMaker, which the platform describes as an end-to-end cloud-based machine learning service (Amazon Science 2020). Trawling vast swathes of high-resolution datasets requires the capacities of a platform like AWS and its totalised management services for computer vision, and not simply because of the big data quantities and computational processing involved. Rather, such paradigms of planetary vision are committed to imaging itself as a nonspecific, nonlocalised, and scalable enterprise that can see through and across anything, anywhere, anytime. It is only through such an imaginary that the rough edgings of plural, optical, and nonoptical data and simulations can appear to be “seamlessly” computed, simultaneously proffering planetary digital “twinning” and accruing environmental accreditation for the platform itself.
The Promise of Envisioning the Planetary
Questions of scale implicitly and explicitly pervade recent attempts to theorise planetary computation and its modes of sensing and sense-making. New computational “vision machines” such as satellite imaging, data surveillance forms such as facial recognition together with their algorithmic techniques, and cloud infrastructures are at the core of what Bratton has called “the Stack” (2015, 5). He sketches the architecture of “stacks” as a simultaneously vertical and horizontal layering of modular containers (of energy, sensors, interfaces, users, and so on) that have the capacity to plug in and out of each other, in turn creating interdependencies. A stack is at once a material composite of such heterogeneous elements but also a model that abstracts a general logic, tending toward regulation and governance in its operativity. “The Stack” takes this logic to an all-encompassing level, both in the pursuit of all planetary information and in a bid to generate the planet as informatic (Bratton 2015, 8). What is at stake in “the Stack,” however, is not just the capacity to operate and to govern at planetary scale but to contract and expand between scales. Bratton argues that planetary vision engenders “an experience of place as one resonant scale within a much larger telescoping between local and global consolidations” (2015, 16), contrasting this with the concept of “nonplace” that characterised the sensibility of globalisation. What subtends this planetary accordion-like squeeze of local-global relations and its accompanying zoom in and out of space and time is none other than the mysterious affordances offered by scalability.
Although Bratton is careful to point out that each layer of “the Stack” generates the possibility of its own contingencies and accidents, nonetheless his model of “the Stack” as design analysis of planetary computation and governance is contoured by a rhetoric and ambition similar to the platform imaginary of Microsoft and Amazon’s projects discussed above. His envisioning of the planetary as ultimately “stackable” relies on origin stories about the architecture and protocols of networks—in particular, the internet:
It just worked to tactically glue together lots of different things at different scales into more manageable and valuable forms. The same is basically true of the Stack as an accidental megastructure. There was no one commission or council whose vision authored it (though versions of it have appeared in dreams and nightmares for centuries). Its layers “just worked” for Users and platforms to make immediate tactical gains, and the accumulation of these trillions of maneuvers terraformed the planet. (2015, 64)
What Bratton’s Stack abstraction misses are the relations to other modes of state and extra-state governance, planning, experimentation, and regulation, including via entities such as the Defense Advanced Research Projects Agency (DARPA), that enabled computation to become networked throughout the post–World War II period. Military and logistical operationalisation of computational networks has played a crucial role in planetary architectures. The US military also co-opted commercial intelligence companies to undertake early and exploratory forms of data mining—for example, for US security operations before and after the first Gulf War—which enabled both local and global computational activities to coalesce. Additionally, the adoption of TCP/IP protocols was not a straightforward story of information architecture developmental progress but can also be seen as the failure of early internet experiments in democratic governance (see Palfrey 2006). Bratton’s analysis of “the Stack” co-opts and ultimately reproduces the imaginary of smooth scalability, favoring form over messy operation(s). However, the image of “the Stack” can also be set against other attempts to think through planetarity and computation, which entail more situated and less infinitely telescopic vectors.
As Paul N. Edwards demonstrates, the origins of climatic computing can also be found in both nuclear fallout monitoring regimes and the Cold War effort to render the world computable through nuclear strike early-warning systems. Model simulations required new spatial and atmospheric data, collected through a range of sensors fitted to planes, boards, floats, and satellites (Russill 2015, 245). Much of Edwards’s work on climate data and modeling is concerned with the situated, material, and messy processes that enable computation to make epistemological claims. Normalizing historical data entails work to account for known differences in instrument sensitivity, to decipher handwritten records, and so on. To produce data that enabled computation of climate at a planetary scale, “scientists developed suites of intermediate computer models that converted heterogeneous, irregularly spaced instrument readings into complete, consistent, gridded global data sets” (2010, 188). Interconnected communications networks, shared practices, standardized measurements, and techniques for data reconstruction all depended upon computation but were also required to scale up computation itself to the planetary. Making the incomplete “computable” is necessary to develop planetary climate models, but machine vision systems such as Microsoft’s Planetary Computer risk further abstracting and disappearing that incompleteness in favor of an illusory seamlessness.
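Edwards’s description of “intermediate computer models” can be glossed with a toy sketch of the basic operation they perform: turning irregularly spaced readings into a complete grid. The station values and the inverse-distance weighting scheme below are illustrative assumptions only, not the method of any actual reanalysis system:

```python
import numpy as np

# Toy "intermediate model" (cf. Edwards 2010, 188): convert irregularly
# spaced instrument readings into a complete, consistent, gridded data set.
# Station values and the inverse-distance scheme are purely illustrative.

# (lat, lon, temperature) readings from hypothetical stations
readings = np.array([
    [-33.9, 151.2, 22.1],
    [ 51.5,  -0.1, 11.4],
    [ 64.1, -21.9,  4.2],
    [  1.3, 103.8, 27.9],
])

def to_grid(readings, lat_step=30.0, lon_step=60.0, power=2.0):
    """Fill every cell of a regular lat/lon grid by inverse-distance
    weighting of all observations, so that no cell is left empty."""
    lats = np.arange(-90 + lat_step / 2, 90, lat_step)
    lons = np.arange(-180 + lon_step / 2, 180, lon_step)
    field = np.empty((len(lats), len(lons)))
    for i, la in enumerate(lats):
        for j, lo in enumerate(lons):
            d = np.hypot(readings[:, 0] - la, readings[:, 1] - lo)
            w = 1.0 / np.maximum(d, 1e-6) ** power
            field[i, j] = np.sum(w * readings[:, 2]) / np.sum(w)
    return lats, lons, field

lats, lons, field = to_grid(readings)
print(field.shape)  # (6, 6): a complete grid, with no gaps
```

The point of the sketch is Edwards’s: the “complete, consistent” planetary dataset is manufactured by interpolation work, not simply observed.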
Along with its dependence on ocean temperature monitoring, ice core samples, and other ground-bound climate sensors, planetary computation also relies on the network of satellites, space flights, and orbital platforms that have produced the planet as an object of perception. What Paul Virilio called “Big Optics” expands the frame of what can be perceived, but it also extends perceptual capacity across the electromagnetic spectrum and reduces human vision to a narrow band of what can be apprehended (Russill 2015, 231). This kind of planetary imaging “depends on light recorded from sites that are uninhabitable or inaccessible to humans, at wavelengths we cannot perceive directly, traveling at speeds and in quantities we cannot handle” (232). Just as war was critical to the emergence of global systems of climate monitoring via carbon-sensing, so too was it inseparable from the push of American technoscience, via platforms such as the TIROS, Nimbus, and Earth Radiation Budget Experiment satellites, to produce a planetary regime of surveillance that mediated vertically between the terrestrial and the stratospheric (Parks 2018). War was also critical to the abstraction of vision, exploiting light across its spectrum to perceive, store, process, record, and image from the planetary scale down to that of the human (Russill 2017). Light also leads us toward thinking the planetary as co-composed by the micro and the cosmic: the energetic vitality of biological life existing only thanks to the cosmic condition of solar light hurtling through the void. As Russill puts it, Earth is “a medium of life because it is a medium of light” (Russill and Maddalena 2016). This mediality of the planet is thus not a product of computation but rather its necessary precondition. 
Earth does not become media through the application of machine vision and planetary computation but is already media and always in mediation: its envisioning is always inseparable from the excess and endlessness of light (Cubitt 2014), no matter the claims of state and corporate visionaries that computation is remaking the planetary as environment for computational operations such as scaling data analysis up or down. Light enables scalability—it animates the accordion-squeeze of layers in “the Stack”—but it is also in excess of scale, illuminating its artifice and necessary delimitations and boundaries.
At the same time, technical systems that sense and monitor the planet are also becoming-Earth. As monitoring assemblages, they are also environmentally modulated: in a minimal sense, the SAR antenna has to be boosted artificially in order to bridge the distance from Earth’s orbit to its terrain; in more tangible ways, camera trap imaging of wildlife requires durational monitoring and dedicated scientific auditing, changing, and repairing of the sensors in situ. As a number of more situated and materialist approaches to the technics of planetary vision have argued, media and environment dynamically co-compose. Gabrys, for example, has argued that the range of cameras used in environmental monitoring change what counts as and can be experienced as environment: “Cameras-as-sensors concresce as distinct technical objects and relations, and in the process they articulate environments and environmental operations” (Gabrys 2016, 8). Here we can begin to articulate environmental vision machines beyond a naive conception of them as recordings of “states of nature,” understanding them instead as actively reconfiguring ecological sensing. Environmental sensors are particularly important to think about in this context since they often function within distributed networks and arrays collecting carbon dioxide, temperature, and moisture data as well as optical images of phenomena. Understanding the rates of growth, decay, or deterioration of moss, fungi, or moulds, for example, requires differential analyses of the relations across and between the data collected. Such differentiation already acknowledges that phenomena and processes are multiplicitous in their extensivity and duration (intensivity); observing or knowing this data engages with tensions and diffractions rather than the data’s “scalability.”
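The kind of differential analysis gestured at here, rates of change read across heterogeneous sensor streams rather than raw states, might be sketched minimally as follows (all values, variable names, and the correlation measure are hypothetical illustrations, not data from any actual monitoring array):

```python
import numpy as np

# Illustrative sketch of differential analysis across heterogeneous sensor
# streams. All values and variable names are hypothetical.

hours = np.arange(0, 48, 6.0)  # shared time base for a 48-hour window
moisture = np.array([0.30, 0.35, 0.45, 0.50, 0.48, 0.44, 0.40, 0.38])
moss_cover = np.array([0.100, 0.102, 0.108, 0.118, 0.126, 0.130, 0.132, 0.133])

# Rates of change, not raw states, carry the growth/decay signal.
growth_rate = np.gradient(moss_cover, hours)

# A relation *between* streams: does the rate of growth track moisture?
corr = np.corrcoef(moisture, growth_rate)[0, 1]
print(round(corr, 2))
```

What matters analytically is the relation between the streams (here, a correlation between moisture and the derivative of cover), which no single reading, however high-resolution, contains on its own.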
Focusing on the specificities of the collection and display of biodiversity data, Mitchell Whitelaw and Belinda Smaill (2021) argue for an always localised set of practices in ecological data’s generation and claim to knowledge production. How and by whom data is collected, from citizen scientists to environmental experts, and the “settings” in which it is embedded—how data is medially mapped via different displays or visualisations—determine its purview. In particular, they point out that large-scale database projects such as the Atlas of Living Australia maintained by the CSIRO, Australia’s preeminent scientific institution, often follow established scientific knowledge practices of displaying a hierarchical pairing of species to their sighting/location in their taxonomies of display (Whitelaw and Smaill 2021, 84). Whitelaw and Smaill ponder the multispecies relations that may also be immanent to the data but are not often visualised. What is at stake in Gabrys’s and in Whitelaw and Smaill’s interpolation of a more dynamic and situated analysis and set of possibilities for environmental sensing and sense-making is a reenvisioning of the planetary as always already multiplicitous in its scales. Here we take these multiplying scales of planetary envisioning to pose serious modes of technical, epistemological, and ontogenetic resistance to claims for a fully scalable planetary computer optics.
Potential Planetary Pluralities
In the installation Asunder (Brain, Oliver, and Sjölén 2019), an automated environmental “manager” produces recommendations and solutions for specific regions of the planet based on machine learning assessments of satellite, climate, geological, biodiversity, and topographical data for the regions at hand. Asunder calls the bluff of industrialised, planetary-scale AI and its promise to deliver solutions to a range of complex climate and environmental problems. As Brain (2018) notes, such solutions transmute the generalised promissory rhetoric of industrial AI into the very problems produced, in part, by global computation, whose large-scale satellite and Earth-sensing data infrastructures rely on extractive technologies. In both the Microsoft Planetary Computer and the Amazon/Overstory partnership to ramp up planetary vegetation intelligence, an all too neat mirroring of cause and effect plays out, where globalised platformism becomes the automated solution to planetary crisis. Asunder calls out this tidy parallelism by literally testing what happens when automated decision-making, based on a machine learning–driven approach to solving environmental catastrophe, is given a chance to play out: “Asunder responds to a growing interest of the application of AI to critical environmental challenges by situating this approach as a literal proposition” (Brain, Oliver, and Sjölén 2019). Running computer vision algorithms on satellite imagery and feeding data collected from specific regions to an actual open source coupled-climate model—the Community Earth System Model, which conjoins datasets from atmosphere, ocean, ice, and land in the same model—Asunder generates geoengineered scenario simulations and recommendations for environmental decision-making.
Yet in Asunder, the automated manager returns unexpected solutions for its machine-learned scenarios—one of its recommendations for the Arctic region is to “re-ice” it, for example. As Brain states: “Most of the scenarios generated are economically or physically impossible or politically unpalatable for human societies” (2020). Working with AI that tries to couple together sometimes incommensurate datasets to wrangle the complexity that is climate modeling, Asunder retraces and amplifies the deficits built into predicting large-scale environmental simulations; namely, that climate modeling is already weighted toward what is palatable for a human-centred future for Earth. But what if an environmental manager were to literally act “neutrally” or “ethically” from a more-than-human starting point? Re-icing the Arctic would indeed make sense as a more-than-human environmental solution. Asunder performs an onto-epistemological decoupling—of the planet and AI from human life and goals—and asks us to envision Earth from an imaginary that opens up to incompossible environmental agendas.
Another approach to envisioning pluralistically can be found in Rebecca Najdowski’s artwork Deep Learning the Climate Emergency (2022). Using the deep learning AI StyleGAN2, Najdowski compiled training datasets from the “climate crisis” images populating social media repositories. The AI then uses these to create new synthetic images. Commercial applications of the StyleGAN AI typically pose an image and then an image “target,” which together become the synthetic basis for a new image—a recent application, for example, takes a person “X” and a person “Y,” each wearing a particular piece of clothing, to synthesise the image “X wearing Y’s clothing.” Many artistic explorations of generative adversarial networks—or GANs—have dwelt less on the predicted image output and more on the strange imagistic scapes that are generated between images and “target images.” Likewise, Najdowski’s generative video morphs across and between the image landscapes of human social media responses to climate crisis, which tend to coalesce around capturable, photographable crisis events such as the recent Australian and Californian wildfires, Arctic ice shelf collapse, drought, and so on. Yet what the AI “sees” in this work fails to deliver on the elastic telescopic planetary optics of either Bratton’s Stack or the visioneer platform projects. Instead, strange semiburned forests emerge and collapse back into raging fires as if to confound easy predictions of what a “solution” to planetary crisis might be—and how it might come to be. The forests coming from and returning to these fiery conditions in the video suggest destruction and regeneration at the same time. This oscillation is perhaps a clue to the contemporary relational entanglement of climate and “tech,” in which the latter’s extractive pull on energy and resources stokes the flames of crisis yet nonetheless hints, hauntingly, at the potential for (re)generativity.
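The mechanism Najdowski’s video exploits, generating images from interpolated points in a model’s latent space rather than blending pixels, can be sketched with a stand-in generator. The toy function below is not StyleGAN2; its weights, sizes, and names are illustrative assumptions:

```python
import numpy as np

# Stand-in for latent-space interpolation between a source and a "target"
# image. The generator below is a toy function, not StyleGAN2; its weights,
# sizes, and names are illustrative assumptions.

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))  # toy generator weights: 4-dim latent -> 4x4 image

def generate(z):
    """Map a latent vector z to a tiny 'image'."""
    return np.tanh(W @ z).reshape(4, 4)

z_source = rng.normal(size=4)  # latent behind the source image
z_target = rng.normal(size=4)  # latent behind the target image

# The in-between "scapes" are generated from interpolated latents, not by
# blending the two endpoint images pixel by pixel.
frames = [generate((1 - t) * z_source + t * z_target)
          for t in np.linspace(0, 1, 9)]

print(len(frames), frames[0].shape)
```

Because each intermediate frame passes through the generator rather than through pixel averaging, the in-between images are new “scapes” in their own right, which is what gives GAN morphs their strangeness.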
We should remember that many of the Australian native trees populating Najdowski’s dataset require fire to disperse seed and for the seed to germinate. However, the scale at which fires such as the 2019–20 Australian bushfires occurred, burning 10 percent of total Australian flora, endangered regular cycles of seed germination, levelling all vegetation and damaging soil conditions. In many ways, then, what is at stake in Deep Learning the Climate Emergency is the irreducible relational plurality immanent to our viewing of AI’s “learning” across our images of climate crisis.
Against the planetary ambition of the AI systems that works such as Asunder and Deep Learning the Climate Emergency show to be inherently incommensurate with multispecies ecologies, human societies, and human-machine entanglements alike, the Indigenous protocols for artificial intelligence developed by Abdilla, Kelleher, Shaw, and Yunkaporta seek to elaborate embedded, contextual, and cyclical practices of development that centre cultural Law and lived experiences of Country. Exposing the irreparable discontinuities of planetary computer vision—the planetary of “the Stack”—does not mean that we must reject AI altogether, or the potential for machine vision and deep learning computation to generate novel solutions to damaged ecologies and wounded worlds. But it does demand a plural, situated, and creatively incomplete conception of planetary computation. As the Indigenous AI Protocols demonstrate, we need not think of the incompossibilities that undo the planetary visions of Microsoft or Amazon as absolute wrenchings apart of human, Earth, and AI, or as foreclosing other kinds of planetary computation or aesthesias. We might instead think of these as differentiating tendencies within James’s “pluralistic universe” (see, especially, 1909, 322ff). A process-philosophy approach to the planetary such as James’s or Gabrys’s take on Alfred North Whitehead, together with the emphasis on relational ways of living and knowing offered by Indigenous Country-Centred Design, conceives each “world” (human, planet, AI) as always already relational: “each relation is one aspect, character, or function, way of its being taken, or way of its taking something else…” (James 1909, 322–23). Importantly, this potential and actual taking up and being taken up by things—humans, planetarity, computation—also means that nothing fully settles or is bounded.
Ongoing and radical incompleteness lies within the relational mesh of such a pluralistic universe, as David Lapoujade suggests: “In effect, to be pluralist consists in allowing relations to be laid out in all directions” (2020, 36). The re-icing of the Arctic is not absurd, then, but a real potentiality for the planet; it tends toward another kind of artificial scenario for “taking up…something else” in dealing with climate crisis via a more “turbid” planetary vision machine in which a clear, knowable outline is not a given. For the sticking point with a seemingly infinitely scalable vision machine is that the worlds it envisions or imagines are stretched to fit its scope without tending to the ways they fundamentally alter its dominion. Instead, we need to begin with a sense of a planetary imaginary whose parameters are always already incomplete. And it is this multiplicity of other “little worlds,” absurd machine scenarios, and relations to Country and to Earth that is now surging to the surface, urgently insisting that we pay attention.
We acknowledge Bidjigal and Gadigal people who are the traditional owners of the unceded lands on which we live and work, and the First Nations artists, scholars, engineers, and activists working toward alternative visions of machine vision. We also thank Emily Parsons-Lord for research support on this article and the project from which it draws, which is supported by funding from the Faculty of Arts, Design and Architecture, University of New South Wales.
Nothing to declare.
Discussion about scale, strength, and networks also took off in internet research, especially around the ethnographies of social media’s “weak ties.” Nicole B. Ellison, Charles Steinfield, and Cliff Lampe undertook a series of Facebook studies around questions of strong and weak ties beginning with the 2007 article “The Benefits of Facebook ‘Friends’: Social Capital and College Students’ Use of Online Social Network Sites,” Journal of Computer-Mediated Communication 12(4): 1143–68. Ned Rossiter and Geert Lovink critiqued the notions of “ties” and of the “social” in social media, situating the issue of time scales in relation to organized networks. See G. Lovink and N. Rossiter, “Organised Networks: Weak Ties to Strong Links,” The Occupied Times, 2013, http://theoccupiedtimes.org/?p=12358. However, questions of social networks and scale go beyond the purview of this article, as does the rich literature on scale and climate change.
Joppa’s promotional video has since been removed from Microsoft’s home for the Planetary Computer and the main Microsoft YouTube channel, but is still available via Microsoft Taiwan: https://www.youtube.com/watch?v=9bG8E1kPPSs.
A burgeoning body of research addresses hegemonic platform computer vision in relation to climate-related issues, including Ruth Machen and Eric Nost, “Thinking Algorithmically: The Making of Hegemonic Knowledge in Climate Governance,” Transactions of the Institute of British Geographers 46 (2021): 555–69; Eric Nost and Jenny Elaine Goldstein, “A Political Ecology of Data,” Environment and Planning E 5(1) (2022): 3–17; and an interesting analysis of the problems of downscaled images of climate catastrophe enabled via interactive zooming tools in Birgit Schneider and Lynda Walsh, “The Politics of Zoom: Problems with Downscaling Climate Visualizations,” Geo 6(1) (2019), https://doi.org/10.1002/geo2.70.
Further discussion of this application, “TryOnGAN,” can be found in Kathleen M. Lewis et al., “TryOnGAN: Body-Aware Try-On via Layered Interpolation,” 2021. The GitHub site for TryOnGAN is https://tryongan.github.io/tryongan/.