World Upstream
Tags: Post-Humanism, More-Than-Human Community, Emergence, Digital Simulation, AI, Symbiosis, 3D Animation, Speculative Fiction, Post-Capitalism, Machine-Nature Interaction
World Upstream, Cezar Mocan, 2023. Digital Simulation. Infinite Duration.
In "World Upstream," an emergent, more-than-human community is in a perpetual process of reclaiming a decaying hydroelectric dam and transforming it into a site for leisure. The simulated protagonists—a sentient poplar tree, a group of quadruplets, and an AI-powered Dyson vacuum, among others—co-exist at a never-ending picnic upstream of the dam, tending to their individual and collective needs by engaging in small, mundane, social, anti-productive behaviors as a way of rewilding the surroundings of a soon-to-be-obsolete piece of infrastructure. The dam is a quiet but consistent presence in the scene.
As a past-its-prime technological marvel that was once at the forefront of cultural discourse during the 20th-century nation-building era, it acts as a metaphorical device for speculating on the future of technologies currently at their hype peak. At the same time, it invites reflection on our affective response to technologically altered landscapes. What becomes of our definitions of “nature” when a concrete monolith is placed at its center, or when a myriad of artificially intelligent beings become integral to its processes?
This project chooses to take for granted a future where AI entities exist in the world in embodied ways. Their umwelt, however, is mediated by the landscape rather than the server farm and rooted in the complex truths, dangers, and histories the landscape holds. In doing so, "World Upstream" aims to imagine a genre painting for the late 21st century, a small, interdependent fiction where different types of intelligence share a less hierarchical, more porous world. "World Upstream" is a real-time simulation built in Unreal Engine.
Artist Interview
Q: In this new edition of the UAAD online magazine, we're exploring the theme of "[Matrix] of the [Not-Yet]." How would you interpret these two words, and how do you see your work aligning with the concepts of [Matrix] and [Not-Yet]?
I relate to the "Matrix" element in its sense as a complex network of interconnections, as well as in its technical capacity as a foundational element used to structure data and represent a version of the world that is low resolution, but also fluid, dynamic, and with a multitude of interstitial spaces to explore.
My work often lives in these interstitial spaces. For instance, in projects like "World Upstream," emergent behaviors create a matrix of interactions and relationships that unfold over time. These proto-narratives exist in a state of constant becoming, never fixed, always evolving. The characters in my simulations operate within a network of competing desires and influences, and their actions and interactions hint at futures that are possible but not predetermined, inviting viewers to imagine different outcomes and alternate realities.
As for the "Not-Yet," it aligns with my interest in slow temporality and the notion of potential. By designing systems where narratives evolve without a predetermined endpoint, I create spaces where the future is perpetually in flux and challenge the goal-oriented paradigms often associated with technology.
Q: We are very interested in the trajectory of your creative practices and their connection to the theme. Could you provide us with a little more information about your background?
I have a background in computer programming, which I have been practicing since an early age, and I remember being fascinated by the idea of artificial intelligence and autonomous systems from the very beginning. At some point during my university studies, I started seeing code as an incredibly powerful material for crafting stories and aesthetic experiences, and I began slowly diverging from the product-focused path of the traditional tech world. At the time, I was finding a lot of inspiration in net art (artists such as Olia Lialina or JODI) and started making work that used the web browser as an artistic medium—often small interventions that played against the accelerationism of the "default" internet, circling around nostalgia, slowness, repetition, and emotional connection.
My medium has shifted over the past few years. I still use computer code as a material, but I now create autonomous software — zero-player computer games built using game engines, which use generative techniques or artificial intelligence to allow micro-stories to unfold over time, without ever repeating themselves. Early in my journey with game engines, I was interested in interactivity, but over time the "user" became smaller and smaller, until they disappeared completely. Both ends of the interaction now involve more or less complex forms of artificial intelligence.
The early impulses that drove me towards nostalgia, slowness, and repetition are still present, and they show up in my work in conjunction with a more recent interest in the natural landscape and the built infrastructures that inhabit it. I’m interested in how these assemblages mediate human perception and reasoning at an emotional level—the moments when utility turns into affective memory.
Q: The creators participating in this magazine work across various mediums, including moving images, interactive installations, music composition, etc. What factors influence your choice of medium for your works?
In "Arcadia Inc.," a past project that proposes synthetic nature imagery as a context-free alternative to "real" photographs—for images that go to work, images that need to carry other meanings than the ones they were born with—a few virtual beings survey a 4x4km landscape set up in a game engine and take photographs. Each virtual being is driven by a rudimentary, rule-based AI. The rules come from historical tropes relating to modalities of human engagement with the landscape: Ansel A.I. is programmed to always walk west, and Claudia L. always seeks the highest point in her surroundings in order to photograph nature from above.
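The rule-based agents described above can be sketched roughly as follows. This is a minimal illustration of the idea, not the project's actual code (which runs inside a game engine); the class names, grid movement, and toy terrain are my own assumptions.

```python
# Hypothetical sketch of rule-based photographer agents in the spirit of
# "Arcadia Inc." Each agent follows a single historical trope as a rule.

class Agent:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

class AnselAI(Agent):
    def step(self, terrain):
        # Rule: always walk west (decreasing x), photographing along the way.
        self.x -= 1

class ClaudiaL(Agent):
    def step(self, terrain):
        # Rule: move toward the highest neighboring point,
        # so as to photograph nature from above.
        neighbors = [(self.x + dx, self.y + dy)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        self.x, self.y = max(neighbors, key=lambda p: terrain(*p))

# A toy terrain: a single hill peaking at (0, 0).
terrain = lambda x, y: -(x**2 + y**2)

ansel = AnselAI("Ansel A.I.", 5, 5)
claudia = ClaudiaL("Claudia L.", 5, 5)
for _ in range(10):
    ansel.step(terrain)
    claudia.step(terrain)

print(ansel.x)                 # walked 10 steps west: 5 - 10 = -5
print((claudia.x, claudia.y))  # climbed the hill and stopped at (0, 0)
```

The point of the sketch is how little each rule contains: the richness of the resulting photographs comes from the landscape the rule is run against, not from the rule itself.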
In "World Upstream," a more recent simulation work centering on a more-than-human community reclaiming a decaying hydroelectric dam, I moved to a less prescriptive way of letting an autonomous story unfold. The characters—humans, animals, inanimate objects—are all constantly trying to satisfy a given set of internal desires through interactions with each other and the environment. The result is a significantly stronger emergence effect. Relinquishing some of the authorial control I exercised in "Arcadia Inc." was key to allowing this simulation to perform small amounts of learning and tap into its combinatorial richness.
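The desire-satisfaction loop described above can be sketched as a simple utility-style agent: urgencies drift upward over time, and each tick the character acts on whichever desire is most pressing. The desire names, numbers, and actions here are illustrative assumptions, not values from "World Upstream."

```python
import random

# Minimal sketch of a desire-driven character: not the project's actual
# code, just the shape of the mechanism it describes.
random.seed(7)

class Character:
    def __init__(self, name, desires):
        self.name = name
        self.desires = desires  # desire -> urgency, grows over time

    def step(self, actions):
        # Urgencies drift upward, then the character acts on the strongest.
        for d in self.desires:
            self.desires[d] += random.uniform(0.0, 0.2)
        pressing = max(self.desires, key=self.desires.get)
        actions[pressing](self)       # perform the satisfying action
        self.desires[pressing] = 0.0  # acting resets that desire
        return pressing

log = []
actions = {
    "rest":      lambda c: log.append(f"{c.name} naps in the grass"),
    "company":   lambda c: log.append(f"{c.name} joins the picnic blanket"),
    "nostalgia": lambda c: log.append(f"{c.name} lingers by the dam wall"),
}

poplar = Character("sentient poplar",
                   {"rest": 0.5, "company": 0.1, "nostalgia": 0.3})
for _ in range(5):
    poplar.step(actions)

print(len(log))  # five ticks, five actions
```

With several characters whose actions also perturb each other's desires, behavior stops being reducible to any single rule, which is where the stronger emergence effect comes from.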
I’m interested in how proto-narratives can emerge from semi-controlled random walks within the possibility space of an artificially intelligent system, as a way of understanding the present moment in the history of technological development and also as an investigation into how meaning is created. Without falling into the trap of assigning artificial intelligence sentience or other qualities it doesn’t possess, I do try to acknowledge the fact that it represents a different type of being in the world and try to learn from/with it. If we can talk about such a thing as the subjectivity of an AI, I want to know what it looks, sounds, matrix multiplies, parses, and learns like, on its own terms.
Real-time simulation as a medium, in conjunction with programming and game engines as tools, provides me with a great set of affordances (and constraints) for exploring these interests.
Q: How does your work reflect or actively engage with the cultural and social dynamics of your community or the communities you interact with? Are there elements in your art that seek to bridge, disrupt, or transform these dynamics?
The reason I chose to center "World Upstream" on a hydroelectric dam is the tension between the fact that hydropower is a renewable source of energy and the fact that building the facilities for obtaining it often comes hand in hand with displacement: of ecosystems, communities, memories, and so on. When I add a sociological dimension—the fact that infrastructure is always more than just utility; it also acts as a symbol of power and progress—I find it even more interesting.
I love power lines, electric poles, and big dams in an almost child-like way. Not because of the type of living that they enable, but because I became an adult with them as part of the landscape. While I didn’t grow up there, one of my favorite places in the world is this long strip of land in the suburbs of Northern Virginia, a buffer area between two suburban neighborhoods, with these massive transmission towers growing out of unmanicured grass and a fence-less concrete basketball court at one end. You can hear the buzzing of the high-voltage lines as you lie down on the concrete and look at the sky, and it’s the most "I was born in the 20th century" thing I can imagine. There’s an essay I keep coming back to, “What does landscape want?” by Larry Abramson, which argues that the natural landscape hijacks its way into our affective state, bypassing critical thinking, and I see that happening with the human-built elements I’m talking about as well.
These built elements don’t exist in a void; I’m learning that new infrastructures are very often built on top of old ones, often inheriting their geographical locations, but also biases, power structures, faults, and displacements. Two of my favorite artists address that in really beautiful ways, Evan Roth and Everest Pipkin, and they’ve taught me that relying on new technology alone to break these cycles is not enough. In fact, it’s quite the opposite; new technology will continue to perpetuate existing flows of power unless it’s very intentionally designed not to.
During the research phase of "World Upstream," I was reading parts of Anna Tsing’s "Arts of Living on a Damaged Planet," which introduces a re-framing of the "ghost" as a past that could have been, as ecosystems and entities that are no more but live on in material or immaterial ways. It shows me that thinking about sustainability is as much about looking at the past as it is about looking at the future:
“As humans shape the landscape, we forget what was there before. Ecologists call this forgetting the ‘shifting baseline syndrome.’ Our newly shaped and ruined landscapes become the new reality. Admiring one landscape and its biological entanglements often entails forgetting many others. Forgetting, in itself, remakes landscapes, as we privilege some assemblages over others. Yet, ghosts remind us.” (excerpt from "Arts of Living on a Damaged Planet")
Within the realm of technological progress, I think we need more emphasis on the more-than-human, as well as a push towards the slow, the fair, the relational. I don’t think I believe in stopping, but in being slow. The "World Upstream" simulation is frozen in time: even though things are happening, in some ways the speed of passing time is set to zero, which is a utopia or a tyranny, depending on how you look at it. The picnic goes on forever, yet time doesn’t pass. Is that a model for sustainability, its practical impossibility aside? Probably not, because we’re also in a moment of high urgency to stop the damage we’re doing to our planet.
There’s a naive optimism in the piece—maybe because I’m a pessimist IRL when it comes to this topic—the real world is literally burning, and yet somehow we’re able to fix that and live happily ever after at this more-than-human picnic. The characters in the piece act by trying to fulfill a pre-defined set of internal desires, one of them being nostalgia, which in some ways does mean acknowledging the past I’m talking about, the ghosts, but also implies an unpredictability that defies rationality and notions of “good” and “bad.” It is an unpredictability I find in myself as well. Through it, I see the entities that are missing from that picnic becoming as important as those who are present.
Q: What real-world strategies or methodologies do you employ in your art practice to manifest your visions of the future? How do these tactics serve as forms of resistance or intervention within the current socio-political landscape?
See previous question.
Q: How do you hope your work impacts its viewers or participants, particularly in terms of rethinking potential futures or alternate realities? Who do you perceive as your audience?
When it comes to showing work, all I can ask from the viewer is a slice of their time and attention. I try to avoid making prescriptive work that conveys one singular idea I believe in; it’s more about setting up a certain future, possible or impossible, and indirectly asking the viewer one question: how would you feel if this was the world we lived in? The pacing in my simulations is generally slow, which I hope is conducive to the emergence of more follow-up questions in the viewer’s mind as the simulations evolve. I often look at history (of technology, image-making, etc.) as a way to better understand the present, and this opinionated understanding becomes a starting point for imagining the futures that appear in my work. I’m hoping this engagement with different points on the timeline of technological progress becomes a way of contextualizing the current moment, as well as a primer for zooming out of it.
Q: As a creator, what do you see as the threats or uncertainties we will face in the coming decade?
The metaphors big tech uses to talk about artificial intelligence are so broken and so far removed from reality because they create this perception where AI at large: 1) falls under a hierarchical paradigm (AI as an assistant to the human), 2) is on the verge of becoming incredibly powerful (AGI is X months away), and therefore 3) is bound to violently replace us (the assistant overthrows the one it serves).
Metaphors are important because the language we use to talk about a new technology reveals how we think about it, which in turn informs its future development, so it becomes a kind of self-fulfilling prophecy. And my concern is that so much capital is going to be allocated towards creating “the best AI assistant,” “disrupting productivity,” “automating automation,” and so on, without caring much for how data- and energy-hungry this technology is or who it displaces and causes harm to, let alone trying to use it at scale to tackle much more pressing problems such as climate change, or to gain insight into what “intelligence” can mean in a more-than-human context.
As a general example, it worries me that so many companies are building smart “assistants” rather than “companions,” “ecosystems,” or “critters.” I watched a Figure demo the other day: it’s an embodied AI that looks like a humanoid robot taken straight out of Hollywood sci-fi, and their branding and video editing lean into that aesthetic. It has impressive motor skills, and it broke my heart that their demo was “Hey Figure01, make me a coffee.” You could say, ok sure, it’s just a matter of language: what difference does it make whether we call it an assistant or a companion, whether we say that “it thinks” or “it computes”? But it does make a difference, because current language choices anthropomorphize AI in ways that often create distrust and fear, and they assert non-existent similarities between how a person thinks and how an AI system “thinks.”
This distrust and fear may be rooted in an implicit post-colonial master-slave paradigm, because if that is our way of relating to artificial intelligence, and the pace of progress remains as fast as it has been, it can really only mean one thing: the slave is going to outsmart the master, and the roles are going to reverse. We have these high-profile debates about AGI safety between Sam Altman-type people, which on the one hand are necessary, since safety is so important when building new technologies; but it’s also funny to observe how Silicon Valley suddenly started caring about the safety of the technology it builds, in contrast with (or maybe in addition to) the “move fast and break things” ideology of ten years ago. The doom-ist AGI safety debates are very much used to advance the narrative that the technology being built is SO powerful, SO intelligent, that we can no longer understand it or its goals. Portraying a product (or a speculative future version of it) in this way is, of course, great branding.
This gets into “technology as magic” territory. Kate Crawford has a term for it, “enchanted determinism”: presenting machine learning techniques as magical, beyond scientific understanding, and so on, which serves both to elevate the perceived strength of these systems (in the future, in potential) and to distance the creators from responsibility over what is being created (in the present). Seen through the anthropomorphized slave-now, master-in-the-future way of thinking, this becomes even more harmful.
Philosopher Yuk Hui talks about how we can change this way of thinking: how we can learn to become “within” this form of intelligence and focus our energy there, instead of on the debate over whether machines can think, or on the AGI doom discourse rooted in existential fear. He proposes three ways of changing the thinking:
1. Suspend the anthropomorphic stereotyping of machines and develop an adequate culture of prostheses. Technology should be used to realize its user’s potential instead of being their competitor.
2. Instead of mystifying machines and humanity, understand our current technical reality and its relation to diverse human realities.
3. Stop repeating the apocalyptic view of history, and liberate reason from its fateful path towards an apocalyptic end, which in turn should enable experimenting with other ways of living with this new type of non-human intelligence.
Q: What motivates you to continue creating as an artist?
I believe it’s important for artists (and writers, scientists, philosophers, etc.) to engage with new technologies as a way of providing narratives that are different from the ones put forward by the power structures that often lead the public discourse in this area.
On a personal level, it’s as simple as “I do it because I enjoy it” — working in this way challenges me to continually learn, (try to) be multidisciplinary, see and listen to the world, and take good care of myself so I can continue creating from a place of curiosity.
Q: Are there any theories, books, or artists you would like to recommend in your current areas of interest?
Yes! Reading K Allado-McDowell’s “Air Age Blueprint” was one of the first experiences that truly nudged me towards taking machine intelligence seriously. Not in the sense of attributing it sentience, but rather in making me understand that the species of stochastic parrot “it” is and the species of stochastic parrot I am might have more in common than I had originally thought. James Bridle makes a similar argument in “Ways of Being,” questioning whether “artificial” intelligence can also act as a vehicle toward better understanding the other types of intelligence, beyond humans, that we share the world with.
About the Artist
Cezar Mocan is a Lisbon-based artist and computer programmer interested in the interplay between technology and the natural landscape. Using narrative generative systems—animated videos of infinite duration, real-time simulations built in game engines, or other software—he creates worlds that recontextualize aspects of digital culture we take for granted, often in absurd ways, while investigating the power structures that mediate our relationship with technology. Drawing on media archaeology and art history, his research process traces the origins of our current thought patterns around technological progress.
Some of his past works have been exhibited at Office Impart (Berlin), Onassis ONX Studio (New York), Artemis Gallery (Lisbon), Romanian Design Week (Bucharest), and The Wrong Biennale. His real-time simulation work, Arcadia Inc., was recognized as a 2021 winner of the Lumen Prize in Art and Technology. Cezar holds a B.S. in Computer Science (2016) from Yale University and an M.P.S. in Interactive Telecommunications (2021) from New York University, where he also served as a research resident and adjunct professor.