01
Concept of Cybernetics
Executing a “purposeful action,” such as picking up a book from a table, is not a simple one-way process. The brain sends commands through the neurons to the target muscles, which act to achieve the expected goal. At the same time, in any such “system,” every purposeful action involves a loop: information about the current “state of the system” at each stage is fed back to the central nervous system, which uses it to initiate the next action. This continues until the initially expected goal is reached. This characteristic, shared by living organisms and some artificial machines, is called “feedback.”
This is the core of “Cybernetics,” a field founded by Norbert Wiener as the study of control and communication in systems. In a sense, cybernetics and feedback are inseparable concepts: any system that generates and continuously studies its own feedback is taking a cybernetic approach, one that allows it to adapt to unpredictable changes. Stafford Beer cleverly combined “systems” and “cybernetics” when he said:
“When I say any system is in a controllable state, I mean it is super-stable and able to smoothly adapt to unpredictable changes. It appropriately deploys the necessary diversity within its structure.”
If the current output of a system depends solely on its current input, the system is called static; if the current output also depends on past inputs, the system is dynamic. “In dynamic systems, if the system is not in a balanced state, the output will change over time.” Cybernetics allows dynamic systems to self-regulate and self-correct without any fixed final state or predetermined goal.
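The loop described above can be made concrete with a short sketch. The following is a minimal illustration (my own, not from the article) of a cybernetic feedback loop: a controller repeatedly compares the system’s output with a goal and feeds a correction back in, so the system self-corrects toward the goal.

```python
def feedback_loop(state, goal, gain=0.5, tolerance=0.01, max_steps=100):
    """Drive `state` toward `goal` by feeding the error back as a correction."""
    for step in range(max_steps):
        error = goal - state           # compare current output with the goal
        if abs(error) < tolerance:     # close enough: the goal is reached
            return state, step
        state += gain * error          # corrective action proportional to the error
    return state, max_steps

# A dynamic system: each new state depends on the history of past inputs.
final_state, steps_taken = feedback_loop(state=0.0, goal=1.0)
print(final_state, steps_taken)
```

The loop converges not because any single step is precise, but because each step’s error is measured and fed back, which is the essence of the “feedback” Wiener described.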
Norbert Wiener
02
Cybernetics and Interactivity
03
Dual Origins: Computer Science and Art
Interactive art has two distinct origins. One is the development of participatory art forms such as performances, events, and site-specific works. The other is the technology-driven approach of artist/computer scientists such as Myron Krueger and David Rokeby, and of video artists such as Nam June Paik.
The roots of interactive art can be found in the 1960s, in the breaking down of barriers between life and art: from the dematerialization of the art object, process art, and participatory art to the Happening movement and Situationism, art and technology, kinetic art, and cybernetic art. All of these were part of a process that profoundly changed the relationship between artworks and their audiences.
*Dematerialization of the art object: a concept from conceptual art.
*Process art: art concerned with the actual act of making and with how that act is defined as the artwork itself; art viewed as a pure expression of human emotion rather than as a finished object.
*Participatory art: a way of making art in which the audience participates directly in the creation process, becoming co-creators, editors, and observers of the work.
*Happening movement: a form of performance art taking place in streets, garages, and shops, in contrast to the general exclusivity of art galleries and exhibitions.
In 1960, Joseph Carl Robnett Licklider, with an unusual background spanning engineering and the behavioral sciences, introduced the concept of human-computer symbiosis: collaborative interaction between humans and electronic machines. Rather than treating machines as mere mechanical (or computational) extensions of humans, he argued that the purpose of such (fully automated) systems is primarily “to enable humans and computers to cooperate in making decisions and controlling complex situations without rigidly relying on predetermined programs.”
In 1961, Allan Kaprow defined “Happenings” as a form of performance art taking place in streets, garages, and shops, contrasting with the general exclusivity of art galleries and exhibitions. At the same time, reactive kinetic art also developed, replacing the instructions of Happenings leaders with technological communication and pre-programmed participation.
Happenings
Myron Krueger, a pioneer of participatory, spatial interactive art, took a series of influential steps toward interactive art. His exhibitions proposed “responsive environments”: GLOWFLOW (1969), an environment whose lights glowed in response to participants’ movements through the exhibition space; METAPLAY (1970), a digital screen showing live video of the audience overlaid with computer graphics drawn remotely by the artist; PSYCHIC SPACE (1971), a program-driven environment that responded automatically to footsteps entering the room; and VIDEOPLACE (1975), which let participants in different locations interact with video imagery in unexpected ways, creating a shared visual experience. These works can be considered milestones of spatial interactive art, in which computer algorithms play a crucial role. Krueger also developed a theoretical framework for what he and others had been doing for nearly a decade, describing the responsive environment as an art form in its own right.
04
Interactivity in Spatial and Environmental Contexts
When it comes to space and cybernetics, it is necessary to mention Cedric Price’s iconic project from the early 1960s, the “Fun Palace.” Beyond incorporating the basic principles of cybernetics, Price uniquely synthesized various contemporary discourses of the time, such as information technology, game theory, and Situationism, producing a new “improvisational” architecture. The Fun Palace began as a collaboration between the architect Price and the avant-garde theater producer Joan Littlewood: the former valued the “inevitability, randomness, and uncertainty” of human environments, while the latter dreamed of a theater where people could experience “transcendence and transformation” as actors rather than spectators. The program of the Fun Palace was not fixed but temporary, adjustable according to users’ decisions, presenting itself as “ever-changing.” Unlike traditional architectural practice, the Fun Palace’s architects stated their problems “permissively,” framing them in terms of events rather than objects.
The Archigram group’s most famous works include Plug-In City and Walking City. In Plug-In City, Peter Cook proposed a city composed of permanent infrastructure and circulation networks containing temporary spaces and services that could be added or removed. By treating the entire city as a system, the proposal addressed urban issues such as population growth, transportation, and land use. Ron Herron’s Walking City consisted of massive walking structures that could serve as human settlements after a nuclear war; these structures would be able to connect with one another, or link to circulation infrastructure networks, to exchange passengers, residents, and goods.
Mark Weiser coined the term “ubiquitous computing” in 1988. He used writing, the first technology for storing spoken information over the long term, as his example, describing the products of this “literacy technology” as having a persistent presence in the background: although they do not demand active attention, the information they convey must be “clear at a glance.”
Weiser recognized that (at that time) silicon-based technology was far from this concept. He suggested that ubiquitous computers operate invisibly and non-intrusively, functioning within the context of everyday life and integrating into its structure. It is crucial to understand that the core of this concept is that, through networked devices, information will be available everywhere, as people do not place information on their devices but rather place their devices on information networks. He emphasized that the power of this concept does not come from any one of these devices but from the intersection of many of these devices.
Ubiquitous computing considers the social layer of human environments. Later, designs for embedded (rather than just portable), location-aware, positional (as opposed to universal), and adaptive (as opposed to uniform) systems emerged.
Malcolm McCullough pointed out in his book “Digital Ground”:
“When most objects are activated and linked to the network, designers must fully understand the technological field and prospects to take a stance on their design.”
A major contribution of ubiquitous computing is the change it brings to computer interfaces. McCullough argues that ubiquitous computing is far more than a portable or mobile form of computing, because it is embedded in the spaces of our lives. He advocates a new, universally location-aware computing to replace existing desktop computing. This new computing “is based on your needs and the assumptions of the objects you wish to connect, depending on your location.”
McCullough also proposed the elements that would make this new form of computing possible. These elements include microprocessors, sensors for detecting motion, communication links between devices, tags for identifying participants, and actuators for closing feedback loops. He also suggested the use of controllers, displays, location-tracking devices, and software components to complete the set of components needed for ubiquitous computing.
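McCullough’s list of components can be sketched as a toy simulation. All class and function names below are hypothetical, chosen only to illustrate how a sensor, a controller, and an actuator close a feedback loop in an everyday environment:

```python
class MotionSensor:
    """Sensing element: replays pre-recorded readings (a stand-in for real hardware)."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read(self):
        return next(self._readings)

class LightActuator:
    """Actuating element: switching the light on or off 'closes the feedback loop'."""
    def __init__(self):
        self.on = False

    def set(self, on):
        self.on = on

def run_environment(sensor, actuator, steps):
    """Controller loop: each sensed event drives an actuation in the environment."""
    history = []
    for _ in range(steps):
        actuator.set(sensor.read())   # motion detected -> light on
        history.append(actuator.on)
    return history

print(run_environment(MotionSensor([True, False, True]), LightActuator(), 3))
# -> [True, False, True]
```

Real systems would add the communication links, identification tags, and location tracking McCullough mentions, but the sensor-controller-actuator triad above is the minimal loop they all build on.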
In the 1970s, Nicholas Negroponte discussed the exciting discourses and products that had arisen in architecture and urbanism in the late 1960s and early 1970s around the dialogue between designers and machines, described in terms such as “flexible,” “adaptive,” “reactive,” “responsive,” and “operable” [architecture]. His project SEEK was a manifesto-like installation that pioneered the concept of digital composition in architecture. Negroponte distinguished two kinds of interaction: one passive and “manipulative,” i.e., “movement relative to movement”; the other responsive, in which the environment plays an active role through computational processes. He went far beyond the simple feedback loops usually described as control systems: his responsive architecture points toward artificial intelligence, since it possesses intention and contextual awareness and can dynamically alter its own goals. In his book “Soft Architecture Machines,” Negroponte proposed a model of architecture without architects. He placed architecture machines beyond mere auxiliary tools in the design process; instead, he viewed them as buildings themselves: intelligent machines, or cognitive physical environments, able to respond to the immediate needs and desires of their inhabitants.
SEEK installation at MIT Architecture Machine Group
05
Democratization of Microcomputers and Interaction Design
Creating interactive art projects, prototyping technology products, and building the embedded systems McCullough describes all require core electronics and engineering skills. Even with the simplest technologies, such as basic control mechanisms, sensors, or motors, artists and designers had to either buy ready-made control systems as consumers, hire engineers, or invest the time and money to learn the skills needed to develop solutions themselves.
However, this barrier was overcome in two steps during the first decade of the 21st century. The first came in 2001, when Processing was released as an open-source programming language and integrated development environment (IDE) aimed at the electronic art, new media art, and visual design communities. The second came in 2005, when the open-source electronics platform (microcontroller) Arduino was launched at the Interaction Design Institute Ivrea in Italy, targeting non-engineers: a low-cost, simple platform for art students wanting to create interactive electronic art projects.
Arduino Documentary
Arduino quickly became a tool for artists and designers, entering art museums and galleries, growing increasingly popular in mainstream and museum contexts. This indicates that artists and designers are embracing this new potential as tools for their art projects. Processing and Arduino have propelled the development of interactive art, design, and to some extent, architecture and urbanism. These two platforms opened a path, followed by many similar, related, and supportive software and hardware platforms such as Raspberry Pi boards, Intel Galileo boards, BeagleBoards, openFrameworks, and Pure Data.
Search-engine statistics show strong interest in these platforms. In 2009, the term Arduino appeared on 1.9 million websites; searching for “Arduino and Design” yielded 613,000 sites, while “Arduino and Art” yielded 603,000.
Many user groups use these platforms simultaneously, leading to overlapping fields of research. Tools once reserved for engineers and computer programmers are now accessible to artists, interaction designers, educators, and others, and these groups continuously work together, sharing code, materials, and techniques. These revolutionary products have democratized the tradition of interaction design and given artists and designers a new, accessible realm.
06
Interactivity and Experience Design: Similar but Different
It is crucial to distinguish interaction (interaction design and interactive art) from the roots of user experience. Although the two concepts share many similarities (in art and/or design), the primary difference is that experience design has long been a concern, while interaction design is a concept less than a century old.
The concept of interaction design has been shaped by various artists and computer scientists; it can be understood as an entanglement of computer science and art. User experience, by contrast, has been an important topic throughout the history of modern architecture and industrial design. User experience is user-centered and focused on problem-solving, while interaction design emphasizes questioning the authority of the creator/designer and the importance of the system.
Finally, in 1995, Don Norman coined the term “user experience,” drawing on the practices of companies such as Toyota and Apple and on the ideas of scholars such as Henry Dreyfuss.
07
UX Design
Today, with the rise of digital products, “user experience design” appears specifically in the form of “UX design.” UX is a controversial topic in today’s design field. Many believe that interaction design is a part of user experience design; however, these are mostly popular viewpoints from well-known product/UX designers or design agencies, and they do not take into account the actual history and development of interaction design.
I believe that user experience design is a form of experience design that refers specifically to the digital side of a given product. Interaction design, by contrast, was essentially born of digital technology and is meaningless without it. In my view, when people use the term UX design, they are emphasizing the interaction-design aspects of the user/customer experience, looking at that experience from a human-computer-interaction perspective. User experience does not necessarily involve digital technology, but in interaction design, technology is not merely one factor among many; it is constitutive. When designing a hammer, for example, user experience remains a valid concern. On the other hand, aside from a few very early examples of interactive art, it is the presence of digital technology that gives the concept of interaction its meaning. One might say that when people think of UX, they always think of some form of digital interaction design, while designers often use “user experience design” more generally and inclusively, to describe any given product regardless of its technical attributes.
References:
https://uxdesign.cc/where-did-this-interaction-come-from-a-brief-history-of-interaction-design-ebcc8c278ae7
Author:
Mahan Mehrvarz
Translator: Sweet and Sour Mushroom
To respect copyright and the translator, please contact us before reprinting.