Frankel, F. (2007). Image, Meaning, and Discovery. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Felice Frankel, science photographer, is a Senior Research Fellow in Harvard’s Initiative in Innovative Computing (IIC), as well as a part-time research scientist in MIT’s Center for Materials Science and Engineering. She collaborates with scientists and engineers to create images for journal submissions, presentations, and publications for general audiences, and she writes regularly on the importance of visual thinking in science and engineering.

Frankel’s position is straightforward but important. We need a “more accessible visual language” if people are to understand the difficult intricacies of science–or planning. Frankel submits that in artistic imagery the image is the artist (a trope I don’t like in general, but I’ll let it slide), whereas scientific imagery must be representative, an objective translation.

Filed under Annotated Bibliographies, Media Literacy, Minor Field, Research Fields

Manovich, L. (2007). Abstraction and Complexity. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Lev Manovich is a Professor in the Visual Arts Department at UC San Diego, Director of the Software Studies Initiative at the California Institute for Telecommunications and Information Technology, and a Professor at the European Graduate School. He teaches new media art and theory, software studies, and digital humanities. He has authored _Software Takes Command_ (2008), _Soft Cinema: Navigating the Database_ (2005), and _The Language of New Media_ (2001).

In this essay, Manovich traces two concurrent modernist movements, reduction and ‘complexification,’ from the 19th century onward. From 1860 to 1920, modern art streamlined the image, reducing it to abstraction. Likewise, physics, chemistry, and neuroscience all discovered foundational elements, deeper scientific truths. At the same time, however, and into the 20th century, Freudian psychoanalysis, quantum physics, Heisenberg’s uncertainty principle, and the like all underscored the world’s deeply complex constitution: “the sciences of complexity seem to be appropriate in a world that on all levels–political, social, economic, technical–appears to be more interconnected, more dynamic, and more complex than before” (346).

So, the big question is, how can we adequately represent this complex world? Manovich submits that software-generated

“symbolic representations . . . seem to quite accurately and at the same time poetically capture our image of the new world” (352).

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Burnett, R. (2007). Projecting Minds. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Ron Burnett, author of _Cultures of Vision: Images, Media, and the Imaginary_ (1995) and _How Images Think_ (2005), is the President of Emily Carr University of Art and Design and former Director of the Graduate Program in Communications at McGill University. He has authored over 150 published articles and book chapters and was named Educator of the Year by the Canadian New Media Association in 2005. In 2010, the French government honored him as a Chevalier of the Ordre des Arts et des Lettres.

“Interactivity then cannot be predicated on or predicted by the design of the game or any medium. The challenge . . . is not to make too many assumptions about the behaviors of players or viewers” (310).

Here Burnett unpacks the history of the “‘fabrication’ of audiences” (312) and proposes that photography and film factor heavily in this movement. In comparing photography and cinema, Burnett outlines several reasons why media art processes, and the art itself, offer much to planning.

Regarding documentation, Jean-Luc Godard often complained about photography and cinema’s close relationship and deep differences. Photography resists time, documenting single moments. Harking back to Groys, photography conveys the aesthetic, whereas cinema conveys poetics and the possibility of a life narrative. While a photograph communicates a lot of information, it cannot stand for the whole of a film.

“Projection allows audiences to visualize the effects of frames in motion” (319).

That immersive experience–and this is key–depends not just on the technological apparatuses “but also on the capacity of the user to fill in the gaps between what is there and what cannot be there” (331). Local knowledge and context matter.

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Paul, C. (2007). The Myth of Immateriality: Presenting and Preserving New Media. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

New media art has increased and improved the conventions and possibilities for exchange, collaboration, and presentation. While many call it “immaterial,” it isn’t necessarily so: algorithms constitute the work, but hardware contains those algorithms. New media art encompasses several aspects: process, time (sometimes real time), dynamism, participation, collaboration, and performance. In addition, it is “modular, variable, generative, and customizable” (253).

Those are good things. Here are some challenges (which make as much sense in planning terms as they do in Paul’s museum-specific context). New media art takes time: visitors rarely see the full work and rarely come in at the beginning, so the narrative, assuming there is one, is non-linear. In addition, museums struggle with new media art’s prescribed interactivity.

To make it work, artists, curators, and audiences share deep involvement from the project’s initiation. The artist (planner) becomes the curator, establishing parameters, a creative context, for audience agency and sometimes “public curation.” New media art can be in the gallery, locative, online, and “has the potential to broaden our understanding of artistic practice” (272).

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Poissant, L. (2007). The Passage from Material to Interface. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Louise Poissant, PhD, philosophy, is the Dean of the Faculty of Arts at Université du Québec à Montréal. She has led the Groupe de recherche en arts médiatiques since 1989 and the Centre interuniversitaire en arts médiatiques since 2001. She researches art and biotechnologies, as well as how new technologies are used in the performing arts.

“Now the renewal of art forms has materialized through a series of iconoclastic gestures, which has introduced new materials that were first borrowed from the industrial world or from everyday life and progressively from the domain of communications and technology” (229).

This search for new materials and immateriality, to Poissant, has led artists to reorganize into three camps. From the emergence of new materials we observe: (1) artists committed to sharing their view of the world and related emotions, (2) those who perceive a diverse range of roles and choose from among them, and (3) those who reorganize their practice to advance the role of the spectator to the status of co-creator in interactive works.

Language and speech are performances, actions. Per Ludwig Wittgenstein’s (1953) language-games, to speak extends beyond self-expression; it is to act. François Armengaud’s (1985) three notions of language pragmatics occur in art: (1) the act, where speaking goes beyond representation to trans- and inter-acting; (2) the context, which can further shape the discussion; and (3) the performance, which, once completed, verifies abilities.

There are six conductor interface categories in media arts; each one contributes to the conversion of viewer into participant. They have five functions, “alternatively extendible, revealing, rehabilitating, filtering, or the agent of synthetic integration” (240).

  • Sensors receive and perceive data for the spectator-artwork interactivity.
  • Recorders use binary data and allow for manipulations, sampling, etc. “Recording becomes a transferable memory, an extension of a faculty” (237).
  • Actuators are robotics that give installations some capacity to interact autonomously with their environments.
  • Transmitters close space and obviate time in telematic arts, such that the artworks themselves are located elsewhere.
  • Diffusers are the projection devices from all eras (“magic lantern to interactive high-definition television” [239]).
  • Integrators, “automaton to cyborg” (239), simulate the living.

Poissant concludes that interactive programs unable to do what they promise or ought to do must announce their shortcomings to the user at the outset. For planning, this responsibility to the user is well-taken.

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Kluszczyński, R.W. (2007). From Film to Interactive Art: Transformations in Media Arts. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Ryszard W. Kluszczyński, PhD, is Head of the Electronic Media Department and Professor of cultural media studies at Lodz University. He also teaches theory of art and media art at the Academy of Fine Arts in Lodz, and media art at the Academy of Fine Art in Poznan. He publishes on the problematics of the information society, theory of media and communication, cyberculture, and multimedia arts.

“Interactivity in art, understood as a dialogue of sorts, communications between the initiator and the artifact, occurring in real time and mutually influential, is becoming one of the essential features of contemporary culture” (p. 216).

Observing the influx of digital technologies into traditional cinema, Kluszczyński proposes two potential forms of cinema: one where telematics are used to uphold and enhance existing processes, and another obviating convention in favor of interactive cinema. So far, digital communications’ impacts on cinema have four concrete dimensions: (1) the “unreality effect” (p. 210) of electronic, digital simulations; (2) the distribution of film across multiple technologies; (3) interactive computer technologies that evoke distancing, Brechtian practices; and (4) Internet-enabled participation (e.g., MMOGs).

Kluszczyński joins many of the summer’s authors in reminding us that photographic and digital images are different. The former represents its informative reality; the latter is totally free of parameters. This distinction does not signal the end of film, but virtual reality does allow for “immersivity and telematicity” (213), and so we see film’s technological proliferation.

Each technology has its own ontology. Television is about transmission, cinema about reading, and video, digital media’s closest progenitor, is about intimacy. Video’s liberation is its potential for interaction. Video’s “proto-interactivity” (216) hints at the deeper interactive potentials in computer art. This technological transformation, however, has revealed cyberculture’s two polar tendencies. There are those who want to use interactive art within the canon of the modern aesthetic paradigm of representation, expression, and the preeminence of the artist. The resultant interactive art is not about communication but rather the intermediary relationship with the software’s creators. At the other end of the spectrum are artists who believe the work transcends conventional paradigms and bears with it a necessary rejection of representation. The artist here is a designer of contexts that the viewer then shapes.

Following Derrida (1967, 1974), Kluszczyński proposes two interactive art tendencies (echoing Couchot, 2007!). In the first, the authorial hand not only makes something “art” but injects it with predetermined meanings, thus dampening opportunities for interactivity. The second tendency frees the work from being derivative: since it is in the “primary position” (219), emphasis is on structure, and therefore “the work of art requires a different type of reception–an ‘active interpretation,’ resembling a game, promoting a transformative activity toward ‘nonfinality,’ ‘nonultimacy'” (ibid).

Interactive art is the ultimate example of the “deconstructive, postmodernist, cybercultural understanding of an artwork and of artistic communication” (220). It is not a work at all, but “open to every individual’s personal interaction and context, enmeshed in a complex, multivalent network of communication processes” (223).

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Broeckmann, A. (2007). Image, Process, Performance, Machine: Aspects of an Aesthetics of the Machine. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Andreas Broeckmann is the Artistic Director of transmediale–festival for art and digital culture berlin. He studied art history, sociology, and media studies, and in his university courses, curatorial projects, and lectures, he discusses media art, digital culture, and an aesthetics of the machinic.

Broeckmann introduces the aesthetic categories of image, execution, performance, process, and the machinic to show that digital art isn’t its own thing, not a separate aesthetic category, but is situated within art history and practice. A digital art interface is unique in that it reminds us continuously of its constitution–it is ephemeral, barely material.

Our “digital culture [is] a social environment, field of action and interaction, in which meanings, pleasures, and desires are increasingly dependent on their construction or transmission and thus on their translation by digital devices. The necessary technical abstraction that the contents have to go through is becoming a cultural condition, which has effects far beyond the actual mechanism of extrapolated signal switching” (194).

In the image category, we see that media art gives us broader parameters than the strictly visual. Now we have opportunities to examine images’ temporal structures–not just the narratives but the actual programmatic infrastructures as well.

In execution, we see that computer software is a cultural artifact. Cultural theorist Matthew Fuller distinguishes among types of “software art”: “critical software” critiques existing software programs, “social software” addresses software’s social dimensions, and “speculative software” probes the boundaries of what can be considered software. Execution projects examine the change, the process. Those images with spatiotemporal bases require a “processual approach[es], Betrachtung as an act of realization, of execution, which is itself the very moment of the aesthetic experience” (199).

With performance–the “domain of ‘live art’ . . . the non-participatory live presentation of body movements, images, and sounds” (199)–we witness the outcome of an execution. Situationism, Fluxus, intermedia, and later computers all evoke performance through the use of scripts.

Process differs from performance in that it’s “the notion of process-based yet not fully programmed sequences of events that build on one another in a non-teleological manner” (201). “Processuality in art is closely tied to the existence of communication tools” (201), and “the aesthetics of process-based art crucially implies this context–it cannot be other than relational” (202, emphasis mine).

Finally, the machinic category refers to how this art is produced, through any assemblages of apparatuses, be they mechanical or biological. Here the art’s existence is contingent upon mechanical forces outside of human control and beyond our subjective determination.

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Couchot, E. (2007). The Automatization of Figurative Techniques: Toward the Autonomous Image. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Edmond Couchot, doctor of aesthetics and visual arts, explores the relationship between art and technology, specifically between visual arts and data-processing techniques. From 1982 to 2000 he was head of the Department of Arts and Technologies of Image at the University of Paris 8. In the 1960s he created cybernetic devices requiring spectator participation and has expanded this work into digital interactive projects, which have been shown in numerous international exhibitions.

Until the advent of digital image-making, the subject’s “perceptive habitus” (182)–her “epistemic position” (ibid), in which she controls the perspective–held as the primary mode in art history. However, with digital production, wherein the images have no necessary connection with the real world, the subject’s position is no longer linear but potentially diffused among/within the network. “Therefore a new perceptive habitus is emerging” (183).

And the images themselves are interactive–the computer makes the images’ shapes, colors, and movements into “virtual semiotic objects” (183) such that the images can behave autonomously. This notion of autonomy dates back to the 1950s and concerns “emergent behavior,” the phenomenon whereby neural networks’ cognitive strategies determine original (not programmed) solutions. “Low autonomy” or “low self-organization” refers to performative changes, events that occur because of fortuitous connections not previously programmed into the system. “High self-organization” refers to performative tasks that occur because of the way the system has evolved.

Autonomy is essential to interactive artworks, and its levels correspond to levels of cybernetics and interactivity:

  • 1st cybernetics: “control and communication in animal and the machine…homeostasis information” (186)
  • 1st interactivity: “relation between man and machine by reflex model or action-reaction” (186)
  • 2nd cybernetics: “cognition, auto-organization … networks, adaptation, evolution” (186)
  • 2nd interactivity: “action/perception … embodiment, autopoiesis” (186)

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Grau, O. (2007). Remember the Phantasmagoria: Illusion Politics of the Eighteenth Century and its Multimedia Afterlife. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

“Media exerts a general influence on forms of perceiving space, objects, and time, and they are tied inextricably to the evolution of mankind’s sense faculties” (140).

Grau holds that a major problem with cultural policy is the poor understanding of audio-visual media’s beginnings. There are two views, utopian (futurist) and dystopian (poststructuralist critical theory); both are teleological, and neither seems to recognize that the phantasmagoria dates back to the 17th century. In truth, the process of merging the message/image with its medium/apparatus such that the medium is rendered invisible has a deep history. Grau believes that media technologies have done more than heighten our sensory perception through telematic processes, as McLuhan suggested; they have done so through virtual ones as well, drawing connections “with the psyche, with death, and with artificial life–with the most extreme moments of our existence” (142).

Digital art is becoming more process-based, with new interactive, telematic, and genetic imaging parameters. So what’s really new about new media isn’t so new. We continue “to generate illusionism and polysensual immersion” (154) using all contemporary means of art and science available.

Finally, a word about immersion. Grau considers it foundational to media’s development:

“Immersion can be a mentally active process; in the majority of cases . . . immersion is mental absorption initiated for the purpose of triggering a process, a change, a transition. . . . An increase in the power of suggestion appears to be an important, if not the most important, motive force driving the development of new media of illusion” (155).

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Daniels, D. (2007). Duchamp: Interface: Turing: A Hypothetical Encounter between the Bachelor Machine and the Universal Machine. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Dieter Daniels is the Director of the Ludwig Boltzmann Institute Media.Art.Research. in Linz, Austria. He initiated the Videonale Bonn in 1984, was director of the ZKM Video Library from 1992 to 1994, and has been Professor of Art History and Media Theory at the Leipzig Academy of Visual Arts.

In this paper, Daniels draws parallels between Duchamp and Alan Turing and, placing their work alongside contemporary media art, notes the prevailing confusion of cause and effect between art and media-technological innovation. After all, don’t both emerge from “models, sketches, and blueprints” (104)?

In Alan Turing’s universal machine, and in the test he proposed in his 1950 paper “Computing Machinery and Intelligence” (asking, “Can machines think?”), thinking is done for humans, but not by them. And in Duchamp’s La mariée mise à nu par ses célibataires, même [Large Glass] (1915-23), the “bachelor machine” is unfulfilled sexual urge. Both instances, the Large Glass and the Turing Test, are “specifically masculine scenarios that revolve around an insurmountable distance from the female and, as a result, install a media-technical communication as a replacement for a physical encounter” (115).

“But today the bachelor machine has left the field of art and literature far behind and instead become a motif for the omnipresent practice of media technology. The universal machine of the computer serves as a means to realize these wishes, but its capacity does not suffice to fulfill them completely, nor to replace the human counterpart” (127).

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Huhtamo, E. (2007). Twin-Touch-Test-Redux: Media Archaeological Approach to Art, Interactivity, and Tactility. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Erkki Huhtamo, media archaeologist, scholar, and curator, is Professor of Media History and Theory at UCLA’s Department of Design | Media Arts. Recent research topics include peep media, the history of the screen, and the archaeology of mobile media.

“The idea of interactivity is intimately linked with touching” (71).

“Haptic vision” refers to visual touch. Within figurative art there are two tendencies: imagery at deep distances and texture at close proximity. However, per Deleuze and Guattari (1987) and McLuhan (1964), in this essay and throughout this summer’s other readings, we know we can’t really separate the optic and haptic practices of looking: sensuous experiences inform and interact with each other to create a full “picture.” The Cartesian dualism is inappropriate.

This essay explores “the cultural, ideological, and institutional ramifications of touching artworks …. How has touching art been related with acts of touching taking place in other contexts–at work, leisure, and in ritual?” (72). Early museums encouraged their visitors to touch the works, but the practice ceased as notions of private property, access and education, and social status altered society’s relationship with objects, supervision, and preservation. At the same time, the newly minted department store stepped in to provide consumers with opportunities to touch the finery.

Also at the same time, the avant-garde railed against these new “tactiloclasms” (the express forbidding of touching art). F.T. Marinetti’s “Manifesto of Tactilism” (1921) articulated the Futurists’ attack on academia and bourgeois culture. It’s possible that Picasso’s and Braque’s use of found objects conveyed an interest in the tactile. Duchamp proclaimed that “retinal art” should be “cerebral” instead (78). Huhtamo wonders if Bicycle Wheel (1913) isn’t a “protointeractive work” and considers Duchamp and Frederick Kiesler’s Twin-Touch-Test (1943) “the most explicit experiment in tactility” (82). Feminist work with the “tactile passive body” (85) foregrounds the relationships between bodies (those of artists and sometimes participants) in the happenings, performances, and “body art” of the ’60s and ’70s.

Some contemporary interactive art is content with visual/aural feedback and implied tactile replies, but some works do give discrete “intimate touch” responses. Examples include Ken Feingold’s The Surprising Spiral (1991), Bernie Lubell’s Cheek to Cheek (1999), MIT Media Lab Tangible Media Group’s inTouch (1997-8), and Christa Sommerer and Laurent Mignonneau’s Mobile Feelings I (2001).

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Shanken, E.A. (2007). Historicizing Art and Technology: Forging a Method and Firing a Canon. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Shanken’s piece is an exhortation to integrate the nexus of art, science, and technology (AST) into the art historical canon, methodology, and historiography because, simply, people are creating, using, and adapting it. (I hold this is essential for planning, too, since all fields have experienced fundamental shifts in practice, orientation, and methodology in the information age.) The “telematic embrace” has happened everywhere, but intellectual silos and variable languages at least slow, and sometimes inhibit, intellectual sharing. Shanken’s research dispels the myth that where science and technology go, art follows.

“My research suggested that ideas emerge simultaneously in various fields and that the cross-fertilization of those ideas presupposes that an underlying context already exists in order for seeds from one field to germinate in another” (57).

For his part, Shanken devised the following (fluid) themes for his _Art and Electronic Media_ (2002):

  • coded form and electronic production: the generation of multiple images, 3D copies, high-resolution photography and printing
  • motion, light, time: following from the early 20th century inclusion of motion and, therefore, representation of art through space and time
  • networks, surveillance, culture jamming: the proliferation of telematics-enabled exchange and collaboration
  • simulations and simulacra: the former are near-enough copies of the originals, but the latter “refer often to a form of similarity particular to media culture, wherein distinctions between the original and copy become increasingly murky” (62) — so much so that the simulacra might eventually attain an authenticity once the sole preserve of the original
  • interactive contexts and electronic environments: art has always needed the viewer but here the need is acute as artists design open-ended contexts for manifold possibilities
  • bodies, surrogates, emergent systems: the design of robots to consider human nature and a post-human world; like simulacra, the distinctions between the “real” and AI blur
  • communities, collaborations, exhibitions, institutions: art involving digital media almost presupposes collaboration among “artists, scientists, and engineers, and between individuals, communities, and institutions” (63).

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Weibel, P. (2007). It Is Forbidden Not to Touch: Some Remarks on the (Forgotten Parts of the) History of Interactivity and Virtuality. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Peter Weibel has been the Chairman and CEO of the ZKM/Center for Art and Media in Karlsruhe since 1999. Prior to that he was curator at the Neue Galerie Graz, as well as artistic consultant and artistic director of Ars Electronica in Linz. In addition, he has been Professor for Visual Media Art at the Hochschule für Angewandte Kunst in Vienna and Associate Professor for Video and Digital Arts at the Center for Media Study, SUNY Buffalo.

In this piece, Weibel argues that kinetic and op art are being rediscovered, only with new applications, and that it’s in art–specifically kinetic and op art, not computers–that we find the richest interactive and virtual art interfaces. In op and kinetic art, the viewer becomes essential to the work. The illusion is not the device but the object, and in some cases viewers experience the kinetic/spatial “stereokinetic effect” (30).

Kinetic and op art are contemporaneous with the emergence of computer arts and graphics, depend on interactivity and virtuality, and bear “the rudiments of rule-based algorithmic art” (21). Algorithms are decision procedures; they have a set number of rules and instructions that lead one to a determined end. They are present in digital and electronic tools, art and non-art, and rely on two forms of interactivity: manual/mechanical (e.g. op art) and digital/electronic (e.g. new media art). There are two uses for algorithms in modern art: “intuitive application” (e.g. Fluxus) and “exact application” (e.g. computer art). “The future of digital art can be found in approaches explored by kinetic practitioners” (38).
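
Weibel’s definition of an algorithm–a fixed set of rules and instructions leading to a determined end–can be made concrete with a short sketch. The example below is my own illustration, not from the text: it uses Wolfram’s Rule 110 elementary cellular automaton (a later, computational cousin of the rule-based procedures Weibel describes) to show how a handful of deterministic rules generate a visual pattern from a single seed.

```python
# A minimal sketch of rule-based algorithmic art (my illustration, not Weibel's):
# Wolfram's elementary cellular automaton Rule 110 applies one fixed lookup rule
# to every cell, and a visual pattern emerges deterministically from one seed.

def step(cells, rule=110):
    """Apply one generation: each cell's next state is the rule bit indexed
    by its (left, self, right) neighborhood, read as a 3-bit number."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def render(width=64, generations=16, rule=110):
    """Return the evolving pattern as lines of text, one line per generation."""
    cells = [0] * width
    cells[width // 2] = 1  # single seed cell
    lines = []
    for _ in range(generations):
        lines.append("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)
    return lines

if __name__ == "__main__":
    print("\n".join(render()))
```

Every run yields the identical pattern: the result is fully determined by the rule and the seed, which is close to what Weibel calls the “exact application” of algorithms, as opposed to the “intuitive application” of Fluxus-style instruction pieces.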

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields

Grau, O. (2007). Introduction. In _Media Art Histories_, O. Grau, ed. Cambridge and London: The MIT Press.

Oliver Grau, Professor for Bildwissenschaft and Dean of the Department for Cultural Studies at Danube University Krems, researches the history of media art, immersion, and emotion, as well as the history, notion, and culture of telepresence, genetic art, and artificial intelligence. He is also head of the German Science Foundation’s Immersive Art project. In 2000 this team developed the first international Database of Virtual Art.

This book subsumes “the history of media art within the interdisciplinary and intercultural contexts of the histories of art” (1). His and his collaborators’ express goal is to widen the scope of art history scholarship to include digital art and its concomitantly diverse practitioners. Grau separates the book into four sections: “Origins: Evolution versus Revolution,” “Machine–Media–Exhibition,” “Pop Meets Science,” and “Image Science.” As with many edited volumes, certain essays speak louder, more relevant truths than others. That is especially the case here, given that my objective is linking media arts practices to planning, so not all readings will be summarized.

Filed under Annotated Bibliographies, Media Arts, Minor Field, Research Fields