Images: Royal Ontario Museum

I recently volunteered to work in Philip Beesley’s studio for a handful of days, helping with fabrication for an upcoming Philip Beesley Architects installation, commissioned for a tour beginning in Vienna in October. The evening of my first day, I was invited to “Le Metabolism: Transforming Design on Film,” an artist talk at the Royal Ontario Museum that included a screening of the film “Le Metabolism” about the making of the ROM Dome Dress, access to the two exhibits (“Philip Beesley: Transforming Space” and “Iris van Herpen: Transforming Fashion”), and a private reception.

To provide further context: in the studio, I spent several days fabricating the long acrylic conical spikes for the larger globes, and the smaller star/globe pieces in acrylic and metal (the Noosphere structures), which were laser cut, heat formed, and meticulously assembled with tiny fasteners (joiners, tubing and acrylic nails). Twenty small acrylic globes were needed for the upcoming exhibit, along with several larger and similarly sized ones fabricated from metal. Each small globe took one person an entire day to fabricate; medium-sized globes took half a day.

Not only was it insightful to see how a studio at this scale manages its workflow and spaces for the development of international touring exhibitions, but it was also exciting to see fixtures in the exhibit similar to those I had spent the day fabricating in the studio.


Note: Since writing this, I asked how the machine learning changed the nature of interactions throughout the exhibit, but there was no engineer in the Toronto studio at the time, so the staff could not confirm. I did have a brief conversation with Philip Beesley, however, who confirmed that the curiosity-based algorithm is programmed to predict behaviour and test those predictions.

environmental futures

Walking through, the exhibit feels like a futuristic fairytale forest and a fantastical celestial environment. The “Philip Beesley: Transforming Space” exhibition presents two sculptural environments, called Aegis and Noosphere, made of 3D-printed formations embedded with AI: “Hovering canopies, tangled thickets, and soaring clouds created from 3D fabricated forms embedded with artificial intelligence that can learn, adapt and even show curiosity as it evolves.” ¹ The installation is a responsive, immersive space that blurs boundaries, encouraging visitors to reimagine how beauty, biodiversity and technology might intermingle. ²

While the use of motion sensors was apparent, it was not immediately clear that artificial intelligence was used, nor how the machine learning adjusted to ongoing interaction. The forest installation (Aegis), however, did feel as though its canopy was breathing in its continual shifting. Very little of the technology underpinning the exhibit was described in the texts and literature provided in and around the exhibit (presumably to avoid demystifying the experience). After the visit, I felt compelled to research how artificial intelligence was used and how the interactions were constructed, but what I uncovered was far too involved and intricate to detail here. I will, however, attempt to illustrate how the components appear to work together in the Aegis installation.

Sphere-shaped skeletons emulate the bone structures of birds; their breathing pores (identified by feather-like fronds) curl up when a wire is heated by an electrical current triggered by movement. Glass vessels with LEDs also light up when motion sensors are triggered. These, among other elements (sounds, for example), are all connected. Some motion sensors are programmed to respond to human movement, while others respond to movement of the artificial environment itself. In either case, triggering a sensor sets off a chain reaction among whichever structures are interconnected (presumably this varies throughout the canopy). An additional AI layer is programmed with a curiosity-based algorithm that causes the system to constantly search for and identify new patterns of behaviour. ¹ ³
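To make the chain-reaction idea concrete, here is a minimal sketch of how such a network might be wired, under my own assumptions: each “node” stands in for one cluster (a motion sensor plus its local frond wire, LED vessel and speaker), linked to neighbouring clusters so that a trigger ripples outward and fades. The names and decay values are purely hypothetical; this is not the studio’s actual control code.

```python
class Node:
    """One hypothetical cluster in the canopy: a motion sensor wired to
    local actuators (frond wire, LED vessel, speaker) and to neighbours."""

    def __init__(self, name, neighbours=None):
        self.name = name
        self.neighbours = neighbours or []  # interconnected structures

    def trigger(self, intensity, depth=0):
        # Fire local actuators, then pass a weakened signal to neighbours,
        # so the reaction spreads and fades through the canopy.
        if intensity < 0.1:
            return
        self.actuate(intensity, depth)
        for n in self.neighbours:
            n.trigger(intensity * 0.5, depth + 1)

    def actuate(self, intensity, depth):
        indent = "  " * depth
        print(f"{indent}{self.name}: heat frond wire, pulse LED, "
              f"play tone (intensity {intensity:.2f})")

# Hypothetical wiring: a sensor at node A cascades to B and C, then D.
d = Node("D")
b = Node("B", [d])
c = Node("C")
a = Node("A", [b, c])

a.trigger(1.0)  # a visitor walks past node A's motion sensor
```

Varying the wiring and decay from cluster to cluster would produce the uneven, breathing-like ripples the canopy seemed to exhibit.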

It remains unclear how the curiosity-based algorithm changes the chain reactions that respond to the motion sensors, although presumably it does. If so, how the installations respond to audiences on the first day of the exhibit might be, or at least feel, quite different from the last day, depending on how much the environment learns over the duration of the exhibit.
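Given Beesley’s description of an algorithm that predicts behaviour and tests those predictions, one common pattern for such a loop is prediction-error-driven curiosity: the system keeps a running prediction per sensor, compares it with what actually happens, and amplifies responses where it is still “surprised.” The sketch below is purely illustrative; the variable names, learning rates and update rule are all my assumptions, not the installation’s implementation.

```python
import random

# Illustrative curiosity loop (all names and constants are assumptions):
# predict each sensor's activity, measure the prediction error, and
# adjust how strongly that sensor's chain reaction fires.

predictions = {}    # sensor_id -> predicted activity level (0..1)
response_gain = {}  # sensor_id -> how strongly the chain reaction fires

def step(sensor_id, observed):
    predicted = predictions.get(sensor_id, 0.5)
    error = abs(observed - predicted)  # "surprise"
    # Test the prediction: nudge it toward what actually happened.
    predictions[sensor_id] = predicted + 0.2 * (observed - predicted)
    # Curiosity: amplify responses where behaviour is still surprising,
    # dampen them where it has become predictable.
    gain = response_gain.get(sensor_id, 1.0)
    response_gain[sensor_id] = max(0.2, min(2.0, gain + 0.5 * (error - 0.1)))
    return response_gain[sensor_id]

# Simulate a week of visitors at one sensor: the gain settles as the
# predictions improve, then would re-awaken if behaviour changed.
for day in range(7):
    observed = random.random()
    print(f"day {day}: gain {step('sensor_A', observed):.2f}")
```

Under a scheme like this, an exhibit would indeed respond differently on its last day than on its first, which is exactly the open question above.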

In his article “Transforming Space: Architect Philip Beesley unpacks the science behind his current Royal Ontario Museum Exhibition,” Beesley describes how Noosphere’s AI elements are similarly constructed and how they mesh within the installation. He says it “resembles a resilient central nervous system that can support varying forces and shifting motions responding to viewers’ movements with patterns of light, vibration and multichannel sound. It connects chemistry, artificial intelligence, and an immersive soundscape to create a living piece of architecture.” ¹ ²

Beesley’s approach to structures and space falls within the emerging field of responsive architecture, in which he invites the audience to experience what architecture may feel like in the future. In developing “Transforming Space,” he collaborated with Waterloo-based Living Architecture Systems Group and Amsterdam-based 4DSOUND. ²

works cited

Beesley, Philip. “Transforming Space: Architect Philip Beesley unpacks the science behind his current Royal Ontario Museum Exhibition.” Canadian Architect: The National Review of Design and Practice. Aug 14, 2018. Web. Sep 12, 2018.

Canadian Architect. “Transforming Space: Philip Beesley exhibit coming to ROM.” Canadian Architect: The National Review of Design and Practice. Apr 24, 2018. Web. Sep 12, 2018.

N.A. “Aegis/Noosphere – Transforming Space, Toronto Canada 2018.” Philip Beesley Architects Inc. Web. Sep 23, 2018.