Mobile Interactive Responsive Reflector (MIRR) – Part 1

The MIRR installation is the next development in Tech Studio’s research into responsive and adaptive systems. We have been exploring a range of network and interaction types in the form of large prototypes as a way to understand the potential for intelligent responsive systems within buildings. MIRR allows us to think through what it might mean to develop responsive building systems, such as facade, mechanical, or structural systems. Building a prototype at this scale challenges us to work through a number of practical constraints and to engage with some of the latest technology. As the tools to build smarter electromechanical systems become better and more readily available, we can engage with them through installations in order to explore how these systems might communicate, respond to external and internal inputs, and allow buildings to adapt to a range of short- and long-term conditions.

MIRR is a scale-less prototype: it could be equally valid as a building façade element, where each panel might be at the scale of a large window and serve to regulate light, glare, and views, or as a micro-texture of a building skin that regulates airflow or provides air filtration in high-pollution areas. We use sensors as a way to study how different inputs might be organized, how to prioritize their analysis, and what it might mean for a built environment to adapt to incoming data. We believe that developing building systems that can adapt and learn can lead to resilient buildings that adjust more nimbly to the changing needs of our built environment.

The full piece is 5’ x 7’. It is made of 98 see-through mirrored acrylic panels, each 7.7” x 12”. Each panel is attached to the framework with a custom-designed and fabricated bracket and is rotated by an individually controlled mini-servo motor that we have programmed to sweep through 140 degrees. The panels rotate in response to external input as well as through pre-programmed patterns.
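To make the panel logic concrete, here is a minimal Python sketch of the two behaviors described above: mapping a normalized input value onto the 140-degree sweep, and a simple pre-programmed wave pattern travelling across the grid. Only the panel count and rotation range come from the installation itself; the 7 x 14 grid layout, the sine-wave pattern, and all names are assumptions for illustration.

```python
import math

# Hedged sketch of the panel-angle logic: each of the 98 panels receives an
# angle within the 140-degree sweep, driven either by an external input
# value or by a pre-programmed wave pattern. The 7 x 14 grid is assumed.
ROWS, COLS = 7, 14        # assumed layout of the 98 panels
SWEEP_DEG = 140           # servo rotation range from the text

def input_to_angle(value):
    """Map a normalized input in [0, 1] to a servo angle in [0, SWEEP_DEG]."""
    value = max(0.0, min(1.0, value))
    return value * SWEEP_DEG

def wave_pattern(t):
    """Pre-programmed pattern: a sine wave travelling across the columns."""
    angles = {}
    for row in range(ROWS):
        for col in range(COLS):
            phase = 2 * math.pi * col / COLS
            normalized = 0.5 + 0.5 * math.sin(t + phase)
            angles[(row, col)] = input_to_angle(normalized)
    return angles
```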

The two inputs currently attached to the system are a Kinect and a panel of 98 buttons mapped one-to-one to the panels. The Kinect looks out of the window where the installation is displayed, capturing as inputs both the people walking in front of the Center for Architecture and Design and those who come up to the display window for a closer look. It registers and tracks the skeletal frames of up to two people within its field of vision. The points that define each skeletal frame serve as inputs to a Grasshopper definition, which uses a number of components, including some from the Firefly plug-in, to create angle data and send it to the servos through the microprocessors.
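The actual mapping lives in the Grasshopper definition with Firefly components, but a simplified Python stand-in can illustrate the idea: each tracked joint point influences nearby panels, with a distance-based falloff converted into a servo angle. The falloff radius, the normalized coordinate convention, and everything not named above are assumptions.

```python
import math

ROWS, COLS = 7, 14        # assumed grid, as in the sketch above
SWEEP_DEG = 140
FALLOFF = 0.35            # assumed influence radius in normalized coordinates

def panel_center(row, col):
    """Normalized (x, y) center of a panel within the installation face."""
    return ((col + 0.5) / COLS, (row + 0.5) / ROWS)

def angles_from_joints(joints):
    """joints: (x, y) skeleton points in [0, 1] for up to two tracked people."""
    angles = {}
    for row in range(ROWS):
        for col in range(COLS):
            px, py = panel_center(row, col)
            # The closest tracked joint drives this panel's rotation.
            d = min((math.hypot(px - jx, py - jy) for jx, jy in joints),
                    default=float("inf"))
            influence = max(0.0, 1.0 - d / FALLOFF)
            angles[(row, col)] = influence * SWEEP_DEG
    return angles
```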

The button panel serves as an input for people inside the exhibition space. Each button press gives the user feedback through a change in the button light’s color. The data about which button was pressed is sent to the same Grasshopper definition that interfaces with the Kinect, where it triggers a range of localized rotation patterns. There is even a secret button-press sequence that unlocks a special behavior of the system.
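A minimal sketch of how the button input might be handled, assuming the same grid addressing as above: each press triggers a localized pattern around the pressed panel, while a short history of recent presses is checked against a secret sequence. The specific sequence and return values are hypothetical; only the behaviors come from the description above.

```python
from collections import deque

SECRET = [(0, 0), (6, 13), (3, 7)]    # hypothetical unlock sequence
recent = deque(maxlen=len(SECRET))    # rolling history of recent presses

def on_button_press(row, col):
    """Dispatch a press to either the special behavior or a local pattern."""
    recent.append((row, col))
    if list(recent) == SECRET:
        return "special"              # unlock the special behavior
    return "ripple"                   # trigger a localized rotation pattern
```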
