Grasshopper needs more real estate. In some cases, it needs a different location and orientation altogether. We’ve noticed that it’s very difficult to stand around and talk in groups about a Grasshopper definition the way you might stand around and talk about a physical model. Humans leverage all sorts of cognitive hardware when speaking and gesturing that we otherwise do not when passively gazing at a distant image. As opposed to the rigid separation of attention and communication that comes with PowerPoint, embedding the task space within the communication space tends to lead to more consistent and fluid collaboration. It stands to reason that a spatial scripting interface like Grasshopper would benefit from being more prominently situated in the workspace. Grasshopper needs more physical and mental real estate.
We’ve been addressing this problem by moving our Grasshopper definitions to a table-top display, tracking an IR LED light pen for interaction on the canvas. The canvas can take up the entire table-top while the linked geometry can be projected on a nearby wall. We’ve only begun setting up the equipment, but our early tests are promising.
In our first tests, we had trouble with ceiling-mounted projectors, mainly because you constantly occlude the image with your arms, hands, and pens. Instead, we decided to take the “low road” with this interface…no expensive plasma screens or Microsoft Surfaces…just some mirrors, glass, projectors, and a Wiimote. Here’s how it works…
A workstation runs a dual-view desktop. The desktop with the Grasshopper definition is routed to a projector sitting on the floor under a table. The projector casts its image onto a large mirror mounted at 45 degrees under the table, and the reflected image lands on a big piece of frosted glass that forms the work surface. You can spill coffee on the table and not feel too bad about it.
The second desktop (the one with the Rhino perspective viewport) is cast on a nearby wall from a ceiling-mounted projector. A wireless keyboard is connected to the workstation for text entry, and a 3Dconnexion SpaceNavigator sits atop the table for controlling the perspective view on the wall.
Interaction with the Grasshopper canvas uses a Wiimote tracking an IR LED light pen. The Wiimote sits on top of the projector under the table, tracking the reflected image of the IR light coming from the tip of the pen. To make the pen, we hollowed out a dry-erase marker and soldered an IR LED to a switch and an AA battery. The IR pen works just like a mouse: single- and double-clicks behave exactly the same way. Admittedly, this works best with the zoomable interface in the newer versions of Grasshopper.
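The core trick in this kind of Wiimote-as-tracker setup (popularized by Johnny Lee’s Wiimote Whiteboard) is calibration: the Wiimote’s IR camera reports blob positions in its own sensor space, which must be mapped onto desktop pixel coordinates before the pen can drive the cursor. Below is a minimal sketch of that mapping, assuming a four-point perspective (homography) calibration; the function names are illustrative, not from any actual library, and the driver glue that reads the Wiimote and moves the OS cursor is omitted.

```python
# Sketch: map Wiimote IR camera coordinates to screen coordinates via a
# homography fitted to four calibration points. Hypothetical code, not the
# actual driver used in the setup described above.

def gauss_solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with
    partial pivoting (pure Python, no dependencies)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def solve_homography(src, dst):
    """Fit the 3x3 homography H taking four src points (IR camera space)
    to four dst points (screen space), with H[2][2] fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Two rows of the standard 8x8 system per point correspondence.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = gauss_solve(A, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def ir_to_screen(H, x, y):
    """Map one IR blob position through H to desktop pixel coordinates."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

In use, you would tap the pen on four on-screen targets to collect the `src`/`dst` correspondences once, then push every subsequent IR report through `ir_to_screen` and hand the result to the OS cursor. The perspective model is what absorbs the keystoning introduced by the mirror and the off-axis projector.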
The setup is surprisingly usable. The pen-based interaction has its strengths: ease of setup, explicit points of entry to the interface, standard mouse-like gestures…but this is just a first pass. We would like to get this setup working with more natural (and pen-less) multitouch interaction, perhaps using the new Microsoft Kinect. Grasshopper is well suited to multitouch interaction. When the canvas is large enough, the buttons are big enough for fingers. The point-and-drag wire system works well for editing and debugging. The tabs are large enough to reach up and grab without feeling clumsy. We can imagine how compelling it would be to move large chunks of code around with your fingers, cleaning up and reorganizing data flows, using gestures to explode and implode clusters. How this sort of “social coding” might work in groups – with multiple actors/designers working together – is well worth further exploration. But for now we can safely say: sliders look better bigger.