Above I've laid out a patterned key frame from each of the four animations in our project, which I've tentatively named "Maslow and the Hierarchy". Each key frame serves as the trigger for an animation, which I've also shared below.
The first key frame shows five square layers colored red, orange, yellow, green, and blue, each covered with a different pattern. The red layer uses zig-zags. The orange repeats five dots arranged as the points of a star. The yellow uses slightly s-shaped curves. The green uses an array of dots, and the blue repeats a sinusoidal wave, rotated 45 degrees. In the foreground, a floppy white mass with eyes wraps a small circular object. In the video, we watch the white mass, Maslow, leap onto a cookie and acquire some volume. This scene represents food or sustenance, the earliest rung of Maslow's well-known hierarchy of needs.
With each key frame, the outermost square layer is shed, suggesting that we are entering deeper into the image, or higher up the pyramid, depending on your perspective. In the second scene, whose key frame captures a large monster chasing a lone Maslow, a group of Maslows coalesces into a formidable force and routs the threat, showing the next rung of the hierarchy, safety.
In the third scene, hearts hover above two Maslows locking eyes. Not long after, they join as one, sharing a moment of passion through entanglement, and when they separate at last, they reveal a smaller Maslow that jumps for joy, representing the next need, love. Finally, in the last scene, a group of Maslows, attached as a chain, breaks down the very frame of the image itself in pursuit of freedom and independence, showing us the next rung, esteem.
What does the last step, self-actualization, look like? Maybe all of these together culminate in self-actualization. Or maybe it's not a process we should illustrate for the Maslows, who have already broken away from their frames?
This project is a tactile piece of art with an animated component triggered by the patterned images. Framing it this way, we'd like to suggest that the tactile experience is not an addendum but essential.
We used the Swell machine to create these tactile graphics. On special heat-sensitive paper, we printed a modified version of a frame from each animation that emphasized both Maslow and the stage of the hierarchy, using a combination of bold black lines and inset patterns like those described above. Then we ran each page through the machine's heater, which raises the printed black areas into a tactile relief.
Initially, we planned to use Spark AR to play the animations from the trigger images. I thought it would be a more accessible option, since more people are likely to have Instagram or Facebook installed than the EyeJack app. But it quickly became apparent that Spark AR is good for face filters and not much else: a single project can only hold 4MB of assets, which might be just enough for one trigger image and its animation.
Fortunately, EyeJack is easy to use and offers exactly the functionality we needed: letting the user interact with several triggers through a single link or QR code. When testing the AR replay after printing the tactile graphics, however, we ran into a few interesting bugs that would be worth considering for future iterations of the experience.
First, the AR replay triggered effortlessly on the first two stages of the hierarchy but seemed to lag on the third. We wonder whether the bolder red and orange used in the outermost layers of Sustenance and Safety provided the contrast needed for the AR to track the image accurately, unlike the yellow used in Love.

Second, the animation for the last stage, Esteem, shakes unpredictably on replay. We suspect this has to do with the dimensions of the trigger and video for this scene in particular, 1920 x 1080, which we used to create the visual effect of "breaking" out of the frame. Since most of the width of the video is empty space, it may be difficult to track and place on a surface.

Third, we noticed that different phones have different rates of success in tracking and replaying the animations. To accommodate the various lighting conditions under which the triggers will be viewed, we added photos of the tactile prints as additional triggers for the same animations; however, these photos might only optimize the AR for the particular type of phone used to take them.
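One way to test the contrast hypothesis before the next print run would be to measure each trigger image's grayscale contrast and see whether the stages that track poorly really do score lower. Below is a minimal sketch using Pillow and NumPy; the filenames are placeholders for our trigger images, and RMS contrast is just one reasonable proxy for what the tracker actually sees.

```python
from PIL import Image
import numpy as np

def rms_contrast(path):
    """Root-mean-square contrast: std. dev. of grayscale values scaled to 0-1."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64) / 255.0
    return float(gray.std())

# Hypothetical usage with placeholder filenames for the four triggers:
# for name in ["sustenance.png", "safety.png", "love.png", "esteem.png"]:
#     print(name, round(rms_contrast(name), 3))
```

If the Love trigger scores noticeably lower than Sustenance and Safety, darkening its outermost layer (or outlining the yellow in black, as the Swell prints already do) might improve tracking.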
I wasn't enthusiastic about the AR component of this project, and if I can help it I'd like to avoid AR projects from now on (I spent enough time working on AR in my job before ITP). Overall, though, I'm quite satisfied with EyeJack and could see myself using it to experience other people's projects in the future.