Sae's and my final project, currently titled “Cave temple for training digital ninjas,” incorporated ideas and work from several of the projects we did during the semester. We were interested in using Pepper's Ghost along with live video feeds to put people in a new space. At first we wanted to do an installation in an abandoned storefront, using a video feed with effects like the video filters I had made, plus background cancellation so people could see themselves as ghosts in real time. Unfortunately, none of the real estate agents I called ever responded to my messages.
So we decided to do something smaller in scale and eventually settled on a cave. The cave was inspired by an interest in meditation and self-discovery. We planned to project people into the cave as they looked into it, so they would see themselves journey into the cave, projected on several plexiglass Pepper's Ghost screens. First, we built a prototype with three screens.
I was interested in creating something that would generate a map of movement which could then be applied to different objects. Tak gave me some code that records mouse movements into a text file and then reads them back into another sketch to animate something. I got that far. Ideally I would have liked to create different interactions between objects based on factors like proximity, which could lead to more complex narratives or choreographies. The video shows a few examples of objects moving in space: the pink objects use the recorded text file, and the green object moves in relation to the pink object, either moving away from it or following its path. It would be really cool if the movements could generate new movements, but I didn't get that far because I couldn't figure out how to append to files that had already been written to. If you know how to do that, let me know!
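For the appending problem, one possible approach is to skip Processing's createWriter() (which always overwrites) and open the file with Java's FileWriter in append mode instead. This is just a sketch in plain Java; the file name and the "x,y" line format are assumptions for illustration, not from the original recording code:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class AppendRecording {
    // Append new mouse positions to an existing recording file instead of
    // overwriting it. Passing `true` as the second argument to FileWriter
    // opens the file in append mode.
    public static void appendPoints(String path, int[][] points) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(path, true))) {
            for (int[] p : points) {
                out.println(p[0] + "," + p[1]); // same assumed "x,y" format as the recording
            }
        }
    }

    public static void main(String[] args) throws IOException {
        String path = "recording.txt";
        new File(path).delete();                    // start from a clean file for the demo
        appendPoints(path, new int[][]{{10, 20}});
        appendPoints(path, new int[][]{{30, 40}});  // second call adds lines, doesn't overwrite
    }
}
```

After the two calls, the file holds both points, so a second recording session can keep extending the same movement map rather than wiping the first one.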
Code (the parts that are supposed to write over the file are commented out):
I was playing with some Processing code to create weird effects on images, just to work out some ideas I have for the final.
I spent some time playing around with Junaio last week, mostly just using their platform to see how it worked. I added a bunch of different elements to an image tag using the Batman logo on my wallet, but couldn't get them to work at the same time.
I spent a lot of time trying to get the Processing with Android and IP Camera code to work and wasn’t particularly successful. I did have one funny success, which was capturing the IP Camera in the classroom at about three in the morning when I finally got it working, and this is what I saw:
It took me a moment to realize it was a MacBook. It was kind of creepy. It might have worked well with the idea I wanted to try, which is really more of an augmented reality idea: I wanted to take one of the IP cameras, mix it with another video, and use the movement of people in the camera (through the hallway at ITP, for example) to create the image of ghosts coming through the video. Here's a sample of the code I worked out in Processing, which tracks the changes in pixels in my webcam and adds the colors to the video that is playing. Right now it kind of looks like it is just using alpha to overlay, but I want to make it look more like crunched pixels or something. I tried to work this into my IP Camera code, but haven't gotten very far.
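The core of that pixel-change trick can be sketched as a frame difference: compare each pixel of the current webcam frame against the previous frame, and wherever it changed enough, paint the webcam pixel over the playing video. This is plain Java rather than a full Processing sketch, and the threshold value and tiny test arrays are assumptions for illustration:

```java
public class FrameDiff {
    // Composite moving pixels from the webcam onto a playing video.
    // prev/curr are consecutive webcam frames, video is the current video
    // frame, all as ARGB ints (the same packing Processing's pixels[] uses).
    public static int[] composite(int[] prev, int[] curr, int[] video, int threshold) {
        int[] out = video.clone();
        for (int i = 0; i < curr.length; i++) {
            int diff = Math.abs(brightness(curr[i]) - brightness(prev[i]));
            if (diff > threshold) {
                out[i] = curr[i]; // movement detected: the "ghost" pixel shows through
            }
        }
        return out;
    }

    // Average of the R, G, B channels of an ARGB pixel.
    static int brightness(int argb) {
        int r = (argb >> 16) & 0xFF, g = (argb >> 8) & 0xFF, b = argb & 0xFF;
        return (r + g + b) / 3;
    }

    public static void main(String[] args) {
        int[] prev  = {0xFF000000, 0xFF000000}; // dark previous frame
        int[] curr  = {0xFF000000, 0xFFFFFFFF}; // second pixel turned white (motion)
        int[] video = {0xFF112233, 0xFF112233};
        int[] result = composite(prev, curr, video, 50);
        System.out.println(Integer.toHexString(result[0])); // untouched video pixel
        System.out.println(Integer.toHexString(result[1])); // webcam pixel where motion occurred
    }
}
```

Swapping the straight pixel copy for something noisier (quantized colors, shifted channels) would be one way to get the "crunched pixels" look instead of a plain overlay.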
The code for my IP Camera is taken from the link sent out by Luca and Elena, here.
My Processing code is an adjustment to the background cancellation example from the video library:
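The gist of a background-cancellation pass like that example is: grab one frame of the empty scene as the background, then make every pixel that still matches it transparent so only the person remains. A minimal sketch in plain Java, where the tolerance value and the per-channel distance measure are assumptions for illustration:

```java
public class BackgroundCancel {
    // Keep only foreground pixels: anything within `tolerance` of the stored
    // background frame becomes fully transparent. Pixels are ARGB ints, as in
    // Processing's pixels[] array.
    public static int[] cancel(int[] background, int[] frame, int tolerance) {
        int[] out = new int[frame.length];
        for (int i = 0; i < frame.length; i++) {
            if (colorDistance(background[i], frame[i]) <= tolerance) {
                out[i] = 0x00000000;   // matches the background: fully transparent
            } else {
                out[i] = frame[i];     // foreground (a person): keep the pixel
            }
        }
        return out;
    }

    // Sum of absolute per-channel differences between two ARGB pixels.
    static int colorDistance(int a, int b) {
        int dr = Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF));
        int dg = Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF));
        int db = Math.abs((a & 0xFF) - (b & 0xFF));
        return dr + dg + db;
    }

    public static void main(String[] args) {
        int[] bg    = {0xFF202020, 0xFF202020};
        int[] frame = {0xFF202020, 0xFFCC8866}; // second pixel is a person
        int[] out   = cancel(bg, frame, 30);
        // out[0] is transparent; out[1] keeps the foreground pixel
    }
}
```

Drawing the result over another video (or a Pepper's Ghost screen) is what gives the floating-ghost effect, since only the moving person survives the pass.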
Five interesting face tracking projects.
This bug that follows you is a more dynamic, robotic version of the classic eyes following passersby from a monitor or shop window. The execution and design of the bug are pretty impressive, but it's not super compelling in terms of the interaction.