This week, I had fun tracking a ton of my input activity on my computer... keystrokes, mouse clicks, application usage, etc. However, I didn't feel there was enough data to make anything terribly interesting, so I chose instead to run the Most Interesting Words of the Month program on my entire Google Chat history with another person.
I found this especially fascinating from Google's perspective. Though I can recall specific conversations from some of the words, what does this look like to a third party, especially a machine?
I thought an appropriate visual translation would stick with the idea that we were using a Google service to chat... so what does Google Image Search see in these words? Is the abstraction funny? Does it still tell a story? As a test of whether this could make an interesting representation, I batch-downloaded 30 images for each of the words in the first column.
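The batch-download step could be sketched roughly like this. This is a minimal sketch, assuming you already have a list of image URLs for each word (the original scraping of Google Image Search is not shown, and the filenames and folder layout here are my own invented convention):

```python
import os
import urllib.request


def image_filename(word, index, ext="jpg"):
    """Build a predictable filename like 'whenever_03.jpg' for layout work."""
    return f"{word}_{index:02d}.{ext}"


def batch_download(word, urls, dest_dir="images", limit=30):
    """Save up to `limit` images for one word into dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    paths = []
    for i, url in enumerate(urls[:limit]):
        path = os.path.join(dest_dir, image_filename(word, i))
        urllib.request.urlretrieve(url, path)  # network fetch, one file per URL
        paths.append(path)
    return paths


if __name__ == "__main__":
    # Hypothetical URL list; in practice these would come from an image search.
    batch_download("whenever", ["https://example.com/a.jpg"])
```

Predictable filenames like `whenever_00.jpg` through `whenever_29.jpg` make it easy to drop the results into a grid layout later.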
Here's what I got from a very very rough layout for the word 'whenever'. Shakira is involved, so it's not a total failure.
As you can imagine, even with a lot of automation, this is a time-consuming process. If I decide to continue, I'd like to experiment with layout and indexing (How much of the chat do I include? Is each page just referenced in the back of the book? Would this make an interesting coffee table book?).
Another visual experiment I did involved taking screen grabs of my desktop every minute for a full day. I'm mainly interested in this as a means of visualizing productivity, but also as a way to see what sorts of colors we are exposed to throughout the course of the day. Are most of the websites I visit white? What does this look like as an abstracted video? Again, I thought it only made sense to translate this visual tracking into something that would also exist in the same environment. So, I generated an abstract screensaver from my computer's activity on Feb. 21st.
Here is the video:
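The color-reduction step behind the abstraction could be sketched like this. A minimal sketch, assuming each minute's screen grab has already been captured (e.g. with a scheduled platform tool such as macOS's `screencapture`) and decoded into a list of (r, g, b) pixel tuples; the function names are my own:

```python
def average_color(pixels):
    """Average a list of (r, g, b) tuples into one representative color."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return (r, g, b)


def day_palette(frames):
    """Reduce each minute's screenshot (a pixel list) to a single color,
    yielding one stripe per minute for an abstract timeline or video."""
    return [average_color(frame) for frame in frames]


if __name__ == "__main__":
    # Tiny stand-in frames: a mostly-white webpage vs. a dark editor window.
    white_page = [(255, 255, 255)] * 4
    dark_ide = [(30, 30, 30)] * 4
    print(day_palette([white_page, dark_ide]))
```

Rendering each of the day's 1,440 averaged colors as a full-screen frame, a few seconds apart, gives the screensaver-style video.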