Presence Camera — 1/7

Written on October 22, 2020

First of all, it feels like I want to use code as my medium of choice. I could imagine experimenting with the YOLO framework, doing some hardware experiments using radar, or finally exploring 3D rendering on the web.

Somehow this blog is also a first experiment, as I had never used Jekyll to create a website before and wanted to see how it works. Yet on its own it feels like too obvious a thing to use without an interesting concept behind it.

General ideas I am playing with in my head so far:

  • Controlling a lamp with a website via an ESP32
  • Rendering a small diorama in a browser using three.js
  • Using radar to create some generative art
  • Somehow making use of machine learning, probably with ml5js

Maybe the best way is to give myself a set of constraints? Something like “every day I will create one generative art piece that is driven by physical sensors”. Or just physical sensors in general.

Concept

Given the current situation with Covid-19, I find it interesting to think about how we communicate remotely, and how communication would need to change to feel more present.

So, what if a camera only rendered a sharp image when someone is close to it, making it visible to conversation partners whether one is “present” in the conversation?

Process

My first experiment is to connect an ultrasonic range sensor to an Arduino and map the measured distance to the blurriness of the camera image. To do so, I want to use the Johnny-Five API with a Node.js server.
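A minimal sketch of that idea, assuming an HC-SR04 sensor on pin 7 (both the sensor type and the pin are placeholders for the actual wiring; Johnny-Five’s HCSR04 controller also needs a Firmata build with ping support):

```js
// Johnny-Five sketch: read an ultrasonic range sensor and derive
// a blur value from the measured distance.
const { Board, Proximity } = require("johnny-five");

const board = new Board();

board.on("ready", () => {
  // HC-SR04 on pin 7 is an assumption; adjust to the actual wiring.
  const sensor = new Proximity({ controller: "HCSR04", pin: 7 });

  sensor.on("data", () => {
    // Map 0–100 cm onto 0–20 px of blur: the closer you are, the sharper the image.
    const cm = Math.min(sensor.cm, 100);
    const blur = 20 * (cm / 100);
    console.log(`distance: ${cm.toFixed(1)} cm -> blur: ${blur.toFixed(1)}px`);
  });
});
```

This only logs the value on the server; getting it into the browser would still need an extra bridge, which is exactly where this approach ran into trouble.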

After trying out some variants with the physical module, I realized that it is surprisingly hard to get the sensor values into the browser. Needing the Arduino as a translator also feels like the wrong direction, since people would then need a hardware setup to try the experiment themselves at home. That’s why I had to come up with a different method of calculating the distance.

Using a CLM tracker and p5, as in this example, I was able to calculate the distance between two facial points, getting a relative value that could be mapped onto a CSS blur filter. With this base setup in place, I tried different distances to the camera to find what feels “natural” while still forcing the user to come closer to the screen to look the other person directly in the eyes.
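Roughly, the setup looks like this (a sketch, not the exact code from the example: the point indices follow clmtrackr’s face model, where 27 and 32 are the pupils, and the mapping ranges are values I’d tune by hand):

```js
// p5 sketch: blur the webcam feed based on how close the face is,
// using the pupil distance from clmtrackr as a proxy for distance.
let capture, ctracker;

function setup() {
  noCanvas();
  capture = createCapture(VIDEO);
  capture.size(640, 480);

  ctracker = new clm.tracker();
  ctracker.init();              // default face model
  ctracker.start(capture.elt);  // track the raw <video> element
}

function draw() {
  const positions = ctracker.getCurrentPosition();
  if (positions) {
    // Points 27 and 32 are the pupils in clmtrackr's point map.
    const eyeDist = dist(
      positions[27][0], positions[27][1],
      positions[32][0], positions[32][1]
    );
    // A small pupil distance means the face is far away, so blur heavily;
    // the 40–120 px and 15–0 px ranges are hand-tuned assumptions.
    const blur = map(eyeDist, 40, 120, 15, 0, true);
    capture.style("filter", `blur(${blur}px)`);
  }
}
```

Since everything runs client-side, anyone with a webcam can try it, without the hardware setup the Arduino version would have required.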

Outcome

Here is the final camera. You can test it (it only activates once you click on it). You can find its source code here.

Reflection

This process helped me understand that even though the CLM tracker method seemed more elaborate at first glance, it was actually quicker to set up, which let me test the idea sooner. The resulting prototype is buggy, of course, yet it still shows that the concept itself works: I constantly want to move closer, because it feels odd to be blurry.

This could be a way to be more immersed in conversations, and it would also let one feel less surveilled when moving away from the camera during a call.