Food Whisper

Food Whisper is an experimental design for an alternative food experience. It uses food as a medium for carrying data, paired with a specialized bone-conduction utensil that allows people to access audio data privately and immersively. The design explores the overlap between the nutrition we digest for physical needs and the digital information we take in as part of our daily routine. (Ongoing project)

- Patent pending -

Concept Design

This project began with an article about DNA digital data storage that I read over breakfast one day. It described an emerging technology that lets scientists store digital data such as images, movies, and audio in DNA sequences: the data is encoded in the bases A, T, C, and G instead of 0s and 1s. I was fascinated by this idea and instantly asked myself: What if the food we eat contained data? How might that change the food experience we are used to?
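The core idea of that article can be sketched in a few lines: each DNA base carries two bits. The mapping below is only illustrative; real DNA storage schemes add error correction and avoid long runs of identical bases.

```python
# Illustrative sketch: binary data mapped to DNA bases, 2 bits per base.
# Real DNA data storage pipelines add error-correcting codes and sequence
# constraints; this shows only the core encoding idea.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA-like string of A/C/G/T."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    """Invert encode(): A/C/G/T back to bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(encode(b"Hi"))  # → CAGACGGC
```

Round-tripping any byte string through `encode` and `decode` returns it unchanged, which is the property a storage medium needs.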

With these questions in mind, I started to consider how to interpret this concept through design. Even if we literally ate food that carried digital data in its DNA sequences, there would be no way to understand that data by eating it. So is there any way to create a multi-sensory food experience that lets us perceive another layer of information? And what kind of scenario could that be?


Inspired by precedents such as the BrainPort V100 and Eye Candy, which provide electro-tactile stimulation on the tongue to allow blind people to "see," as well as James Auger's conceptual Audio Tooth Implant, I decided to use bone conduction in the utensil to deliver the message. While people eat, they can perceive audio data without any additional audio device, which not only adds delight to dining but also creates an immersive, private experience. The mechanism has three steps:

1. Encode: The shape/pattern of the food is generated from an audio file. The food is then 3D printed or specially manufactured.

2. Decode: A wireless mini camera attached to the utensil captures the food's features. The image is processed and decoded back into audio in the mobile application using computer vision.

3. Deliver: The audio file is sent back to the utensil over a wireless connection and played through bone conduction.
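The "Encode" step above could work in many ways; one minimal sketch is to reduce the audio to an amplitude envelope and map each window to a ridge height arranged around a ring on the food's surface, which a camera could later read back. All names and parameters here are hypothetical; a real pipeline would start from a decoded audio file and drive a 3D printer or mold.

```python
# Hypothetical sketch of the Encode step: audio amplitude envelope mapped
# to ridge heights spaced evenly around a ring (e.g. on a cookie's rim).
import math

def amplitude_envelope(samples, n_ridges):
    """Split samples into n_ridges windows and take the peak of each."""
    window = max(1, len(samples) // n_ridges)
    return [max(abs(s) for s in samples[i:i + window])
            for i in range(0, window * n_ridges, window)]

def ring_pattern(samples, n_ridges=32, base_r=20.0, max_h=3.0):
    """Return (x, y, height) points: one ridge per window, evenly spaced
    on a circle of radius base_r (units arbitrary, say millimetres)."""
    env = amplitude_envelope(samples, n_ridges)
    peak = max(env) or 1.0
    points = []
    for k, amp in enumerate(env):
        theta = 2 * math.pi * k / n_ridges
        points.append((base_r * math.cos(theta),
                       base_r * math.sin(theta),
                       max_h * amp / peak))  # normalised ridge height
    return points

# Stand-in "audio": one second of a 440 Hz tone with a decaying envelope.
tone = [math.exp(-t / 4000) * math.sin(2 * math.pi * 440 * t / 44100)
        for t in range(44100)]
pattern = ring_pattern(tone)
```

The Decode step would then be the inverse: locate the ring in the camera image, measure each ridge, and reconstruct the envelope for the app to resynthesize or match against a stored audio file.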