Anti-surveillance
May 2021


Class Project, advisor: Sung Jang
Solo Project


“Can Trust be Designed? If so, what is the language of trustworthy design?”

As data technology advances, concerns about surveillance and privacy grow. People often mistrust the intangible feedback they get from screens; unable to fully understand how their devices work, they find the devices suspicious.

This project explores how trust can be designed by identifying design elements that communicate reliability. The goal is to create a sense of assurance, ensuring users feel their devices aren’t “listening” or “watching” when they don’t want them to.




Impact:
People fear surveillance when they use smart speakers.


Problem:
When it comes to privacy, intangible interaction alone does not feel reliable.







Goal:
Trust is based on Knowing.


How might we design a trustworthy smart speaker that clearly communicates its status?

Explore how products explain their functions or status to users.






Ideation for how to visualize the status of sensors such as the microphone and camera.








Concept 01. True Representation


The lens and speaker holes clearly represent their functions. Physically blocking these spots gives the user certainty about the device's status, a physical state that cannot be changed remotely.
















Concept 02. Abstraction

This concept still relies on covering the sensors, even though it does not physically block the speaker holes. Sliders provide a visual cue of the device's status.















Concept 03. Metaphor

People often anthropomorphize AI assistants, which makes them feel familiar and useful but can also evoke fear. That fear frequently stems from concerns about personal privacy, especially with AI speakers.

The design uses the tilt of the speaker's top, reminiscent of someone wearing a hat, to symbolize attention. When the LED lights are not visible, users can be assured that the device is inactive and that the AI speaker is not "watching" or "listening."




©Shinkyung Do
shinkyung.do8@gmail.com