NeuroTech: Speculating the Future of Hearing Accessibility
By Naphat Pansailom
What if hearing accessibility wasn’t just about amplifying sound, but about redefining how we communicate? In this project, we explored the future of sensory augmentation—imagining a radical shift in how individuals with hearing impairments might engage with the world.

Developed as part of an exploration into radical accessibility solutions, NeuroTech proposes a helmet-based device that decodes brain signals into text or speech, enabling seamless, direct interaction without requiring sign language or written communication.
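To make the core idea more tangible, the sketch below imagines one possible decoding step: band-power features are extracted from short EEG-like epochs and matched against per-word templates recorded during a calibration phase. Every detail here, from the channel count and sampling rate to the tiny vocabulary and nearest-template classifier, is a hypothetical placeholder rather than the project's actual implementation.

```python
import numpy as np

# Hypothetical parameters: 8 electrodes, 1-second epochs at 256 Hz,
# and a tiny illustrative vocabulary. None of these reflect a real device.
N_CHANNELS, FS, EPOCH_LEN = 8, 256, 256
VOCAB = ["hello", "yes", "no", "thank you"]
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch):
    """Summarise one epoch (channels x samples) as average power per frequency band."""
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / FS)
    power = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
    feats = [power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)  # shape: (n_channels * n_bands,)

def decode(epoch, templates):
    """Pick the vocabulary word whose calibrated template is closest to this epoch."""
    feats = band_power_features(epoch)
    distances = {word: np.linalg.norm(feats - tmpl) for word, tmpl in templates.items()}
    return min(distances, key=distances.get)

# Synthetic stand-ins for calibrated per-word templates and one live epoch.
rng = np.random.default_rng(0)
templates = {w: rng.normal(size=N_CHANNELS * len(BANDS)) for w in VOCAB}
live_epoch = rng.normal(size=(N_CHANNELS, EPOCH_LEN))
print(decode(live_epoch, templates))  # prints one of the vocabulary words
```

A working system would of course need artefact rejection, per-user calibration, and far richer language models, but even this toy pipeline shows where the design questions sit: what is sensed, how it is decoded, and how the result is voiced back to a conversation partner.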
The Challenge: Rethinking Communication Beyond Sound
Globally, nearly 20% of the population lives with some degree of hearing loss. While sign language, hearing aids, and cochlear implants have improved accessibility, these solutions still rely on external devices, social adaptation, or complex learning processes.

Looking Back: Learning from Radical Precedents
To challenge existing paradigms, we first analysed three historical precedents that took unconventional approaches to hearing assistance:
🔹 Ear Trumpet – A purely acoustic solution, manually controlled to amplify sound.
🔹 Akouphone – An early electronic hearing aid that introduced powered amplification.
🔹 Manual Alphabet System – A foundational visual language shaping modern sign communication.
These designs, while radical in their time, were limited by physical constraints—focusing either on amplification or visual language without integrating multimodal experiences.



Looking Forward: Technology-Driven Innovations
We then examined three contemporary radical designs that use technology to expand sensory perception:
🔸 SoundShirt – A wearable garment that translates sound into vibrations, allowing users to physically feel music (see the sketch after this list).
🔸 Ontenna Hairpin – A small, clip-on device that converts sound waves into discreet tactile feedback.
🔸 Hearing Bot – A smart assistant that translates between sign language and spoken language in real time.
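To ground the sensory-substitution idea behind designs like the SoundShirt, here is a minimal sketch of a sound-to-vibration mapping: the spectrum of a short audio frame is split into a few bands, and each band's energy drives one haptic actuator. The sample rate, band edges, actuator count, and 0-255 intensity scale are illustrative assumptions, not details of the actual garment.

```python
import numpy as np

FS = 44100                                 # assumed audio sample rate (Hz)
FRAME = 1024                               # samples per analysis frame
BAND_EDGES = [60, 250, 1000, 4000, 12000]  # illustrative band boundaries (Hz)

def frame_to_haptics(frame, fs=FS):
    """Map one audio frame to per-band actuator intensities in the range 0-255."""
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    magnitude = np.abs(np.fft.rfft(frame))
    energies = []
    for lo, hi in zip(BAND_EDGES[:-1], BAND_EDGES[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        energies.append(magnitude[mask].mean() if mask.any() else 0.0)
    energies = np.array(energies)
    peak = energies.max()
    if peak == 0:
        return np.zeros(len(energies), dtype=int)
    # Normalise so the loudest band drives its actuator at full strength.
    return (255 * energies / peak).astype(int)

# Synthetic test tone: a 440 Hz sine should light up the 250-1000 Hz actuator.
t = np.arange(FRAME) / FS
print(frame_to_haptics(np.sin(2 * np.pi * 440 * t)))
```

A real garment would smooth intensities over time and weight bands perceptually, but the mapping above captures the principle: sound is not amplified, it is re-encoded for a different sense.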



While these designs expand sensory perception, they still rely on intermediary devices that translate sound into another form. NeuroTech challenges this paradigm by asking:
📌 Can technology eliminate the need for intermediary communication tools?
📌 How can brain-computer interfaces (BCIs) be used for direct thought transmission?
📌 Could this transform accessibility beyond hearing impairments, bridging communication gaps for broader communities?
This speculative project explores the future of assistive technology, where thoughts, ideas, and emotions are transmitted instantly—removing the reliance on sound-based communication altogether.



Designing the Future: AI-Generated Speculation
With insights from the past and present, we sought to envision a speculative design for the future. Using MidJourney AI, we generated visual concepts that blur the boundaries between technology, fashion, and communication. The AI helped us explore form, materiality, and interaction—pushing our design beyond existing limitations.

We then curated key elements from the generated concepts and refined them through iterative prototyping. The result was a speculative prototype that reimagines what hearing accessibility could become in a future where technology integrates seamlessly with the human body and environment.



The Outcome







🎭 Role: Researcher & Speculative Designer
📌 Tasks: Radical Design Research, AI-Assisted Concept Development, Sensory Augmentation Study, Interaction & Material Exploration, Prototyping & Visual Design
- Solo Project