Wearable acoustic sensors offer an effective communication aid for individuals with speech disorders by capturing vocal cord vibrations and converting them into synthesized speech. In a study published in the journal *ACS Sensors*, researchers at The Hong Kong Polytechnic University led by Zhongqing Su, together with Professor Limin Zhou of the Southern University of Science and Technology, developed a novel piezoresistive acoustic sensor for speech recognition. The sensor was additively manufactured by aerosol jet printing (AJP): graphene/cellulose nanocrystal (CNC) layers were printed and then encapsulated in polyurethane (PU) films. The resulting device is highly biocompatible and flexible, enabling precise measurement across a range of sound pressure levels (SPLs).
The experimental results demonstrate that the sensor's acoustic sensitivity can be tuned by varying the graphene concentration. At a graphene concentration of 20%, the sensor reaches a high sensitivity of 9.7×10⁻⁶ dB⁻¹ over an operating range of 30 to 90 dB, with a minimum detectable SPL change of 10 dB; the SPL correlates linearly with the resistance variation the sensor measures. Worn as a device attached to the subject's throat, the sensor accurately captures subtle vocal features such as timbre and rhythm. Combined with a machine learning classifier based on support vector machines (SVMs), the device recognizes spoken digits (0–9) with a high accuracy of 95.9%, enabling individuals with speech disorders to engage in digital communication.
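The linear SPL–resistance relationship above implies a simple calibration: the relative resistance change per decibel is the reported sensitivity, so measured resistance can be inverted to an SPL estimate. The sketch below illustrates this, assuming the 9.7×10⁻⁶ dB⁻¹ sensitivity and 30–90 dB range from the article; the baseline resistance, reference SPL, and example values are illustrative, not from the paper.

```python
# Sketch: estimating sound pressure level (SPL) from a piezoresistive
# sensor's relative resistance change, assuming the linear relationship
# reported in the article. Only the sensitivity (9.7e-6 dB^-1) and the
# 30-90 dB operating range come from the article; the baseline
# resistance and reference SPL are hypothetical.

SENSITIVITY = 9.7e-6           # dB^-1: relative resistance change per dB
SPL_MIN, SPL_MAX = 30.0, 90.0  # reported operating range, dB

def spl_from_resistance(r_measured: float, r_baseline: float,
                        spl_ref: float = SPL_MIN) -> float:
    """Invert the linear fit: delta_R / R0 = SENSITIVITY * (SPL - spl_ref)."""
    delta = (r_measured - r_baseline) / r_baseline
    spl = spl_ref + delta / SENSITIVITY
    # Clamp to the sensor's reported operating range.
    return max(SPL_MIN, min(SPL_MAX, spl))

if __name__ == "__main__":
    r0 = 1000.0  # hypothetical baseline resistance, ohms
    # A relative rise of SENSITIVITY * 30 corresponds to +30 dB above spl_ref.
    print(spl_from_resistance(r0 * (1 + SENSITIVITY * 30), r0))
```

At this sensitivity the fractional resistance changes are tiny (tens of ppm), which is why the study pairs the sensor with high-resolution resistance readout rather than a simple threshold circuit.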
In summary, this study used AJP technology to fabricate a novel piezoresistive graphene/cellulose nanocrystal (CNC) acoustic sensor and encapsulated it in polyurethane film to enhance flexibility and skin adhesion. The sensor with a 20% graphene concentration exhibits the highest sensitivity, 9.7×10⁻⁶ dB⁻¹, and resolves SPL changes as small as 10 dB, achieving precise acoustic detection. Within the 30 to 90 dB SPL range it shows low hysteresis, excellent reversibility, and excellent repeatability: the standard deviations of its local maximum and minimum responses are only 0.035 and 0.0349, respectively. Fixed to the subject's throat, the wearable acoustic device reliably captures vocal cord vibrations and thus accurately recognizes speech patterns, pitch changes, and distinctive audio signatures. A trained SVM model recognizes the spoken-digit signals collected by the sensor with 95.9% accuracy, demonstrating its ability to interpret the semantic content of speech signals. These results demonstrate the sensor's potential for detection and recognition, opening broad prospects for communication technologies serving people with speech disorders.
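The recognition stage described above, fixed-length feature vectors from throat-vibration signals fed to a multiclass SVM, can be sketched with a standard scikit-learn pipeline. The article does not describe the exact feature extraction, so synthetic per-digit feature clusters stand in for real acoustic features here; the classifier setup itself (scaling plus an RBF-kernel SVC) is a common default, not necessarily the study's configuration.

```python
# Sketch: SVM classification of spoken digits (0-9) from fixed-length
# acoustic feature vectors, in the spirit of the article's pipeline.
# The synthetic features below are a stand-in for real throat-vibration
# features, which the article does not detail.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: 100 samples per digit class, 32-dim feature vectors
# clustered around a per-class center (mimicking per-digit signatures).
n_per_class, n_features = 100, 32
centers = rng.normal(size=(10, n_features))
X = np.vstack([c + 0.3 * rng.normal(size=(n_per_class, n_features))
               for c in centers])
y = np.repeat(np.arange(10), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Standardize features, then fit a multiclass RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```

With real sensor data, the feature step (e.g., spectral descriptors of the vibration waveform) would dominate the achievable accuracy; the SVM itself is a small part of the pipeline.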
Looking ahead, training models on larger and more diverse datasets is expected to enable recognition of more complex speech patterns, extending from digits to full vocabulary and phrases, and thereby to broaden applications in human-computer interaction, healthcare, and accessible technology. In addition, molecular dynamics simulations will be used to probe the microscale tunneling mechanism, providing quantitative guidance for the rational design of next-generation high-performance acoustic sensors.
Source: Sensor Expert Network
