
Wearable Guidance System for Visually Impaired Individuals


Wearable Vision Technologies

Designing an AI-powered wearable guidance system to facilitate the daily lives of individuals with special needs.


The global burden of visual impairment has grown over the past decade, affecting millions of people. Those with partial or complete vision loss face everyday challenges that most people take for granted: routine activities such as walking, cooking, or cleaning become difficult, and even dangerous, without assistance. These challenges significantly reduce the autonomy and quality of life of visually impaired individuals.


Tools for the visually impaired have long centred on navigation, with traditional mobility aids such as the white cane among the most common. Although these aids provide a basic level of assistance, they cannot fully support users in navigating the complexity of modern environments, and they often lack safety features, environmental awareness, and adaptability. More advanced assistive devices have appeared in recent years, but they are often difficult or impossible to obtain because of their cost or limited availability, and without advanced sensing or AI they frequently fall short of user expectations.


In recent years, rapid advancements in wearable technology have opened new possibilities for improving the living conditions of visually impaired individuals. However, most non-AI-enabled devices provide only short-term assistance and are eventually abandoned because they bring little fundamental change to users' daily routines.


The proposed project aims to design and develop a mobility-assistance system that provides real-time navigation for visually impaired people. It combines modern technologies such as object-detection cameras, ultrasonic sensors, GPS, and an energy-regeneration system powered by solar panels and body movement. These components work together to increase the user's environmental awareness and ability to interact with their surroundings. The system includes an accident-detection function, feedback through vibration and sound, and an AI recognition function that identifies objects and traffic lights; together, these aid navigation, safety, and situational awareness. The GPS module allows the user to create routes and receive directions, while an emergency-call feature assists in dire circumstances. In addition, reflective panels on the jacket increase user visibility, especially in low-light conditions, making the wearer easier for others to see and reducing the chance of accidents.


Literature Review

Assistive technologies for the visually impaired have evolved significantly, moving beyond traditional tools like white canes and guide dogs. While these conventional aids provide basic support, they often fall short in offering comprehensive assistance for navigating complex, everyday environments. Recent advancements include smart devices such as OrCam and Envision Glasses, which enhance visual accessibility but remain costly and limited in functionality. Emerging solutions now focus on wearable technologies that integrate ultrasonic sensors, AI-based object recognition, GPS navigation, and multimodal feedback systems. Despite this progress, many existing devices still face challenges in usability, flexibility, and energy efficiency. There is a growing need for a lightweight, affordable, and intelligent wearable system that can reliably support independent and safe mobility for visually impaired individuals.


Methodology

A structured approach was followed to design, develop, and evaluate a smart wearable guidance system for visually impaired individuals. The methodology integrated concepts from embedded systems engineering, artificial intelligence, human-centered design, and assistive technology. The development process was carried out in two main phases: first, the design and prototyping of the wearable system; second, its testing and validation through technical performance benchmarks and user trials.


Design Principles and System Architecture

The intelligent wearable assistive system should be modular yet integrated, supporting the parallel operation of its perception and feedback components. The architecture's main components are environment-sensing modules, visual input units, real-time processing microcontrollers, and multimodal feedback systems. This architecture allows simultaneous detection and recognition of obstacles, objects, and navigational aids in real-world environments.

In such dynamic environments, low-latency feedback is key to guaranteeing the system's safety and efficiency. This requires optimized communication and control between ultrasonic sensors, cameras, IMUs, and edge computing platforms such as the Raspberry Pi or Jetson Nano. These platforms can run TensorFlow Lite, OpenCV, and other embedded machine-learning frameworks that support real-time inference on edge devices.
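The ultrasonic ranging that feeds this pipeline reduces to a simple time-of-flight conversion. A minimal sketch, using simulated echo round-trip times rather than real GPIO reads, and with the 1 m obstacle threshold chosen purely for illustration:

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at ~20 °C


def echo_to_distance_m(echo_duration_s: float) -> float:
    """Convert an ultrasonic echo round-trip time into distance in metres.

    The pulse travels to the obstacle and back, so the one-way
    distance is half the total path covered during the echo.
    """
    return echo_duration_s * SPEED_OF_SOUND_M_S / 2


def is_obstacle(echo_duration_s: float, threshold_m: float = 1.0) -> bool:
    """Flag an obstacle when it is closer than the alert threshold."""
    return echo_to_distance_m(echo_duration_s) < threshold_m
```

On real hardware the echo duration would come from timing the sensor's echo pin; the conversion and thresholding logic stay the same.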


Multi-modal sensing (360° ultrasonic coverage, scene classification through camera modules, IMU-based motion tracking) enriches perception of the environment. Sensor data must be well synchronised so the modalities work together to minimise false positives and missed detections. Feedback should reach the user immediately and intuitively: vibration motors placed on the body's lateral sides or wrists can indicate direction, while small speakers or bone-conduction transducers can deliver verbal feedback such as object names or route directions. All elements should be physically incorporated into a flexible, breathable fabric base to maintain system robustness and wearability. Lightweight materials coupled with modular mounting let sensors and processors be worn for long periods without overheating or hindering mobility; thermoregulatory materials such as phase-change fabrics may further improve long-term wearability.
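The directional haptic cues described above can be sketched as a bearing-to-motor mapping plus a distance-based intensity scale. The motor names, the ±20° forward cone, and the 3 m maximum range are illustrative assumptions, not values from the design:

```python
def select_motor(bearing_deg: float) -> str:
    """Map an obstacle bearing (0° = straight ahead, positive = right)
    to one of three assumed vibration-motor positions."""
    if bearing_deg < -20:
        return "left_wrist"
    if bearing_deg > 20:
        return "right_wrist"
    return "chest"


def vibration_intensity(distance_m: float, max_range_m: float = 3.0) -> float:
    """Scale intensity to 0..1: the closer the obstacle, the stronger
    the vibration, saturating at full strength when it is touching."""
    clamped = min(max(distance_m, 0.0), max_range_m)
    return 1.0 - clamped / max_range_m
```

A controller would call both per detection, e.g. driving `select_motor(-45)` (the left wrist) at `vibration_intensity(0.6)` for an obstacle 0.6 m away on the left.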


Human-Centered Design and Evaluation Methodology

When designing assistive devices for people with visual impairments, comfort, meaningful interpretation of feedback, and accessibility should be central, user-centred priorities. Weight distribution, tactile readability, audio intelligibility, and discreet control interfaces must all be accounted for, and wearable devices should be easy to put on and take off while remaining equally effective for people with more advanced mobility problems.

Both quantitative and qualitative aspects of the evaluation must be considered. Performance-oriented validation covers objective metrics: obstacle-detection range and precision, object-recognition rate, system latency, and battery life under various conditions. Testing environments should span different illumination, noise, and surface conditions to make the models robust and generalisable.

Field testing with visually impaired participants should reflect real-world navigational activities completed with the device. Participants should span a spectrum of visual impairment, e.g. congenital blindness, partial vision, or acquired vision loss. Well-organised task scenarios might include hallway following, obstacle avoidance, following a target outdoors, and crossing a crosswalk; these tasks allow spatial responsiveness and feedback clarity to be assessed in dynamic situations.

Qualitative feedback should be gathered through interviews, questionnaires, and behavioural observation. The most important subjective measures are user confidence, mental load, comfort, safety, and trust in the system. Comparison studies across feedback modalities (e.g. vibration vs. audio) are therefore needed to guide future system adjustments. Findings commonly reveal that participants who lost their vision after birth prefer audio feedback over haptic feedback, whereas congenitally blind participants prefer haptic feedback over audio.
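The detection-precision benchmark mentioned above boils down to standard precision/recall over trial outcomes. A small helper, with the counts purely hypothetical:

```python
def detection_metrics(true_pos: int, false_pos: int, false_neg: int) -> tuple[float, float]:
    """Precision and recall for obstacle-detection trials.

    Precision: of all alerts raised, how many were real obstacles.
    Recall:    of all real obstacles, how many triggered an alert.
    """
    precision = true_pos / (true_pos + false_pos) if true_pos + false_pos else 0.0
    recall = true_pos / (true_pos + false_neg) if true_pos + false_neg else 0.0
    return precision, recall
```

For example, 8 correct alerts, 2 spurious alerts, and no missed obstacles would give a precision of 0.8 and a recall of 1.0.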


Computational Framework and Safety Considerations

A software infrastructure is needed for autonomous, highly responsive, and accurate sensor fusion and decision-making. AI-based scene classification should be implemented on the edge to guarantee user privacy and reduce latency, and edge AI models should favour low-power inference so they can run continuously without overheating or draining the battery. Sensor-fusion algorithms must integrate ultrasonic, camera, and IMU data to detect obstacles and movement abnormalities.

Appropriate feedback-prioritisation logic is crucial: serious events (e.g. falls or close-proximity collisions) must take precedence over less urgent cues (e.g. object announcements or GPS directions). This prevents information overload and keeps the user responsive.

For fall detection, data from accelerometers and gyroscopes should be processed with threshold-based acceleration analysis and motion-pattern recognition. Unusual motion triggers a timer; if the alert is not cancelled within the time limit, the device sends an automatic message with GPS coordinates to the user's preferred contacts.

Energy-conserving policies are needed for persistent operation: power-saving MCUs, sleep scheduling for non-essential sensors, and supplementary solar charging. Photovoltaic film integrated into the garment's outer surface harvests daylight and extends battery runtime. The system's firmware and electrical connections should also be protected against short circuits, humidity, and mechanical wear, particularly outdoors.

Finally, open-source, low-cost designs in hardware and software should be adopted to allow global applicability and community-based development. Open hardware platforms and public code repositories make it easy for researchers, NGOs, and startups developing inclusive technologies to replicate and improve the system.


Solution

To help visually impaired people, the proposed approach is to develop a smart wearable vest containing a built-in system of sensors, cameras, and embedded electronics. The system's central component is the Raspberry Pi 5, a small but capable single-board computer that can run multiple neural networks in parallel for applications such as image recognition. The Raspberry Pi 5 analyses visual data from object-detection cameras using pre-trained models built with Python, Vision AI, and the OpenCV library. This setup allows the system to detect and identify obstacles, traffic lights, pedestrians, and other crucial environmental elements in the user's path.

At the same time, ultrasonic sensors placed on the front and sides of the vest enable constant proximity detection. These sensors give the user instantaneous feedback about obstacles in the immediate vicinity, including objects that are hard to see or that move without warning. Haptic feedback modules respond to sensor inputs and help the user orient intuitively to avoid collisions.

A fall-detection system based on an IMU continuously tracks the user's posture and movement. If the sensor registers a rapid descent and the user does not move for 30 seconds, the system initiates an alert protocol: a haptic alert to the user, along with an automatic SMS containing the user's current GPS coordinates, sent to designated contacts such as friends, family members, or the user's caregiver.

The vest also includes a GPS module for tracking the user's real-time position and offering directions. Authorised family members can log in to check the user's movements and provide assistance if needed. This feature offers an extra degree of protection, particularly for users travelling in unfamiliar areas or those vulnerable to health-related issues.
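The fall-detection protocol above (acceleration spike, then a 30-second inactivity window before the SMS goes out) can be sketched as a small state machine. The 2.5 g spike threshold is an assumed value; the 30-second limit comes from the text:

```python
import math

FALL_ACCEL_G = 2.5         # assumed acceleration spike threshold, in g
INACTIVITY_LIMIT_S = 30.0  # matches the 30-second window in the design


def accel_magnitude_g(ax: float, ay: float, az: float) -> float:
    """Total acceleration magnitude from the three IMU axes."""
    return math.sqrt(ax * ax + ay * ay + az * az)


class FallDetector:
    def __init__(self) -> None:
        self.fall_time: float | None = None  # when the suspected fall started

    def update(self, t_s: float, ax: float, ay: float, az: float,
               moving: bool) -> bool:
        """Feed one IMU sample; return True when the alert should fire."""
        if accel_magnitude_g(ax, ay, az) > FALL_ACCEL_G:
            self.fall_time = t_s  # sudden impact detected: start the timer
        if moving:
            self.fall_time = None  # user moved: cancel the pending alert
        if self.fall_time is not None and t_s - self.fall_time >= INACTIVITY_LIMIT_S:
            return True  # 30 s without movement: send SMS + GPS coordinates
        return False
```

On the real device, a `True` return would trigger the haptic warning and the SMS with GPS coordinates; any movement before the deadline silently cancels it.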
Rechargeable lithium-ion batteries make the vest long-lasting and provide continuous operation. The batteries can also be charged through two energy-harvesting methods: flexible solar panels installed on the shoulders of the vest and kinetic-energy generators woven into the garment. These features extend the device's battery life, which easily lasts a full day without external charging, whether the vest is used in an urban or rural environment. In summary, the proposed solution is a comprehensive, low-power, user-friendly wearable system. It improves spatial awareness, navigational freedom, and safety for visually impaired individuals, and brings peace of mind to family members through real-time monitoring and emergency contact features.
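The claim that harvesting extends runtime can be made concrete with a first-order battery budget: harvested current offsets part of the average draw. All the numbers in the usage example are hypothetical, not measurements from the prototype:

```python
def runtime_hours(battery_mah: float, avg_draw_ma: float,
                  harvest_ma: float = 0.0) -> float:
    """First-order runtime estimate for a battery under constant load.

    Solar and kinetic harvesting are modelled as a steady current that
    offsets the average draw; if harvesting covers the full draw, the
    runtime is effectively unbounded.
    """
    net_draw_ma = avg_draw_ma - harvest_ma
    if net_draw_ma <= 0:
        return float("inf")
    return battery_mah / net_draw_ma
```

For instance, a hypothetical 5000 mAh pack at a 500 mA average draw would run about 10 hours alone, but 12.5 hours if harvesting contributed a steady 100 mA.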


Implementation Results

The proposed wearable guidance system was successfully developed, integrated, and tested in practical settings. Its hardware and software architecture incorporated ultrasonic sensors, camera modules, and microcontrollers for real-time environmental sensing and processing. The design was visualized through structural diagrams and isometric views of the final prototype. The software, built in Python with microcontroller firmware developed in the Arduino environment, enabled AI-based object recognition and provided multi-channel feedback through vibration motors and buzzers. The system's logic allowed safe directional guidance for the user. Extensive testing was conducted with individuals with varying levels of visual impairment, both indoors and outdoors. Performance was evaluated through quantitative metrics such as obstacle-detection accuracy, response time, and clarity of feedback, supported by qualitative user feedback. Results indicated that the system was especially effective for users with acquired visual impairments, offering reliable assistance through audio and haptic cues. Overall, the system demonstrated strong functionality, usability, and potential for real-world application.


Author Information

Author: Mahammad Samadli, 1 July 2025, 12:48
