1 Introduction
The visually impaired population covers a range of people with visual impairments, from low vision to total blindness [1]. According to the World Health Organization (WHO), at least one billion people were expected to be visually impaired by 2020 [2]. Low vision is defined as vision loss that interferes with daily activities [3]. According to the WHO, at least 2.2 billion people worldwide have impaired near or distance vision [4]. When travelling, most environmental information is received through the visual sensory system [5], and low vision can therefore have a significant negative impact on a person’s ability to act [6], such as difficulty receiving information from the outside world and moving around in complex terrain. It is undeniable that users with visual impairments face varying degrees of difficulty, from low vision to total blindness. Most people with severe visual impairment choose to use canes and guide dogs to avoid obstacles [7]. People with low vision, on the other hand, mostly rely on their residual vision when travelling, which can itself have a negative impact on them [8]. Research by Arditi [9] has shown that because low vision is usually acquired later in life, few people with low vision learn Braille, and this population does not typically make tactile connections with their environment. However, even with significantly reduced vision, they can still use vision for basic navigation and for locating large objects and architectural features. This highlights the importance of accessible signage in removing the legibility barriers faced by people with low vision.
Accessible facilities serve to reduce barriers to travel and make daily life easier. Across different regions and for different types of people with disabilities, many researchers and institutions have worked to create environments suitable for people with disabilities. The World Health Organization [10] argues that to enable people with disabilities to integrate into the environment more quickly, barriers between people with disabilities and buildings need to be removed. Priorities to consider when identifying existing and potential architectural barriers in the environment include, for example, constructing adequate entrance ramps, improving floor mobility, and adapting sanitary facilities. Katarina Rogulj et al. [11] also propose a new multicriteria decision-making (MCDM) concept that helps communities create an accessible and barrier-free environment by providing a systematic process for evaluating policy options. In urban environments, authorities should pay attention to accessibility in order to better protect the rights of visually impaired people. A study by Sylvester Kyeremeh et al. [12] noted the importance of providing low vision services (e.g., provision of low vision assessment equipment and clinical diagnostics). The Architecture and Construction Authority of Singapore (ACM) [13] launched the Universal Design Guidelines for Public Places in 2016. In the guidelines, the ACM argues that physical and sensory cues, such as touch, sound, smell and tactile or auditory information, need to be provided to help people with visual impairments move around more independently. Accessibility plays an important role in assisting visually impaired people with various activities of daily living, such as identifying people [14], recognising objects [15, 16], and navigating indoors and outdoors [17]; examples include Braille signs, tactile and large-print maps, and tactile paving. Scientists and researchers have also attempted to use information technology to provide assistive services for visually impaired people. Mirela Gabriela Apostoaie et al. [18] focus on the mobility of visually impaired people and on ensuring the functionality of smart cities through different technological solutions. Smart solutions for visually impaired people, such as beeping traffic lights, Mobility as a Service (MaaS) [19] and Wayfindr [20], can help visually impaired people make better use of smart technologies in the city and gain greater accessibility.
2 Literature review
Visually impaired people encounter behavioural barriers in their daily lives due to the loss of visual information input [21, 22], including the visual information needed for search and analysis. The products most commonly used by visually impaired people in their daily travels are guide canes designed for the blind [23]. Other products are walking aids designed primarily for individual visually impaired users. Some studies use sensor technology to help visually impaired users detect obstacles in their paths while navigating independently indoors [24] or use visual aids (e.g., visual augmentation and substitution [25–28]) to provide additional information to visually impaired users. Several scholars have worked on providing viable indoor navigation methods for visually impaired users. Cheraghi et al. [29] designed an indoor wayfinding system called BVID, which helps people navigate between any two points in an indoor environment. Chaccour and Badr [30] used indoor cameras with smartphones to provide mobile navigation applications for visually impaired users. Several scholars have designed diverse solutions for the different needs of visually impaired people in outdoor environments. Ko and Kim [31] developed a smartphone-based wayfinding application using computer vision; its users can locate their paths with 97% accuracy. Research on visually impaired users, whether in indoor or outdoor scenarios, has mainly relied on personal products for route finding and has produced diverse solutions. However, given the large population of visually impaired people [2], the drawback of individual product solutions is that they are difficult to generalize and standardize for the general public. Guidance provided by public accessible facilities is therefore an essential part of accessibility design. In accessible facilities, the guidance information for visually impaired people consists mainly of Braille blocks and Braille signage. Some scholars [32] have concluded that, in terms of Braille blocks and signage, visually impaired users have low overall satisfaction with indoor accessible facilities, and that information transmission has a relatively strong influence on satisfaction. This may imply the need to focus on whether information can be adequately conveyed to users when designing accessibility guidance signs. Several researchers have studied the characteristics of the visually impaired population itself. Beverley et al. [33] found that the information needs of visually impaired people in daily life were, in order of importance: eye conditions, social and health care services and facilities, assistive devices and equipment, general health care, benefits and money, travel, housing, employment, education, and training. Of these, social health care and travel had the highest accessibility needs. Among the visually impaired population, some individuals are blind, and others have weak vision. Weak vision mainly includes moderate and severe visual impairment, which can be collectively referred to as low vision [34]. The difference between the low-vision population and the blind population is that the former still has light perception and may retain enough vision to recognize brighter and more vivid colors or larger objects. Among the visual information used on guide signs, visible light is the part of the electromagnetic spectrum between 380 and 780 nm. Our eyes are sensitive to this light and respond by sending signals to the visual cortex through the optic nerve [35].
Visual guidance using light signals is still the most effective method for people with low vision. Therefore, when designing accessibility signs for these groups, we need to consider various factors, such as visual acuity impairment, visual field impairment, light and dark adaptation impairment, and color vision impairment. We also need to consider whether a given sign effectively conveys information, which includes allowing visually impaired users to search quickly for relevant information, effectively attracting attention, and improving the efficiency of users’ information search.
In studies related to attention, some scholars record physiological signals from cognitive processes and combine them with subjective evaluations. Eye-tracking metrics [36], event-related potential (ERP) measures [37], electroencephalography (EEG), and heart rate variability (HRV) [38, 39] are all standard methods for distinguishing attentional states. Specifically, eye-tracking metrics can be used to obtain people’s visual preferences during observation using areas of interest (AOIs). The cerebral cortex’s activity during observation can be reflected by the EEG spectrum. As eye-tracking can reflect people’s attention through the duration of gaze and changes in pupil size [40], some researchers have focused on different eye-tracking metrics. Bol et al. [41] used eye-tracking data to investigate the relationship between attention and recall when accessing health information in different age groups, illustrating the conditions for identifying, understanding and remembering health information across age groups. Kiefer et al. [42] used eye-tracking experiments to investigate the allocation of attention when using maps, identifying markers and developing orientation strategies in an urban context. Helmut et al. [43] used eye-tracking in an immersive virtual environment to assess the effects of a guidance system on subjects’ attention and cognitive load during indoor wayfinding. Pupil size has been utilized in other research to evaluate product designs [44] or to identify designs that evoke negative emotions [45]. Therefore, it is possible to examine whether interaction-guided design can successfully draw attention using a combination of different eye-movement metrics. Techniques for detecting attention through EEG signals have been used to some extent and have certain advantages [46]. In other studies, researchers have applied EEG to explore whether users remain attentive during instruction [47], and users’ emotional states can also be identified using EEG techniques [48]. EEG is frequently used to analyze cognitive processes, and in certain investigations, attentional profiles of patients have been examined using EEG spectra [49]. According to research [50, 51], the theta power spectrum (4–8 Hz) can shift during concentration when task difficulty increases or emotional reactions occur (including stress [52, 53]).
Visual information accounts for approximately 80% of the sensory information that humans receive [54]. Some partially sighted people continue to rely heavily on accessible signage in public places. The efficiency of information conveyed by existing accessibility guidance signs can be affected by the design form, such as the size, color, and technology used for the guidance signs. It is therefore important to research how attentive visually impaired persons are to various types of guidance signs, to help them swiftly obtain information from the signs when traveling.
People with low vision, who make up a sizable portion of the visually impaired population, were the population of interest in this paper. During the experiment, eye-movement, EEG, and HRV data were gathered from the participants, and the Post-Study System Usability Questionnaire (PSSUQ) was used to collect the participants’ subjective evaluations of the experiment, in order to examine how differently the visually impaired population responds to signs with different design elements.
3 Materials and methods
To ensure the safety and feasibility of the experiment, this study used a video to simulate a travel scenario from the perspective of a visually impaired person; as the video was presented, users’ eye-movement, EEG and HRV data were recorded through eye-tracking equipment and wearable physiological recorders. Since eye-movement analysis requires highly precise pre-experimental calibration of the subjects’ pupil movements, 16 individuals with normal vision were selected for this study to ensure that eye-movement data could be captured. User interviews were conducted with six low-vision individuals to ensure that the experimental video material matched the viewing perspective of the low-vision population as closely as possible. On this basis, we selected the final processing effects to be applied to the experimental material. The results of the interviews are shown in Table 1.
3.1 Experimental equipment
This experiment used the ErgoLAB human–machine–environment synchronization platform V3.0 to synchronously collect multidimensional data on the human, machine, and environment, including interaction behavior, physiology, eye movement, EEG, facial expressions, and subjective evaluation. The ErgoLAB eye-tracking analysis module synchronizes with a noncontact eye-tracking system (Tobii Pro Fusion; Tobii, Sweden) to record data such as AOI first gaze time and saccade and blink counts. The ErgoLAB EEG analysis module synchronizes with a portable, wearable wet-electrode EEG recording system (BitBrain) to acquire EEG signals and perform offline processing and EEG analysis. The ErgoLAB physiological testing cloud platform synchronizes with the ErgoLAB smart wearable human factors physiological recorder to monitor, in real time, changes in the body’s electrocardiogram (ECG) and electrodermal activity (EDA), as well as physiological indicators such as pulse changes measured through photoplethysmography (PPG). The data are fed into the ErgoLAB human–machine–environment test cloud platform, which can analyze multimodal data synchronously, export the original data, process and analyze the data, and export visual analysis reports.
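As an illustration of how such synchronized multimodal exports can be handled downstream, the following Python sketch aligns an eye-tracking export with a physiological export by timestamp. This is not the ErgoLAB API; the file names and column names (e.g. timestamp_ms) are hypothetical placeholders for whatever the platform actually exports.

# Illustrative sketch only: aligns two exported data streams by timestamp.
# File and column names below are hypothetical; the actual ErgoLAB export
# format should be checked against the platform documentation.
import pandas as pd

def load_stream(path, time_col="timestamp_ms"):
    """Load one exported data stream and sort it by its time column."""
    df = pd.read_csv(path)
    return df.sort_values(time_col)

def align_streams(eye_df, physio_df, tolerance_ms=20):
    """Attach the nearest physiological sample to each eye-tracking sample."""
    return pd.merge_asof(
        eye_df, physio_df,
        on="timestamp_ms",
        direction="nearest",
        tolerance=tolerance_ms,
        suffixes=("_eye", "_physio"),
    )

if __name__ == "__main__":
    eye = load_stream("eye_tracking_export.csv")   # hypothetical export
    physio = load_stream("physio_export.csv")      # hypothetical export
    merged = align_streams(eye, physio)
    print(merged.head())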
3.2 Participants
At the beginning of the study, we found that visually impaired participants had difficulty completing the tracking calibration required before the eye-movement experiment, so we were unable to obtain accurate eye-movement data from them for analysis. Therefore, in this study, we selected sighted people who could complete the eye-movement data collection as subjects and edited the experiment video with the advice of a visually impaired consultant so that the experiment reflected the perspective of the visually impaired population as closely as possible. In this study, 16 university students between the ages of 18 and 25 were selected as participants to ensure a representative sample. From these participants, we obtained 13 valid samples for eye-movement data and all 16 participants for EEG and HRV measurements. Written informed consent was obtained from participants prior to participation in the study, and ethical guidelines and regulations were strictly followed to ensure the protection of participants’ rights, privacy and confidentiality. The study was approved by the ethical review committee and met ethical standards.
3.3 Experimental design
Based on the findings of Beverley et al. [33], the videos were set to cover three scenes with high demand from the visually impaired population: hospitals, underground stations and outdoor streets. To test the degree of influence of the variables through cross-group comparisons, we divided the material into a treated group and an untreated group, each containing three videos (one per scene). In the treated group, we applied special effects to the accessibility signage in each scene to increase its visibility. Based on the recommended accessibility features and placement methods in the Universal Design Guidelines for Public Places [13], we selected some of the elements in the video that act as guides or require interaction, including tactile paving, lift buttons, door handles, handrails, corners, intersections, traffic lights, and directional signs. We chose a visible light effect [35] as the treatment. To make the special-effects treatment of accessible signage more easily perceived and recognised by visually impaired people, the treatment followed these principles:
Improve recognition Regular visual effects treatments can improve the recognition of accessible signage [9], making it more visible and recognisable to visually impaired people. For example, adding a consistent form of lighting effect can attract visual attention and provide clear visual guidance.
Contrast enhancement Special effects can be used to increase the contrast between the sign and its surroundings by adjusting the luminance and colour of the light to make it more perceptible to visually impaired people in different lighting conditions [55]. Processing according to the luminance of realistic light sources allows effects to maintain consistent visibility in different luminance environments.
Emphasise key messages By adding specific visual messages, accessible signage can be distinguished from its surroundings and made more visually prominent. When adding visual information, care should be taken to avoid complex visual effects [56] that may be distracting to visually impaired people. For example, the use of specific luminous colours can highlight key information, such as the red and green signals of traffic lights.
Luminance, light colour and light frequency were set as independent variables. To ensure realistic processing, the illuminance of the real light sources in the video was measured using an illuminance meter and divided into two levels, 200 lx and 300 lx, and the effects were processed to match the real light sources. For the light colours, blue, green, yellow and orange were chosen as the levels of the colour variable, in order to distinguish them from the white light sources commonly found in real environments. For the light frequency, we considered constant light and flashing at 1 or 2 times per second (1 t/s and 2 t/s), as shown in Table 2. This design allowed us to assess the effect of different colours and frequencies on the visibility and legibility of the accessibility signs. To ensure randomisation of the experiments, we used a randomisation function in Microsoft Excel to randomise the order of the variables for each group of experiments, as shown in Table 3. Using a randomised order of variables helps to avoid potential bias or order effects.
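As an illustration of this kind of factorial design, the following sketch enumerates the combinations of the three independent variables and shuffles their presentation order. The study itself used a randomisation function in Microsoft Excel; the actual conditions and order are those given in Tables 2 and 3, so the code below is only a code-based analogue.

# Illustrative sketch: enumerate and shuffle the factor combinations.
# Factor levels follow the description in the text; the exact experimental
# conditions are those listed in Table 2.
import itertools
import random

ILLUMINANCE = [200, 300]                     # lx
COLOR = ["blue", "green", "yellow", "orange"]
FREQUENCY = ["constant", "1 t/s", "2 t/s"]   # flashing frequency

def randomised_order(seed=None):
    """Return a shuffled list of all factor combinations."""
    conditions = list(itertools.product(ILLUMINANCE, COLOR, FREQUENCY))
    rng = random.Random(seed)
    rng.shuffle(conditions)
    return conditions

if __name__ == "__main__":
    for i, (lux, color, freq) in enumerate(randomised_order(seed=42), start=1):
        print(f"{i:02d}: {lux} lx, {color}, {freq}")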
3.4 Experimental process
To ensure the reliability of the data and the accuracy of the study results, a number of measures were taken to ensure that the participants remained in good physical and mental condition during the experiment. Before the experiment began, we paid close attention to the participants’ pre-experimental state. They were told to get enough rest to minimise the effect of fatigue on the results of the experiment. Before the experimental equipment was put on, we washed the subjects’ scalps with mild baby shampoo to ensure that the EEG electrodes were clean and properly fitted. Strict safety measures were taken throughout the experiment to ensure the physical and mental well-being of the participants. The experiments were conducted in a controlled laboratory environment to eliminate the possible influence of external disturbances on the data collection. Participants were closely monitored, and researchers were able to address any problems or discomfort in a timely manner, further ensuring the reliability of the study results. Once the experiment was complete, we followed appropriate post-experimental procedures. These included an explanatory debriefing to give participants the opportunity to clarify any questions or concerns they may have had during the study. We also provided the necessary follow-up support and counselling to ensure the physical and psychological well-being of participants.
As the subjects were not a real visually impaired group, in order to simulate the travel perspective of the visually impaired population, we used DaVinci Resolve 17 to reduce the brightness and blur the image of the experimental video, as shown in Fig. 1. Before the experiment started, subjects were given detailed instructions and guidance before entering the laboratory. They were clearly informed that they would be experiencing a simulated travel perspective under low-vision conditions, to better understand the purpose of the experiment. The subjects were also told that the experimental video had been processed in software, to ensure that they understood the visual characteristics of the video. The subjects were then directed to a test room with an illumination level of 5–10 lx. This level of illumination was chosen to minimise the effect of ambient lighting on the video and to allow the subjects to better adapt to the experimental environment. A diagram of the experimental environment is shown in Fig. 2, with the lighting in the room adjusted to an appropriate level of brightness. In addition, the safety and comfort of the experiment were emphasised to each subject, who could stop the experiment at any time and report any discomfort or concerns to the experimenter. They were also informed of their right to refuse to participate or to withdraw their consent without adverse consequences.
The subjects’ AOI first gaze time, overall visit time, EEG and HRV data were recorded to compare their concentration levels during the experiment, and user interviews and PSSUQ questionnaires were administered to investigate whether there were differences in objective and subjective responses.
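For illustration, the sketch below approximates the kind of brightness reduction and blurring applied to the experimental video. The study performed this processing in DaVinci Resolve 17; this OpenCV version is only an analogue, and the brightness scale, blur kernel size, and file name are hypothetical.

# Illustrative sketch: darken and blur a video frame to approximate the
# low-vision viewing perspective (parameters are hypothetical).
import cv2

def simulate_low_vision(frame, brightness_scale=0.5, blur_kernel=31):
    """Return a darkened, blurred copy of a BGR frame."""
    darkened = cv2.convertScaleAbs(frame, alpha=brightness_scale, beta=0)
    # Kernel size must be odd for GaussianBlur.
    return cv2.GaussianBlur(darkened, (blur_kernel, blur_kernel), 0)

if __name__ == "__main__":
    cap = cv2.VideoCapture("scene_video.mp4")  # hypothetical input file
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("frame_low_vision.png", simulate_low_vision(frame))
    cap.release()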
4 Results
4.1 Eye movement data analysis
We used the Tobii Pro Fusion remote eye-tracking device in the experiment to collect the first gaze time and overall visit time for the AOIs; the data were exported through the ErgoLAB eye-tracking trajectory analysis module synchronized with the noncontact eye-tracking system and finally imported into SPSS software (version 22.0; IBM, Armonk, NY, USA) for analysis. By comparing the subjects’ reaction times to the guidance signs under different scenarios, the level of attention to the signs could be assessed quantitatively. At the same time, the experimental procedure collected visual data from the appearance to the disappearance of each guidance sign, which were used to verify whether the subjects paid attention to the guidance signs effectively and to assess the degree of sustained attention to the guidance signs.
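For illustration, the following sketch shows how the two AOI metrics used here (first gaze time and overall visit/gaze time) could be derived from a fixation-level export. The column names and file name are hypothetical; in the study, the metrics were exported directly from the ErgoLAB module and analysed in SPSS.

# Illustrative sketch: per participant and AOI, compute time to first fixation
# and total dwell time from a fixation-level table (hypothetical columns:
# participant, aoi, fixation_start_ms relative to AOI onset, fixation_duration_ms).
import pandas as pd

def aoi_metrics(fixations: pd.DataFrame) -> pd.DataFrame:
    """Return first gaze time and total gaze time per participant and AOI."""
    grouped = fixations.groupby(["participant", "aoi"])
    return pd.DataFrame({
        "first_gaze_time_ms": grouped["fixation_start_ms"].min(),
        "total_gaze_time_ms": grouped["fixation_duration_ms"].sum(),
    }).reset_index()

if __name__ == "__main__":
    fixations = pd.read_csv("fixations_export.csv")  # hypothetical export
    print(aoi_metrics(fixations).head())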
4.1.1 Analysis of eye-movement data in different scenes
According to the statistics from the eye tracker, the eye-movement data of the subjects in the six different scenes were recorded, as shown in Table 4. Overall, there were no significant fluctuations across the six scenes in the subjects’ average pupil diameter, average blink count, average saccade count, or average absolute eye-movement distance.
Based on the different scenes for the two control groups, the total gaze time of the subjects in the AOIs of the six different scenes was counted, as shown in Fig. 5. Figure 5 shows that the total gaze time on the AOIs in the untreated videos 1–3 was overall lower than that in videos 4–6 with the added visible light. Within videos 1–3 and within videos 4–6, the subjects showed an upward trend in total AOI gaze time as they watched the videos sequentially. The total gaze time on the AOIs was significantly improved after the material viewed by the subjects was replaced with the processed videos. The total gaze time on the AOIs increased slightly from video #3 to video #4 and significantly from video #4 to video #6. The addition of visible guide marks effectively improved the users’ attention, and the effect increased as the users watched the videos.
4.1.2 Stimulus point eye-movement data
Among the attention indicators in the eye-movement data for videos 4–6, the first gaze time and overall gaze time for the AOIs were selected for recording, and the top ten visible-light features ranked on each indicator were derived, as shown in Figs. 6 and 7. The eye-movement heatmap generated during the recording is shown in Fig. 8.
As shown in Fig. 6, among the top ten guide marks ranked by first gaze time in the AOI, the numbers of warm and cool colors were consistent, the illumination frequency was mainly 2 t/s or constant, and the 200 lx and 300 lx illuminance levels were represented evenly. The top two visible-light colors were warm, and the cool colors were distributed fairly evenly across positions 3–10.
As shown in Fig. 7, among the top ten guidance signs ranked by overall gaze time in the AOI, the visible-light color was predominantly warm, the illumination frequency was predominantly 1 t/s, and the 200 lx and 300 lx illuminance values ranked similarly. Across the two eye-movement indices of first gaze time and overall gaze time, the differences associated with illuminance were slight, and the illumination frequencies had opposite effects on the two indices.
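For illustration, the sketch below derives such top-ten rankings from per-stimulus AOI metrics, assuming that an earlier first gaze time and a longer overall gaze time both indicate better attention capture. The column and file names are hypothetical.

# Illustrative sketch: rank stimulus points by mean first gaze time (ascending)
# and by mean overall gaze time (descending), keeping the top ten of each.
import pandas as pd

def top_ten_rankings(metrics: pd.DataFrame):
    """Return the top-ten stimulus points for each of the two attention indices."""
    means = metrics.groupby("stimulus_point")[
        ["first_gaze_time_ms", "total_gaze_time_ms"]
    ].mean()
    by_first_gaze = means.sort_values("first_gaze_time_ms").head(10)
    by_total_gaze = means.sort_values("total_gaze_time_ms", ascending=False).head(10)
    return by_first_gaze, by_total_gaze

if __name__ == "__main__":
    metrics = pd.read_csv("aoi_metrics_treated_videos.csv")  # hypothetical file
    first_rank, total_rank = top_ten_rankings(metrics)
    print(first_rank, total_rank, sep="\n\n")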
Scatter plots of the first gaze time and overall gaze time for the AOIs according to color, illumination frequency, and illuminance were generated using Origin software (2022; OriginLab), as shown in Fig. 9. The scatter plot distributions allow observation of which variable levels convey information efficiently. Furthermore, a one-way ANOVA was performed for each parameter.
Among the different colors, there was no significant difference in either first gaze time or overall gaze time (p > 0.05); as shown in Fig. 9a, the four colors did not differ significantly on either index. The distributions for blue and orange were slightly less concentrated than those for the other two colors, with a slight tendency to disperse toward later times. Blue, orange, and green were all slightly better than yellow in terms of overall gaze time and maintaining the user’s attention on the guide sign.
Among the different lighting frequencies, there was no significant difference in first gaze time regardless of color (p > 0.05), but there was a difference in overall gaze time across frequencies (p < 0.05), as shown in Fig. 9b. The difference between 2 t/s and constant light was weak, but constant light resulted in a narrower distribution than 1 t/s or 2 t/s; the distribution for constant light was also closer to the origin and showed a better first gaze time. In contrast, 1 t/s had a wider distribution and reflected better capture and maintenance of users’ attention than the other frequencies.
Between the different levels of illuminance, there was no significant difference in first gaze time or overall gaze time (p > 0.05). As shown in Fig. 9c, the difference between 200 lx and 300 lx was weak, and there was no significant difference in the efficiency of information transmission.
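A minimal sketch of such one-way ANOVAs, using SciPy in place of Origin/SPSS, is shown below; each test compares one eye-movement index across the levels of one factor. The data layout and column names are assumptions for illustration only.

# Illustrative sketch: one-way ANOVA of an eye-movement index across the levels
# of one factor (color, lighting frequency, or illuminance).
import pandas as pd
from scipy import stats

def one_way_anova(df: pd.DataFrame, factor: str, index: str):
    """Run a one-way ANOVA of `index` across the levels of `factor`."""
    groups = [g[index].dropna().values for _, g in df.groupby(factor)]
    return stats.f_oneway(*groups)

if __name__ == "__main__":
    data = pd.read_csv("aoi_metrics_with_factors.csv")  # hypothetical file
    for factor in ["color", "frequency", "illuminance"]:
        for index in ["first_gaze_time_ms", "total_gaze_time_ms"]:
            f_val, p_val = one_way_anova(data, factor, index)
            print(f"{index} by {factor}: F = {f_val:.2f}, p = {p_val:.3f}")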
4.2 EEG and HRV data analysis
The topography of the interactions in the theta power spectrum (4–7 Hz) for the various analyses is shown in Fig. 10. The EEG theta power spectrum showed statistically significant differences between subjects watching the unprocessed and processed videos. Additionally, Table 5 compares the HRV measures between participants viewing the unprocessed and processed videos; none of these variables revealed any discernible differences.
One central electrode (Cz) showed significant differences between responses to the unprocessed and processed videos (see Fig. 10c). Specifically, the processed video elicited a higher theta EEG power spectrum at Cz than the unprocessed video, while the other electrodes did not show any appreciable differences. The differential effect of the guidance markers on the EEG signal between the unprocessed and processed videos implies a difference in the user’s attentional state, pointing to subtle differences in the user’s state while watching the two types of videos.
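For illustration, one way to quantify theta-band (4–7 Hz) power at Cz and compare the two viewing conditions is sketched below. This is not the ErgoLAB pipeline; the sampling rate, array shapes, file names, and the choice of a paired comparison are assumptions for illustration.

# Illustrative sketch: theta-band power at Cz via Welch's method, compared
# between conditions with a paired t-test (assumed analysis, not the study's).
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

FS = 256  # assumed EEG sampling rate in Hz

def theta_power(signal, fs=FS, band=(4.0, 7.0)):
    """Mean power spectral density of one channel within the theta band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

if __name__ == "__main__":
    # Hypothetical arrays of shape (n_subjects, n_samples) for the Cz channel.
    cz_unprocessed = np.load("cz_unprocessed.npy")
    cz_processed = np.load("cz_processed.npy")

    unproc = np.array([theta_power(s) for s in cz_unprocessed])
    proc = np.array([theta_power(s) for s in cz_processed])

    t_val, p_val = ttest_rel(proc, unproc)  # paired comparison across subjects
    print(f"Cz theta power: t = {t_val:.2f}, p = {p_val:.3f}")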
4.3 Subjective evaluation analysis
The Post-Study System Usability Questionnaire (PSSUQ) was used to assess the different accessibility guide sign design formats in this study. The PSSUQ is used to evaluate users’ perceived satisfaction with various software programs or systems [60]. The PSSUQ was originally designed for computer software evaluation, but since it provides quantitative and qualitative feedback on user experience [61, 62], it can also be applied to other types of evaluation [63]. In other studies, researchers have also used the PSSUQ to evaluate interaction schemes [56] or device features [64]. Respondents used a 7-point Likert scale to rate items concerning usability, with a score of 1 denoting strong disagreement and 7 denoting strong agreement. The mean scores for each question and the related SDs are displayed in Table 6.
For the 16 subjects who participated in the experiment, 16 copies of each of the two questionnaires were distributed; screening and analysis indicated that all 32 questionnaires contained valid data. The data were imported into SPSS 26.0, and a t-test was performed to determine significance. The test results revealed a significant difference (p < 0.05) between the two types of accessibility signs on 19 questions. Across all questions, the treated guide signs obtained an overall PSSUQ score of 5.747, compared with 4.066 for the existing guide signs. The PSSUQ ratings imply that the treated accessibility guide signs received higher subjective satisfaction.
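A minimal sketch of this questionnaire analysis is given below: per-item means for each sign condition, an overall PSSUQ score per condition, and an item-wise comparison. A paired t-test is used here on the assumption that each participant rated both sign types; the column names and file name are hypothetical, and the reported analysis was run in SPSS 26.0.

# Illustrative sketch: PSSUQ item means, overall scores per condition, and
# item-wise paired t-tests (hypothetical columns: participant, condition, Q1..Q19).
import pandas as pd
from scipy.stats import ttest_rel

def pssuq_analysis(responses: pd.DataFrame):
    """Expects one row per participant per condition, with item columns Q1..Q19."""
    items = [c for c in responses.columns if c.startswith("Q")]
    treated = responses[responses["condition"] == "treated"].set_index("participant")[items]
    existing = responses[responses["condition"] == "existing"].set_index("participant")[items]
    existing = existing.loc[treated.index]  # align participants across conditions
    item_means = pd.DataFrame({"treated": treated.mean(), "existing": existing.mean()})
    overall = item_means.mean()             # overall PSSUQ score per condition
    tests = {q: ttest_rel(treated[q], existing[q]) for q in items}
    return item_means, overall, tests

if __name__ == "__main__":
    data = pd.read_csv("pssuq_responses.csv")  # hypothetical file
    item_means, overall, tests = pssuq_analysis(data)
    print(overall)
    for q, res in tests.items():
        print(f"{q}: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")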
5 Discussion
This study investigated the difference in attention paid to accessibility guidance signs by visually impaired people in response to different design forms. In general, the treatment of existing accessibility guidance signs can effectively improve the communication of sign information to users. On the treated accessibility signs, the AOI indices at the same positions on the guide signs improved significantly in terms of first gaze time and overall gaze time, which means that the treated guide signs can quickly capture the attention of visually impaired users and can, to a certain extent, encourage continuous viewing, helping visually impaired users absorb the information conveyed by the signs more quickly. In other studies, scholars have used AOI metrics to analyze the effects of different information on users’ attention; in this regard, our study found that guide signs with added visible light were more appealing to people with low vision, similar to the results of related studies [35]. It can be observed from the eye-movement data that different colors and illuminance levels do not cause apparent differences in user reactions to the processed accessibility guidance signs, which may mean that diverse colors can be selected to distinguish types of information while avoiding confusion with more widely used light source colors (e.g., white, red) in the design of guidance signs. Regarding the choice of light brightness, although different brightness values do not cause apparent differences, it is still necessary to consider the visual thresholds of different types of users when designing guide signs to ensure that users can easily and comfortably absorb the visual information. Our study observed significant differences in overall gaze duration across different forms of signage design, which may indicate that dynamic information is more efficient than static information in conveying information to the visually impaired population and is more likely to keep the user’s attention on the signage information. Therefore, strobe or other dynamic effects can be added to different visual information in the design of accessibility signs to communicate information to users more efficiently.
Regarding the EEG and HRV data, although differences in EEG theta power between the untreated and treated accessibility guidance signs were not observed at most electrodes, a difference in theta power was still detected in the central region (Cz), which is consistent with the results obtained from the eye-movement data. Differences in EEG theta power have been associated with attentional focus and increased task difficulty (theta power increases as difficulty increases) [50, 51]. Among brain regions, the parietal lobe is responsible for somatosensory perception; the integration of visual and somatic information; and the receipt of visual, auditory, and somatosensory input related to attention [65]. In other studies using ERP techniques, selective attentional effects were found to occur in the N1 component [66], and multiple causes may produce this component, including stimulation of contralateral occipito-temporal, occipito-parietal, and frontal regions [67]. Additionally, brain imaging investigations have identified a network of dispersed areas in the parietal cortex that appear to be involved in attention allocation [68–71].
Accordingly, our EEG findings imply that parietal areas are the main location of the changes between users viewing unprocessed and processed videos. These results may correlate with the processed information on the accessibility guidance signs, suggesting an increase in user attention and thus in the user’s ability to search for information. In the HRV data, no significant differences were detected between subjects viewing the unprocessed and processed videos, which is generally consistent with the EEG results at most electrodes, possibly implying that subjects do not differ much in their mental state when faced with the untreated and treated accessibility guidance signs. In the subjective evaluation, users rated the processed accessibility signs better than the existing accessibility signs, which indicates that a rational improvement of existing accessibility signs can effectively improve the efficiency of information communication and user satisfaction.
The main limitation of this research is that the eye-movement AOI data were collected from participants viewing a simulated low-vision perspective on a screen rather than from visually impaired individuals in real environments. Because we were unable to accurately replicate the real environment of visually impaired individuals and collect their eye-movement indices, we cannot fully apply the findings to them. However, the image processing techniques used in this study were selected according to the outcomes of interviews with visually impaired users, and the characteristics of visual impairment were combined into one image as much as was practical to simulate the observational perspective of visually impaired people.
6 Conclusion
This study used eye-movement AOIs, EEG and HRV metrics, and subjective evaluation methods to examine how different designs of accessibility guidance signs affected visually impaired users’ attention. The accessibility guidance signs before and after treatment elicited different responses in terms of the AOI eye-movement data and the EEG theta power spectrum, whereas HRV showed no significant differences. These differences may indicate that accessibility signs with added visible light are better able to attract the attention of visually impaired users and help them access the information on the signs more quickly. In the design of accessibility guidance signs, the needs of a wide range of people should be met as far as possible. For example, the design of guide signs should consider the characteristics of people with low vision who can still sense light, and giving different types of visible-light attributes to the guide signs can effectively improve the efficiency of the information they convey. This study aims to provide theoretical references and practical value for barrier-free interactive guidance systems. With the continuous development of a new generation of information technology, there is an inevitable progression toward more intelligent guidance systems for users with disabilities to improve the clarity, responsiveness, and accuracy of user interaction with guidance information and enable users to make more informed guidance decisions.
Acknowledgements The authors thank the Kingfar project team for providing technical assistance with the research and supporting the use of the ErgoLAB Man-Machine-Environment Testing Cloud Platform and related scientific research equipment.
Funding This study was supported by the “Scientific Research Support” project provided by Kingfar International Inc.
Data availability The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Code availability The code that supports the findings of this study is available from the corresponding author upon reasonable request.
Declarations
Conflict of interest The authors declare no conflict of interest.
Ethical approval This study was approved by the ethics committee of Hubei University of Technology (approval no. HBUT20230072). We certify that the study was performed in accordance with the 1964 Declaration of Helsinki and its later amendments. All subjects completed an informed consent form prior to participation in the experiment.