DiGiovanni, Riffle, and McCarthy: Auditory Stroop Using Spatial Stimuli

Abstract

Purpose

The purpose of this study is to evaluate the influence of spatial auditory stimuli when the semantic meaning of the spoken word is incongruent with the location of the sound. Based on previous auditory Stroop research we predict that individuals will respond to incongruent stimuli with reduced accuracy and prolonged reaction times.

Methods

Twenty students between the ages of 18–30 were presented with a word indicating a direction that included location cues representing the same or different direction. Stimuli were presented in the horizontal plane (i.e., left, center, right) and in the vertical plane (i.e., up, center, down). Participants were instructed to answer verbally the direction of the sound source rather than the direction the spoken word indicated. Accuracy and reaction times were analyzed in both planes.

Results

Generally, for stimuli in the horizontal plane, accuracy was high and reaction times were low, regardless of congruency. However, there was a significantly higher frequency of errors in vertical-congruent conditions than in horizontal conditions. The frequency of vertical-incongruent errors was higher still. The pattern of reaction time results matched the accuracy results.

Conclusions

Despite the simulated source angle being well above the minimal audible angle in both planes, the results suggest a lower salience in the vertical domain. If seeking to develop a multi-dimensional auditory map for sound selection, the horizontal plane is most likely to result in the clearest representations of sound-source location.

INTRODUCTION

In his seminal article, John Ridley Stroop conducted three experiments that were the first to demonstrate stimulus interference [1]. In the first of two classic experiments, Stroop compared reading a list of words printed in black ink with reading the same list printed in incongruent colors, and found little difference in reading time between the two lists. In the second, he compared naming the colors of solid color squares with naming the ink colors of words printed in incongruent colors. Participants took much longer to name the ink colors of the incongruent words than to name the colors of the squares; no comparable delay had appeared in the first experiment. Such interference was explained by the automaticity of reading: the mind automatically retrieves the semantic meaning of the word (it reads the word “red” and thinks of the color “red”), and must then intentionally check itself and identify the color of the ink (a color other than red), a process that is not automatized [14]. This phenomenon is known as the Stroop effect.
The Stroop effect has been studied extensively in both visual and auditory modalities. More recently, auditory Stroop tasks have been investigated using speaker gender [2], pitch [3], loudness, and time [4]. Gregg and Purdy [2] replicated studies by Green and Barber [5,6], investigating participants’ reaction times when asked to judge the sex of a speaker by pressing a key labeled “Female” or “Male”; faster reaction times were obtained in the congruent condition. Spapé and Hommel [3] measured reaction times when participants responded vocally with “high” or “low” to sinusoidal tones of 550 or 1,050 Hz. As in previous research, slower reaction times were observed when the stimuli were incongruent. Morgan and Brandt [4] found similarly increased reaction times when participants responded to the pitch, loudness, or duration of a presented word whose meaning conflicted with the response label (i.e., high/low, loud/soft, and fast/slow). While the Stroop effect has been studied extensively, there is limited research on incongruent auditory-spatial stimuli, in either the horizontal or the vertical plane, and their effect on participants’ accuracy and reaction time.
The impact of auditory cues on reaction time has implications for Augmentative and Alternative Communication (AAC) devices. AAC refers to methods and tools used as forms of communication by individuals whose natural speech does not meet their daily communication needs. AAC does not require a technology solution; however, speech-generating devices are often used. Recent studies suggest that around 0.5% of the population could benefit from AAC [7]. With advances in technology and the reduced cost of access to modern technologies, barriers to obtaining AAC devices have fallen considerably. Moreover, these devices can be customized to meet the specific communication needs of each individual.
Individuals with adequate motor abilities can use an AAC system with a graphic interface, quickly and accurately selecting a series of graphic images to produce the desired speech output. However, many individuals who use AAC also have physical disabilities and require a more involved interface for the system to be useful. Those with more severe motor impairments may benefit from a system that includes eye tracking, or a scanning method in which items are presented sequentially until the desired item is highlighted and the individual makes one motion (e.g., a large button press) to select it. There is also a sub-population of AAC users with concomitant visual impairments. Since a visual impairment renders the user unable to view the screen, alternative solutions are required.
As many as 1 in 1,000 children in the United States have low vision or are legally blind. Seventy percent of children with cerebral palsy also have visual impairments [8]. In circumstances where an individual has both motor and visual impairments, a visual graphic interface would not suffice. One solution is to present items sequentially in the auditory domain.
Research to improve access to graphic interfaces on AAC devices has suggested a benefit to users by giving cues as to the relative location of items in an array to aid in navigation [9]. Current auditory scanning techniques rely on labels to represent classes of items for organization. However, there is a lack of information to help organize these items [8]. Two methods for making graphic interfaces more accessible for individuals with motor and visual impairments include the use of auditory icons and spatial auditory cues.
There are implications for how spatial sound cues may or may not facilitate orientation to graphic user interfaces, especially for those with limited visual abilities. The challenge is in providing the needed vocabulary and phrases in an easily retrievable form. Research has proposed multiple ways that visual cues and an understanding of visual processing could assist in accessing AAC displays [10,11]. Lacking in this research, however, is consideration of how auditory cues could also play a role.
The term sound localization refers to the ability to detect where a sound originates in space. When a sound source is located at a given angle from the center of the head, the auditory system utilizes slight differences in the timing and intensity of the sound as it reaches each ear (e.g., a sound coming from the left side will reach the left ear slightly sooner and with more energy than it does the right ear). These cues are known as interaural timing differences (ITDs) and interaural level differences (ILDs), respectively, and play important roles in sound localization. Other anatomical factors such as the pinna and the shape of the head also contribute to sound localization, particularly in the vertical plane. As an incoming sound enters the ear canal, portions of the sound are reflected by the folds of the pinna and the head. The auditory system then compares these reflected sound inputs to the direct sound input; some frequencies will be cancelled out while others are amplified. This is known as the head-related transfer function (HRTF). Sound sources originating from different locations in space will have unique HRTFs which the auditory system recognizes and perceives as directionality. The smallest detectable difference in the location of a sound source is known as the minimum audible angle (MAA) and represents the spatial resolution of the auditory system.
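To make the ITD cue concrete, the sketch below (not part of the study’s methods) approximates the interaural time difference using Woodworth’s classic spherical-head formula; the head radius and speed of sound are nominal assumed values.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate the interaural time difference (ITD) for a far-field
    source using Woodworth's spherical-head model: ITD = (a/c)(theta + sin theta)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source at 45 degrees azimuth (the angle used in this study) yields an
# ITD of roughly 0.38 ms, far above the microsecond-scale detection limit.
print(f"ITD at 45 deg: {itd_seconds(45) * 1e6:.0f} microseconds")
```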
There are a number of interference tasks that might address the questions posed by this study. A visual-spatial Stroop task performed by Wühr [12] examined how the spatial orientation of rectangles containing congruent or incongruent semantic orientation words affected participants’ ability to process the visual stimuli. Wühr found that the Stroop effect was present in this novel task, extending Stroop interference effects to the combined visual-spatial modality. A study by Van der Burg, Brederoo, Nieuwenstein, Theeuwes, and Olivers [13] examined audiovisual semantic interference: participants were presented with letters visually and/or auditorily and were instructed to ignore one modality or the other. Semantic information from both modalities produced interference when the information from either modality was relevant to the task. Given that a Stroop effect is present in spatial tasks and that semantic auditory information interferes with task-relevant information, an auditory-spatial Stroop task is uniquely suited to assess the various elements of semantic auditory-spatial interference.
The purpose of this study is to evaluate the influence of spatial auditory stimuli when the semantic meaning of the spoken word is incongruent with the location of the sound. To do this, single-word utterances were presented in a simulated-spatial fashion in the vertical plane and the horizontal (or azimuthal) plane as separate conditions. If the word spoken (e.g., “left”) matched the location of the sound source (e.g., from the left), the presentation was deemed congruent; if the word differed from the sound location, the presentation was incongruent. Based on previous auditory Stroop research, we predict that participants will respond to incongruent stimuli, in both planes, with decreased accuracy and prolonged reaction times. Similarly, since the stimuli in both planes are well beyond the MAA, we predict that performance between planes will be similar. By investigating the accuracy and reaction times for congruent and incongruent spatial stimuli, we can better understand an individual’s response to spatial stimuli and its significance for the auditory scanning approach of AAC devices for persons with visual and motor impairments.

METHODS

Participants

Twenty students between the ages of 18 and 30 (2 male, 18 female; average age 23 years) volunteered to participate in the study. All participants received a speech and hearing screening to ensure that they could successfully hear the stimuli and respond. The speech screening revealed no significant speech impairments. The hearing screening revealed that all participants’ hearing thresholds were within normal limits (<25 dB HL) at 0.5, 1, 2, and 4 kHz. All participants reported normal or corrected-to-normal vision. Any participant with a speech, hearing, or vision impairment was excluded from the study.

Stimuli

Stimuli used for this experiment included the spoken words “right”, “left”, “up”, “down”, and “center”. Stimuli were recorded in a sound-attenuated booth (Industrial Acoustics Company, Inc., Bronx, NY) on a Roland R-26 6-Channel Portable Recorder (Roland Corporation U.S., Los Angeles, CA) using a Shure SM81-LC cardioid condenser microphone (Shure Incorporated, Niles, IL). Adobe Audition 5.0 (Adobe Systems Incorporated, 2013) was used to normalize each stimulus for duration and amplitude.
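The authors performed this normalization in Adobe Audition. As an illustration only, amplitude normalization can be approximated in a few lines of Python; the file paths and target level below are hypothetical, and duration matching would additionally require time-scale modification.

```python
import numpy as np
import soundfile as sf  # assumed I/O library; any WAV reader/writer works

def normalize_rms(path_in, path_out, target_rms=0.1):
    """Scale a recorded stimulus so every word has the same RMS amplitude."""
    audio, rate = sf.read(path_in)
    rms = np.sqrt(np.mean(audio ** 2))
    sf.write(path_out, audio * (target_rms / rms), rate)

normalize_rms("raw/left.wav", "normalized/left.wav")  # hypothetical paths
```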

Spatial Stimuli

The normalized stimuli were presented in a sound-attenuated booth through a single Bowers & Wilkins DM601 S3 speaker and recorded with a Knowles Electronics Manikin for Acoustic Research (KEMAR). To create the spatial stimuli, the orientation of KEMAR relative to the speaker was manipulated so that the recordings captured sound arriving from each direction. For the center location, the speaker was placed 5 feet from the center of KEMAR’s head with the center of the speaker cone at mid-ear level and 0° azimuth. For the horizontal-plane locations of left and right, the speaker remained at a distance of 5 feet with the speaker cone at mid-ear level while KEMAR was rotated to 45° azimuth for the left location (left ear facing the speaker) and 315° azimuth for the right location (right ear facing the speaker). For the vertical-plane locations of up and down, KEMAR was rotated back to 0° azimuth and the speaker was positioned 5 feet from the center of KEMAR’s head at 45° above mid-ear level for up and 45° below mid-ear level for down. Each stimulus word was played and recorded from each location.
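The study created its spatial stimuli acoustically by re-recording through KEMAR. For readers interested in the digital equivalent, the hedged sketch below renders a mono recording binaurally by convolving it with a pair of head-related impulse responses (HRIRs) measured at the desired direction, e.g., from a public HRTF database; the array names are illustrative and the HRIRs are assumed to be equal-length.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono stimulus at the HRIRs' measured direction by convolving
    it with the left- and right-ear head-related impulse responses."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))  # normalize to avoid clipping
```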

Procedure

Before the experiment began, participants underwent a familiarization process. They were seated in front of a desktop computer in a sound-attenuating booth wearing Etymotic ER-2 insert earphones. During familiarization, each auditory stimulus was presented simultaneously with the corresponding printed word on the computer screen (e.g., participants heard the word “left” coming from the left while the words “This is LEFT” appeared on the screen). The familiarization process introduced the participants to the spatial stimuli; each word was presented one time. In the experimental task that followed, participants were required to state verbally the location the sound came from, regardless of the word presented. The stimuli consisted of congruent trials, in which the spoken word matched the direction it came from (e.g., the word “right” coming from the right, or the word “up” coming from above), and incongruent trials, in which the spoken word did not match the direction it came from (e.g., the word “right” coming from the left, or the word “up” coming from below). The word “center” was presented only from the center location and served as a control. The experimental task consisted of 200 trials presented in a random order for each participant. Participants’ answers were recorded using a Roland R-26 recorder. A Tapco Mix50 compact mixer and an Alesis IO2 USB audio interface were used to record the presented stimulus and the participant’s response simultaneously.
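The article does not report the per-condition trial counts within the 200 randomized trials. The sketch below is one plausible way to generate such a trial list, with the proportion of control trials as an explicit assumption.

```python
import random

WORDS = ["left", "right", "up", "down"]
LOCATIONS = ["left", "right", "up", "down"]

def make_trials(n_trials=200, p_control=0.1, seed=None):
    """Build a randomized (word, location) list. A trial is congruent when
    word == location; 'center' appears only from the center location."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        if rng.random() < p_control:      # assumed proportion of control trials
            trials.append(("center", "center"))
        else:
            trials.append((rng.choice(WORDS), rng.choice(LOCATIONS)))
    rng.shuffle(trials)
    return trials

trials = make_trials(seed=1)  # a fresh random order per participant
```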

RESULTS

Each participant’s recording was analyzed for accuracy and reaction time using Adobe Audition. Accuracy was measured as whether the participant’s response correctly matched the location of the presented stimulus. A 2 (plane: vertical, horizontal) × 2 (congruency: congruent, incongruent) within-subjects repeated-measures ANOVA was performed to analyze accuracy. Accuracy results (proportion correct) revealed a significant interaction between plane and congruency, F(1,19) = 50.35, p < 0.001, ηp² = 0.726. In the vertical plane, incongruent accuracy was significantly lower than congruent accuracy, F(1,19) = 62.68, p < 0.001, ηp² = 0.767. In the horizontal plane, incongruent accuracy was not significantly different from congruent accuracy, F(1,19) = 3.22, p = 0.09, ηp² = 0.145. For congruent stimuli, accuracy in the horizontal plane was significantly higher than in the vertical plane, F(1,19) = 28.30, p < 0.001, ηp² = 0.598. For incongruent stimuli, accuracy in the horizontal plane was also significantly higher than in the vertical plane, F(1,19) = 369.44, p < 0.001, ηp² = 0.951. Accuracy results for the two planes are shown in Figure 1.
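For readers who wish to reproduce this style of analysis, a 2 × 2 repeated-measures ANOVA can be run in Python with statsmodels; the data file and column names below are hypothetical stand-ins for the study’s accuracy data.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per participant x plane x congruency cell,
# each holding that cell's mean proportion correct.
df = pd.read_csv("accuracy_long.csv")  # columns: subject, plane, congruency, acc

result = AnovaRM(df, depvar="acc", subject="subject",
                 within=["plane", "congruency"]).fit()
print(result)  # F and p for plane, congruency, and their interaction
```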
Reaction times (RTs) were measured as the time in milliseconds between the end of the stimulus presentation and the beginning of the participant’s response. A 2 (plane: vertical, horizontal) × 2 (congruency: congruent, incongruent) within-subjects repeated-measures ANOVA was run to analyze RTs. The analysis revealed a significant interaction between plane and congruency, F(1,19) = 5.60, p < 0.001, ηp² = 0.228. In the vertical plane, congruent RTs were significantly faster than incongruent RTs, F(1,19) = 4.55, p = 0.05, ηp² = 0.193. In the horizontal plane, congruent RTs were not significantly different from incongruent RTs, F(1,19) = 0.00, p = 0.95, ηp² = 0.00. For congruent stimuli, RTs in the horizontal plane were significantly faster than in the vertical plane, F(1,19) = 45.66, p < 0.001, ηp² = 0.706. For incongruent stimuli, RTs in the horizontal plane were also significantly faster than in the vertical plane, F(1,19) = 93.62, p < 0.001, ηp² = 0.831. RT results for the two planes are shown in Figure 2. Descriptive statistics for accuracy and RT in the horizontal and vertical planes are shown in Table 1.
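The authors measured RTs manually in Adobe Audition. A simple energy-threshold onset detector, sketched below under the assumption of a clean recording, illustrates how the same measurement could be automated; the threshold and frame size are arbitrary choices.

```python
import numpy as np

def reaction_time_ms(recording, rate, stimulus_end_sample,
                     threshold_db=-30.0, frame_ms=10.0):
    """Return the time from stimulus offset to the first frame whose RMS
    exceeds a threshold relative to the recording's peak level."""
    frame = int(rate * frame_ms / 1000)
    thresh = np.max(np.abs(recording)) * 10 ** (threshold_db / 20)
    for start in range(stimulus_end_sample, len(recording) - frame, frame):
        if np.sqrt(np.mean(recording[start:start + frame] ** 2)) > thresh:
            return (start - stimulus_end_sample) / rate * 1000.0
    return float("nan")  # no response detected
```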
Chi-square goodness-of-fit tests were performed to analyze the errors made by participants. The first test assessed the proportions of total errors per plane (“center” trials excluded). The test indicated a significant difference between the proportion of errors committed on stimuli presented in the vertical plane (0.95) and the expected equal-distribution value of 0.50, χ²(1, n = 936) = 768.27, p < 0.001. For the errors made on trials in which the stimulus was presented in the vertical plane, a chi-square goodness-of-fit test indicated no significant difference between the proportion of errors per semantic word presented (up = 0.24, down = 0.24, right = 0.27, left = 0.25) and the expected equal-distribution value of 0.25, χ²(3, n = 892) = 2.53, p = 0.47. An analysis of errors made on “center” trials was also performed. A chi-square goodness-of-fit test showed a significant difference between the proportions of incorrect responses to “center” trials (up = 0.79, down = 0.15, right = 0.06, left = 0.00) and the expected equal-distribution value of 0.25, χ²(2, n = 176) = 167.47, p < 0.001.
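The first of these tests can be reproduced approximately as follows; the error counts are reconstructed from the rounded proportions reported above, so the statistic differs slightly from the published value.

```python
from scipy.stats import chisquare

# Errors per plane, reconstructed from the reported split (0.95 of 936):
observed = [889, 47]    # vertical, horizontal (approximate counts)
expected = [468, 468]   # equal split under the null hypothesis

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3g}")  # ~757, p << 0.001
```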

DISCUSSION AND CONCLUSION

This study was designed to ascertain the feasibility of providing a two-dimensional auditory grid of cues. The Stroop design was used; listeners were instructed to report where a spoken word was located in space notwithstanding which word was spoken. Naturally, performance was more accurate and faster in congruent presentations when the word spoken (e.g., “right”) was paired with the sound coming from the right. Performance, however, was rather different in the vertical versus horizontal planes.
The data from this study show interference from incongruent stimuli on both accuracy and RT in the vertical plane: accuracy was higher and RTs were shorter in vertical-congruent presentations. Participants were, however, effectively insulated from incongruency in the horizontal plane. Accuracy in the horizontal plane was near perfect regardless of congruency, and RTs in this plane were also quite short for both congruent and incongruent stimuli. We predicted a significant, negative impact of incongruent presentations, which held true for the vertical plane only. Given the difference in performance between the two planes, our second hypothesis proved incorrect.
It is worth noting that there were substantially more female (18) than male (2) participants. However, examining performance differences by gender was not an objective of this experiment. MacLeod [15] reviewed the first 50 years of Stroop research and noted that, as of 1991, over a dozen studies had explicitly examined sex differences in Stroop tasks. He concluded that the research shows no discernible difference between the sexes in Stroop interference at any age: while females may sometimes respond faster than males (especially in color-naming tasks), this reflects a faster general response speed in women and is not influenced by any measure of interference. The visual-spatial Stroop study by Wühr [12] used a participant demographic nearly identical to that of the current study (19 females, 3 males, mean age = 21 years). As in our study, gender differences in performance were not an objective, and Wühr did not report any findings or discussion related to them.
With regard to auditory spatial listening, research has shown that males have superior auditory spatial acuity. Studies by Lewald and Hausmann [16] and Zündorf, Karnath, and Lewald [17] demonstrated that males are better at localizing target sounds in a complex, multiple-source listening environment. In these experiments, participants were required to point to the spatial location of a target sound in two conditions: one with the target sound presented by itself, and one with the target sound and distracter sounds presented simultaneously from different locations. Both studies found that, in the complex environment with multiple distracters, males were more accurate at determining the location of the target sound source; the authors attribute this finding to males having superior sensory and attentional mechanisms for extracting spatial information from a complex scene with multiple sound inputs. While these results are intriguing, two key factors bear on our study. First, both studies [16,17] found that when the target sound was presented by itself, there was no difference in performance between males and females. In our study, there was only a single auditory input per trial; there were never multiple competing sounds to which the participant needed to attend selectively. Under this paradigm, there should be no difference between males’ and females’ spatial localization of the stimuli used in the current study. Further, the experiments mentioned above tested spatial stimuli only in the horizontal plane, in which performance proved to be near ceiling in our experiment.
A study by Giguère, Lavallée, Plourde, and Vaillancourt [18] found that males localize sound more accurately in the vertical plane. The authors attribute this to a physical cause: females generally have smaller pinnae than males and therefore encode vertical HRTF information at higher frequencies. In our study, sound stimuli were recorded with KEMAR, which is modeled on the median human head and torso. Accordingly, all sound stimuli were recorded with the same average HRTF and delivered via insert earphones, which bypass the reflections provided by the listener’s own pinnae. There should therefore be no differences in the acoustic signal that males and females received in our study. Giguère et al. note that differences in cognitive abilities related to auditory spatial attention could also contribute to their findings, citing Zündorf et al. [17]. However, recall that the findings from that study held only in complex listening environments with multiple simultaneous sound sources, which our study did not have. Based on these factors, we do not believe that performance would have been altered significantly by a larger proportion of male participants.
Roffler and Butler [19] argued that for a participant to locate stimuli in the vertical plane accurately, three criteria must be met: the stimulus must be complex, the stimulus must include frequencies above 7,000 Hz, and the pinna must be present. All three requirements were met when recording our stimuli, as the stimuli consisted of speech and were recorded through KEMAR, ensuring that the natural resonance of the pinna and ear canal was captured in the recordings.
The minimum audible angle (MAA) is the smallest angle between two stimuli at which a listener can reliably identify a difference in location. The on-center MAA is 2–3° in the horizontal plane, while in the vertical plane it typically falls between 10° and 15°. This might indicate stronger salience for stimuli presented in the horizontal plane. However, the stimuli developed for this study were well beyond the MAA; we chose 45° in both planes, so the salience of the stimuli in each plane was likely maximized. That congruency affected accuracy only in the vertical plane suggests that the reduced salience of the vertical localization cue renders it more vulnerable to errors when incongruency is present. The difficulty listeners experienced with vertical stimuli therefore appears inherent to human limitations in processing vertical cues.
The degree to which horizontal performance was high, as well as unhindered by the words spoken, warrants discussion. Participants produced near-perfect scores that were unaffected by content, even when the word contradicted the location of the sound. In the current study, there were three locations in this plane: left, center, and right. Given the robustness of the data, a much larger number of elements may be discernible with little to no interference from the spoken word. This suggests that complex streams of auditory elements can be constructed while maintaining a high level of performance. If this holds, the concept could be used to facilitate sequential presentations in AAC devices. First, however, the maximum number of elements and the angular separations achievable before performance drops or interference increases need to be established. Second, pilot implementations need to be tested with the target population of AAC users. Nevertheless, the current study shows clearly that human performance in the vertical plane, while an attractive option because it would allow a multi-dimensional spatial representation of sound, is simply not good enough to serve as a facilitating measure in AAC devices. The horizontal plane shows much greater promise, and to a much greater degree than anticipated. There is a great deal of interest in reducing the memory demands of AAC user interfaces, particularly with regard to scanning. Since scanning requires users to wait for different items to be presented before making a selection, information that helps users recall the order of potential messages is critical. For those with a visual impairment, spatial auditory stimuli would help users track the locations of messages in a location-based array rather than having to remember the order of messages serially. In a conversational context, separating speakers by their spatial location in the horizontal plane would help portray a natural group setting for the user.
Using the Stroop test to study human auditory performance in the vertical and horizontal domains leads to two conclusions. Since vertical performance is unimpressive even in congruent presentations and is vulnerable to the content of the spoken element, we do not recommend pursuing vertical auditory cues as a means of facilitating AAC representations in auditory space. Given the near-perfect performance in the horizontal plane and its resistance to contradictory spoken-word content, horizontal auditory-spatial representations warrant further investigation for implementation in AAC devices to improve the user experience.

Figure 1.
Overall accuracy performance is shown for each plane. In the vertical plane, incongruent presentations significantly reduced performance. In the horizontal plane, performance was near perfect regardless of congruency.
Figure 2.
Overall RT performance is shown for each plane. In the vertical plane, RTs were significantly longer when presentations were incongruent. In the horizontal plane, RTs were short regardless of congruency.
Table 1.
Descriptive statistics for accuracy and RT

             Accuracy (proportion correct)       RT (ms)
             Congruent      Incongruent          Congruent             Incongruent
             M      SD      M      SD            M          SD         M          SD
Vertical     0.75   0.20    0.34   0.15          1,403.11   331.96     1,541.40   344.40
Horizontal   0.99   0.03    0.97   0.06          964.83     207.55     963.48     190.13

REFERENCES

1. Stroop JR. Studies of interference in serial verbal reactions. Journal of Experimental Psychology. 1935;18:643–662.
2. Gregg MK, Purdy KA. Graded auditory Stroop effects generated by gender words. Perceptual and Motor Skills. 2007;5(2):549–555.
3. Spapé MM, Hommel B. He said, she said: Episodic retrieval induces conflict adaptation in an auditory Stroop task. Psychonomic Bulletin & Review. 2008;15(6):1117–1121.
4. Morgan AL, Brandt JF. An auditory Stroop effect for pitch, loudness, and time. Brain and Language. 1989;36:592–603.
5. Green EJ, Barber PJ. An auditory Stroop effect with judgements of speaker gender. Perception & Psychophysics. 1981;30:459.
6. Green EJ, Barber PJ. Interference effects in an auditory Stroop task: Congruence and correspondence. Acta Psychologica. 1983;53:183–194.
7. Creer S, Enderby P, Judge S, John A. Prevalence of people who could benefit from augmentative and alternative communication (AAC) in the UK: determining the need. International Journal of Language & Communication Disorders. 2016;51(6):639–653.
8. Kovach TM, Kenyon PB. Visual issues and access to AAC. In : Light JC, Beukelman DR, Reichle J, editors. Communicative Competence for Individuals who use Augmentative and Alternative Communication. Baltimore MD: Brookes Pub Co, 2002. p. 277–319.

9. Ratanasit D, Moore M. Representing graphical user interfaces with sound: A review of approaches. Journal of Visual Impairment and Blindness. 2005;99(2):69–84.
10. Wilkinson KM, Jagaroo V. Contributions of visual cognitive neuroscience to AAC display design. Augmentative and Alternative Communication. 2004;20:123–136.
11. Jagaroo V, Wilkinson KM. Further considerations of visual cognitive neuroscience in aided AAC: the potential role of motion perception systems in maximizing design display. Augmentative and Alternative Communication. 2008;24(1):29–42.
12. Wühr P. A Stroop effect for spatial orientation. Journal of General Psychology. 2007;134(3):285–294.
13. Van der Burg E, Brederoo SG, Nieuwenstein MR, Theeuwes J, Olivers CNL. Audiovisual semantic interference and attention: evidence from the attentional blink paradigm. Acta Psychologica. 2010;134:198–205.
14. Wilkinson KM, Coombs B. Preliminary exploration of the effect of background color on the speed and accuracy of search for an aided symbol target by typically developing preschoolers. Early Childhood Services. 2010;4:171–183.
15. MacLeod CM. Half a century of research on the Stroop effect: an integrative review. Psychological Bulletin. 1991;109(2):163–203.
16. Lewald J, Hausmann M. Effects of sex and age on auditory spatial scene analysis. Hearing Research. 2013;299:46–52.
17. Zündorf IC, Karnath HO, Lewald J. Male advantage in sound localization at cocktail parties. Cortex. 2011;47:741–749.
18. Giguère C, Lavallée R, Plourde J, Vaillancourt V. Vertical sound localization in left, median and right lateral planes. Canadian Acoustics. 2011;39(4):3–12.

19. Roffler SK, Butler RA. Factors that influence the localization of sound in the vertical plane. Journal of the Acoustical Society of America. 1968;43:1255–1259.