ORCID Profile
0000-0001-8858-946X
Current Organisation
University of Nottingham
Publisher: American Speech Language Hearing Association
Date: 12-01-2023
DOI: 10.1044/2022_JSLHR-22-00133
Abstract: This article provides a tutorial introduction to ordinal pattern analysis, a statistical analysis method designed to quantify the extent to which hypotheses of relative change across experimental conditions match observed data at the level of individuals. This method may be a useful addition to familiar parametric statistical methods including repeated measures analysis of variance and generalized linear mixed-effects models, particularly when analyzing inherently individual characteristics, such as perceptual processes, and where experimental effects are usefully modeled in relative rather than absolute terms. Three analyses of increasing complexity are demonstrated using ordinal pattern analysis. An initial analysis of a very small data set is designed to explicate the simple mathematical calculations that make up ordinal pattern analysis, which can be performed without the aid of a computer. Analyses of slightly larger data sets are used to demonstrate familiar concepts, including comparison of competing hypotheses, handling missing data, group comparisons, and pairwise tests. All analyses can be reproduced using provided code and data. Ordinal pattern analysis results are presented, along with an analogous linear mixed-effects analysis, to illustrate the similarities and differences in information provided by ordinal pattern analysis in comparison to familiar parametric methods. Although ordinal pattern analysis does not produce familiar numerical effect sizes, it does provide highly interpretable results in terms of the proportion of individuals whose results are consistent with a hypothesis, along with individual- and group-level statistics, which quantify hypothesis performance.
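As a rough illustration of the simple pairwise calculations the tutorial describes, ordinal pattern analysis can be reduced to counting ordered condition pairs that change in the hypothesized direction. The data, participant labels, and function names below are invented for this sketch and are not taken from the article:

```python
from itertools import combinations

def ordinal_fit(hypothesis, observed):
    """Proportion of ordered condition pairs in which the observed
    data change in the direction the hypothesis predicts."""
    hits = misses = 0
    for i, j in combinations(range(len(hypothesis)), 2):
        pred = hypothesis[j] - hypothesis[i]
        obs = observed[j] - observed[i]
        if pred == 0 or obs == 0:
            continue  # ties carry no ordinal prediction
        if (pred > 0) == (obs > 0):
            hits += 1
        else:
            misses += 1
    return hits / (hits + misses) if hits + misses else None

# Hypothesis: scores increase across conditions A < B < C
hypothesis = [1, 2, 3]
individuals = {
    "p1": [10, 14, 19],   # fully consistent with the hypothesis
    "p2": [12, 11, 15],   # partially consistent
    "p3": [20, 18, 13],   # fully inconsistent
}
fits = {pid: ordinal_fit(hypothesis, scores) for pid, scores in individuals.items()}
consistent = sum(f == 1.0 for f in fits.values())
print(fits)                      # per-individual fit proportions
print(consistent / len(fits))    # proportion of fully consistent individuals
```

Because the method is ordinal, only the direction of each pairwise change matters, which is why results are naturally reported as proportions of individuals rather than effect sizes.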
Publisher: Elsevier BV
Date: 06-2018
Publisher: American Speech Language Hearing Association
Date: 04-04-2022
DOI: 10.1044/2021_JSLHR-21-00606
Abstract: The purpose of this letter is to draw attention to recent literature regarding the communication abilities and experiences of Autistic people and the potential for detrimental effects on mental health and service provision resulting from behavior modification programs. I will argue that viewing Autistic communication as characterized by pragmatic language impairment is inconsistent with evidence of effective and positive communication between Autistic people and with the social model of disability. Proposals for interventions targeting Autistic people should carefully weigh the costs and benefits for Autistic people and should integrate the perspectives of Autistic people.
Publisher: Acoustical Society of America (ASA)
Date: 10-2016
DOI: 10.1121/1.4971077
Abstract: Speech intelligibility is commonly assessed in rather unrealistic acoustic environments at negative signal-to-noise ratios (SNRs). As a consequence, the results seem unlikely to reflect the subjects’ experience in the real world. To improve the ecological validity of speech tests, different sound reproduction techniques have been used by researchers to recreate field-recorded acoustic environments in the laboratory. Whereas the real-world sound pressure levels of these environments are usually known, this is not necessarily the case for the level of the target speech (and therefore the SNR). In this study, a two-talker conversation task is used to derive realistic target speech levels for given virtual acoustic environments. The talkers communicate with each other while listening to binaural recordings of the environments using highly open headphones. During the conversation their speech is recorded using close-talk microphones. Conversations between ten pairs of young normal-hearing talkers were recorded in this way in 12 different environments and the corresponding speech levels were derived. In this presentation, the methods are introduced and the derived speech levels are compared to results from the literature as well as from real sound-field recordings. The possibility of using this technique to generate environment-specific speech material with realistic vocal effort is discussed.
Publisher: Acoustical Society of America (ASA)
Date: 10-2016
DOI: 10.1121/1.4970120
Abstract: Speech produced in noisy environments (Lombard speech) is characterized by a range of acoustic and phonetic changes. These changes stem from increased speaking effort which reflects communicative intent as well as decreased auditory feedback of the speaker’s own voice. An accurate understanding of real-world Lombard effects is important in hearing science for the development and assessment of signal processing strategies targeting realistic speech signals. While Lombard effects are well known from the literature, studies of Lombard speech have typically been based on relatively unnatural speaking tasks such as reading from a script and have been measured in simplified acoustic backgrounds such as stationary noise or constructed babble noise. Lombard speech produced under such unnatural conditions may differ significantly from speech produced in real-world settings. This study describes a novel method of eliciting natural conversational speech across five highly realistic everyday acoustic environments. Through the increased realism of both the speaking task and acoustic backgrounds this study aims to provide a more ecologically valid approximation of real-world Lombard speech than has been previously reported. Based on recordings of conversations between 10 pairs of young, normal-hearing people, a continuum of ordered acoustic and phonetic changes in speech is described in relation to changes in acoustic environments and is related to self-reported listening effort ratings across acoustic environments.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 13-01-2022
DOI: 10.1097/AUD.0000000000001202
Abstract: Tests of hearing function are typically conducted in conditions very different from those in which people need to hear and communicate. Even when test conditions are more similar, they cannot represent the diversity of situations that may be encountered by individuals in daily life. As a consequence, it is necessary to consider external validity: the extent to which findings are likely to generalize to conditions beyond those in which data are collected. External validity has long been a concern in many fields and has led to the development of theories and methods aimed at improving generalizability of laboratory findings. Within hearing science, along with related fields, efforts to address generalizability have come to focus heavily on realism: the extent to which laboratory conditions are similar to conditions found in everyday settings of interest. In fact, it seems that realism is now tacitly equated with generalizability. The term that has recently been applied to this approach by many researchers is ecological validity. Recent usage of the term ecological validity within hearing science, as well as other fields, is problematic for three related reasons: (i) it encourages the conflation of the separate concepts of realism and validity; (ii) it diverts attention from the need for methods of quantifying generalization directly; and (iii) it masks a useful longstanding definition of ecological validity within the field of ecological psychology. The definition of ecological validity first used within ecological psychology—the correlation between cues received at the peripheral nervous system and the identity of distant objects or events in the environment—is entirely different from its current usage in hearing science and many related fields. However, as part of an experimental approach known as representative design, the original concept of ecological validity can play a valuable role in facilitating generalizability.
This paper will argue that separate existing terms should be used when referring to realism and generalizability, and that the definition of ecological validity provided by the Lens Model may be a valuable conceptual tool within hearing science.
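A minimal sketch of the ecological-psychology definition cited in this abstract: ecological validity as the correlation between a proximal cue and a distal property of the environment. The data values below are hypothetical and purely illustrative (a quieter level at the ear as a source moves farther away):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between a proximal cue and a distal criterion."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: received intensity level (dB) of a sound at the ear
# (proximal cue) versus the true distance (m) of its source (distal criterion).
cue = [72, 65, 60, 55, 50]
criterion = [1, 2, 4, 8, 16]

print(round(pearson_r(cue, criterion), 2))  # strong negative cue-criterion relation
```

In the Lens Model sense, a cue with a correlation near ±1 is highly ecologically valid: it is a trustworthy basis for inferring the distal state of the world.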
Publisher: Acoustical Society of America (ASA)
Date: 03-2020
DOI: 10.1121/10.0000780
Abstract: To capture the demands of real-world listening, laboratory-based speech-in-noise tasks must better reflect the types of speech and environments listeners encounter in everyday life. This article reports the development of original sentence materials that were produced spontaneously with varying vocal efforts. These sentences were extracted from conversations between a talker pair (female/male) communicating in different realistic acoustic environments to elicit normal, raised and loud vocal efforts. In total, 384 sentences were extracted to provide four equivalent lists of 16 sentences at the three efforts for the two talkers. The sentences were presented to 32 young, normally hearing participants in stationary noise at five signal-to-noise ratios from −8 to 0 dB in 2 dB steps. Psychometric functions were fitted for each sentence, revealing an average 50% speech reception threshold (SRT50) of −5.2 dB, and an average slope of 17.2%/dB. Sentences were then level-normalised to adjust their in idual SRT50 to the mean (−5.2 dB). The sentences may be combined with realistic background noise to provide an assessment method that better captures the perceptual demands of everyday communication.
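Assuming a standard logistic parametrization (the abstract does not specify the form of the fitted psychometric functions), the reported mean SRT50 of −5.2 dB and mean slope of 17.2%/dB can be turned into a predicted intelligibility curve as follows:

```python
import math

def psychometric(snr, srt50, slope):
    """Logistic psychometric function. slope is the gradient at the
    midpoint in proportion correct per dB (0.172 for 17.2 %/dB)."""
    return 1.0 / (1.0 + math.exp(-4.0 * slope * (snr - srt50)))

def srt(p, srt50, slope):
    """Invert the function: the SNR at which proportion correct equals p."""
    return srt50 - math.log(1.0 / p - 1.0) / (4.0 * slope)

srt50, slope = -5.2, 0.172   # mean values reported in the abstract
for snr in [-8, -6, -4, -2, 0]:
    print(f"{snr:+d} dB SNR -> {psychometric(snr, srt50, slope):.2f} correct")
print(round(srt(0.5, srt50, slope), 1))   # recovers the SRT50
```

The factor of 4 converts the midpoint gradient into the logistic rate parameter; this is one common convention, not necessarily the one used in the study.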
Publisher: Georg Thieme Verlag KG
Date: 02-2018
DOI: 10.3766/JAAA.16145
Abstract: Previous research suggests that a proportion of children experiencing reading and listening difficulties may have an underlying primary deficit in the way that the central auditory nervous system analyses the perceptually important, rapidly varying, formant frequency components of speech. The Phoneme Identification Test (PIT) was developed to investigate the ability of children to use spectro-temporal cues to perceptually categorize speech sounds based on their rapidly changing formant frequencies. The PIT uses an adaptive two-alternative forced-choice procedure whereby the participant identifies a synthesized consonant-vowel (CV) (/ba/ or /da/) syllable. CV syllables differed only in the second formant (F2) frequency along an 11-step continuum (between 0% and 100%—representing an ideal /ba/ and /da/, respectively). The CV syllables were presented in either quiet (PIT Q) or noise at a 0 dB signal-to-noise ratio (PIT N). The study comprised development of the PIT stimuli and test protocols, and collection of normative and test–retest reliability data. Participants were twelve adults (aged 23 yr 10 mo to 50 yr 9 mo, mean 32 yr 5 mo) and 137 typically developing, primary-school children (aged 6 yr 0 mo to 12 yr 4 mo, mean 9 yr 3 mo); there were 73 males and 76 females. Data were collected using a touchscreen computer. Psychometric functions were automatically fit to individual data by the PIT software. Performance was determined by the width of the continuum for which responses were neither clearly /ba/ nor /da/ (referred to as the uncertainty region [UR]). A shallower psychometric function slope reflected greater uncertainty. Age effects were determined based on raw scores. Z scores were calculated to account for the effect of age on performance. Outliers, and individual data for which the confidence interval of the UR exceeded a maximum allowable value, were removed. Nonparametric tests were used as the data were skewed toward negative performance.
Across participants, the median value of the F2 range that resulted in uncertain responses was 33% in quiet and 40% in noise. There was a significant effect of age on the width of this UR (p < 0.00001) in both quiet and noise, with performance becoming adult-like by age 9 on the PIT Q and age 10 on the PIT N. A skewed distribution toward negative performance occurred in both quiet (p = 0.01) and noise (p = 0.006). Median UR scores were significantly wider in noise than in quiet (T = 2041, p < 0.0000001). Performance (z scores) across the two tests was significantly correlated (r = 0.36, p = 0.000009). Test–retest z scores were significantly correlated in both quiet and noise (r = 0.4 and 0.37, respectively, p < 0.0001). The PIT normative data show that the ability to identify phonemes based on changes in formant transitions improves with age, and that some children in the general population have performance much worse than their age peers. In children, uncertainty increases when the stimuli are presented in noise. The test is suitable for use in planned studies in a clinical population.
Publisher: Georg Thieme Verlag KG
Date: 02-2018
DOI: 10.3766/JAAA.16146
Abstract: Intensity peaks and valleys in the acoustic signal are salient cues to syllable structure, which is accepted to be a crucial early step in phonological processing. As such, the ability to detect low-rate (envelope) modulations in signal amplitude is essential to parse an incoming speech signal into smaller phonological units. The Parsing Syllable Envelopes (ParSE) test was developed to quantify the ability of children to recognize syllable boundaries using an amplitude modulation detection paradigm. The envelope of a 750-msec steady-state /a/ vowel is modulated into two or three pseudo-syllables using notches with modulation depths varying between 0% and 100% along an 11-step continuum. In an adaptive three-alternative forced-choice procedure, the participant identified whether one, two, or three pseudo-syllables were heard. The study comprised development of the ParSE stimuli and test protocols, and collection of normative and test–retest reliability data. Participants were eleven adults (aged 23 yr 10 mo to 50 yr 9 mo, mean 32 yr 10 mo) and 134 typically developing, primary-school children (aged 6 yr 0 mo to 12 yr 4 mo, mean 9 yr 3 mo); there were 73 males and 72 females. Data were collected using a touchscreen computer. Psychometric functions (PFs) were automatically fit to individual data by the ParSE software. Performance was related to the modulation depth at which syllables can be detected with 88% accuracy (referred to as the upper boundary of the uncertainty region [UBUR]). A shallower PF slope reflected a greater level of uncertainty. Age effects were determined based on raw scores. Z scores were calculated to account for the effect of age on performance. Outliers, and individual data for which the confidence interval of the UBUR exceeded a maximum allowable value, were removed. Nonparametric tests were used as the data were skewed toward negative performance. Across participants, the performance criterion (UBUR) was met with a median modulation depth of 42%.
The effect of age on the UBUR was significant (p < 0.00001). The UBUR ranged from 50% modulation depth for 6-yr-olds to 25% for adults. Children aged 6–10 had significantly higher uncertainty region boundaries than adults. A skewed distribution toward negative performance occurred (p = 0.00007). There was no significant difference in performance on the ParSE between males and females (p = 0.60). Test–retest z scores were strongly correlated (r = 0.68, p < 0.0000001). The ParSE normative data show that the ability to identify syllable boundaries based on changes in amplitude modulation improves with age, and that some children in the general population have performance much worse than their age peers. The test is suitable for use in planned studies in a clinical population.
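The stimulus construction described in this abstract can be sketched roughly as follows. The envelope shape (raised-cosine notches), notch duration, and sample rate used here are assumptions for illustration only, not the published test parameters:

```python
import math

def parse_style_envelope(duration_ms=750, n_syllables=2, depth=0.5,
                         notch_ms=100, sr=1000):
    """Sketch of a ParSE-style stimulus envelope: a steady 'vowel' envelope
    with raised-cosine amplitude notches (depth 0..1) marking pseudo-syllable
    boundaries. Parameter values are illustrative, not the test's own."""
    n = int(duration_ms * sr / 1000)
    env = [1.0] * n
    notch_len = int(notch_ms * sr / 1000)
    # place one notch per internal pseudo-syllable boundary
    for k in range(1, n_syllables):
        centre = k * n // n_syllables
        start = centre - notch_len // 2
        for i in range(notch_len):
            # raised-cosine dip reaching (1 - depth) at the notch centre
            dip = depth * 0.5 * (1.0 - math.cos(2.0 * math.pi * i / notch_len))
            env[start + i] = 1.0 - dip
    return env

env = parse_style_envelope(depth=0.42)  # the median detectable depth above
print(min(env))                          # deepest point of the notch: 1 - depth
```

Multiplying such an envelope against a steady synthetic vowel would yield one, two, or three perceived pseudo-syllables depending on whether the notch depth exceeds the listener's detection threshold.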
Publisher: Wissenschaftliche Verlagsgesellschaft mbH
Date: 07-2019
DOI: 10.3813/AAA.919349
Abstract: Everyday listening environments are characterized by far more complex spatial, spectral and temporal sound field distributions than the acoustic stimuli that are typically employed in controlled laboratory settings. As such, the reproduction of acoustic listening environments has become important for several research avenues related to sound perception, such as hearing loss rehabilitation, soundscapes, speech communication, auditory scene analysis, automatic scene classification, and room acoustics. However, the recordings of acoustic environments that are used as test material in these research areas are usually designed specifically for one study, or are provided in custom databases that cannot be universally adapted beyond their original application. In this work we present the Ambisonic Recordings of Typical Environments (ARTE) database, which addresses several research needs simultaneously: realistic audio recordings that can be reproduced in 3D, 2D, or binaurally, with known acoustic properties, including absolute level and room impulse response. Multichannel higher-order ambisonic recordings of 13 realistic typical environments (e.g., office, café, dinner party, train station) were processed, acoustically analyzed, and subjectively evaluated to determine their perceived identity. The recordings are delivered in a generic format that may be reproduced with different hardware setups, and may also be used in binaural or single-channel setups. Room impulse responses, as well as detailed acoustic analyses, of all environments supplement the recordings. The database is made open to the research community with the explicit intention to expand it in the future and include more scenes.
Publisher: Acoustical Society of America (ASA)
Date: 04-2021
DOI: 10.1121/10.0004441
Abstract: Speech signals employed in clinical and research contexts are thought to be realistic or representative to the extent that they consist of phonetic, lexical, and morphosyntactic content. These characteristics may be assumed to ensure that speech perception tests are representative of the demands of real-world speech perception and understanding, and therefore that these tests are predictive of real-world speech understanding. However, there are numerous reports of discrepancies between the results of speech perception tests and real-world speech understanding and hearing device benefit. To make optimal use of existing speech tests, and to design more predictive speech tests in the future, it is important to consider how clinical and research speech tests do and do not represent the process and demands of everyday speech understanding. Speech and language examples from a single talker engaged in conversation with a hearing-impaired interlocutor in a range of realistic acoustic environments will be contrasted with widely used recordings of BKB sentences produced by the same talker. The absence of much natural variation in standard speech test materials, as well as a failure to quantify perception of information which cannot be captured orthographically, will be discussed as limitations to the generalizability of standard speech test results.
Publisher: American Speech Language Hearing Association
Date: 26-02-2019
DOI: 10.1044/2018_JSLHR-H-18-0107
Abstract: The purpose of this study was to introduce a method of eliciting conversational behavior with many aspects of realism, which may be used to study the impacts of hearing impairment and noise on verbal communication; to describe the characteristics of speech and language participants produced during the task; and to assess participants' engagement and motivation while completing the task. Twenty young adults with normal hearing and 20 older adults with hearing impairment took part in face-to-face conversations while completing a referential communication puzzle task designed to elicit natural conversational speech production and language with a number of realistic characteristics. Participants rated the difficulty and relevance of acoustic scenes for communication and their engagement in conversations. The communication task elicited speech production in a natural conversational register and language with many realistic characteristics, including complex linguistic constructions and typical disfluencies found in everyday speech, and approximately balanced contributions within dyads. Subjective ratings suggest that the task is robust to learning and fatigue effects and that participants remained highly engaged throughout the experiment. All participants were able to maintain successful communication regardless of background noise level and degree of hearing impairment. The communication task described here may be used as part of a functional assessment of the ability to communicate in the presence of noise and hearing impairment. Although existing speech assessments have many strengths, they do not take into account the inherently interactive nature of spoken communication or the effects of motivation and engagement.
Publisher: Acoustical Society of America (ASA)
Date: 09-2022
DOI: 10.1121/10.0013896
Abstract: Natural, conversational speech signals contain sources of symbolic and iconic information, both of which are necessary for the full understanding of speech. But speech intelligibility tests, which are generally derived from written language, present only symbolic information sources, including lexical semantics and syntactic structures. Speech intelligibility tests exclude almost all sources of information about talkers, including their communicative intentions and their cognitive states and processes. There is no reason to suspect that either hearing impairment or noise selectively affect perception of only symbolic information. We must therefore conclude that diagnosis of good or poor speech intelligibility on the basis of standard speech tests is based on measurement of only a fraction of the task of speech perception. This paper presents a descriptive comparison of information sources present in three widely used speech intelligibility tests and spontaneous, conversational speech elicited using a referential communication task. The aim of this comparison is to draw attention to the differences in not just the signals, but the tasks of listeners perceiving these different speech signals and to highlight the implications of these differences for the interpretation and generalizability of speech intelligibility test results.
Publisher: Acoustical Society of America (ASA)
Date: 10-2019
DOI: 10.1121/1.5137558
Abstract: Hearing-related questionnaires can reveal much about the daily experience of hearing aid users. Nonetheless, results may not fully reflect the lived experience for several reasons, including: users’ limited awareness of all communication challenges, limitations of memory, and the subjective nature of reporting. Multiple factors can influence results obtained from questionnaires (Nelson et al. ASA Louisville). Consideration of the perspectives of both hearing aid wearers and communication partners may better reflect the challenges of two-way everyday communication. We have developed simulations of challenging conversational scenarios so that clients and their partners can make judgments of sensory aid performance in realistic, but controlled conditions. Listeners with hearing loss and their partners use a client-oriented scale (adapted from the COSI, Dillon, 1997) to report challenging listening conditions such as small group conversations, phone conversations, health reports, and media. Representative scenarios are simulated in the laboratory where clients and partners make ratings of intelligibility, quality, and preference. Results are compared to outcome measures such as the Speech, Spatial and Qualities of Hearing Scale (SSQ, Gatehouse and Noble, 2004) and Social Participation Restrictions Questionnaire (SPaRQ, Heffernan et al., 2018). Results will help refine methods for evaluating the performance of emerging technologies for hearing loss.
Location: United States of America
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Timothy Beechey.