ORCID Profile
0000-0002-7776-0222
Current Organisation
University of Reading
Publisher: Springer Science and Business Media LLC
Date: 29-08-2023
DOI: 10.1007/s10803-023-06075-7
Abstract: Previous studies reported mixed findings on autistic individuals’ pitch perception relative to neurotypical (NT) individuals. We investigated whether this may be partly due to individual differences in cognitive abilities by comparing performance on various pitch perception tasks in a large sample (n = 164) of autistic and NT children and adults. Our findings revealed that: (i) autistic individuals showed either similar or worse performance than NT individuals on the pitch tasks; (ii) cognitive abilities were associated with performance on some pitch tasks; and (iii) cognitive abilities modulated the relationship between autism diagnosis and pitch perception on some tasks. Our findings highlight the importance of taking an individual differences approach to understand the strengths and weaknesses of pitch processing in autism.
Publisher: Frontiers Media SA
Date: 09-04-2015
Publisher: Frontiers Media SA
Date: 23-12-2016
Publisher: Wiley
Date: 24-05-2020
DOI: 10.1111/psyp.13598
Publisher: University of California Press
Date: 04-2016
Abstract: We examined explicit processing of musical syntax and tonality in a group of Han Chinese Mandarin speakers with congenital amusia, and the extent to which pitch discrimination impairments were associated with syntax and tonality processing. In Experiment 1, we assessed whether congenital amusia is associated with impaired explicit processing of musical syntax. Congruity ratings were examined for syntactically regular or irregular endings in harmonic and melodic contexts. Unlike controls, amusic participants failed to explicitly distinguish regular from irregular endings in both contexts. Surprisingly, however, a concurrent manipulation of pitch distance did not affect the processing of musical syntax for amusics, and their impaired music-syntactic processing was uncorrelated with their pitch discrimination thresholds. In Experiment 2, we assessed tonality perception using a probe-tone paradigm. Recovery of the tonal hierarchy was less evident for the amusic group than for the control group, and this reduced sensitivity to tonality in amusia was also unrelated to poor pitch discrimination. These findings support the view that music structure is processed by cognitive and neural resources that operate independently of pitch discrimination, and that these resources are impaired in explicit judgments for individuals with congenital amusia.
Publisher: Public Library of Science (PLoS)
Date: 08-02-2012
Publisher: Center for Open Science
Date: 30-11-2022
Abstract: What, if any, similarities and differences between music and speech are consistent across cultures? Both music and language are found in all known human societies and are argued to share evolutionary roots and cognitive resources, yet no studies have compared similarities and differences between song, speech, and instrumental music across languages on a global scale. In this Registered Report, we analyze a novel dataset of 300 high-quality annotated audio recordings representing matched sets of singing, recitation, conversational speech, and instrumental music from our 75 coauthors whose 55 first/heritage languages span 21 language families to find strong evidence for cross-culturally consistent differences and similarities between music and language. Of our six pre-registered predictions, five were strongly supported: relative to speech, songs use 1) higher pitch, 2) slower temporal rate, and 3) more stable pitches, while both songs and speech used similar 4) pitch interval size, and 5) timbral brightness. Our 6th prediction that song and speech would show similar pitch declination was inconclusive, with exploratory analysis suggesting that songs tend to follow an arched contour while speech contours tend to decline overall but end with a slight rise. Because our non-representative language sample and unusual design involving coauthors as participants could affect our results, we also performed robustness analyses - including a parallel reanalysis of a previously published dataset of 418 song/speech recordings from 209 individuals whose 16 languages span 11 language families (Hilton & Moser et al., 2022, Nature Human Behaviour) - which confirmed that our conclusions are robust to these potential biases.
Exploratory analyses identified additional features such as phrase length, intensity, and rhythmic/melodic regularity that also consistently distinguish song from speech, and suggest that such features also vary along a “musi-linguistic” continuum in a cross-culturally consistent manner when including instrumental melodies and recited lyrics. Further exploratory analysis suggests that pitch height is the only consistently sexually dimorphic feature (female singing/speaking is almost one octave higher than male on average), and that other factors such as musical training and recording context may also interact to influence the magnitude of song-speech differences. Our study provides strong empirical evidence for the existence of cross-cultural regularities in music and speech.
Publisher: Acoustical Society of America (ASA)
Date: 12-2020
DOI: 10.1121/10.0002776
Abstract: Many studies have reported a musical advantage in perceiving lexical tones among non-native listeners, but it is unclear whether this advantage also applies to native listeners, who are likely to show ceiling-like performance and thus mask any potential musical advantage. The ongoing tone merging phenomenon in Hong Kong Cantonese provides a unique opportunity to investigate this as merging tone pairs are reported to be difficult to differentiate even among native listeners. In the present study, native Cantonese musicians and non-musicians were compared based on discrimination and identification of merging Cantonese tone pairs to determine whether a musical advantage in perception will be observed, and if so, whether this is seen on the phonetic and/or phonological level. The tonal space of the subjects' lexical tone production was also compared. Results indicated that the musicians outperformed the non-musicians on the two perceptual tasks, as indexed by a higher accuracy and faster reaction time, particularly on the most difficult tone pair. In the production task, however, there was no group difference in various indices of tonal space. Taken together, musical experience appears to facilitate native listeners' perception, but not production, of lexical tones, which partially supports a music-to-language transfer effect.
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Fang Liu.