ORCID Profile: 0000-0002-5760-7490
Current Organisations: Université Claude Bernard Lyon 1, Western Sydney University
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Sensory Processes, Perception and Performance | Psychology | Linguistic Processes (incl. Speech Production and Comprehension)
Expanding Knowledge in Psychology and Cognitive Sciences | Hearing, Vision, Speech and Their Disorders
Publisher: MDPI AG
Date: 08-04-2022
Abstract: Speech therapy can be part of the care pathway for patients recovering from comas and presenting a disorder of consciousness (DOC). Although there are no official recommendations for speech therapy follow-up, neuroscientific studies suggest that relevant stimuli may have beneficial effects on the behavioral assessment of patients with a DOC. In two case studies, we longitudinally measured (from 4 to 6 weeks) the behavior (observed in a speech therapy session or using items from the Coma Recovery Scale—Revised) of two patients in a minimally conscious state (MCS) when presenting music and/or autobiographical materials. The results highlight the importance of using relevant material during a speech therapy session and suggest that a musical context with a fast tempo could improve behavior evaluation compared to noise. This work supports the importance of adapted speech therapy for MCS patients and encourages larger studies to confirm these initial observations.
Publisher: American Psychological Association (APA)
Date: 11-2021
DOI: 10.1037/NEU0000766
Publisher: Springer Science and Business Media LLC
Date: 17-12-2018
DOI: 10.1038/S41598-018-36076-X
Abstract: Music and language are complex hierarchical systems in which individual elements are systematically combined to form larger, syntactic structures. Suggestions that music and language share syntactic processing resources have relied on evidence that syntactic violations in music interfere with syntactic processing in language. However, syntactic violations may affect auditory processing in non-syntactic ways, accounting for reported interference effects. To investigate the factors contributing to interference effects, we assessed recall of visually presented sentences and word-lists when accompanied by background auditory stimuli differing in syntactic structure and auditory distraction: melodies without violations, scrambled melodies, melodies that alternate in timbre, and environmental sounds. In Experiment 1, one-timbre melodies interfered with sentence recall, and increasing both syntactic complexity and distraction by scrambling melodies increased this interference. In contrast, three-timbre melodies reduced interference on sentence recall, presumably because alternating instruments interrupted auditory streaming, reducing pressure on long-distance syntactic structure building. Experiment 2 confirmed that participants were better at discriminating syntactically coherent one-timbre melodies than three-timbre melodies. Together, these results illustrate that syntactic processing and auditory streaming interact to influence sentence recall, providing implications for theories of shared syntactic processing and auditory distraction.
Publisher: Springer Science and Business Media LLC
Date: 18-04-2022
Publisher: Elsevier BV
Date: 04-2020
DOI: 10.1016/J.BANDC.2020.105531
Abstract: When listening to temporally regular rhythms, most people are able to extract the beat. Evidence suggests that the neural mechanism underlying this ability is the phase alignment of endogenous oscillations to the external stimulus, allowing for the prediction of upcoming events (i.e., dynamic attending). Relatedly, individuals with dyslexia may have deficits in the entrainment of neural oscillations to external stimuli, especially at low frequencies. The current experiment investigated rhythmic processing in adults with dyslexia and matched controls. Regular and irregular rhythms were presented to participants while electroencephalography was recorded. Regular rhythms contained the beat at 2 Hz while acoustic energy was maximal at 4 Hz and 8 Hz. These stimuli allowed us to investigate whether the brain responds non-linearly to the beat-level of a rhythmic stimulus, and whether beat-based processing differs between dyslexic and control participants. Both groups showed enhanced stimulus-brain coherence for regular compared to irregular rhythms at the frequencies of interest, with an overrepresentation of the beat-level in the brain compared to the acoustic signal. In addition, we found evidence that controls extracted subtle temporal regularities from irregular stimuli, whereas dyslexics did not. Findings are discussed in relation to dynamic attending theory and rhythmic processing deficits in dyslexia.
Publisher: Elsevier BV
Date: 02-2019
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2019.107324
Abstract: Regular musical rhythms orient attention over time and facilitate processing. Previous research has shown that regular rhythmic stimulation benefits subsequent syntax processing in children with dyslexia and specific language impairment. The present EEG study examined the influence of a rhythmic musical prime on the P600 late evoked-potential, associated with grammatical error detection for dyslexic adults and matched controls. Participants listened to regular or irregular rhythmic prime sequences followed by grammatically correct and incorrect sentences. They were required to perform grammaticality judgments for each auditorily presented sentence while EEG was recorded. In addition, tasks on syntax violation detection as well as rhythm perception and production were administered. For both participant groups, ungrammatical sentences evoked a P600 in comparison to grammatical sentences and its mean amplitude was larger after regular than irregular primes. Peak analyses of the P600 difference wave confirmed larger peak amplitudes after regular primes for both groups. They also revealed overall a later peak for dyslexic participants, particularly at posterior sites, compared to controls. Results extend rhythmic priming effects on language processing to underlying electrophysiological correlates of morpho-syntactic violation detection in dyslexic adults and matched controls. These findings are interpreted in the theoretical framework of the Dynamic Attending Theory (Jones, 1976, 2019) and the Temporal Sampling Framework for developmental disorders (Goswami, 2011).
Publisher: Wiley
Date: 03-04-2020
DOI: 10.1002/WCS.1528
Abstract: Although a growing literature points to substantial variation in speech/language abilities related to individual differences in musical abilities, mainstream models of communication sciences and disorders have not yet incorporated these individual differences into childhood speech/language development. This article reviews three sources of evidence in a comprehensive body of research aligning with three main themes: (a) associations between musical rhythm and speech/language processing, (b) musical rhythm in children with developmental speech/language disorders and common comorbid attentional and motor disorders, and (c) individual differences in mechanisms underlying rhythm processing in infants and their relationship with later speech/language development. In light of converging evidence on associations between musical rhythm and speech/language processing, we propose the Atypical Rhythm Risk Hypothesis, which posits that individuals with atypical rhythm are at higher risk for developmental speech/language disorders. The hypothesis is framed within the larger epidemiological literature in which recent methodological advances allow for large-scale testing of shared underlying biology across clinically distinct disorders. A series of predictions for future work testing the Atypical Rhythm Risk Hypothesis are outlined. We suggest that if a significant body of evidence is found to support this hypothesis, we can envision new risk factor models that incorporate atypical rhythm to predict the risk of developing speech/language disorders. Given the high prevalence of speech/language disorders in the population and the negative long-term social and economic consequences of gaps in identifying children at-risk, these new lines of research could potentially positively impact access to early identification and treatment. This article is categorized under: Linguistics > Language in Mind and Brain; Neuroscience > Development; Linguistics > Language Acquisition
Publisher: SAGE Publications
Date: 08-07-2016
Abstract: The effects of music on the brain have been extensively researched, and numerous connections have been found between music and language, music and emotion, and music and cognitive processing. Despite this work, these three research areas have never before been drawn together into a single research paradigm. This is significant as their combination could lead to valuable insights into the effects of musical valence on the cognitive processing of lyrics. This research draws on theories of cognitive processing suggesting that negative moods facilitate systematic and detail-oriented processing, while positive moods facilitate heuristic-based processing. The current study (n = 56) used an error detection paradigm and found that significantly more error words were detected when paired with negatively valenced sad music compared to positively valenced happy music. Such a result explains previous findings that sad and happy lyrics have differential effects on emotion induction, and suggests this is due to sad lyrics being processed at deeper semantic levels. This study provides a framework in which to understand the interaction of lyrics and music with emotion induction – a primary reason for listening to music.
Publisher: Elsevier BV
Date: 07-2018
DOI: 10.1016/J.IJPSYCHO.2018.05.003
Abstract: Music and language both rely on the processing of spectral (pitch, timbre) and temporal (rhythm) information to create structure and meaning from incoming auditory streams. Behavioral results have shown that interrupting a melodic stream with unexpected changes in timbre leads to reduced syntactic processing. Such findings suggest that syntactic processing is conditional on successful streaming of incoming sequential information. The current study used event-related potentials (ERPs) to investigate whether (1) the effect of alternating timbres on syntactic processing is reflected in a reduced brain response to syntactic violations, and (2) the phenomenon is similar for music and language. Participants listened to melodies and sentences with either one timbre (piano or one voice) or three timbres (piano, guitar, and vibraphone, or three different voices). Half the stimuli contained syntactic violations: an out-of-key note in the melodies, and a phrase-structure violation in the sentences. We found smaller ERPs to syntactic violations in music in the three-timbre compared to the one-timbre condition, reflected in a reduced early right anterior negativity (ERAN). A similar but non-significant pattern was observed for language stimuli in both the early left anterior negativity (ELAN) and the left anterior negativity (LAN) ERPs. The results suggest that disruptions to auditory streaming may interfere with syntactic processing, especially for melodic sequences.
Publisher: SAGE Publications
Date: 06-11-2014
Abstract: The cognitive processing similarities between music and language are an emerging field of study, with research finding evidence for shared processing pathways in the brain, especially in relation to syntax. This research combines theory from the shared syntactic integration resource hypothesis (SSIRH; Patel, 2008) and syntactic working memory (SWM) theory (Kljajevic, 2010), and suggests there will be shared processing costs when music and language concurrently access SWM. To examine this, word lists and complex sentences were paired with three music conditions: normal, a syntactic manipulation (out-of-key chord), and a control condition with an instrument manipulation. As predicted, memory for sentences declined when paired with the syntactic manipulation compared to the other two music manipulations, but the same pattern did not occur in word lists. This suggests that both sentences and music with a syntactic irregularity are accessing SWM. Word lists, however, are thought to be primarily accessing the phonological loop, and therefore did not show effects of shared processing. Musicians performed differently from non-musicians, suggesting that the processing of musical and linguistic syntax differs with musical ability. Such results suggest a separation in processing between the phonological loop and SWM, and give evidence for shared processing mechanisms between music and language syntax.
Publisher: Elsevier BV
Date: 09-2020
Publisher: Springer Science and Business Media LLC
Date: 11-03-2021
Publisher: Springer Science and Business Media LLC
Date: 10-07-2023
DOI: 10.1038/S41539-023-00170-1
Abstract: Recently reported links between rhythm and grammar processing have opened new perspectives for using rhythm in clinical interventions for children with developmental language disorder (DLD). Previous research using the rhythmic priming paradigm has shown improved performance on language tasks after regular rhythmic primes compared to control conditions. However, this research has been limited to effects of rhythmic priming on grammaticality judgments. The current study investigated whether regular rhythmic primes could also benefit sentence repetition, a task requiring proficiency in complex syntax—an area of difficulty for children with DLD. Regular rhythmic primes improved sentence repetition performance compared to irregular rhythmic primes in children with DLD and with typical development—an effect that did not occur with a non-linguistic control task. These findings suggest processing overlap for musical rhythm and linguistic syntax, with implications for the use of rhythmic stimulation for treatment of children with DLD in clinical research and practice.
Publisher: Walter de Gruyter GmbH
Date: 20-03-2019
Abstract: How did human vocalizations come to acquire meaning in the evolution of our species? Charles Darwin proposed that language and music originated from a common emotional signal system based on the imitation and modification of sounds in nature. This protolanguage is thought to have diverged into two separate systems, with speech prioritizing referential functionality and music prioritizing emotional functionality. However, there has never been an attempt to empirically evaluate the hypothesis that a single communication system can split into two functionally distinct systems that are characterized by music- and language-like properties. Here, we demonstrate that when referential and emotional functions are introduced into an artificial communication system, that system will diverge into vocalization forms with speech- and music-like properties, respectively. Participants heard novel vocalizations as part of a learning task. Half referred to physical entities and half functioned to communicate emotional states. Participants then reproduced each sound with the defined communicative intention in mind. Each recorded vocalization was used as the input for another participant in a serial reproduction paradigm, and this procedure was iterated to create 15 chains of five participants each. Referential vocalizations were rated as more speech-like, whereas emotional vocalizations were rated as more music-like, and this association was observed cross-culturally. In addition, a stable separation of the acoustic profiles of referential and emotional vocalizations emerged, with some attributes diverging immediately and others diverging gradually across iterations. The findings align with Darwin’s hypothesis and provide insight into the roles of biological and cultural evolution in the divergence of language and music.
Start Date: 12-2022
End Date: 12-2025
Amount: $409,781.00
Funder: Australian Research Council