ORCID Profile
0000-0001-7458-0229
Current Organisation
Max Planck Institute for Empirical Aesthetics
Publisher: Springer Science and Business Media LLC
Date: 13-10-2021
DOI: 10.1038/S41598-021-98571-Y
Abstract: The frontopolar cortex (FPC) contributes to tracking the reward of alternative choices during decision making, as well as their reliability. Whether this FPC function extends to reward gradients associated with continuous movements during motor learning remains unknown. We used anodal transcranial direct current stimulation (tDCS) over the right FPC to investigate its role in reward-based motor learning. Nineteen healthy human participants practiced novel sequences of finger movements on a digital piano with corresponding auditory feedback. Their aim was to use trialwise reward feedback to discover a hidden performance goal along a continuous dimension: timing. We additionally modulated the contralateral motor cortex (left M1) activity, and included a control sham stimulation. Right FPC-tDCS led to faster learning compared to lM1-tDCS and sham through regulation of motor variability. Bayesian computational modelling revealed that in all stimulation protocols, an increase in the trialwise expectation of reward was followed by greater exploitation, as shown previously. Yet, this association was weaker in lM1-tDCS suggesting a less efficient learning strategy. The effects of frontopolar stimulation were dissociated from those induced by lM1-tDCS and sham, as motor exploration was more sensitive to inferred changes in the reward tendency (volatility). The findings suggest that rFPC-tDCS increases the sensitivity of motor exploration to updates in reward volatility, accelerating reward-based motor learning.
Publisher: Oxford University Press (OUP)
Date: 30-06-2022
Abstract: Joint music performance requires flexible sensorimotor coordination between self and other. Cognitive and sensory parameters of joint action-such as shared knowledge or temporal (a)synchrony-influence this coordination by shifting the balance between self-other segregation and integration. To investigate the neural bases of these parameters and their interaction during joint action, we asked pianists to play on an MR-compatible piano, in duet with a partner outside of the scanner room. Motor knowledge of the partner's musical part and the temporal compatibility of the partner's action feedback were manipulated. First, we found stronger activity and functional connectivity within cortico-cerebellar audio-motor networks when pianists had practiced their partner's part before. This indicates that they simulated and anticipated the auditory feedback of the partner by virtue of an internal model. Second, we observed stronger cerebellar activity and reduced behavioral adaptation when pianists encountered subtle asynchronies between these model-based anticipations and the perceived sensory outcome of (familiar) partner actions, indicating a shift towards self-other segregation. These combined findings demonstrate that cortico-cerebellar audio-motor networks link motor knowledge and other-produced sounds depending on cognitive and sensory factors of the joint performance, and play a crucial role in balancing self-other integration and segregation.
Publisher: Elsevier BV
Date: 12-2015
DOI: 10.1016/J.CUB.2015.10.009
Abstract: Our vocal tone--the prosody--contributes a lot to the meaning of speech beyond the actual words. Indeed, the hesitant tone of a "yes" may be more telling than its affirmative lexical meaning. The human brain contains dorsal and ventral processing streams in the left hemisphere that underlie core linguistic abilities such as phonology, syntax, and semantics. Whether or not prosody--a reportedly right-hemispheric faculty--involves analogous processing streams is a matter of debate. Functional connectivity studies on prosody leave no doubt about the existence of such streams, but opinions diverge on whether information travels along dorsal or ventral pathways. Here we show, with a novel paradigm using audio morphing combined with multimodal neuroimaging and brain stimulation, that prosody perception takes dual routes along dorsal and ventral pathways in the right hemisphere. In experiment 1, categorization of speech stimuli that gradually varied in their prosodic pitch contour (between statement and question) involved (1) an auditory ventral pathway along the superior temporal lobe and (2) auditory-motor dorsal pathways connecting posterior temporal and inferior frontal and premotor areas. In experiment 2, inhibitory stimulation of right premotor cortex as a key node of the dorsal stream decreased participants' performance in prosody categorization, arguing for a motor involvement in prosody perception. These data draw a dual-stream picture of prosodic processing that parallels the established left-hemispheric multi-stream architecture of language, but with relative rightward asymmetry.
Publisher: Informa UK Limited
Date: 11-10-2016
DOI: 10.1080/13554794.2016.1237660
Abstract: Song and speech represent two auditory categories the brain usually classifies fairly easily. Functionally, this classification ability may depend to a great extent on characteristic features of pitch patterns present in song melody and speech prosody. Anatomically, the temporal lobe (TL) has been discussed as playing a prominent role in the processing of both. Here we tested individuals with congenital amusia and patients with unilateral left and right TL lesions in their ability to categorize song and speech. In a forced-choice paradigm, specifically designed auditory stimuli representing sung, spoken and "ambiguous" stimuli (being perceived as "halfway between" song and speech) had to be classified as either "song" or "speech". Congenital amusics and TL patients, contrary to controls, exhibited a surprising bias to classifying the ambiguous stimuli as "song" despite their apparent deficit in correctly processing features typical of song. This response bias possibly reflects a strategy where, based on available context information (here: forced choice for either speech or song), classification of non-processable items may be achieved through elimination of processable classes. This speech-based strategy masks the pitch processing deficit in congenital amusics and TL lesion patients.
Publisher: Frontiers Media SA
Date: 2014
Publisher: Elsevier BV
Date: 2008
DOI: 10.1016/J.CLINPH.2007.09.131
Abstract: We investigated electroencephalographic (EEG) correlates of moderate intermittent explosive disorder (mIED), which is characterized by uncontrollable, impulsive attacks that either manifest in aggressive outbursts of temper, or in implosive, auto-aggressive behaviour. In two experiments, EEG data were recorded during rest conditions, and while subjects were presented with auditory and visual stimuli. Additionally, scores of the I7 impulsivity scale (designed to capture acting on impulse) were obtained. In Experiment 1, individuals with mIED showed a stronger increase in the power of oscillatory activity in the beta band, along with a stronger power decrease in the theta band in response to both visual and auditory stimuli. Based on discriminant function analysis, a model of discriminant functions was derived that clearly separated the mIED group from the control group. In Experiment 2, subjects were categorized into either of two groups (supposedly without mIED, with mIED) based on this model of discriminant functions. Results showed that I7 impulsivity scores clearly differed between groups. The present data show a relation between oscillatory brain activity and mIED. They indicate that this brain activity is related to the impulsivity facet of impulsive action, and suggest that mIED can be assessed based on the analysis of electrophysiological data. To our knowledge, this is the first study on EEG correlates of (m)IED. Results open up new perspectives for future investigations on disorders characterized by substantial impulsivity.
Publisher: Wiley
Date: 16-02-2007
DOI: 10.1111/J.1469-8986.2007.00497.X
Abstract: Human emotion and its electrophysiological correlates are still poorly understood. The present study examined whether the valence of perceived emotions would differentially influence EEG power spectra and heart rate (HR). Pleasant and unpleasant emotions were induced by consonant and dissonant music. Unpleasant (compared to pleasant) music evoked a significant decrease of HR, replicating the pattern of HR responses previously described for the processing of emotional pictures, sounds, and films. In the EEG, pleasant (contrasted to unpleasant) music was associated with an increase of frontal midline (Fm) theta power. This effect is taken to reflect emotional processing in close interaction with attentional functions. These findings show that Fm theta is modulated by emotion more strongly than previously believed.
Publisher: Oxford University Press (OUP)
Date: 16-05-2018
DOI: 10.1093/SCAN/NSY034
Publisher: Elsevier BV
Date: 05-2013
DOI: 10.1016/J.CORTEX.2012.06.007
Abstract: Syntactic operations in language and music are well established and known to be linked in cognitive and neuroanatomical terms. What remains a matter of debate is whether the notion of syntax also applies to human actions and how those may be linked to syntax in language and music. The present electroencephalography (EEG) study explored syntactic processes during the observation, motor programming, and execution of musical actions. To this end, expert pianists watched and imitated silent videos of a hand playing 5-chord sequences in which the last chord was syntactically congruent or incongruent with the preceding harmonic context. 2-chord sequences that diluted the syntactic predictability of the last chord (by reducing the harmonic context) served as a control condition. We assumed that behavioural and event-related potential (ERP) effects (i.e., differences between congruent and incongruent trials) that were significantly stronger in the 5-chord compared to the 2-chord sequences are related to syntactic processing. According to this criterion, the present results show an influence of syntactic context on ERPs related to (i) action observation and (ii) the motor programming for action imitation, as well as (iii) participants' execution times and accuracy. In particular, the occurrence of electrophysiological indices of action inhibition and reprogramming when an incongruent chord had to be imitated implies that the pianist's motor system anticipated (and revoked) the congruent chord during action observation. Notably, this well-known anticipatory potential of the motor system seems to be strongly based upon the observer's music-syntactic knowledge, thus suggesting the "embodied" processing of musical syntax. The combined behavioural and electrophysiological data show that the notion of musical syntax not only applies to the auditory modality but transfers--in trained musicians--to a "grammar of musical action".
Publisher: Wiley
Date: 07-2009
DOI: 10.1111/J.1749-6632.2009.04792.X
Abstract: The present study investigated the co-localization of musical and linguistic syntax processing in the human brain. EEGs were recorded from subdural electrodes placed on the left and right perisylvian cortex. The neural generators of the early potentials elicited by syntactic errors in music and language were localized by means of distributed source modeling and compared within subjects. The combined results indicated a partial overlap of the sources within the bilateral superior temporal gyrus, and, to a lesser extent, in the left inferior frontal gyrus, qualifying these areas as shared anatomic substrates of early syntactic error detection in music and language.
Publisher: Springer Science and Business Media LLC
Date: 29-09-2015
Publisher: MDPI AG
Date: 02-08-2020
Abstract: Neurocomparative music and language research has seen major advances over the past two decades. The goal of this Special Issue “Advances in the Neurocognition of Music and Language” was to showcase the multiple neural analogies between musical and linguistic information processing, their entwined organization in human perception and cognition and to infer the applicability of the combined knowledge in pedagogy and therapy. Here, we summarize the main insights provided by the contributions and integrate them into current frameworks of rhythm processing, neuronal entrainment, predictive coding and cognitive control.
Publisher: MIT Press - Journals
Date: 2016
DOI: 10.1162/JOCN_A_00873
Abstract: Complex human behavior is hierarchically organized. Whether or not syntax plays a role in this organization is currently under debate. The present ERP study uses piano performance to isolate syntactic operations in action planning and to demonstrate their priority over nonsyntactic levels of movement selection. Expert pianists were asked to execute chord progressions on a mute keyboard by copying the posture of a performing model hand shown in sequences of photos. We manipulated the final chord of each sequence in terms of Syntax (congruent/incongruent keys) and Manner (conventional/unconventional fingering), as well as the strength of its predictability by varying the length of the Context (five-chord/two-chord progressions). The production of syntactically incongruent compared to congruent chords showed a response delay that was larger in the long compared to the short context. This behavioral effect was accompanied by a centroparietal negativity in the long but not in the short context, suggesting that a syntax-based motor plan was prepared ahead. Conversely, the execution of the unconventional manner was not delayed as a function of Context and elicited an opposite electrophysiological pattern (a posterior positivity). The current data support the hypothesis that motor plans operate at the level of musical syntax and are incrementally translated to lower levels of movement selection.
Publisher: Elsevier BV
Date: 11-2016
DOI: 10.1016/J.NEUROIMAGE.2016.08.025
Abstract: The ability to predict upcoming structured events based on long-term knowledge and contextual priors is a fundamental principle of human cognition. Tonal music triggers predictive processes based on structural properties of harmony, i.e., regularities defining the arrangement of chords into well-formed musical sequences. While the neural architecture of structure-based predictions during music perception is well described, little is known about the neural networks for analogous predictions in musical actions and how they relate to auditory perception. To fill this gap, expert pianists were presented with harmonically congruent or incongruent chord progressions, either as musical actions (photos of a hand playing chords) that they were required to watch and imitate without sound, or in an auditory format that they listened to without playing. By combining task-based functional magnetic resonance imaging (fMRI) with functional connectivity at rest, we identified distinct sub-regions in right inferior frontal gyrus (rIFG) interconnected with parietal and temporal areas for processing action and audio sequences, respectively. We argue that the differential contribution of parietal and temporal areas is tied to motoric and auditory long-term representations of harmonic regularities that dynamically interact with computations in rIFG. Parsing of the structural dependencies in rIFG is co-determined by both stimulus and task demands. In line with contemporary models of prefrontal cortex organization and dual stream models of visual-spatial and auditory processing, we show that the processing of musical harmony is a network capacity with dissociated dorsal and ventral motor and auditory circuits, which both provide the infrastructure for predictive mechanisms optimising action and perception performance.
Publisher: Elsevier BV
Date: 06-2016
Publisher: Public Library of Science (PLoS)
Date: 09-07-2008
Publisher: Wiley
Date: 16-11-2007
DOI: 10.1111/J.1460-9568.2007.05889.X
Abstract: Human personality has brain correlates that exert manifold influences on biological processes. This study investigates relations between emotional personality and heart activity. Our data demonstrate that emotional personality is related to a specific cardiac amplitude signature in the resting electrocardiogram (ECG). Two experiments using functional magnetic resonance imaging show that this signature correlates with brain activity in the amygdala and the hippocampus during the processing of musical stimuli with emotional valence. Additionally, this cardiac signature correlates with subjective indices of emotionality (as measured by the Revised Toronto Alexithymia Scale), and with both time and frequency domain measures of the heart rate variability. The results demonstrate intricate connections between emotional personality and the heart by showing that ECG amplitude patterns provide considerably more information about an individual's emotionality than previously believed. The finding of a cardiac signature of emotional personality opens new perspectives for the investigation of relations between emotional imbalance and cardiovascular disease.
Publisher: Wiley
Date: 20-01-2020
DOI: 10.1002/HBM.24916
Publisher: Wiley
Date: 31-01-2018
DOI: 10.1111/EPI.14012
Abstract: The objective of our study was to assess alterations in speech as a possible localizing sign in frontal lobe epilepsy. Ictal speech was analyzed in 18 patients with frontal lobe epilepsy (FLE) during seizures and in the interictal period. Matched identical words were analyzed regarding alterations in fundamental frequency (ƒo) as an approximation of pitch. In patients with FLE, ƒo of ictal utterances was significantly higher than ƒo in interictal recordings (p = 0.016). Ictal ƒo increases occurred in FLE of both right and left seizure origin. In contrast, a matched temporal lobe epilepsy (TLE) group showed less pronounced increases in ƒo, and only in patients with right-sided seizure foci. This study for the first time shows significant voice alterations in ictal speech in a cohort of patients with FLE. This may contribute to the localization of the epileptic focus. Increases in ƒo were interestingly found in frontal lobe seizures with origin in either hemisphere, suggesting a bilateral involvement in the planning of speech production, in contrast to a more right-sided lateralization of pitch perception in prosodic processing.
Publisher: Elsevier BV
Date: 2019
DOI: 10.1016/J.NEUROIMAGE.2018.10.037
Abstract: Neural activity phase-locks to rhythm in both music and speech. However, the literature currently lacks a direct test of whether cortical tracking of rhythmic structure is comparable across domains. Moreover, although musical training improves multiple aspects of music and speech perception, the relationship between musical training and cortical tracking of rhythm has not been compared directly across domains. We recorded the electroencephalograms (EEG) from 28 participants (14 female) with a range of musical training who listened to melodies and sentences with identical rhythmic structure. We compared cerebral-acoustic coherence (CACoh) between the EEG signal and single-trial stimulus envelopes (as a measure of cortical entrainment) across domains and correlated years of musical training with CACoh. We hypothesized that neural activity would be comparably phase-locked across domains, and that the amount of musical training would be associated with increasingly strong phase locking in both domains. We found that participants with only a few years of musical training had a comparable cortical response to music and speech rhythm, partially supporting the hypothesis. However, the cortical response to music rhythm increased with years of musical training while the response to speech rhythm did not, leading to an overall greater cortical response to music rhythm across all participants. We suggest that task demands shaped the asymmetric cortical tracking across domains.
Publisher: Wiley
Date: 30-09-2021
DOI: 10.1002/HBM.25214
Publisher: Elsevier BV
Date: 06-2011
DOI: 10.1016/J.CORTEX.2010.04.007
Abstract: An increasing number of neuroimaging studies in music cognition research suggest that "language areas" are involved in the processing of musical syntax, but none of these studies clarified whether these areas are a prerequisite for normal syntax processing in music. The present electrophysiological experiment tested whether patients with lesions in Broca's area (N=6) or in the left anterior temporal lobe (N=7) exhibit deficits in the processing of structure in music compared to matched healthy controls (N=13). A chord sequence paradigm was applied, and the amplitude and scalp topography of the Early Right Anterior Negativity (ERAN) were examined, an electrophysiological marker of musical syntax processing that correlates with activity in Broca's area and its right hemisphere homotope. Left inferior frontal gyrus (IFG) (but not anterior superior temporal gyrus - aSTG) patients with lesions older than 4 years showed an ERAN with abnormal scalp distribution, and subtle behavioural deficits in detecting music-syntactic irregularities. In one IFG patient tested 7 months post-stroke, the ERAN was extinguished and the behavioural performance remained at chance level. These combined results suggest that the left IFG, known to be crucial for syntax processing in language, also plays a functional role in the processing of musical syntax. Hence, the present findings are consistent with the notion that Broca's area supports the processing of syntax in a rather domain-general way.
Publisher: Cold Spring Harbor Laboratory
Date: 21-10-2020
DOI: 10.1101/2020.10.21.348243
Abstract: Complex sequential behaviours, such as speaking or playing music, often entail the flexible, rule-based chaining of single acts. However, it remains unclear how the brain translates abstract structural rules into concrete series of movements. Here we demonstrate a multi-level contribution of anatomically distinct cognitive and motor networks to the execution of novel musical sequences. We combined functional and diffusion-weighted neuroimaging to dissociate high-level structural and low-level motor planning of musical chord sequences executed on a piano. Fronto-temporal and fronto-parietal neural networks were involved when sequences violated pianists’ structural or motor plans, respectively. Prefrontal cortex is identified as a hub where both networks converge within an anterior-to-posterior gradient of action control linking abstract structural rules to concrete movement sequences.
Publisher: Elsevier BV
Date: 08-2015
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2015.05.020
Abstract: Sentences, musical phrases and goal-directed actions are composed of elements that are linked by specific rules to form meaningful outcomes. In goal-directed actions, including a non-canonical element or scrambling the order of the elements alters the action's content or structure, respectively. In the present study we investigated event-related potentials of the electroencephalographic (EEG) activity recorded during observation of both alterations of the action content (obtained by violating the semantic components of an action, e.g. making coffee with cola) and alterations of the action structure (obtained by inverting the order of two temporally adjacent pictures of sequences depicting daily life actions) interfering with the normal flow of the motor acts that compose an action. Action content alterations elicited a bilateral posterior distributed EEG negativity, peaking at around 400 ms after stimulus onset, similar to the ERPs evoked by semantic violations in language studies. Alteration of the action structure elicited an early left anterior negativity followed by a late left anterior positivity, which closely resembles the ERP pattern found in language syntax violation studies. Our results suggest a functional dissociation between the processing of action content and structure, reminiscent of a similar dissociation found in the language or music domains. Importantly, this study provides further support to the hypothesis that some basic mechanisms, such as the rule-based structuring of sequential events, are shared between different cognitive domains.
Publisher: Society for Neuroscience
Date: 10-03-2010
DOI: 10.1523/JNEUROSCI.2751-09.2010
Abstract: The cognitive relationship between lyrics and tunes in song is currently under debate, with some researchers arguing that lyrics and tunes are represented as separate components, while others suggest that they are processed in integration. The present study addressed this issue by means of a functional magnetic resonance adaptation paradigm during passive listening to unfamiliar songs. The repetition and variation of lyrics and/or tunes in blocks of six songs was crossed in a 2 × 2 factorial design to induce selective adaptation for each component. Reductions of the hemodynamic response were observed along the superior temporal sulcus and gyrus (STS/STG) bilaterally. Within these regions, the left mid-STS showed an interaction of the adaptation effects for lyrics and tunes, suggesting an integrated processing of the two components at prelexical, phonemic processing levels. The degree of integration decayed toward more anterior regions of the left STS, where the lack of such an interaction and the stronger adaptation for lyrics than for tunes was suggestive of an independent processing of lyrics, perhaps resulting from the processing of meaning. Finally, evidence for an integrated representation of lyrics and tunes was found in the left dorsal precentral gyrus (PrCG), possibly relating to the build-up of a vocal code for singing in which musical and linguistic features of song are fused. Overall, these results demonstrate that lyrics and tunes are processed at varying degrees of integration (and separation) through the consecutive processing levels allocated along the posterior–anterior axis of the left STS and the left PrCG.
Publisher: Elsevier BV
Date: 04-2018
DOI: 10.1016/J.NEUROIMAGE.2017.12.058
Abstract: It is well established that musical training induces sensorimotor plasticity. However, there are remarkable differences in how musicians train for proficient stage performance. The present EEG study outlines for the first time clear-cut neurobiological differences between classical and jazz musicians at high and low levels of action planning, revealing genre-specific cognitive strategies adopted in production. Pianists imitated chord progressions without sound that were manipulated in terms of harmony and context length to assess high-level planning of sequence-structure, and in terms of the manner of playing to assess low-level parameter specification of single acts. Jazz pianists revised incongruent harmonies faster as revealed by an earlier reprogramming negativity and beta power decrease, hence neutralising response costs, albeit at the expense of a higher number of manner errors. Classical pianists in turn experienced more conflict during incongruent harmony, as shown by theta power increase, but were more ready to implement the required manner of playing, as indicated by higher accuracy and beta power decrease. These findings demonstrate that specific demands and action focus of training lead to differential weighting of hierarchical action planning. This suggests different enduring markers impressed in the brain when a musician practices one or the other style.
Publisher: Oxford University Press (OUP)
Date: 30-12-2021
Abstract: Complex sequential behaviors, such as speaking or playing music, entail flexible rule-based chaining of single acts. However, it remains unclear how the brain translates abstract structural rules into movements. We combined music production with multimodal neuroimaging to dissociate high-level structural and low-level motor planning. Pianists played novel musical chord sequences on a muted MR-compatible piano by imitating a model hand on screen. Chord sequences were manipulated in terms of musical harmony and context length to assess structural planning, and in terms of fingers used for playing to assess motor planning. A model of probabilistic sequence processing confirmed temporally extended dependencies between chords, as opposed to local dependencies between movements. Violations of structural plans activated the left inferior frontal and middle temporal gyrus, and the fractional anisotropy of the ventral pathway connecting these two regions positively predicted behavioral measures of structural planning. A bilateral frontoparietal network was instead activated by violations of motor plans. Both structural and motor networks converged in lateral prefrontal cortex, with anterior regions contributing to musical structure building, and posterior areas to movement planning. These results establish a promising approach to study sequence production at different levels of action representation.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 05-2001
DOI: 10.1097/00001756-200105250-00019
Abstract: In the present study, the early right-anterior negativity (ERAN) elicited by harmonically inappropriate chords during listening to music was compared to the frequency mismatch negativity (MMN) and the abstract-feature MMN. Results revealed that the amplitude of the ERAN, in contrast to the MMN, is specifically dependent on the degree of harmonic appropriateness. Thus, the ERAN is correlated with the cognitive processing of complex rule-based information, i.e. with the application of music-syntactic rules. Moreover, results showed that the ERAN, compared to the abstract-feature MMN, had both a longer latency and a larger amplitude. The combined findings indicate that ERAN and MMN reflect different mechanisms of pre-attentive irregularity detection, and that, although both components have several features in common, the ERAN does not easily fit into the classical MMN framework. The present ERPs thus provide evidence for a differentiation of cognitive processes underlying the fast and pre-attentive processing of auditory information.
Publisher: Frontiers Media SA
Date: 18-06-2019
Publisher: Frontiers Media SA
Date: 2012
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 27-10-2003
Publisher: Elsevier BV
Date: 2018
Publisher: Elsevier BV
Date: 04-2009
DOI: 10.1016/J.CUB.2009.02.058
Abstract: It has long been debated which aspects of music perception are universal and which are developed only after exposure to a specific musical culture. Here, we report a cross-cultural study with participants from a native African population (Mafa) and Western participants, with both groups being naive to the music of the other respective culture. Experiment 1 investigated the ability to recognize three basic emotions (happy, sad, scared/fearful) expressed in Western music. Results show that the Mafas recognized happy, sad, and scared/fearful Western music excerpts above chance, indicating that the expression of these basic emotions in Western music can be recognized universally. Experiment 2 examined how a spectral manipulation of original, naturalistic music affects the perceived pleasantness of music in Western as well as in Mafa listeners. The spectral manipulation modified, among other factors, the sensory dissonance of the music. The data show that both groups preferred original Western music and also original Mafa music over their spectrally manipulated versions. It is likely that the sensory dissonance produced by the spectral manipulation was at least partly responsible for this effect, suggesting that consonance and permanent sensory dissonance universally influence the perceived pleasantness of music.
Publisher: Cold Spring Harbor Laboratory
Date: 09-03-2023
DOI: 10.1101/2023.03.08.531723
Abstract: Considerable debate surrounds syntactic processing similarities in language and music. Yet few studies have investigated how syntax interacts with meter, considering that metrical regularity varies across domains. Furthermore, there are reports on individual differences in syntactic and metrical structure processing in music and language. Thus, a direct comparison of individual variation in syntax and meter processing across domains is warranted. In a behavioral (Experiment 1) and EEG study (Experiment 2), participants engaged in syntactic processing tasks with sentence and melody stimuli that were more or less metrically regular, and followed a preferred or non-preferred (but correct) syntactic structure. We further employed a range of cognitive diagnostic tests, parametrically indexed verbal and musical abilities using a principal component analysis, and correlated cognitive factors with the behavioral and ERP results (Experiment 3). Based on previous results in the language domain, we expected that a regular meter would facilitate the syntactic integration of non-preferred syntax. While syntactic discrimination was better in regular than irregular meter conditions in both domains (Experiment 1), a P600 effect indicated different integration costs during the processing of syntactic complexities in the two domains (Experiment 2). Metrical regularity altered the P600 response to preferred syntax in language while it modulated non-preferred syntax processing in music. Moreover, experimental results yielded within-domain individual differences, and identified continuous metrics of musical ability as more informative than grouping participants into musicians or non-musicians (Experiment 3). These combined results suggest that the meter-syntax interface differs uniquely in how it forms syntactic preferences in language and music.
Publisher: Elsevier BV
Date: 2013
DOI: 10.1016/J.NEUROIMAGE.2012.09.035
Abstract: Despite general agreement on shared syntactic resources in music and language, the neuroanatomical underpinnings of this overlap remain largely unexplored. While previous studies mainly considered frontal areas as supramodal grammar processors, the domain-general syntactic role of temporal areas has been so far neglected. Here we capitalized on the excellent spatial and temporal resolution of subdural EEG recordings to co-localize low-level syntactic processes in music and language in the temporal lobe in a within-subject design. We used Brain Surface Current Density mapping to localize and compare neural generators of the early negativities evoked by violations of phrase structure grammar in both music and spoken language. The results show that the processing of syntactic violations relies in both domains on bilateral temporo-fronto-parietal neural networks. We found considerable overlap of these networks in the superior temporal lobe, but also differences in the hemispheric timing and relative weighting of their fronto-temporal constituents. While alluding to the dissimilarity in how shared neural resources may be configured depending on the musical or linguistic nature of the perceived stimulus, the combined data lend support for a co-localization of early musical and linguistic syntax processing in the temporal lobe.
Publisher: Frontiers Media SA
Date: 27-11-2018
Publisher: Wiley
Date: 10-03-2009
DOI: 10.1002/HBM.20550
Publisher: American Association for the Advancement of Science (AAAS)
Date: 28-02-2020
Abstract: Brain asymmetries for words and melodies of songs depend on opposite acoustic cues
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 07-08-2020
DOI: 10.1097/AUD.0000000000000763
Abstract: A major issue in the rehabilitation of children with cochlear implants (CIs) is unexplained variance in their language skills, where many of them lag behind children with normal hearing (NH). Here, we assess links between generative language skills and the perception of prosodic stress, and with musical and parental activities in children with CIs and NH. Understanding these links is expected to guide future research and toward supporting language development in children with a CI. Twenty-one unilaterally and early-implanted children and 31 children with NH, aged 5 to 13, were classified as musically active or nonactive by a questionnaire recording regularity of musical activities, in particular singing, and reading and other activities shared with parents. Perception of word and sentence stress, performance in word finding, verbal intelligence (Wechsler Intelligence Scale for Children (WISC) vocabulary), and phonological awareness (production of rhymes) were measured in all children. Comparisons between children with a CI and NH were made against a subset of 21 of the children with NH who were matched to children with CIs by age, gender, socioeconomic background, and musical activity. Regression analyses, run separately for children with CIs and NH, assessed how much variance in each language task was shared with each of prosodic perception, the child’s own music activity, and activities with parents, including singing and reading. All statistical analyses were conducted both with and without control for age and maternal education. Musically active children with CIs performed similarly to NH controls in all language tasks, while those who were not musically active performed more poorly. Only musically nonactive children with CIs made more phonological and semantic errors in word finding than NH controls, and word finding correlated with other language skills. Regression analysis results for word finding and VIQ were similar for children with CIs and NH. 
These language skills shared considerable variance with the perception of prosodic stress and musical activities. When age and maternal education were controlled for, strong links remained between perception of prosodic stress and VIQ (shared variance: CI, 32%/NH, 16%) and between musical activities and word finding (shared variance: CI, 53%/NH, 20%). Links were always stronger for children with CIs, for whom better phonological awareness was also linked to improved stress perception and more musical activity, and parental activities altogether shared significant variance with word finding and VIQ. For children with CIs and NH, better perception of prosodic stress and musical activities with singing are associated with improved generative language skills. In addition, for children with CIs, parental singing has a stronger positive association with word finding and VIQ than parental reading. These results cannot address causality, but they suggest that good perception of prosodic stress, musical activities involving singing, and parental singing and reading may all be beneficial for word finding and other generative language skills in implanted children.
Publisher: Elsevier BV
Date: 08-2016
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2016.07.027
Abstract: Shared knowledge and interpersonal coordination are prerequisites for most forms of social behavior. Influential approaches to joint action have conceptualized these capacities in relation to the separate constructs of co-representation (knowledge) and self-other entrainment (coordination). Here we investigated how brain mechanisms involved in co-representation and entrainment interact to support joint action. To do so, we used a musical joint action paradigm to show that the neural mechanisms underlying co-representation and self-other entrainment are linked via a process - indexed by EEG alpha oscillations - regulating the balance between self-other integration and segregation in real time. Pairs of pianists performed short musical items while action familiarity and interpersonal (behavioral) synchronization accuracy were manipulated in a factorial design. Action familiarity referred to whether or not pianists had rehearsed the musical material performed by the other beforehand. Interpersonal synchronization was manipulated via congruent or incongruent tempo change instructions that biased performance timing towards the impending, new tempo. It was observed that, when pianists were familiar with each other's parts, millisecond variations in interpersonal synchronized behavior were associated with a modulation of alpha power over right centro-parietal scalp regions. Specifically, high behavioral entrainment was associated with self-other integration, as indexed by alpha suppression. Conversely, low behavioral entrainment encouraged reliance on internal knowledge and thus led to self-other segregation, indexed by alpha enhancement. These findings suggest that alpha oscillations index the processing of information about self and other depending on the compatibility of internal knowledge and external (environmental) events at finely resolved timescales.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 03-2004
DOI: 10.1097/00000542-200403000-00023
Abstract: It is an open question whether cognitive processes of auditory perception that are mediated by functionally different cortices exhibit the same sensitivity to sedation. The auditory event-related potentials P1, mismatch negativity (MMN), and early right anterior negativity (ERAN) originate from different cortical areas and reflect different stages of auditory processing. The P1 originates mainly from the primary auditory cortex. The MMN is generated in or in the close vicinity of the primary auditory cortex but is also dependent on frontal sources. The ERAN mainly originates from frontal generators. The purpose of the study was to investigate the effects of increasing propofol sedation on different stages of auditory processing as reflected in P1, MMN, and ERAN. The P1, the MMN, and the ERAN were recorded preoperatively in 18 patients during four levels of anesthesia adjusted with target-controlled infusion: awake state (target concentration of propofol 0.0 microg/ml), light sedation (0.5 microg/ml), deep sedation (1.5 microg/ml), and unconsciousness (2.5-3.0 microg/ml). Simultaneously, propofol anesthesia was assessed using the Bispectral Index. Propofol sedation resulted in a progressive decrease in amplitudes and an increase of latencies with a similar pattern for MMN and ERAN. MMN and ERAN were elicited during sedation but were abolished during unconsciousness. In contrast, the amplitude of the P1 was unchanged by sedation but markedly decreased during unconsciousness. The results indicate differential effects of propofol sedation on cognitive functions that involve mainly the auditory cortices and cognitive functions that involve the frontal cortices.
Publisher: Elsevier BV
Date: 08-2018
DOI: 10.1016/J.BANDL.2018.05.001
Abstract: The relevance of left dorsal and ventral fiber pathways for syntactic and semantic comprehension is well established, while pathways for prosody are little explored. The present study examined linguistic prosodic structure building in a patient whose right arcuate/superior longitudinal fascicles and posterior corpus callosum were transiently compromised by a vasogenic peritumoral edema. Compared to ten matched healthy controls, the patient's ability to detect irregular prosodic structure significantly improved between pre- and post-surgical assessment. This recovery was accompanied by an increase in average fractional anisotropy (FA) in right dorsal and posterior transcallosal fiber tracts. Neither general cognitive abilities nor (non-prosodic) syntactic comprehension nor FA in right ventral and left dorsal fiber tracts showed a similar pre-post increase. Together, these findings suggest a contribution of right dorsal and inter-hemispheric pathways to prosody perception, including the right-dorsal tracking and structuring of prosodic pitch contours that is transcallosally informed by concurrent syntactic information.
Publisher: Frontiers Media SA
Date: 05-02-2019
Publisher: Oxford University Press (OUP)
Date: 08-02-2022
Abstract: During conversations, speech prosody provides important clues about the speaker’s communicative intentions. In many languages, a rising vocal pitch at the end of a sentence typically expresses a question function, whereas a falling pitch suggests a statement. Here, the neurophysiological basis of intonation and speech act understanding was investigated with high-density electroencephalography (EEG) to determine whether prosodic features are reflected at the neurophysiological level. Already approximately 100 ms after the sentence-final word differing in prosody, questions and statements expressed with the same sentences led to different neurophysiological activity recorded in the event-related potential. Interestingly, low-pass filtered sentences and acoustically matched nonvocal musical signals failed to show any neurophysiological dissociations, thus suggesting that the physical intonation alone cannot explain this modulation. Our results show rapid neurophysiological indexes of prosodic communicative information processing that emerge only when pragmatic and lexico-semantic information are fully expressed. The early enhancement of question-related activity compared with statements was due to sources in the articulatory-motor region, which may reflect the richer action knowledge immanent to questions, namely the expectation of the partner action of answering the question. The present findings demonstrate a neurophysiological correlate of prosodic communicative information processing, which enables humans to rapidly detect and understand speaker intentions in linguistic interactions.
Publisher: Elsevier BV
Date: 10-2014
DOI: 10.1016/J.NEUROIMAGE.2014.04.071
Abstract: Our knowledge on temporal lobe epilepsy (TLE) with hippocampal sclerosis has evolved towards the view that this syndrome affects widespread brain networks. Diffusion weighted imaging studies have shown alterations of large white matter tracts, most notably in left temporal lobe epilepsy, but the degree of altered connections between cortical and subcortical structures remains to be clarified. We performed a whole brain connectome analysis in 39 patients with refractory temporal lobe epilepsy and unilateral hippocampal sclerosis (20 right and 19 left) and 28 healthy subjects. We performed whole-brain probabilistic fiber tracking using MRtrix and segmented 164 cortical and subcortical structures with Freesurfer. Individual structural connectivity graphs based on these 164 nodes were computed by mapping the mean fractional anisotropy (FA) onto each tract. Connectomes were then compared using two complementary methods: permutation tests for pair-wise connections and Network Based Statistics to probe for differences in large network components. Comparison of pair-wise connections revealed a marked reduction of connectivity between left TLE patients and controls, which was strongly lateralized to the ipsilateral temporal lobe. Specifically, infero-lateral cortex and temporal pole were strongly affected, and so was the perisylvian cortex. In contrast, for right TLE, focal connectivity loss was much less pronounced and restricted to bilateral limbic structures and right temporal cortex. Analysis of large network components revealed furthermore that both left and right hippocampal sclerosis affected diffuse global and interhemispheric connectivity. Thus, left temporal lobe epilepsy was associated with a much more pronounced pattern of reduced FA, which included major landmarks of perisylvian language circuitry. 
These distinct patterns of connectivity associated with unilateral hippocampal sclerosis show how a focal pathology influences global network architecture, and how left- or right-sided lesions may have differential and specific impacts on cerebral connectivity.
Publisher: Wiley
Date: 08-04-2007
DOI: 10.1111/J.1469-8986.2007.00517.X
Abstract: The present study investigated music-syntactic processing with chord sequences that ended on either regular or irregular chord functions. Sequences were composed such that perceived differences in the cognitive processing between syntactically regular and irregular chords could not be due to the sensory processing of acoustic factors like pitch repetition, pitch commonality (the major component of "sensory dissonance"), or roughness. Three experiments with independent groups of subjects were conducted: a behavioral experiment and two experiments using electroencephalography. Irregular chords elicited an early right anterior negativity (ERAN) in the event-related brain potentials (ERPs) under both task-relevant and task-irrelevant conditions. Behaviorally, participants detected around 75% of the irregular chords, indicating that these chords were only moderately salient. Nevertheless, the irregular chords reliably elicited clear ERP effects. Amateur musicians were slightly more sensitive to musical irregularities than nonmusicians, supporting previous studies demonstrating effects of musical training on music-syntactic processing. The findings indicate that the ERAN is an index of music-syntactic processing and that the ERAN can be elicited even when irregular chords are not detectable based on acoustical factors such as pitch repetition, sensory dissonance, or roughness.
Publisher: Wiley
Date: 05-03-2019
DOI: 10.1002/HBM.24549
Publisher: Springer Science and Business Media LLC
Date: 22-02-2004
DOI: 10.1038/NN1197
Publisher: Elsevier BV
Date: 08-2006
DOI: 10.1016/J.CLINPH.2006.05.009
Abstract: Using evoked potentials, this study investigated effects of deep propofol sedation, and effects of recovery from unconsciousness, on the processing of auditory information with stimuli suited to elicit a physical MMN and a (music-syntactic) ERAN. Levels of sedation were assessed using the Bispectral Index (BIS) and the Modified Observer's Assessment of Alertness and Sedation Scale (MOAAS). EEG measurements were performed during wakefulness, deep propofol sedation (MOAAS 2-3, mean BIS=68), and a recovery period. Between deep sedation and recovery period, the infusion rate of propofol was increased to achieve unconsciousness (MOAAS 0-1, mean BIS=35). EEG measurements of the recovery period were performed after subjects regained consciousness. During deep sedation, the physical MMN was markedly reduced, but still significant. No ERAN was observed at this level. A clear P3a was elicited during deep sedation by those deviants which were task-relevant during the awake state. As soon as subjects regained consciousness during the recovery period, a normal MMN was elicited. By contrast, the P3a was absent in the recovery period, and the P3b was markedly reduced. Results indicate that the auditory sensory memory (as indexed by the physical MMN) is still active, although strongly reduced, during deep sedation (MOAAS 2-3). The presence of the P3a indicates that attention-related processes are still operating at this level. Processes of syntactic analysis appear to be abolished during deep sedation. After propofol-induced anesthesia, the auditory sensory memory appears to operate normally as soon as subjects regain consciousness, whereas the attention-related processes indexed by P3a and P3b are markedly impaired. Results inform about effects of sedative drugs on auditory and attention-related mechanisms. 
The findings are important because these mechanisms are prerequisites for auditory awareness, auditory learning and memory, as well as language perception during anesthesia.
Publisher: MIT Press - Journals
Date: 10-2005
DOI: 10.1162/089892905774597290
Abstract: The present study investigated simultaneous processing of language and music using visually presented sentences and auditorily presented chord sequences. Music-syntactically regular and irregular chord functions were presented synchronously with syntactically correct or incorrect words, or with words that had either a high or a low semantic cloze probability. Music-syntactically irregular chords elicited an early right anterior negativity (ERAN). Syntactically incorrect words elicited a left anterior negativity (LAN). The LAN was clearly reduced when words were presented simultaneously with music-syntactically irregular chord functions. Processing of high and low cloze-probability words as indexed by the N400 was not affected by the presentation of irregular chord functions. In a control experiment, the LAN was not affected by physically deviant tones that elicited a mismatch negativity (MMN). Results demonstrate that processing of musical syntax (as reflected in the ERAN) interacts with the processing of linguistic syntax (as reflected in the LAN), and that this interaction is not due to a general effect of deviance-related negativities that precede an LAN. Findings thus indicate a strong overlap of neural resources involved in the processing of syntax in language and music.
Publisher: Oxford University Press (OUP)
Date: 26-08-2010
DOI: 10.1093/BRAIN/AWQ231
Abstract: Contemporary neural models of auditory language comprehension propose that the two hemispheres are differently specialized in the processing of segmental and suprasegmental features of language. While segmental processing of syntactic and lexical semantic information is predominantly assigned to the left hemisphere, the right hemisphere is thought to have a primacy for the processing of suprasegmental prosodic information such as accentuation and boundary marking. A dynamic interplay between the hemispheres is assumed to allow for the timely coordination of both information types. The present event-related potential study investigated whether the anterior and/or posterior portion of the corpus callosum provides the crucial brain basis for the online interaction of syntactic and prosodic information. Patients with lesions in the anterior two-thirds of the corpus callosum connecting orbital and frontal structures, or the posterior third of the corpus callosum connecting temporal, parietal and occipital areas, as well as matched healthy controls, were tested in a paradigm that crossed syntactic and prosodic manipulations. An anterior negativity elicited by a mismatch between syntactically predicted phrase structure and prosodic intonation was analysed as a marker for syntax-prosody interaction. Healthy controls and patients with lesions in the anterior corpus callosum showed this anterior negativity, demonstrating an intact interplay between syntax and prosody. No such effect was found in patients with lesions in the posterior corpus callosum, although they exhibited intact, prosody-independent syntactic processing comparable with healthy controls and patients with lesions in the anterior corpus callosum. 
These data support the interplay between the speech processing streams in the left and right hemispheres via the posterior portion of the corpus callosum, building the brain basis for the coordination and integration of local syntactic and prosodic features during auditory speech comprehension.
Location: United Kingdom of Great Britain and Northern Ireland
Location: France
Location: Germany
Location: Germany
No related grants have been discovered for Daniela Sammler.