ORCID Profile
0000-0002-9612-9110
Current Organisations
INSERM, University of Pittsburgh, Queensland University of Technology
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Neurocognitive Patterns and Neural Networks | Biological Psychology (Neuropsychology, Psychopharmacology, Physiological Psychology) | Cognitive Science | Linguistic Processes (incl. Speech Production and Comprehension)
Nervous System and Disorders | Expanding Knowledge in Psychology and Cognitive Sciences | Hearing, Vision, Speech and Their Disorders
Publisher: Society for Neuroscience
Date: 10-06-2009
Publisher: Cold Spring Harbor Laboratory
Date: 08-08-2019
DOI: 10.1101/730002
Abstract: Neural oscillations in auditory cortex are argued to support parsing and representing speech constituents at their corresponding temporal scales. Yet, how incoming sensory information interacts with ongoing spontaneous brain activity, what features of the neuronal microcircuitry underlie spontaneous and stimulus-evoked spectral fingerprints, and what these fingerprints entail for stimulus encoding, remain largely open questions. We used a combination of human invasive electrophysiology, computational modeling and decoding techniques to assess the information encoding properties of brain activity and to relate them to a plausible underlying neuronal microarchitecture. We analyzed intracortical auditory EEG activity from 10 patients while they were listening to short sentences. Pre-stimulus neural activity in early auditory cortical regions often exhibited power spectra with a shoulder in the delta range and a small bump in the beta range. Speech decreased power in the beta range, and increased power in the delta-theta and gamma ranges. Using multivariate machine learning techniques, we assessed the spectral profile of information content for two aspects of speech processing: detection and discrimination. We obtained better phase than power information decoding, and a bimodal spectral profile of information content with better decoding at low (delta-theta) and high (gamma) frequencies than at intermediate (beta) frequencies. These experimental data were reproduced by a simple rate model made of two subnetworks with different timescales, each composed of coupled excitatory and inhibitory units, and connected via a negative feedback loop. Modeling and experimental results were similar in terms of pre-stimulus spectral profile (except for the iEEG beta bump), spectral modulations with speech, and spectral profile of information content. 
Altogether, we provide converging evidence from both univariate spectral analysis and decoding approaches for a dual timescale processing infrastructure in human auditory cortex, and show that it is consistent with the dynamics of a simple rate model. Like most animal vocalizations, speech results from a pseudo-rhythmic process that reflects the convergence of motor and auditory neural substrates and the natural resonance properties of the vocal apparatus towards efficient communication. Here, we leverage the excellent temporal and spatial resolution of intracranial EEG to demonstrate that neural activity in human early auditory cortical areas during speech perception exhibits a dual-scale spectral profile of power changes, with speech increasing power in low (delta-theta) and high (gamma - high-gamma) frequency ranges, while decreasing power in intermediate (alpha-beta) frequencies. Single-trial multivariate decoding also resulted in a bimodal spectral profile of information content, with better decoding at low and high frequencies than at intermediate ones. From both spectral and informational perspectives, these patterns are consistent with the activity of a relatively simple computational model comprising two reciprocally connected excitatory/inhibitory sub-networks operating at different (low and high) timescales. By combining experimental, decoding and modeling approaches, we provide consistent evidence for the existence, information coding value and underlying neuronal architecture of dual timescale processing in human auditory cortex.
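The two-subnetwork rate model described above (coupled excitatory/inhibitory units at a slow and a fast timescale, linked by a negative feedback loop) can be sketched in a few lines. This is a minimal illustration: the time constants, coupling weights, and noise level below are assumed values for demonstration, not the parameters fit in the study.

```python
import numpy as np

def simulate(T=20.0, dt=1e-3, seed=0):
    """Euler simulation of two E/I rate subnetworks (slow ~delta-theta,
    fast ~gamma) coupled by a negative feedback loop.
    All parameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    # time constants (s): slow subnetwork vs. fast subnetwork (assumed)
    tau = {"E1": 0.060, "I1": 0.060, "E2": 0.006, "I2": 0.006}
    r = {k: 0.0 for k in tau}              # firing rates, start at rest
    out = np.zeros((n, 2))
    f = lambda x: np.tanh(np.maximum(x, 0.0))  # rectified saturating gain
    for t in range(n):
        noise = rng.normal(0.0, 0.5, 4)
        # slow subnetwork: self-excitation, local inhibition,
        # and negative feedback from the fast excitatory population
        dE1 = (-r["E1"] + f(1.5*r["E1"] - 1.2*r["I1"] - 0.5*r["E2"] + 0.8 + noise[0])) / tau["E1"]
        dI1 = (-r["I1"] + f(1.4*r["E1"] - 0.8*r["I1"] + noise[1])) / tau["I1"]
        # fast subnetwork: driven by the slow excitatory population
        dE2 = (-r["E2"] + f(1.5*r["E2"] - 1.3*r["I2"] + 0.6*r["E1"] + 0.5 + noise[2])) / tau["E2"]
        dI2 = (-r["I2"] + f(1.6*r["E2"] - 0.9*r["I2"] + noise[3])) / tau["I2"]
        r["E1"] += dt * dE1; r["I1"] += dt * dI1
        r["E2"] += dt * dE2; r["I2"] += dt * dI2
        out[t] = (r["E1"], r["E2"])        # record both excitatory rates
    return out
```

A power spectrum of the two output columns would show the slow and fast components of the simulated "field"; the spectral shape, not the specific numbers, is the point of the sketch.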
Publisher: Elsevier BV
Date: 08-2014
DOI: 10.1016/J.BANDL.2014.05.007
Abstract: Access to an object's name requires the retrieval of an arbitrary association between its identity and a word label. The hippocampus is essential in retrieving arbitrary associations, and thus could be involved in retrieving the link between an object and its name. To test this hypothesis we recorded the iEEG signal from epileptic patients, directly implanted in the hippocampus, while they performed a picture naming task. High-frequency broadband gamma (50-150 Hz) responses were computed as an index of population-level spiking activity. Our results show, for the first time, single-trial hippocampal dynamics between visual confrontation and naming. Remarkably, the latency of the hippocampal response predicts naming latency, while inefficient hippocampal activation is associated with "tip-of-the-tongue" states (a failure to retrieve the name of a recognized object), suggesting that the hippocampus is an active component of the naming network and that its dynamics are closely related to efficient word production.
Publisher: Frontiers Media SA
Date: 2011
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 03-2004
DOI: 10.1097/00001756-200403220-00006
Abstract: Stop-consonant discrimination was investigated in normal-hearing listeners and cochlear-implanted patients (CIP) by recording auditory evoked potentials (AEPs) to /b epsilon/ and /p epsilon/ syllables. This study demonstrates that: (i) AEPs show time-locked components that mimic the temporal structure of the stimuli, indicating that both patients and control subjects encode those syllables according to the temporal cue (voice onset time) characterizing the voiced/voiceless contrast; (ii) the side of implantation does not affect the general structure of AEPs or /b epsilon/-/p epsilon/ discrimination thresholds (measured separately with a psychophysical procedure); (iii) poor time-locking to the syllables' temporal structure is associated with poor discrimination. This suggests that EEG investigation of temporal processing provides an objective index of speech perception in CIP and could be used in implanted children.
Publisher: Oxford University Press (OUP)
Date: 02-03-2006
Publisher: Elsevier BV
Date: 11-2014
DOI: 10.1016/J.CORTEX.2014.06.002
Abstract: Music is a sound structure of remarkable acoustical and temporal complexity. Although it cannot denote specific meaning, it is one of the most potent and universal stimuli for inducing mood. How the auditory and limbic systems interact, and whether this interaction is lateralized when feeling emotions related to music, remains unclear. We studied the functional correlation between the auditory cortex (AC) and amygdala (AMY) through intracerebral recordings from both hemispheres in a single patient while she listened attentively to musical excerpts, which we compared to passive listening of a sequence of pure tones. While the left primary and secondary auditory cortices (PAC and SAC) showed larger increases in gamma-band responses than the right side, only the right side showed emotion-modulated gamma oscillatory activity. An intra- and inter-hemisphere correlation was observed between the auditory areas and AMY during the delivery of a sequence of pure tones. In contrast, a strikingly right-lateralized functional network between the AC and the AMY was observed to be related to the musical excerpts the patient experienced as happy, sad and peaceful. Interestingly, excerpts experienced as angry, which the patient disliked, were associated with widespread de-correlation between all the structures. These results suggest that the right auditory-limbic interactions result from the formation of oscillatory networks that bind the activities of the network nodes into coherence patterns, resulting in the emergence of a feeling.
Publisher: Elsevier BV
Date: 2017
DOI: 10.1016/J.YEBEH.2017.08.022
Abstract: Ictal language disturbances may occur in dominant hemisphere temporal lobe epilepsy (TLE), but little is known about the precise anatomoelectroclinical correlations. This study investigated the different facets of ictal aphasia in intracerebrally recorded TLE. Video-stereoelectroencephalography (SEEG) recordings of 37 seizures in 17 right-handed patients with drug-resistant TLE were analyzed, and SEEG electroclinical correlations between language disturbance and involvement of temporal lobe structures were assessed. In the clinical analysis, we separated speech disturbance from loss of consciousness. According to the region involved, different patterns of ictal aphasia in TLE were identified. Impaired speech comprehension was associated with posterior lateral involvement, anomia and reduced verbal fluency with anterior mediobasal structures, and jargonaphasia with basal temporal involvement. The language production deficits, such as anomia and low fluency, cannot be simply explained by an involvement of Broca's area, since this region was not affected by the seizure discharge. Assessment of language function in the early ictal state can be successfully performed and provides valuable information on seizure localization within the temporal lobe as well as potentially useful information for guiding surgery.
Publisher: Elsevier BV
Date: 08-2008
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2008.04.009
Abstract: The right and left anteromedial temporal lobes have been shown to participate in emotion processing. The aim of the study was to further address their role in music emotion perception/recognition, assessed along two emotional determinants, i.e., arousal (relaxing versus stimulating aspects) and valence (degree of pleasantness). Epileptic patients with right or left anterior mesio-temporal resection (including the amygdala) and control subjects were presented with happy musical excerpts (chosen to be highly stimulating) or sad excerpts (chosen to be relaxing), which were either consonant (pleasant) or dissonant (unpleasant). The patients demonstrated an abnormal perception of dissonant music regardless of the side of the resection, thereby confirming the role of the parahippocampal gyrus in the perception of unpleasantness. Moreover, the pleasantness of musical excerpts, in particular the happy consonant ones, was overestimated by patients with right temporal damage. In contrast, the arousal rating for happy consonant excerpts was reduced only in the group with left resections. This modified perception of arousal might be related to the decreased ability of those patients to recognize happy and sad music. Indeed, both right and left temporal resections impaired sadness recognition, whereas happiness recognition was reduced only by left resections. The main result was that, for the first time, the mesio-temporal structures were demonstrated to be asymmetrically involved in positive musical emotion recognition.
Publisher: Oxford University Press (OUP)
Date: 22-08-2007
Abstract: To better understand face recognition, it is necessary to identify not only which brain structures are implicated but also the dynamics of the neuronal activity in these structures. Latencies can then be compared to unravel the temporal dynamics of information processing at the distributed network level. To achieve high spatial and temporal resolution, we used intracerebral recordings in epileptic subjects while they performed a famous/unfamiliar face recognition task. The first components peaked at 110 ms in the fusiform gyrus (FG) and simultaneously in the inferior frontal gyrus, suggesting the early establishment of a large-scale network. This was followed by components peaking at 160 ms in 2 areas along the FG. Important stages of distributed parallel processing ensued at 240 and 360 ms, involving up to 6 regions along the ventral visual pathway. The final components peaked at 480 ms in the hippocampus. These stages largely overlapped. Importantly, event-related potentials to famous faces differed from those to unfamiliar faces and control stimuli in all medial temporal lobe structures. The network was bilateral but more right-sided. Thus, recognition of famous faces takes place through the establishment of a complex set of local and distributed processes that interact dynamically and may be an emergent property of these interactions.
Publisher: Elsevier BV
Date: 03-2009
DOI: 10.1016/J.CLINPH.2008.12.042
Abstract: Regions involved in language processing have been observed in the inferior part of the left temporal lobe. Although collectively labelled 'the Basal Temporal Language Area' (BTLA), these territories are functionally heterogeneous and are involved in language perception (i.e., reading or semantic tasks) or language production (speech arrest after stimulation). The objective of this study was to clarify the role of the BTLA in the language network in an epileptic patient who displayed jargonaphasia. Intracerebral event-related potentials to verbal and non-verbal stimuli in auditory and visual modalities were recorded from the BTLA. Time-frequency analysis was performed during ictal events. Evoked potentials and induced gamma-band activity provided direct evidence that the BTLA is sensitive to language stimuli in both modalities, 350 ms after stimulation. In addition, spontaneous gamma-band discharges were recorded from this region, during which we observed phonological jargon. The findings emphasize the multimodal nature of this region in speech perception. In the context of transient dysfunction, the patient's lexical-semantic processing network is disrupted, reducing spoken output to meaningless phoneme combinations. This rare opportunity to study the BTLA "in vivo" demonstrates its pivotal role in lexico-semantic processing for speech production and its multimodal nature in speech perception.
Publisher: MIT Press - Journals
Date: 04-2010
Abstract: Through the study of clinical cases with brain lesions as well as neuroimaging studies of cognitive processing of words and pictures, it has been established that material-specific hemispheric specialization exists. It remains unclear, however, whether such specialization holds true for all processes involved in complex tasks, such as recognition memory. To investigate the neural signatures of the transition from perception to recognition, according to the type of material (words or abstract pictures), high-resolution scalp ERPs were recorded in adult humans engaged either in categorization or in memory recognition tasks within the same experimental setup. Several steps in the process from perception to recognition were identified. Source localization showed that the early stage of perceptual processing (N170) takes place in the fusiform gyrus and is lateralized according to the nature of the stimuli (left side for words and right side for pictures). Late stages of processing (N400/P600) corresponding to recognition are material-independent and involve anterior medial-temporal and ventral prefrontal structures bilaterally. A crucial transitional process between perception (N170) and recognition (N400/P600) is reflected by the N270, an often overlooked component, which occurs in the anterior rhinal cortices and shows material-specific hemispheric lateralization.
Publisher: Elsevier BV
Date: 10-2014
DOI: 10.1016/J.NEUROIMAGE.2014.05.075
Abstract: Simultaneous EEG-fMRI has opened up new avenues for improving the spatio-temporal resolution of functional brain studies. However, this method usually suffers from poor EEG quality, especially for evoked potentials (ERPs), due to specific artifacts. As such, the use of EEG-informed fMRI analysis in the context of cognitive studies has particularly focused on optimizing narrow ERP time windows of interest, which ignores the rich and diverse temporal information of the EEG signal. Here, we propose to use simultaneous EEG-fMRI to investigate the neural cascade occurring during face recognition in 14 healthy volunteers by using the successive ERP peaks recorded during the cognitive part of this process. N170, N400 and P600 peaks, commonly associated with face recognition, were successfully and reproducibly identified for each trial and each subject by using a group independent component analysis (ICA). For the first time, we used this group ICA to extract several independent components (ICs) corresponding to the sequence of activation and used single-trial peaks as modulation parameters in a general linear model (GLM) of fMRI data. We obtained an occipito-temporo-frontal stream of BOLD signal modulation, in accordance with the three successive IC-ERPs, providing an unprecedented spatio-temporal characterization of the whole cognitive process as defined by BOLD signal modulation. With this approach, the pattern of EEG-informed BOLD modulation provided a better characterization of the network involved than either the fMRI-only analysis or the source reconstruction of the three ERPs, the latter techniques showing only two regions in common, localized in the occipital lobe.
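The core idea of an EEG-informed fMRI analysis — entering single-trial ERP peak amplitudes as parametric modulators in a GLM on the BOLD signal — can be sketched generically. The HRF shape, design construction, and function name below are simplified assumptions for illustration, not the pipeline used in the study.

```python
import numpy as np

def glm_with_modulator(bold, onsets, amplitudes, tr=2.0):
    """Fit BOLD = b0 + b1*task + b2*(task x single-trial ERP amplitude).
    Minimal sketch of an EEG-informed parametric GLM; the gamma-shaped
    HRF stand-in and stick-function design are illustrative assumptions."""
    n = len(bold)
    t = np.arange(0, 30, tr)
    hrf = (t ** 5) * np.exp(-t)              # crude gamma-shaped HRF
    hrf /= hrf.sum()
    task = np.zeros(n)
    mod = np.zeros(n)
    amps = np.asarray(amplitudes, float)
    amps = amps - amps.mean()                # mean-center the modulator
    for onset, a in zip(onsets, amps):
        i = int(onset / tr)                  # onset (s) -> scan index
        task[i] += 1.0
        mod[i] += a
    task = np.convolve(task, hrf)[:n]        # convolve regressors with HRF
    mod = np.convolve(mod, hrf)[:n]
    X = np.column_stack([np.ones(n), task, mod])
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return beta   # beta[2]: BOLD modulation by single-trial ERP amplitude
```

In the study's approach, one such modulated regressor would be built per IC-ERP (N170, N400, P600), yielding a map of BOLD modulation for each stage of the cascade.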
Publisher: Elsevier BV
Date: 10-2014
DOI: 10.1016/J.NEUROIMAGE.2014.05.055
Abstract: Electroencephalography (EEG), magnetoencephalography (MEG), and intracerebral stereotaxic EEG (SEEG) are three neurophysiological recording techniques that are thought to capture the same type of brain activity. Still, the relationships between non-invasive (EEG, MEG) and invasive (SEEG) signals remain to be further investigated. In early attempts at comparing SEEG with either EEG or MEG, the recordings were performed separately for each modality. However, such an approach presents substantial limitations in terms of signal analysis. The goal of this technical note is to investigate the feasibility of simultaneously recording these three signal modalities (EEG, MEG and SEEG), and to provide strategies for analyzing this new kind of data. Intracerebral electrodes were implanted in a patient with intractable epilepsy for presurgical evaluation purposes. This patient was presented with a visual stimulation paradigm while the three types of signals were simultaneously recorded. The analysis started with a characterization of the MEG artifact caused by the SEEG equipment. Next, the average evoked activities were computed at the sensor level, and cortical source activations were estimated for both the EEG and MEG recordings; these were shown to be compatible with the spatiotemporal dynamics of the SEEG signals. In the average time-frequency domain, concordant patterns between the MEG/EEG and SEEG recordings were found below 40 Hz. Finally, a fine-grained coupling between the amplitudes of the three recording modalities was detected in the time domain, at the level of single evoked responses. Importantly, these correlations showed a high level of spatial and temporal specificity. These findings provide a case for the ability of trimodal recordings (EEG, MEG, and SEEG) to reach a greater level of specificity in the investigation of brain signals and functions.
Publisher: Elsevier BV
Date: 03-2005
DOI: 10.1016/J.HEARES.2004.08.021
Abstract: This study investigated the ability of cochlear-implanted patients to discriminate tone bursts in free field using electrophysiological recordings of the mismatch negativity (MMN). Seven cochlear-implanted patients (CIP) and eight control subjects (CS) were tested. Event-related potentials were recorded from either 32 or 64 electrodes in response to binaural stimuli using a passive oddball paradigm. Two stimulus-contrast conditions were used to produce the MMN: the standard-tone frequency was fixed at 1 kHz, and the deviant-tone frequency was set at 2 or 1.5 kHz. The results show that response waveforms (N1/P2) are similar in latency and amplitude for CS and CIP, suggesting that pure-tone detection is performed over the same time window in both groups. These waveforms are also similar in left- and right-implanted patients, suggesting that electric stimulation of the auditory nerve activates both hemispheres in profound, bilateral hearing loss. Pure-tone audiograms and word-discrimination scores were also measured for each subject in an anechoic room, and their relations with the MMN data were examined. Correlations were found between the latency of the MMN for a 1.5 kHz deviant and the thresholds obtained for pure-tone detection and word discrimination. The MMN thus appears to be a possible complementary clinical tool to objectively assess auditory sensitivity in cochlear-implanted populations. However, further improvements are still necessary before it can be used as a standard clinical examination.
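The MMN analysis described above reduces to a deviant-minus-standard difference wave and a peak latency within a window. A generic sketch, assuming baseline-corrected single-trial epochs; the 100-250 ms window is a common convention, not necessarily the paper's exact choice:

```python
import numpy as np

def mismatch_negativity(epochs, labels, fs, window=(0.10, 0.25)):
    """Deviant-minus-standard difference wave and its negative peak latency.
    epochs: (n_trials, n_samples) baseline-corrected array;
    labels: per-trial 'std' or 'dev'; fs: sampling rate in Hz."""
    epochs = np.asarray(epochs, float)
    labels = np.asarray(labels)
    diff = epochs[labels == "dev"].mean(0) - epochs[labels == "std"].mean(0)
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    peak = i0 + int(np.argmin(diff[i0:i1]))   # MMN is a negativity
    return diff, peak / fs                    # difference wave, latency (s)
```

In the study's setting, correlating this latency with behavioral thresholds across patients is then a straightforward second step.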
Publisher: Elsevier BV
Date: 08-2005
DOI: 10.1016/J.NEUROIMAGE.2004.12.064
Abstract: Auditory evoked potentials (AEPs) elicited by the French-language voiced stop consonant (/ba/) and voiceless stop consonant (/pa/) were studied in non-language-impaired epileptic patients and non-epileptic volunteers. First, depth AEPs recorded from the primary auditory cortex during pre-surgical exploration and scalp AEP recordings using high-resolution EEG (HR EEG; 64-channel scalp EEG) were compared in the same patients. Both methods indicated that the processing of voiced and voiceless consonants is based on a temporal auditory code. /Ba/ elicited a first complex (N1) at the onset of voicing and a second component [release component (RC)] time-locked to the release. This processing took place specifically in the left primary auditory cortex. Source modeling of the RC showed a left-greater-than-right amplitude of source probes (SP), both in epileptic patients with left-hemispheric language dominance [established by means of invasive tests (Wada test) and/or clinical data] and in right-handed non-epileptic subjects. Our data suggest that the processing of VOT is related to hemispheric dominance for language and that scalp-recorded AEPs may represent an effective, non-invasive method to establish hemispheric dominance for language in clinical settings. This procedure could complement existing methods and could help to detect the dissociation between receptive and expressive language sometimes observed in patients with epilepsy.
Publisher: American Association for the Advancement of Science (AAAS)
Date: 21-02-2014
Abstract: Evaluating our actions, and detecting our errors, is crucial for adaptive behavior. These fundamental executive functions are intensively studied in cognitive and social neuroscience, but their anatomical basis remains poorly characterized. Using intracerebral electroencephalography in patients being prepared for epilepsy surgery, Bonini et al. (p. 888 ) found that, contrary to what is widely assumed, the supplementary motor area, and not the anterior cingulate cortex, plays a leading role in these processes. The data provide a precise spatio-temporal description of the cortical network underlying action monitoring and error processing.
Publisher: Elsevier BV
Date: 03-2008
DOI: 10.1016/J.HEARES.2007.12.003
Abstract: Temporal envelope processing in the human auditory cortex plays an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude-modulated white noise were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results in twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, Brodmann area (BA) 22 and the posterior part of the T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in the SAC and T1Post of the left hemisphere, independent of the modulation frequency (MF), and in the left BA22 for MFs of 8 and 16 Hz, compared to the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of the PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds depending on the rate of amplitude modulation.
Publisher: Wiley
Date: 02-10-2006
DOI: 10.1111/J.1528-1167.2006.00647.X
Abstract: We report the case of a 49-year-old right-handed woman with brief partial seizures whose clinical semiology was marked by an early humming automatism. MRI fusion of the registered ictal and interictal single-photon emission computed tomography (SPECT) subtraction revealed a left neural network involving lateral temporal, inferior frontal, and inferior parietal cortices.
Publisher: Public Library of Science (PLoS)
Date: 02-03-2020
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 12-2005
DOI: 10.1097/00001756-200512190-00002
Abstract: Here, we used functional magnetic resonance imaging to test for the lateralization of the brain regions specifically involved in the recognition of negatively and positively valenced musical emotions. The manipulation of two major musical features (mode and tempo), resulting in the variation of emotional perception along the happiness-sadness axis, was shown to principally involve subcortical and neocortical brain structures that are known to intervene in emotion processing in other modalities. In particular, the minor mode (sad excerpts) involved the left orbito- and mid-dorsolateral frontal cortex, which does not confirm the valence lateralization model. We also show that the recognition of emotions elicited by variations of the two perceptual determinants relies on both common (BA 9) and distinct neural mechanisms.
Publisher: Elsevier BV
Date: 06-2014
Publisher: Springer Science and Business Media LLC
Date: 13-10-2018
DOI: 10.1007/S10827-018-0699-3
Abstract: Language is mediated by pathways connecting distant brain regions that have diverse functional roles. For word production, the network includes a ventral pathway, connecting temporal and inferior frontal regions, and a dorsal pathway, connecting parietal and frontal regions. Despite the importance of word production for scientific and clinical purposes, the functional connectivity underlying this task has received relatively limited attention, and mostly from techniques limited in either spatial or temporal resolution. Here, we exploited data obtained from depth intracerebral electrodes stereotactically implanted in eight epileptic patients. The signal was recorded directly from various structures of the neocortex with high spatial and temporal resolution. The neurophysiological activity elicited by a picture naming task was analyzed in the time-frequency domain (10-150 Hz), and functional connectivity among ten regions of interest was examined. Task-related activities detected within the network of regions of interest were consistent with findings in the literature, showing task-evoked desynchronization in the beta band and synchronization in the gamma band. Surprisingly, long-range functional connectivity was not particularly stronger in the beta than in the high-gamma band. The latter revealed meaningful sub-networks involving, notably, the temporal pole and the inferior frontal gyrus (ventral pathway), and parietal regions and the inferior frontal gyrus (dorsal pathway). These findings are consistent with the hypothesized network, but were not detected in every patient. Further research will have to explore their robustness with larger samples.
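The band-limited connectivity comparison above (beta vs. high-gamma coupling between regions of interest) can be illustrated with a generic coherence measure. The abstract does not specify the exact connectivity metric, so SciPy's magnitude-squared coherence is used here as a stand-in, with illustrative band edges:

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, band):
    """Mean magnitude-squared coherence between two ROI signals in a band.
    Generic sketch of band-limited functional connectivity; band edges
    and window length are illustrative choices."""
    f, cxy = coherence(x, y, fs=fs, nperseg=int(fs))  # 1-s Welch windows
    sel = (f >= band[0]) & (f <= band[1])
    return float(cxy[sel].mean())
```

Two signals sharing an oscillatory component in one band will show elevated coherence there relative to a band where they are independent, which is the kind of contrast the study evaluates between pathways.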
Publisher: Oxford University Press (OUP)
Date: 28-03-2004
Publisher: Cold Spring Harbor Laboratory
Date: 18-03-2019
DOI: 10.1101/581520
Abstract: Speech perception is mediated by both left and right auditory cortices, but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex. We presented short acoustic transients to reveal the stereotyped spectro-spatial oscillatory response profile of the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with preferred processing modes in the theta (∼4-8 Hz) and low gamma (∼25-50 Hz) ranges. These results highlight that the human auditory system employs a two-timescale processing mode. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (∼3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (∼6/40 Hz) activity in the left. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy. Speech processing is now known to be distributed across the two hemispheres, but the origin and function of lateralization continue to be vigorously debated. The asymmetric sampling in time (AST) hypothesis predicts that (1) the auditory system employs a two-timescale processing mode, (2) this mode is present in both hemispheres but with a different ratio of fast and slow timescales, and (3) the asymmetry emerges outside of primary cortical regions.
Capitalizing on intracranial data from 96 epileptic patients, we validated each of these predictions and provide a precise estimate of the processing timescales. In particular, we reveal that asymmetric sampling in associative areas is subtended by distinct two-timescale processing modes. Overall, our results shed light on the neurofunctional architecture of cortical auditory processing.
Publisher: Wiley
Date: 11-04-2011
DOI: 10.1111/J.1528-1167.2011.03052.X
Abstract: Performance in recognition memory differs among patients with medial temporal lobe epilepsy (MTLE). We aimed to determine whether distinct recognition performances (normal vs. impaired) could be related to distinct patterns of brain activation during encoding. Event-related functional magnetic resonance imaging (fMRI) activation profiles were obtained during successful encoding of non-material-specific items in 14 MTLE patients tested for recognition of the stimuli afterward. Findings were compared to those of 25 healthy subjects, and voxel-based correlations were assessed between brain activation and performance. Patients with left and right MTLE showed similar activations and similar performances. As a whole, the group of patients demonstrated altered recognition scores, but three of the seven patients with left MTLE and three of the seven patients with right MTLE exhibited normal performance relative to controls. In comparison to healthy subjects and patients with impaired recognition, patients with normal recognition showed weaker activations in left opercular cortex, but stronger activations in the bilateral parahippocampal region/fusiform gyrus (PH/FG). By contrast, patients with impaired performance showed weaker activations in bilateral PH/FG, but stronger activations in a frontal/cingulate and parietal network. Recognition performance was correlated positively with bilateral PH/FG activations, and negatively with bilateral frontal/cingulate activations, in the whole group of patients as well as in subgroups of patients with either left or right MTLE. These results suggest the occurrence of effective functional compensation within the bilateral PH/FG in MTLE, allowing patients to maintain recognition capability. In contrast, impairment of this perceptive-memory system may lead to alternative activation of an inefficient, nonspecific attentional network in patients with altered performance.
Publisher: Wiley
Date: 23-01-2017
DOI: 10.1002/HIPO.22699
Abstract: The hippocampus plays a pivotal role both in novelty detection and in long-term memory. The physiological mechanisms underlying these behaviors have yet to be understood in humans. We recorded intracerebral evoked potentials within the hippocampus of epileptic patients (n = 10) during both memory and novelty detection tasks (targets in oddball tasks). We found that memory and detection tasks elicited late local field potentials in the hippocampus during the same period, but of opposite polarity (negative during novelty detection tasks, positive during memory tasks, ∼260–600 ms poststimulus onset, P < 0.05). Critically, these potentials had maximal amplitude on the same contact in the hippocampus for each patient. This pattern did not depend on the task as different types of memory and novelty detection tasks were used. It did not depend on the novelty of the stimulus or the difficulty of the task either. Two different hypotheses are discussed to account for this result: it is either due to the activation of CA1 pyramidal neurons by two different pathways such as the monosynaptic and trisynaptic entorhinal-hippocampal pathways, or to the activation of different neuronal populations, that is, differing either functionally (e.g., novelty/familiarity neurons) or located in different regions of the hippocampus (e.g., CA1/subiculum). In either case, these activities may integrate the activity of two distinct large-scale networks implementing externally or internally oriented, mutually exclusive, brain states. © 2017 Wiley Periodicals, Inc.
Publisher: Elsevier BV
Date: 11-2013
DOI: 10.1016/J.BANDL.2013.04.007
Abstract: Recent theories of the physiology of language suggest a dual-stream dorsal/ventral organization of speech perception. Using intracerebral event-related potentials (ERPs) during pre-surgical assessment of twelve drug-resistant epileptic patients, we aimed to single out electrophysiological patterns during both lexical-semantic and phonological monitoring tasks involving ventral and dorsal regions respectively. Phonological information processing predominantly occurred in the left supramarginal gyrus (dorsal stream), and lexico-semantic information processing in the anterior/middle temporal and fusiform gyri (ventral stream). Similar latencies were identified in response to phonological and lexico-semantic tasks, suggesting parallel processing. Typical ERP components were strongly left lateralized since no evoked responses were recorded in homologous right structures. Finally, ERP patterns suggested the inferior frontal gyrus as the likely final common pathway of both dorsal and ventral streams. These results provide detailed evidence of the spatiotemporal dynamics of information processing in the dual pathways involved in speech perception.
Publisher: Oxford University Press (OUP)
Date: 11-03-2013
DOI: 10.1093/SCAN/NST034
Publisher: Frontiers Media SA
Date: 2012
Publisher: Oxford University Press (OUP)
Date: 21-04-2009
DOI: 10.1093/BRAIN/AWP083
Abstract: Word finding difficulties are often reported by epileptic patients with seizures originating from the language dominant cerebral hemisphere, for example, in temporal lobe epilepsy. Evidence regarding the brain regions underlying this deficit comes from studies of peri-operative electro-cortical stimulation, as well as post-surgical performance. This evidence has highlighted a role for the anterior part of the dominant temporal lobe in oral word production. These conclusions contrast with findings from activation studies involving healthy speakers or acute ischaemic stroke patients, where the region most directly related to word retrieval appears to be the posterior part of the left temporal lobe. To clarify the neural basis of word retrieval in temporal lobe epilepsy, we tested forty-three drug-resistant temporal lobe epilepsy patients (28 left, 15 right). Comprehensive neuropsychological and language assessments were performed. Single spoken word production was elicited with picture or definition stimuli. Detailed analysis allowed the distinction of impaired word retrieval from other possible causes of naming failure. Finally, the neural substrate of the deficit was assessed by correlating word retrieval performance and resting-state brain metabolism in 18F-fluoro-2-deoxy-D-glucose positron emission tomography. Naming difficulties often resulted from genuine word retrieval failures (anomic states), both in picture and in definition tasks. Left temporal lobe epilepsy patients showed considerably worse performance than right temporal lobe epilepsy patients. Performance was poorer in the definition than in the picture task. Across patients and the left temporal lobe epilepsy subgroup, frequency of anomic state was negatively correlated with resting-state brain metabolism in left posterior and basal temporal regions (Brodmann areas 20, 37, and 39).
These results show the involvement of posterior temporal regions, within a larger antero-posterior-basal temporal network, in the specific process of word retrieval in temporal lobe epilepsy. A tentative explanation for these findings is that epilepsy induces functional deafferentation between anterior temporal structures devoted to semantic processing and neocortical posterior temporal structures devoted to lexical processing.
Publisher: Wiley
Date: 12-02-2007
DOI: 10.1002/HBM.20289
Abstract: There has recently been a growing interest in the use of simultaneous electroencephalography (EEG) and functional MRI (fMRI) for evoked activity in cognitive paradigms, thereby obtaining functional datasets with both high spatial and temporal resolution. The simultaneous recording permits obtaining event-related potentials (ERPs) and MR images in the same environment, conditions of stimulation, and subject state; it also enables tracing the joint fluctuations of EEG and fMRI signals. The goal of this study was to investigate the possibility of tracking the trial-to-trial changes in event-related EEG activity, and of using this information as a parameter in fMRI analysis. We used an auditory oddball paradigm and obtained single-trial amplitude and latency features from the EEG acquired during fMRI scanning. The single-trial P300 latency presented significant correlation with parameters external to the EEG (target-to-target interval and reaction time). Moreover, we obtained significant fMRI activations for the modulation by P300 amplitude and latency, both at the single-subject and at the group level. Our results indicate that, in line with other studies, the EEG can bring a new dimension to the field of fMRI analysis by providing fine temporal information on the fluctuations in brain activity.
Publisher: Elsevier BV
Date: 03-2011
DOI: 10.1016/J.NEUROIMAGE.2010.11.058
Abstract: There are two competing views on the mechanisms underlying the generation of visual evoked potentials/fields in EEG/MEG. The classical hypothesis assumes an additive wave on top of background noise. Another hypothesis states that the evoked activity can totally or partially arise from a phase resetting of the ongoing alpha rhythm. There is no consensus, however, on the best tools for distinguishing between these two hypotheses. In this study, we tested different measures on a large series of simulations under a variety of scenarios, involving in particular trial-to-trial variability and different dynamics of the ongoing alpha rhythm. No single measure or set of measures was found to be necessary or sufficient for defining phase resetting in the context of our simulations. Still, the simulations allowed us to define the criteria that were the most reliable in practice for distinguishing the additive and phase resetting hypotheses. We then applied these criteria to intracerebral EEG recordings in the visual areas during a visual discrimination task. We investigated the intracerebral channels that presented both ERP and ongoing alpha oscillations (n=37). Within these channels, a total of 30% fulfilled phase resetting criteria during the generation of the visual evoked potential, based on criteria derived from simulations. Moreover, 19% of the 37 channels presented dependence of the ERP on the level of pre-stimulus alpha. Only 5% of channels fulfilled both the simulation-related criteria and dependence on baseline alpha level. Our simulation study points to the difficulty of clearly assessing phase resetting based on observed macroscopic electrophysiological signals. Still, some channels presented an indication of phase resetting in the context of our simulations. This needs to be confirmed by further work, in particular at a smaller recording scale.
Publisher: Wiley
Date: 09-2002
DOI: 10.1046/J.1528-1157.2002.48501.X
Abstract: Humming is a rare automatism occurring in partial seizures that has received little attention. Its study could shed light on the neural networks underlying melodic expression. In this study, we examined the anatomoelectroclinical correlates of humming during epileptic seizures. Three patients undergoing presurgical stereoelectroencephalography (SEEG) for medically intractable temporal lobe epilepsy were studied. Coherence analysis of SEEG activity was carried out to study the functional coupling of different regions of the brain, whereas time-frequency (TF) analysis was conducted to assess epileptic discharge patterns. Changes in coherence were studied to identify the neural structures/systems implicated in humming. Humming began after the onset of seizures generated in medial limbic regions of the temporal lobe. At seizure onset, coherence analysis showed an increase in amygdala-hippocampus coupling. Humming began after the onset of a rhythmic discharge over lateral regions of the superior temporal gyrus (STG). A highly significant increase in coherence was observed between prefrontal regions and the STG. TF analysis of the STG discharge showed a reproducible pattern with a single fundamental frequency and associated harmonics. This frequency was approximately 6 Hz for two patients and 15 Hz for one patient. These findings suggest that the occurrence of humming during epileptic seizures of the temporal lobe is associated with activity in a neural network involving the STG and the inferior frontal gyrus.
Publisher: SAGE Publications
Date: 02-2017
Abstract: We provide a quantitative assessment of the parallel-processing hypothesis included in various language-processing models. First, we highlight the importance of reasoning about cognitive processing at the level of single trials rather than using averages. Then, we report the results of an experiment in which the hypothesis was tested at an unprecedented level of granularity with intracerebral data recorded during a picture-naming task. We extracted patterns of significant high-gamma activity from multiple patients and combined them into a single analysis framework that identified consistent patterns. Average signals from different brain regions, presumably indexing distinct cognitive processes, revealed a large degree of concurrent activity. In comparison, at the level of single trials, the temporal overlap of detected significant activity was unexpectedly low, with the exception of activity in sensory cortices. Our novel methodology reveals some limits on the degree to which word production involves parallel processing.
Start Date: 07-2020
End Date: 12-2024
Amount: $526,690.00
Funder: Australian Research Council
View Funded Activity