ORCID Profile
0000-0001-9884-2852
Current Organisation
University of Queensland
Publisher: American Medical Association (AMA)
Date: 06-2016
Publisher: Oxford University Press (OUP)
Date: 25-06-2020
Abstract: Speech perception involves mapping from a continuous and variable acoustic speech signal to discrete, linguistically meaningful units. However, it is unclear where in the auditory processing stream speech sound representations cease to be veridical (faithfully encoding precise acoustic properties) and become categorical (encoding sounds as linguistic categories). In this study, we used functional magnetic resonance imaging and multivariate pattern analysis to determine whether tonotopic primary auditory cortex (PAC), defined as tonotopic voxels falling within Heschl’s gyrus, represents one class of speech sounds—vowels—veridically or categorically. For each of 15 participants, 4 individualized synthetic vowel stimuli were generated such that the vowels were equidistant in acoustic space, yet straddled a categorical boundary (with the first 2 vowels perceived as [i] and the last 2 perceived as [ɪ]). Each participant’s 4 vowels were then presented in a block design with an irrelevant but attention-demanding level change detection task. We found that in PAC bilaterally, neural discrimination between pairs of vowels that crossed the categorical boundary was more accurate than neural discrimination between equivalently spaced vowel pairs that fell within a category. These findings suggest that PAC does not represent vowel sounds veridically, but that encoding of vowels is shaped by linguistically relevant phonemic categories.
Publisher: American Speech Language Hearing Association
Date: 20-10-2022
DOI: 10.1044/2022_PERSP-22-00006
Abstract: Community aphasia groups serve an important purpose in enhancing the quality of life and psychosocial well-being of individuals with chronic aphasia. Here, we describe the Aphasia Group of Middle Tennessee, a community aphasia group with a 17-year (and continuing) history, housed within Vanderbilt University Medical Center in Nashville, Tennessee. We describe in detail the history, philosophy, design, curriculum, and facilitation model of this group. We also present both quantitative and qualitative outcomes from group members and their loved ones. Group members and their loved ones alike indicated highly positive assessments of the format and value of the Aphasia Group of Middle Tennessee. By characterizing in detail the successful Aphasia Group of Middle Tennessee, we hope this can serve as a model for clinicians interested in starting their own community aphasia groups, in addition to reaching individuals living with chronic aphasia and their loved ones through the accessible and aphasia-friendly materials provided with this clinical focus article.
Supplemental Material DOI: 10.23641/asha.20520783
Publisher: Elsevier BV
Date: 2019
Publisher: Elsevier BV
Date: 2004
Publisher: American Speech Language Hearing Association
Date: 22-11-2019
DOI: 10.1044/2019_JSLHR-L-RSNP-19-0031
Abstract: Recovery from aphasia is thought to depend on neural plasticity, that is, functional reorganization of surviving brain regions such that they take on new or expanded roles in language processing. To make progress in characterizing the nature of this process, we need feasible, reliable, and valid methods for identifying language regions of the brain in individuals with aphasia. This article reviews 3 recent studies from our lab in which we have developed and validated several novel functional magnetic resonance imaging paradigms for language mapping in aphasia. In the 1st study, we investigated the reliability and validity of 4 language mapping paradigms in neurologically normal older adults. In the 2nd study, we developed a novel adaptive semantic matching paradigm and assessed its feasibility, reliability, and validity in individuals with and without aphasia. In the 3rd study, we developed and evaluated 2 additional adaptive paradigms—rhyme judgment and syllable counting—for mapping phonological encoding regions. We found that the adaptive semantic matching paradigm could be performed by most individuals with aphasia and yielded reliable and valid maps of core perisylvian language regions in each individual participant. The psychometric properties of this paradigm were superior to those of other commonly used paradigms such as narrative comprehension and picture naming. The adaptive rhyme judgment paradigm was capable of identifying fronto-parietal phonological encoding regions in individual participants. Adaptive language mapping paradigms offer a promising approach for future research on the neural basis of recovery from aphasia.
Supplemental Material DOI: 10.23641/asha.10257584
Publisher: Elsevier BV
Date: 2006
DOI: 10.1016/S0010-9452(08)70387-3
Abstract: In this position paper, we discuss a neural architecture comprising three major cortical systems: the inferior frontal cortex (including Broca's area), the rostral part of the posterior parietal cortex, and the superior temporal cortex. This network of areas is critical to imitation and to language. What are the functional properties of the network that make it possible for imitation and language to co-exist within the same neural architecture? We propose that this network implements cortical forward and inverse modeling for actions and speech sounds of self and others.
Publisher: Informa UK Limited
Date: 22-11-2012
Publisher: Elsevier BV
Date: 2023
Publisher: Oxford University Press (OUP)
Date: 16-09-2009
DOI: 10.1093/BRAIN/AWP233
Publisher: MIT Press - Journals
Date: 05-2014
DOI: 10.1162/JOCN_A_00550
Abstract: Neuroimaging and neuropsychological studies have implicated the anterior temporal lobe (ATL) in sentence-level processing, with syntactic structure-building and/or combinatorial semantic processing suggested as possible roles. A potential challenge to the view that the ATL is involved in syntactic aspects of sentence processing comes from the clinical syndrome of semantic variant primary progressive aphasia (semantic PPA, also known as semantic dementia). In semantic PPA, bilateral neurodegeneration of the ATLs is associated with profound lexical semantic deficits, yet syntax is strikingly spared. The goal of this study was to investigate the neural correlates of syntactic processing in semantic PPA to determine which regions normally involved in syntactic processing are damaged in semantic PPA and whether spared syntactic processing depends on preserved functionality of intact regions, preserved functionality of atrophic regions, or compensatory functional reorganization. We scanned 20 individuals with semantic PPA and 24 age-matched controls using structural MRI and fMRI. Participants performed a sentence comprehension task that emphasized syntactic processing and minimized lexical semantic demands. We found that, in controls, left inferior frontal and left posterior temporal regions were modulated by syntactic processing, whereas anterior temporal regions were not significantly modulated. In the semantic PPA group, atrophy was most severe in the ATLs but extended to the posterior temporal regions involved in syntactic processing. Functional activity for syntactic processing was broadly similar in patients and controls; in particular, whole-brain analyses revealed no significant differences between patients and controls in the regions modulated by syntactic processing. The atrophic left ATL did show abnormal functionality in semantic PPA patients; however, this took the unexpected form of a failure to deactivate.
Taken together, our findings indicate that spared syntactic processing in semantic PPA depends on preserved functionality of structurally intact left frontal regions and moderately atrophic left posterior temporal regions, but no functional reorganization was apparent as a consequence of anterior temporal atrophy and dysfunction. These results suggest that the role of the ATL in sentence processing is less likely to relate to syntactic structure-building and more likely to relate to higher-level processes such as combinatorial semantic processing.
Publisher: American Speech Language Hearing Association
Date: 14-04-2017
DOI: 10.1044/2016_JSLHR-S-15-0112
Abstract: Real-time magnetic resonance imaging (MRI) and accompanying analytical methods are shown to capture and quantify salient aspects of apraxic speech, substantiating and expanding upon evidence provided by clinical observation and acoustic and kinematic data. Analysis of apraxic speech errors within a dynamic systems framework is provided and the nature of pathomechanisms of apraxic speech discussed. One adult male speaker with apraxia of speech was imaged using real-time MRI while producing spontaneous speech, repeated naming tasks, and self-paced repetition of word pairs designed to elicit speech errors. Articulatory data were analyzed, and speech errors were detected using time series reflecting articulatory activity in regions of interest. Real-time MRI captured two types of apraxic gestural intrusion errors in a word pair repetition task. Gestural intrusion errors in nonrepetitive speech, multiple silent initiation gestures at the onset of speech, and covert (unphonated) articulation of entire monosyllabic words were also captured. Real-time MRI and accompanying analytical methods capture and quantify many features of apraxic speech that have been previously observed using other modalities while offering high spatial resolution. This patient's apraxia of speech affected the ability to select only the appropriate vocal tract gestures for a target utterance, suppressing others, and to coordinate them in time.
Publisher: Public Library of Science (PLoS)
Date: 09-02-2018
Publisher: Informa UK Limited
Date: 18-03-2014
Publisher: Oxford University Press (OUP)
Date: 11-06-2010
DOI: 10.1093/BRAIN/AWQ129
Publisher: Oxford University Press (OUP)
Date: 07-03-2013
DOI: 10.1093/BRAIN/AWT034
Publisher: MIT Press - Journals
Date: 03-2004
DOI: 10.1162/089892904322984535
Abstract: We examined the abilities of aphasic patients to make grammaticality judgments on English sentences instantiating a variety of syntactic structures. Previous studies employing this metalinguistic task have suggested that aphasic patients typically perform better on grammaticality judgment tasks than they do on sentence comprehension tasks, a finding that has informed the current view that grammatical knowledge is relatively preserved in agrammatic aphasia. However, not all syntactic structures are judged equally accurately, and several researchers have attempted to provide explanatory principles to predict which structures will pose problems to agrammatic patients. One such proposal is Grodzinsky and Finkel's (1998) claim that agrammatic aphasics are selectively impaired in their ability to process structures involving traces of maximal projections. In this study, we tested this claim by presenting patients with sentences with or without such traces, but also varying the level of difficulty of both kinds of structures, assessed with reference to the performance of age-matched and young controls. We found no evidence that agrammatic aphasics, or any other subgroup, are selectively impaired on structures involving traces: Some judgments involving traces were made quite accurately, whereas other judgments not involving traces were made very poorly. Subgroup analyses revealed that patient groups and age-matched controls had remarkably similar profiles of performance across sentence types, regardless of whether the patients were grouped based on Western Aphasia Battery classification, an independent screening test for agrammatic comprehension, or lesion site. This implies that the pattern of performance across sentence types does not result from any particular component of the grammar, or any particular brain region, being selectively compromised.
Lesion analysis revealed that posterior temporal areas were more reliably implicated in poor grammaticality judgment performance than anterior areas, but poor performance was also observed with some anterior lesions, suggesting that areas important for syntactic processing are distributed throughout the left peri-sylvian region.
Publisher: Springer Science and Business Media LLC
Date: 06-06-2004
DOI: 10.1038/NN1263
Publisher: Elsevier BV
Date: 10-2011
Publisher: Oxford University Press (OUP)
Date: 11-06-2011
DOI: 10.1093/BRAIN/AWR099
Publisher: Informa UK Limited
Date: 20-01-2016
Publisher: Wiley
Date: 13-01-2014
DOI: 10.1002/BRB3.211
Publisher: Elsevier BV
Date: 04-2011
Publisher: Springer Science and Business Media LLC
Date: 27-06-2019
Publisher: Informa UK Limited
Date: 02-01-2018
Publisher: Oxford University Press (OUP)
Date: 02-11-2011
Publisher: Elsevier BV
Date: 2017
Publisher: Elsevier BV
Date: 09-2014
Publisher: Elsevier BV
Date: 10-2007
Publisher: Oxford University Press (OUP)
Date: 07-04-2023
Abstract: Most individuals who experience aphasia after a stroke recover to some extent, with the majority of gains taking place in the first year. The nature and time course of this recovery process is only partially understood, especially its dependence on lesion location and extent, which are the most important determinants of outcome. The aim of this study was to provide a comprehensive description of patterns of recovery from aphasia in the first year after stroke. We recruited 334 patients with acute left hemisphere supratentorial ischaemic or haemorrhagic stroke and evaluated their speech and language function within 5 days using the Quick Aphasia Battery (QAB). At this initial time point, 218 patients presented with aphasia. Individuals with aphasia were followed longitudinally, with follow-up evaluations of speech and language at 1 month, 3 months, and 1 year post-stroke, wherever possible. Lesions were manually delineated based on acute clinical MRI or CT imaging. Patients with and without aphasia were divided into 13 groups of individuals with similar, commonly occurring patterns of brain damage. Trajectories of recovery were then investigated as a function of group (i.e. lesion location and extent) and speech/language domain (overall language function, word comprehension, sentence comprehension, word finding, grammatical construction, phonological encoding, speech motor programming, speech motor execution, and reading). We found that aphasia is dynamic, multidimensional, and gradated, with little explanatory role for aphasia subtypes or binary concepts such as fluency. Patients with circumscribed frontal lesions recovered well, consistent with some previous observations. More surprisingly, most patients with larger frontal lesions extending into the parietal or temporal lobes also recovered well, as did patients with relatively circumscribed temporal, temporoparietal, or parietal lesions.
Persistent moderate or severe deficits were common only in patients with extensive damage throughout the middle cerebral artery distribution or extensive temporoparietal damage. There were striking differences between speech/language domains in their rates of recovery and relationships to overall language function, suggesting that specific domains differ in the extent to which they are redundantly represented throughout the language network, as opposed to depending on specialized cortical substrates. Our findings have an immediate clinical application in that they will enable clinicians to estimate the likely course of recovery for individual patients, as well as the uncertainty of these predictions, based on acutely observable neurological factors.
Publisher: SAGE Publications
Date: 21-01-2008
Abstract: The paced auditory serial addition task (PASAT) is a test of working memory and attention that is frequently abnormal in MS and is used serially to assess cognitive dysfunction as part of the MS Functional Composite in clinical trials. Previous studies using functional MRI (fMRI) during PASAT performance have shown significant differences in activation patterns between healthy controls and MS patients matched for performance, but serial fMRI measures have not been reported. A confound is that learning effects are common with repeated PASAT testing, diminishing over successive trials. After measuring PASAT performance weekly for four weeks in 10 healthy controls to eliminate practice effects, we assessed brain activity using fMRI at baseline and after six months to determine the reproducibility of activation patterns in healthy controls during PASAT performance. Results showed that scores improved during the first three testing trials and stabilized subsequently. Brain activation during PASAT performance was seen in left frontal and parietal regions consistent with previous reports. After a six-month interval, PASAT performance and fMRI activity were stable, suggesting that serial fMRI during PASAT performance could be used as an outcome measure in trials assessing cognitive decline in clinical populations once practice effects are eliminated. Multiple Sclerosis 2008; 14: 465–471.
Publisher: Elsevier BV
Date: 04-2019
Publisher: Elsevier BV
Date: 11-2009
Publisher: Springer Science and Business Media LLC
Date: 21-04-2003
DOI: 10.1038/NN1050
Publisher: American Speech Language Hearing Association
Date: 25-03-2019
DOI: 10.1044/2018_JSLHR-L-18-0254
Abstract: Recovery from aphasia after stroke has a decelerating trajectory, with the greatest gains taking place early and the slope of change decreasing over time. Despite its importance, little is known regarding evolution of language function in the early postonset period. The goal of this study was to characterize the dynamics and nature of recovery of language function in the acute and early subacute phases of stroke. Twenty-one patients with aphasia were evaluated every 2–3 days for the first 15 days after onset of acute ischemic or hemorrhagic stroke. Language function was assessed at each time point with the Quick Aphasia Battery (Wilson, Eriksson, Schneck, & Lucanie, 2018), which yields an overall summary score and a multidimensional profile of 7 different language domains. On a 10-point scale, overall language function improved by a mean of 1.07 points per week, confidence interval [0.46, 1.71], with 19 of 21 patients showing positive changes. The trajectory of recovery was approximately linear over this time period. There was significant variability across patients, and patients with more impaired language function at Day 2 poststroke experienced greater improvements over the subsequent 2 weeks. Patterns of recovery differed across language domains, with consistent improvements in word finding, grammatical construction, repetition, and reading, but less consistent improvements in word comprehension and sentence comprehension. Overall language function typically improves substantially and steadily during the first 2 weeks after stroke, driven mostly by recovery of expressive language. Information on the trajectory of early recovery will increase the accuracy of prognoses and establish baseline expectations against which to evaluate the efficacy of interventions.
Supplemental Material DOI: 10.23641/asha.7811876
Publisher: MIT Press - Journals
Date: 03-2018
DOI: 10.1162/JOCN_A_01215
Abstract: Cortical stimulation mapping (CSM) has provided important insights into the neuroanatomy of language because of its high spatial and temporal resolution, and the causal relationships that can be inferred from transient disruption of specific functions. Almost all CSM studies to date have focused on word-level processes such as naming, comprehension, and repetition. In this study, we used CSM to identify sites where stimulation interfered selectively with syntactic encoding during sentence production. Fourteen patients undergoing left-hemisphere neurosurgery participated in the study. In 7 of the 14 patients, we identified nine sites where cortical stimulation interfered with syntactic encoding but did not interfere with single word processing. All nine sites were localized to the inferior frontal gyrus, mostly to the pars triangularis and opercularis. Interference with syntactic encoding took several different forms, including misassignment of arguments to grammatical roles, misassignment of nouns to verb slots, omission of function words and inflectional morphology, and various paragrammatic constructions. Our findings suggest that the left inferior frontal gyrus plays an important role in the encoding of syntactic structure during sentence production.
Publisher: MIT Press - Journals
Date: 05-2007
DOI: 10.1162/JOCN.2007.19.5.799
Abstract: We used functional magnetic resonance imaging (fMRI) in conjunction with a voxel-based approach to lesion symptom mapping to quantitatively evaluate the similarities and differences between brain areas involved in language and environmental sound comprehension. In general, we found that language and environmental sounds recruit highly overlapping cortical regions, with cross-domain differences being graded rather than absolute. Within language-based regions of interest, we found that in the left hemisphere, language and environmental sound stimuli evoked very similar volumes of activation, whereas in the right hemisphere, there was greater activation for environmental sound stimuli. Finally, lesion symptom maps of aphasic patients based on environmental sounds or linguistic deficits [Saygin, A. P., Dick, F., Wilson, S. W., Dronkers, N. F., & Bates, E. Shared neural resources for processing language and environmental sounds: Evidence from aphasia. Brain, 126, 928–945, 2003] were generally predictive of the extent of blood oxygenation level dependent fMRI activation across these regions for sounds and linguistic stimuli in young healthy subjects.
Publisher: Oxford University Press (OUP)
Date: 06-2003
DOI: 10.1093/BRAIN/AWG154
Publisher: Wiley
Date: 15-04-2009
DOI: 10.1002/HBM.20565
Publisher: MIT Press - Journals
Date: 12-2008
Abstract: Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory–motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.
Publisher: Journal of Neurosurgery Publishing Group (JNSPG)
Date: 03-2023
Abstract: Broca’s aphasia is a syndrome of impaired fluency with retained comprehension. The authors used an unbiased algorithm to examine which neuroanatomical areas are most likely to result in Broca’s aphasia following surgical lesions. Patients were prospectively evaluated with standardized language batteries before and after surgery. Broca’s area was defined anatomically as the pars opercularis and triangularis of the inferior frontal gyrus. Broca’s aphasia was defined by the Western Aphasia Battery language assessment. Resections were outlined from MRI scans to construct 3D volumes of interest. These were aligned using a nonlinear transformation to Montreal Neurological Institute brain space. A voxel-based lesion-symptom mapping (VLSM) algorithm was used to test for areas statistically associated with Broca’s aphasia when incorporated into a resection, as well as areas associated with deficits in fluency independent of Western Aphasia Battery classification. Postoperative MRI scans were reviewed in blinded fashion to estimate the percentage resection of Broca’s area compared to areas identified using the VLSM algorithm. A total of 289 patients had early language evaluations, of whom 19 had postoperative Broca’s aphasia. VLSM analysis revealed an area that was highly correlated (p < 0.001) with Broca’s aphasia, spanning ventral sensorimotor cortex and supramarginal gyri, as well as extending into subcortical white matter tracts. Reduced fluency scores were significantly associated with an overlapping region of interest. The fluency score was negatively correlated with fraction of resected precentral, postcentral, and supramarginal components of the VLSM area. Broca’s aphasia does not typically arise from neurosurgical resections in Broca’s area. When Broca’s aphasia does occur after surgery, it is typically in the early postoperative period, improves by 1 month, and is associated with resections of ventral sensorimotor cortex and supramarginal gyri.
Publisher: MDPI AG
Date: 13-04-2022
Abstract: Although researchers have recognized the need to better account for the heterogeneous perceptual speech characteristics among talkers with the same disease, guidance on how to best establish such dysarthria subgroups is currently lacking. Therefore, we compared subgroup decisions of two data-driven approaches based on a cohort of talkers with Huntington’s disease (HD): (1) a statistical clustering approach (STATCLUSTER) based on perceptual speech characteristic profiles and (2) an auditory free classification approach (FREECLASS) based on listeners’ similarity judgments. We determined the amount of overlap across the two subgrouping decisions and the perceptual speech characteristics driving the subgrouping decisions of each approach. The same speech samples produced by 48 talkers with HD were used for both grouping approaches. The STATCLUSTER approach had been conducted previously. The FREECLASS approach was conducted in the present study. Both approaches yielded four dysarthria subgroups, which overlapped between 50% and 78%. In both grouping approaches, overall bizarreness and speech rate characteristics accounted for the grouping decisions. In addition, voice abnormalities contributed to the grouping decisions in the FREECLASS approach. These findings suggest that apart from overall bizarreness ratings, indexing dysarthria severity, speech rate and voice characteristics may be important features to establish dysarthria subgroups in HD.
Publisher: Elsevier BV
Date: 10-2009
Publisher: Society for Neuroscience
Date: 15-12-2010
DOI: 10.1523/JNEUROSCI.2547-10.2010
Abstract: The left posterior inferior frontal cortex (IFC) is important for syntactic processing, and has been shown in many functional imaging studies to be differentially recruited for the processing of syntactically complex sentences relative to simpler ones. In the nonfluent variant of primary progressive aphasia (PPA), degeneration of the posterior IFC is associated with expressive and receptive agrammatism; however, the functional status of this region in nonfluent PPA is not well understood. Our objective was to determine whether the atrophic posterior IFC is differentially recruited for the processing of syntactically complex sentences in nonfluent PPA. Using structural and functional magnetic resonance imaging, we quantified tissue volumes and functional responses to a syntactic comprehension task in eight patients with nonfluent PPA, compared to healthy age-matched controls. In controls, the posterior IFC showed more activity for syntactically complex sentences than simpler ones, as expected. In nonfluent PPA patients, the posterior IFC was atrophic and, unlike controls, showed an equivalent level of functional activity for syntactically complex and simpler sentences. This abnormal pattern of functional activity was specific to the posterior IFC: the mid-superior temporal sulcus, another region modulated by syntactic complexity in controls, showed normal modulation by complexity in patients. A more anterior inferior frontal region was recruited by patients, but did not support successful syntactic processing. We conclude that in nonfluent PPA, the posterior IFC is not only structurally damaged, but also functionally abnormal, suggesting a critical role for this region in the breakdown of syntactic processing in this syndrome.
Publisher: Oxford University Press (OUP)
Date: 30-04-2018
DOI: 10.1093/BRAIN/AWY101
Publisher: Elsevier BV
Date: 08-2009
Publisher: Elsevier BV
Date: 12-2012
Publisher: American Speech Language Hearing Association
Date: 27-05-2019
DOI: 10.1044/2018_AJSLP-18-0192
Abstract: Auditory-perceptual assessment, in which trained listeners rate a large number of perceptual features of speech samples, is the gold standard for the differential diagnosis of motor speech disorders. The goal of this study was to investigate the feasibility of applying a similar, formalized auditory-perceptual approach to the assessment of language deficits in connected speech samples from individuals with aphasia. Twenty-seven common features of connected speech in aphasia were defined, each of which was rated on a 5-point scale. Three experienced researchers evaluated 24 connected speech samples from the AphasiaBank database, and 12 student clinicians evaluated subsets of 8 speech samples each. We calculated interrater reliability for each group of raters and investigated the validity of the auditory-perceptual approach by comparing feature ratings to related quantitative measures derived from transcripts and clinical measures, and by examining patterns of feature co-occurrence. Most features were rated with good-to-excellent interrater reliability by researchers and student clinicians. Most features demonstrated strong concurrent validity with respect to quantitative connected speech measures computed from AphasiaBank transcripts and/or clinical aphasia battery subscores. Factor analysis showed that 4 underlying factors, which we labeled Paraphasia, Logopenia, Agrammatism, and Motor Speech, accounted for 79% of the variance in connected speech profiles. Examination of individual patients' factor scores revealed striking diversity among individuals classified with a given aphasia type. Auditory-perceptual rating of connected speech in aphasia shows potential to be a comprehensive, efficient, reliable, and valid approach for characterizing connected speech in aphasia.
Publisher: Elsevier BV
Date: 05-2010
Publisher: MIT Press - Journals
Date: 2021
DOI: 10.1162/NOL_A_00031
Abstract: In this study, we investigated how the brain responds to task difficulty in linguistic and non-linguistic contexts. This is important for the interpretation of functional imaging studies of neuroplasticity in post-stroke aphasia, because of the inherent difficulty of matching or controlling task difficulty in studies with neurological populations. Twenty neurologically normal individuals were scanned with fMRI as they performed a linguistic task and a non-linguistic task, each of which had two levels of difficulty. Critically, the tasks were matched across domains (linguistic, non-linguistic) for accuracy and reaction time, such that the differences between the easy and difficult conditions were equivalent across domains. We found that non-linguistic demand modulated the same set of multiple demand (MD) regions that have been identified in many prior studies. In contrast, linguistic demand modulated MD regions to a much lesser extent, especially nodes belonging to the dorsal attention network. Linguistic demand modulated a subset of language regions, with the left inferior frontal gyrus most strongly modulated. The right hemisphere region homotopic to Broca’s area was also modulated by linguistic but not non-linguistic demand. When linguistic demand was mapped relative to non-linguistic demand, we also observed domain by difficulty interactions in temporal language regions as well as a widespread bilateral semantic network. In sum, linguistic and non-linguistic demand have strikingly different neural correlates. These findings can be used to better interpret studies of patients recovering from aphasia. Some reported activations in these studies may reflect task performance differences, while others can be more confidently attributed to neuroplasticity.
Publisher: Wiley
Date: 22-01-2014
DOI: 10.1002/HBM.22457
Publisher: Journal of Neurosurgery Publishing Group (JNSPG)
Date: 09-2015
Abstract: Transient aphasias are often observed in the first few days after a patient has undergone resection in the language-dominant hemisphere. The aims of this prospective study were to characterize the incidence and nature of these aphasias and to determine whether there are relationships between location of the surgical site and deficits in specific language domains. One hundred ten patients undergoing resection to the language-dominant hemisphere participated in the study. Language was evaluated prior to surgery and 2–3 days and 1 month postsurgery using the Western Aphasia Battery and the Boston Naming Test. Voxel-based lesion-symptom mapping was used to identify relationships between the surgical site location assessed on MRI and deficits in fluency, information content, comprehension, repetition, and naming. Seventy-one percent of patients were classified as aphasic based on the Western Aphasia Battery 2–3 days postsurgery, with deficits observed in each of the language domains examined. Fluency deficits were associated with resection of the precentral gyrus and adjacent inferior frontal cortex. Reduced information content of spoken output was associated with resection of the ventral precentral gyrus and posterior inferior frontal gyrus (pars opercularis). Repetition deficits were associated with resection of the posterior superior temporal gyrus. Naming deficits were associated with resection of the ventral temporal cortex, with midtemporal and posterior temporal damage more predictive of naming deficits than anterior temporal damage. By 1 month postsurgery, nearly all language deficits were resolved, and no language measure except for naming differed significantly from its presurgical level. These findings show that transient aphasias are very common after left hemisphere resective surgery and that the precise nature of the aphasia depends on the specific location of the surgical site. 
The patient cohort in this study provides a unique window into the neural basis of language because resections are discrete, their locations are not limited by vascular distribution or patterns of neurodegeneration, and language can be studied prior to substantial reorganization.
Publisher: MIT Press
Date: 2023
DOI: 10.1162/NOL_A_00114
Abstract: Imaging studies of language processing in clinical populations can be complicated to interpret for several reasons, one being the difficulty of matching the effortfulness of processing across individuals or tasks. To better understand how effortful linguistic processing is reflected in functional activity, we investigated the neural correlates of task difficulty in linguistic and non-linguistic contexts in the auditory modality and then compared our findings to a recent analogous experiment in the visual modality in a different cohort. Nineteen neurologically normal individuals were scanned with fMRI as they performed a linguistic task (semantic matching) and a non-linguistic task (melodic matching), each with two levels of difficulty. We found that left hemisphere frontal and temporal language regions, as well as the right inferior frontal gyrus, were modulated by linguistic demand and not by non-linguistic demand. This was broadly similar to what was previously observed in the visual modality. In contrast, the multiple demand (MD) network, a set of brain regions thought to support cognitive flexibility in many contexts, was modulated neither by linguistic demand nor by non-linguistic demand in the auditory modality. This finding was in striking contradistinction to what was previously observed in the visual modality, where the MD network was robustly modulated by both linguistic and non-linguistic demand. Our findings suggest that while the language network is modulated by linguistic demand irrespective of modality, modulation of the MD network by linguistic demand is not inherent to linguistic processing, but rather depends on specific task factors.
Publisher: MIT Press
Date: 21-09-2023
DOI: 10.1162/NOL_A_00115
Abstract: After a stroke, individuals with aphasia often recover to a certain extent over time. This recovery process may be dependent on the health of surviving brain regions. Leukoaraiosis (white matter hyperintensities on MRI reflecting cerebral small vessel disease) is one indication of compromised brain health and is associated with cognitive and motor impairment. Previous studies have suggested that leukoaraiosis may be a clinically relevant predictor of aphasia outcomes and recovery, although findings have been inconsistent. We investigated the relationship between leukoaraiosis and aphasia in the first year after stroke. We recruited 267 patients with acute left hemispheric stroke and coincident fluid attenuated inversion recovery MRI. Patients were evaluated for aphasia within 5 days of stroke, and 174 patients presented with aphasia acutely. Of these, 84 patients were evaluated at ∼3 months post-stroke or later to assess longer-term speech and language outcomes. Multivariable regression models were fit to the data to identify any relationships between leukoaraiosis and initial aphasia severity, extent of recovery, or longer-term aphasia severity. We found that leukoaraiosis was present to varying degrees in 90% of patients. However, leukoaraiosis did not predict initial aphasia severity, aphasia recovery, or longer-term aphasia severity. The lack of any relationship between leukoaraiosis severity and aphasia recovery may reflect the anatomical distribution of cerebral small vessel disease, which is largely medial to the white matter pathways that are critical for speech and language function.
Publisher: Wiley
Date: 29-04-2004
Publisher: Elsevier BV
Date: 10-2006
DOI: 10.1016/J.NEUROIMAGE.2006.05.032
Abstract: Neural responses to unfamiliar non-native phonemes varying in the extent to which they can be articulated were studied with functional magnetic resonance imaging (fMRI). Both superior temporal (auditory) and precentral (motor) areas were activated by passive speech perception, and both distinguished non-native from native phonemes, with greater signal change in response to non-native phonemes. Furthermore, speech-responsive motor regions and superior temporal sites were functionally connected. However, only in auditory areas did activity covary with the producibility of non-native phonemes. These data suggest that auditory areas are crucial for the transformation from acoustic signal to phonetic code, but the motor system also plays an active role, which may involve the internal generation of candidate phonemic categorizations. These 'motor' categorizations would then be compared to the acoustic input in auditory areas. The data suggest that speech perception is neither purely sensory nor motor, but rather a sensorimotor process.
Publisher: Elsevier BV
Date: 09-2018
Publisher: Oxford University Press (OUP)
Date: 04-2003
DOI: 10.1093/BRAIN/AWG082
Abstract: Although aphasia is often characterized as a selective impairment in language function, left hemisphere lesions may cause impairments in semantic processing of auditory information, not only in verbal but also in nonverbal domains. We assessed the 'online' relationship between verbal and nonverbal auditory processing by examining the ability of 30 left hemisphere-damaged aphasic patients to match environmental sounds and linguistic phrases to corresponding pictures. The verbal and nonverbal task components were matched carefully through a norming study. Twenty-one age-matched controls and five right hemisphere-damaged patients were also tested to provide further reference points. We found that, while the aphasic groups were impaired relative to normal controls, they were impaired to the same extent in both domains, with accuracy and reaction time for verbal and nonverbal trials revealing unusually high correlations (r = 0.74 for accuracy, r = 0.95 for reaction time). Severely aphasic patients tended to perform worse in both domains, but lesion size did not correlate with performance. Lesion overlay analysis indicated that damage to posterior regions in the left middle and superior temporal gyri and to the inferior parietal lobe was a predictor of deficits in processing for both speech and environmental sounds. The lesion mapping and further statistical assessments reliably revealed a posterior superior temporal region (Wernicke's area, traditionally considered a language-specific region) as being differentially more important for processing nonverbal sounds compared with verbal sounds. These results suggest that, in most cases, processing of meaningful verbal and nonverbal auditory information breaks down together in stroke and that subsequent recovery of function applies to both domains. This suggests that language shares neural resources with those used for processing information in other domains.
Publisher: Elsevier BV
Date: 10-2009
Publisher: Wiley
Date: 17-04-2018
DOI: 10.1002/HBM.24077
Publisher: MIT Press - Journals
Date: 02-2016
DOI: 10.1162/JOCN_A_00901
Abstract: Individuals with primary progressive aphasia (PPA) show selective breakdown in regions within the proposed dorsal (articulatory–phonological) and ventral (lexical–semantic) pathways involved in language processing. Phonological STM impairment, which has been attributed to selective damage to dorsal pathway structures, is considered to be a distinctive feature of the logopenic variant of PPA. By contrast, phonological abilities are considered to be relatively spared in the semantic variant and are largely unexplored in the nonfluent/agrammatic variant. Comprehensive assessment of phonological ability in the three variants of PPA has not been undertaken. We investigated phonological processing skills in a group of participants with PPA as well as healthy controls, with the goal of identifying whether patterns of performance support the dorsal versus ventral functional–anatomical framework and to discern whether phonological ability differs among PPA subtypes. We also explored the neural bases of phonological performance using voxel-based morphometry. Phonological performance was impaired in patients with damage to dorsal pathway structures (nonfluent/agrammatic and logopenic variants), with logopenic participants demonstrating particular difficulty on tasks involving nonwords. Binary logistic regression revealed that select phonological tasks predicted diagnostic group membership in the less fluent variants of PPA with a high degree of accuracy, particularly in conjunction with a motor speech measure. Brain–behavior correlations indicated a significant association between the integrity of gray matter in frontal and temporoparietal regions of the left hemisphere and phonological skill. Findings confirm the critical role of dorsal stream structures in phonological processing and demonstrate unique patterns of impaired phonological processing in logopenic and nonfluent/agrammatic variants of PPA.
Publisher: Elsevier BV
Date: 09-2006
DOI: 10.1016/J.CUB.2006.07.060
Abstract: The thesis of embodied semantics holds that conceptual representations accessed during linguistic processing are, in part, equivalent to the sensory-motor representations required for the enactment of the concepts described. Here, using fMRI, we tested the hypothesis that areas in human premotor cortex that respond both to the execution and observation of actions (mirror neuron areas) are key neural structures in these processes. Participants observed actions and read phrases relating to foot, hand, or mouth actions. In the premotor cortex of the left hemisphere, a clear congruence was found between effector-specific activations of visually presented actions and of actions described by literal phrases. These results suggest a key role of mirror neuron areas in the re-enactment of sensory-motor representations during conceptual processing of actions invoked by linguistic stimuli.
Publisher: Informa UK Limited
Date: 02-09-2016
Publisher: Journal of Neurosurgery Publishing Group (JNSPG)
Date: 27-03-2023
DOI: 10.3171/CASE22504
Abstract: Apraxia of speech is a disorder of speech-motor planning in which articulation is effortful and error-prone despite normal strength of the articulators. Phonological alexia and agraphia are disorders of reading and writing disproportionately affecting unfamiliar words. These disorders are almost always accompanied by aphasia. A 36-year-old woman underwent resection of a grade IV astrocytoma based in the left middle precentral gyrus, including a cortical site associated with speech arrest during electrocortical stimulation mapping. Following surgery, she exhibited moderate apraxia of speech and difficulty with reading and spelling, both of which improved but persisted 6 months after surgery. A battery of speech and language assessments was administered, revealing preserved comprehension, naming, cognition, and orofacial praxis, with largely isolated deficits in speech-motor planning and the spelling and reading of nonwords. This case describes a specific constellation of speech-motor and written language symptoms—apraxia of speech, phonological agraphia, and phonological alexia in the absence of aphasia—which the authors theorize may be attributable to disruption of a single process of “motor-phonological sequencing.” The middle precentral gyrus may play an important role in the planning of motorically complex phonological sequences for production, independent of output modality.
Publisher: Elsevier BV
Date: 2017
Publisher: Elsevier BV
Date: 05-2018
Publisher: Oxford University Press (OUP)
Date: 15-05-2008
Abstract: The role of superior temporal cortex in speech comprehension is well established, but the complete network of regions involved in understanding language in ecologically valid contexts is less clearly understood. In a functional magnetic resonance imaging (fMRI) study, we presented 24 subjects with auditory or audiovisual narratives, and used model-free intersubject correlational analyses to reveal brain areas that were modulated in a consistent way across subjects during the narratives. Conventional comparisons to a resting state were also performed. Both analyses showed the expected recruitment of superior temporal areas; however, the intersubject correlational analyses also revealed an extended network of areas involved in narrative speech comprehension. Two findings stand out in particular. Firstly, many areas in the "default mode" network (typically deactivated relative to rest) were systematically modulated by the time-varying properties of the auditory or audiovisual input. These areas included the anterior cingulate and adjacent medial frontal cortex, and the posterior cingulate and adjacent precuneus. Secondly, extensive bilateral inferior frontal and premotor regions were implicated in auditory as well as audiovisual language comprehension. This extended network of regions may be important for higher-level linguistic processes, and interfaces with extralinguistic cognitive, affective, and interpersonal systems.
Publisher: Wiley
Date: 08-2011
DOI: 10.1002/ANA.22424
Publisher: Elsevier BV
Date: 11-2017
Publisher: Elsevier BV
Date: 2017
Publisher: Informa UK Limited
Date: 17-01-2017
Publisher: Informa UK Limited
Date: 02-01-2021
Publisher: Society for Neuroscience
Date: 07-07-2004
DOI: 10.1523/JNEUROSCI.0504-04.2004
Abstract: Motion cues can be surprisingly powerful in defining objects and events. Specifically, a handful of point-lights attached to the joints of a human actor will evoke a vivid percept of action when the body is in motion. The perception of point-light biological motion activates posterior cortical areas of the brain. On the other hand, observation of others' actions is known to also evoke activity in motor and premotor areas in frontal cortex. In the present study, we investigated whether point-light biological motion animations would lead to activity in frontal cortex as well. We performed a human functional magnetic resonance imaging study on a high-field-strength magnet and used a number of methods to increase signal, as well as cortical surface-based analysis methods. Areas that responded selectively to point-light biological motion were found in lateral and inferior temporal cortex and in inferior frontal cortex. The robust responses we observed in frontal areas indicate that these stimuli can also recruit action observation networks, although they are very simplified and characterize actions by motion cues alone. The finding that even point-light animations evoke activity in frontal regions suggests that the motor system of the observer may be recruited to “fill in” these simplified displays.
Publisher: Cambridge University Press (CUP)
Date: 02-2003
DOI: 10.1017/S0305000902005512
Abstract: Children learning English often omit grammatical words and morphemes, but there is still much debate over exactly why and in what contexts they do so. This study investigates the acquisition of three elements which instantiate the grammatical category of ‘inflection’ – copula be, auxiliary be, and 3sg present agreement – in longitudinal transcripts from five children, whose ages range from 1 to 3 in the corpora examined. The aim is to determine whether inflection emerges as a unitary category, as predicted by some recent generative accounts, or whether it develops in a more piecemeal fashion, consistent with constructivist accounts. It is found that for each child the relative pace of development of the three morphemes studied varies significantly, suggesting that these morphemes do not depend on a unitary underlying category. Furthermore, early on, be is often used primarily with particular closed-class subjects, suggesting that forms such as he's and that's are learned as lexically specific constructions. These findings are argued to support the idea that children learn ‘inflection’ (and by hypothesis, other functional categories) not by filling in pre-specified slots in an innate structure, but by learning some specific constructions involving particular lexical items, before going on to gradually abstract more general construction types.
Publisher: Oxford University Press (OUP)
Date: 23-08-2016
DOI: 10.1093/BRAIN/AWW218
Publisher: Wiley
Date: 03-2017
DOI: 10.1002/ANA.24885
Publisher: Elsevier BV
Date: 05-2022
Publisher: MIT Press
Date: 12-2020
DOI: 10.1162/NOL_A_00025
Abstract: Recovery from aphasia is thought to depend on neural plasticity, that is, the functional reorganization of surviving brain regions such that they take on new or expanded roles in language processing. We carried out a systematic review and meta-analysis of all articles published between 1995 and early 2020 that have described functional imaging studies of six or more individuals with post-stroke aphasia, and have reported analyses bearing on neuroplasticity of language processing. Each study was characterized and appraised in detail, with particular attention to three critically important methodological issues: task performance confounds, contrast validity, and correction for multiple comparisons. We identified 86 studies describing a total of 561 relevant analyses. We found that methodological limitations related to task performance confounds, contrast validity, and correction for multiple comparisons have been pervasive. Only a few claims about language processing in individuals with aphasia are strongly supported by the extant literature: First, left hemisphere language regions are less activated in individuals with aphasia than in neurologically normal controls and second, in cohorts with aphasia, activity in left hemisphere language regions, and possibly a temporal lobe region in the right hemisphere, is positively correlated with language function. There is modest, equivocal evidence for the claim that individuals with aphasia differentially recruit right hemisphere homotopic regions, but no compelling evidence for differential recruitment of additional left hemisphere regions or domain-general networks. There is modest evidence that left hemisphere language regions return to function over time, but no compelling longitudinal evidence for dynamic reorganization of the language network.
Publisher: American Society of Neuroradiology (ASNR)
Date: 22-09-2022
DOI: 10.3174/AJNR.A7629
Publisher: Cambridge University Press (CUP)
Date: 07-04-2010
DOI: 10.1017/S1355617710000408
Abstract: There is increasing recognition that set-shifting, a form of cognitive control, is mediated by different neural structures. However, these regions have not yet been carefully identified as many studies do not account for the influence of component processes (e.g., motor speed). We investigated gray matter correlates of set-shifting while controlling for component processes. Using the Design Fluency (DF), Trail Making Test (TMT), and Color Word Interference (CWI) subtests from the Delis-Kaplan Executive Function System (D-KEFS), we investigated the correlation between set-shifting performance and gray matter volume in 160 subjects with neurodegenerative disease, mild cognitive impairment, and healthy older adults using voxel-based morphometry. All three set-shifting tasks correlated with multiple, widespread gray matter regions. After controlling for the component processes, set-shifting performance correlated with focal regions in prefrontal and posterior parietal cortices. We also identified bilateral prefrontal cortex and the right posterior parietal lobe as common sites for set-shifting across the three tasks. There was a high degree of multicollinearity between the set-shifting conditions and the component processes of TMT and CWI, suggesting DF may better isolate set-shifting regions. Overall, these findings highlight the neuroanatomical correlates of set-shifting and the importance of controlling for component processes when investigating complex cognitive tasks. (JINS, 2010, 16, 640–650.)
Publisher: Oxford University Press (OUP)
Date: 09-04-2013
DOI: 10.1093/BRAIN/AWT066
Publisher: Journal of Neurosurgery Publishing Group (JNSPG)
Date: 10-2022
Abstract: Electrocortical stimulation mapping (ECS) is widely used to identify essential language areas, but sentence-level processing has rarely been investigated. While undergoing awake surgery in the dominant left hemisphere, 6 subjects were asked to comprehend sentences varying in their demands on syntactic processing. In all 6 subjects, stimulation of the inferior frontal gyrus disrupted comprehension of passive sentences, which critically depend on syntactic processing to correctly assign grammatical roles, without disrupting comprehension of simpler tasks. In 4 of the 6 subjects, these sites were localized to the pars opercularis. Sentence comprehension was also disrupted by stimulation of other perisylvian sites, but in a more variable manner. These findings suggest that there may be language regions that differentially contribute to sentence processing and which therefore are best identified using sentence-level tasks. The functional consequences of resecting these sites remain to be investigated.
Publisher: Informa UK Limited
Date: 11-2017
Publisher: Wiley
Date: 13-04-2009
DOI: 10.1002/HBM.20782
Publisher: Elsevier BV
Date: 09-2012
Publisher: MDPI AG
Date: 30-10-2022
DOI: 10.3390/DATA7110148
Abstract: Auditory-perceptual rating of connected speech in aphasia (APROCSA) is a system in which trained listeners rate a variety of perceptual features of connected speech samples, representing the disruptions and abnormalities that commonly occur in aphasia. APROCSA has shown promise as an approach for quantifying expressive speech and language function in individuals with aphasia. The aim of this study was to acquire and share a set of audiovisual recordings of connected speech samples from a diverse group of individuals with aphasia, along with consensus ratings of APROCSA features, for future use as training materials to teach others how to use the APROCSA system. Connected speech samples were obtained from six individuals with chronic post-stroke aphasia. The first five minutes of participant speech were excerpted from each sample, and five researchers independently evaluated each sample using APROCSA, rating its 27 features on a five-point scale. The researchers then discussed each feature in turn to obtain consensus ratings. The dataset will provide a useful, freely accessible resource for researchers, clinicians, and students to learn how to evaluate aphasic speech with an auditory-perceptual approach.
Location: United States of America
Location: United States of America
Start Date: 2017
End Date: 2020
Funder: National Institute on Deafness and Other Communication Disorders
Start Date: 2014
End Date: 2025
Funder: National Institute on Deafness and Other Communication Disorders