ORCID Profile
0000-0002-4256-1338
Current Organisations
Bond University, Faculty of Society and Design; Macquarie University
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Performing Arts and Creative Writing | Performing Arts and Creative Writing not elsewhere classified | Psychology and Cognitive Sciences not elsewhere classified | Music | Sensory Processes, Perception And Performance | Other Psychology and Cognitive Sciences | Biological Psychology (Neuropsychology, Psychopharmacology, Physiological Psychology) | Developmental Psychology and Ageing | Cognitive Science Not Elsewhere Classified | Indigenous Performing Arts | Mental Health | Music Therapy | Psychology not elsewhere classified | Social and Community Psychology
Expanding Knowledge in Psychology and Cognitive Sciences | Music | Cultural Understanding not elsewhere classified | The performing arts (incl. music, theatre and dance) | Expanding Knowledge through Studies of the Creative Arts and Writing | The Media | Learner and Learning Processes | Behavioural and cognitive sciences | Community Service (excl. Work) not elsewhere classified | Aboriginal and Torres Strait Islander heritage
Publisher: University of California Press
Date: 06-2018
Abstract: Death Metal music with violent themes is characterized by vocalizations with unnaturally low fundamental frequencies and high levels of distortion and roughness. These attributes decrease the signal-to-noise ratio, rendering linguistic content difficult to understand and leaving the impression of growling, screaming, or other non-linguistic vocalizations associated with aggression and fear. Here, we compared the ability of fans and non-fans of Death Metal to accurately perceive sung words extracted from Death Metal music. We also examined whether music training confers an additional benefit to intelligibility. In a 2 × 2 between-subjects factorial design (fans/non-fans, musicians/nonmusicians), four groups of participants (n = 16 per group) were presented with 24 sung words (one per trial), extracted from the popular American Death Metal band Cannibal Corpse. On each trial, participants completed a four-alternative forced-choice word recognition task. Intelligibility (word recognition accuracy) was above chance for all groups and was significantly enhanced for fans (65.88%) relative to non-fans (51.04%). In the fan group, intelligibility between musicians and nonmusicians was statistically similar. In the non-fan group, intelligibility was significantly greater for musicians relative to nonmusicians. Results are discussed in the context of perceptual learning and the benefits of expertise for decoding linguistic information in sub-optimal acoustic conditions.
Publisher: Elsevier BV
Date: 08-2018
Publisher: Wiley
Date: 11-2004
Publisher: Elsevier BV
Date: 05-2020
Publisher: Informa UK Limited
Date: 20-06-2023
Publisher: Informa UK Limited
Date: 20-01-2020
DOI: 10.1080/09658211.2020.1713379
Abstract: Music is highly efficient at evoking autobiographical memories in both healthy and neurological populations. Music evoked autobiographical memories (MEAMs) are preserved in people with Alzheimer's Dementia (AD), and occur at the same frequency as in healthy people. To date there has been no investigation of the integrity of MEAMs in people with non-AD dementia. This study provides the first characterisation of the frequency and specificity of MEAMs and photo evoked autobiographical memories (PEAMs) in 6 people with Behavioural variant frontotemporal dementia (Bv-FTD). We found significantly reduced frequency and specificity of MEAMs and PEAMs in people with Bv-FTD compared with healthy elderly. This supports the known decline in autobiographical memory function in this population, and the integral role of medial frontal regions in the retrieval of MEAMs. Our findings highlight that the mnemonic effects of music vary between people with different types of dementia, which has implications for dementia care.
Publisher: Elsevier BV
Date: 07-2018
DOI: 10.1016/J.IJPSYCHO.2018.05.003
Abstract: Music and language both rely on the processing of spectral (pitch, timbre) and temporal (rhythm) information to create structure and meaning from incoming auditory streams. Behavioral results have shown that interrupting a melodic stream with unexpected changes in timbre leads to reduced syntactic processing. Such findings suggest that syntactic processing is conditional on successful streaming of incoming sequential information. The current study used event-related potentials (ERPs) to investigate whether (1) the effect of alternating timbres on syntactic processing is reflected in a reduced brain response to syntactic violations, and (2) the phenomenon is similar for music and language. Participants listened to melodies and sentences with either one timbre (piano or one voice) or three timbres (piano, guitar, and vibraphone, or three different voices). Half the stimuli contained syntactic violations: an out-of-key note in the melodies, and a phrase-structure violation in the sentences. We found smaller ERPs to syntactic violations in music in the three-timbre compared to the one-timbre condition, reflected in a reduced early right anterior negativity (ERAN). A similar but non-significant pattern was observed for language stimuli in both the early left anterior negativity (ELAN) and the left anterior negativity (LAN) ERPs. The results suggest that disruptions to auditory streaming may interfere with syntactic processing, especially for melodic sequences.
Publisher: SAGE Publications
Date: 10-10-2020
Abstract: Extreme metal and rap music with violent themes are sometimes blamed for eliciting antisocial behaviours, but growing evidence suggests that music with violent themes can have positive emotional, cognitive, and social consequences for fans. We addressed this apparent paradox by comparing how fans of violent and non-violent music respond emotionally to music. We also characterised the psychosocial functions of music for fans of violent and non-violent music, and their passion for music. Fans of violent extreme metal (n = 46), violent rap (n = 49), and non-violent classical music (n = 50) responded to questionnaires evaluating the cognitive (self-reflection, self-regulation) and social (social bonding) functions of their preferred music and the nature of their passion for it. They then listened to four one-minute excerpts of music and rated ten emotional descriptors for each excerpt. The top five emotions reported by the three groups of fans were positive, with empowerment and joy the emotions rated highest. However, compared with classical music fans, fans of violent music assigned significantly lower ratings to positive emotions and higher ratings to negative emotions. Fans of violent music also utilised their preferred music for positive psychosocial functions to a similar or sometimes greater extent than classical fans. Harmonious passion for music predicted positive emotional outcomes for all three groups of fans, whereas obsessive passion predicted negative emotional outcomes. Those high in harmonious passion also tended to use music for cognitive and social functions. We propose that fans of violent music use their preferred music to induce an equal balance of positive and negative emotions.
Publisher: American Psychological Association (APA)
Date: 2004
Publisher: University of California Press
Date: 04-2016
Abstract: We examined explicit processing of musical syntax and tonality in a group of Han Chinese Mandarin speakers with congenital amusia, and the extent to which pitch discrimination impairments were associated with syntax and tonality processing. In Experiment 1, we assessed whether congenital amusia is associated with impaired explicit processing of musical syntax. Congruity ratings were examined for syntactically regular or irregular endings in harmonic and melodic contexts. Unlike controls, amusic participants failed to explicitly distinguish regular from irregular endings in both contexts. Surprisingly, however, a concurrent manipulation of pitch distance did not affect the processing of musical syntax for amusics, and their impaired music-syntactic processing was uncorrelated with their pitch discrimination thresholds. In Experiment 2, we assessed tonality perception using a probe-tone paradigm. Recovery of the tonal hierarchy was less evident for the amusic group than for the control group, and this reduced sensitivity to tonality in amusia was also unrelated to poor pitch discrimination. These findings support the view that music structure is processed by cognitive and neural resources that operate independently of pitch discrimination, and that these resources are impaired in explicit judgments for individuals with congenital amusia.
Publisher: Society for Neuroscience
Date: 30-01-2020
DOI: 10.1523/JNEUROSCI.1399-19.2020
Abstract: In tonal music, continuous acoustic waveforms are mapped onto discrete, hierarchically arranged, internal representations of pitch. To examine the neural dynamics underlying this transformation, we presented male and female human listeners with tones embedded within a Western tonal context while recording their cortical activity using magnetoencephalography. Machine learning classifiers were then trained to decode different tones from their underlying neural activation patterns at each peristimulus time sample, providing a dynamic measure of their dissimilarity in cortex. Comparing the time-varying dissimilarity between tones with the predictions of acoustic and perceptual models, we observed a temporal evolution in the brain's representational structure. Whereas initial dissimilarities mirrored their fundamental-frequency separation, dissimilarities beyond 200 ms reflected the perceptual status of each tone within the tonal hierarchy of Western music. These effects occurred regardless of stimulus regularities within the context or whether listeners were engaged in a task requiring explicit pitch analysis. Lastly, patterns of cortical activity that discriminated between tones became increasingly stable in time as the information coded by those patterns transitioned from low-to-high level properties. Current results reveal the dynamics with which the complex perceptual structure of Western tonal music emerges in cortex at the timescale of an individual tone. SIGNIFICANCE STATEMENT Little is understood about how the brain transforms an acoustic waveform into the complex perceptual structure of musical pitch. Applying neural decoding techniques to the cortical activity of human subjects engaged in music listening, we measured the dynamics of information processing in the brain on a moment-to-moment basis as subjects heard each tone. In the first 200 ms after onset, transient patterns of neural activity coded the fundamental frequency of tones.
Subsequently, a period emerged during which more temporally stable activation patterns coded the perceptual status of each tone within the “tonal hierarchy” of Western music. Our results provide a crucial link between the complex perceptual structure of tonal music and the underlying neural dynamics from which it emerges.
Publisher: Frontiers Media SA
Date: 10-2018
Publisher: American Psychological Association (APA)
Date: 2009
DOI: 10.1037/A0016456
Abstract: The authors examined how the structural attributes of tonality and meter influence musical pitch-time relations. Listeners heard a musical context followed by probe events that varied in pitch class and temporal position. Tonal and metric hierarchies contributed additively to the goodness-of-fit of probes, with pitch class exerting a stronger influence than temporal position (Experiment 1), even when listeners attempted to ignore pitch (Experiment 2). Speeded classification tasks confirmed this asymmetry. Temporal classification was biased by tonal stability (Experiment 3), but pitch classification was unaffected by temporal position (Experiment 4). Experiments 5 and 6 ruled out explanations based on the presence of pitch classes and temporal positions in the context, unequal stimulus quantity, and discriminability. The authors discuss how typical Western music biases attention toward pitch and distinguish between dimensional discriminability and salience.
Publisher: MDPI AG
Date: 27-10-2022
Abstract: Chanting is practiced in many religious and secular traditions and involves rhythmic vocalization or mental repetition of a sound or phrase. This study examined how chanting relates to cognitive function, altered states, and quality of life across a wide range of traditions. A global survey was used to assess experiences during chanting, including flow states, mystical experiences, mindfulness, and mind wandering. Further, attributes of chanting were assessed to determine their association with altered states and cognitive benefits, and whether psychological correlates of chanting are associated with quality of life. Responses were analyzed from 456 English-speaking participants who regularly chant, across 32 countries and various chanting traditions. Results revealed that different aspects of chanting were associated with distinctive experiential outcomes. Stronger intentionality (devotion, intention, sound) and higher chanting engagement (experience, practice duration, regularity) were associated with altered states and cognitive benefits. Participants whose main practice was call and response chanting reported higher scores of mystical experiences. Participants whose main practice was repetitive prayer reported lower mind wandering. Lastly, intentionality and engagement were associated with quality of life indirectly through altered states and cognitive benefits. This research sheds new light on the phenomenology and psychological consequences of chanting across a range of practices and traditions.
Publisher: Wiley
Date: 15-10-2012
DOI: 10.1111/J.1469-8986.2012.01472.X
Abstract: The aim of this study was to determine if duration-related stress in speech and music is processed in a similar way in the brain. To this end, we tested 20 adults for their abstract mismatch negativity (MMN) event-related potentials to two duration-related stress patterns: stress on the first syllable or note (long-short), and stress on the second syllable or note (short-long). A significant MMN was elicited for both speech and music except for the short-long speech stimulus. The long-short stimuli elicited larger MMN amplitudes for speech and music compared to short-long stimuli. An extra negativity, the late discriminative negativity (LDN), was observed only for music. The larger MMN amplitude for long-short stimuli might be due to the familiarity of the stress pattern in speech and music. The presence of LDN for music may reflect greater long-term memory transfer for music stimuli.
Publisher: University of California Press
Date: 1992
DOI: 10.2307/40285563
Abstract: Listeners with a moderate amount of musical training rated the distance between the first and final key of short chorale excerpts under one of four presentation conditions. The distance between keys, or modulation distance, was either zero, one, or two steps in either the clockwise or counterclockwise direction on the cycle of fifths. Presentation conditions were four-voice harmonic sequences excerpted from the complete set of Bach chorales, single voices of the latter sequences, four-voice harmonic sequences simplified to block chords, and single voices of the latter sequences. Consistent with earlier findings (Thompson & Cuddy, 1989), judgments for both four-voice harmonic presentations and single-voice presentations revealed a close correspondence between modulation distance and judged distance. Ratings for harmonic sequences within a given key distance, however, showed influences of direction of modulation and of harmonic progression that were not reflected in ratings for single voices. The findings suggest that harmony and melody follow somewhat different principles in the process of identifying key change.
Publisher: SAGE Publications
Date: 09-2009
DOI: 10.1177/1029864909013002061
Abstract: It is commonly argued that music originated in human evolution as an adaptation to selective pressures. In this paper we present an alternative account in which music originated from a more general adaptation known as a Theory of Mind (ToM). ToM allows an individual to recognise the mental and emotional state of conspecifics, and is pivotal in the cultural transmission of knowledge. We propose that a specific form of ToM, Affective Engagement, provides the foundation for the emergence of music. Underpinned by the mirror neuron system of empathy and imitation, music achieves engagement by drawing from pre-existing functions across multiple modalities. As a multimodal phenomenon, music generates an emotional experience through the broadened activation of channels that are to be empathically matched by the audio-visual mirror neuron system.
Publisher: Springer Science and Business Media LLC
Date: 16-01-2018
DOI: 10.1038/S41598-018-19222-3
Abstract: In music, the perception of pitch is governed largely by its tonal function given the preceding harmonic structure of the music. While behavioral research has advanced our understanding of the perceptual representation of musical pitch, relatively little is known about its representational structure in the brain. Using magnetoencephalography (MEG), we recorded evoked neural responses to different tones presented within a tonal context. Multivariate Pattern Analysis (MVPA) was applied to "decode" the stimulus that listeners heard based on the underlying neural activity. We then characterized the structure of the brain's representation using decoding accuracy as a proxy for representational distance, and compared this structure to several well-established perceptual and acoustic models. The observed neural representation was best accounted for by a model based on the Standard Tonal Hierarchy, whereby differences in the neural encoding of musical pitches correspond to their differences in perceived stability. By confirming that perceptual differences honor those in the underlying neuronal population coding, our results provide a crucial link in understanding the cognitive foundations of musical pitch across psychological and neural domains.
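The decoding-as-distance logic described in this abstract can be illustrated with a minimal sketch. This is not the authors' pipeline: the data here are simulated, and every name is hypothetical. The idea it demonstrates is simply that pairwise cross-validated classifier accuracy between two stimuli can serve as a proxy for the distance between their neural representations.

```python
# Illustrative sketch (simulated data, not the published analysis):
# pairwise cross-validated decoding accuracy as representational distance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_sensors, n_tones = 40, 32, 4
# Simulated sensor patterns: each tone has its own mean pattern plus noise.
means = rng.normal(0, 1, (n_tones, n_sensors))
data = {t: means[t] + rng.normal(0, 1.5, (n_trials, n_sensors))
        for t in range(n_tones)}

def decoding_accuracy(a, b):
    """Cross-validated accuracy of a linear classifier separating two tones."""
    X = np.vstack([a, b])
    y = np.array([0] * len(a) + [1] * len(b))
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

# Dissimilarity matrix: the more decodable two tones are from each other,
# the more distinct their (simulated) neural codes.
dissim = np.zeros((n_tones, n_tones))
for i in range(n_tones):
    for j in range(i + 1, n_tones):
        acc = decoding_accuracy(data[i], data[j])
        dissim[i, j] = dissim[j, i] = acc

print(dissim.round(2))
```

Comparing such a matrix against the distances predicted by competing models (acoustic versus perceptual) is the standard representational-similarity step the abstract describes.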
Publisher: Informa UK Limited
Date: 12-2019
DOI: 10.2147/JPR.S212080
Publisher: MDPI AG
Date: 15-03-2023
Abstract: Many people listen to music that conveys challenging emotions such as sadness and anger, despite the commonly assumed purpose of media being to elicit pleasure. We propose that eudaimonic motivation, the desire to engage with aesthetic experiences to be challenged and facilitate meaningful experiences, can explain why people listen to music containing such emotions. However, it is unknown whether music containing violent themes can facilitate such meaningful experiences. In this investigation, three studies were conducted to determine the implications of eudaimonic and hedonic (pleasure-seeking) motivations for fans of music with violent themes. In Study 1, we developed and tested a new scale and showed that fans exhibit high levels of both types of motivation. Study 2 further validated the new scale and provided evidence that the two types of motivations are associated with different affective outcomes. Study 3 revealed that fans of violently themed music exhibited higher levels of eudaimonic motivation and lower levels of hedonic motivation than fans of non-violently themed music. Taken together, the findings support the notion that fans of music with violent themes are driven to engage with this music to be challenged and to pursue meaning, as well as to experience pleasure. Implications for fans’ well-being and future applications of the new measure are discussed.
Publisher: Informa UK Limited
Date: 10-05-2018
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 15-06-2022
DOI: 10.1097/AUD.0000000000001083
Abstract: Children with hearing loss tend to have poorer psychosocial and quality of life outcomes than their typical-hearing (TH) peers—particularly in the areas of peer relationships and school functioning. A small number of studies for TH children have suggested that group-based music activities are beneficial for prosocial outcomes and help develop a sense of belonging. While one might question whether perceptual limitations would impede satisfactory participation in musical activities, findings from a few studies have suggested that group music activities may have similar benefits for children with hearing loss as well. It is important to note that the effect of music on psychosocial outcomes has primarily been investigated at an anecdotal level. The objective of this study was to explore the effect of a music training program on psychosocial and quality of life outcomes for children with hearing loss. It was hypothesized that music training would provide benefits for domains centered upon peer relationships and prosocial measures. Fourteen children aged 6 to 9 years with prelingual sensorineural hearing loss (SNHL) participated in a 12-week music training program that consisted of group-based face-to-face music therapy supplemented by online music apps. The design was a pseudorandomized, longitudinal study (9 participants were waitlisted, initially serving as a passive control group). Psychosocial wellbeing and quality of life were assessed using a questionnaire battery comprising the Strengths and Difficulties Questionnaire (SDQ), the Pediatric Quality of Life Inventory, the Hearing Environments and Reflection on Quality of Life (HEAR-QL), and the Glasgow Children’s Benefit Inventory. For comparative purposes, responses were measured from 16 TH children who ranged in age from 6 to 9 years. At baseline, children with SNHL had poorer outcomes for internalizing problems, and all measures of the HEAR-QL compared with the TH children.
There were no differences for general psychosocial and physical health. After music training, SDQ internalizing problems such as peer relationships and emotional regulation were significantly reduced for the children with SNHL. There were no changes for any outcomes for the passive control group. Additional benefits were noted for emotional and learning factors on the Glasgow Children’s Benefit Inventory. However, there were no significant changes for any psychosocial and quality of life outcomes as measured by the Pediatric Quality of Life Inventory or HEAR-QL instruments. The present study provides initial evidence that music training has a positive effect on at least some psychosocial and quality of life outcomes for children with hearing loss. As they are at a greater risk of poorer psychosocial and quality of life outcomes, these findings are cause for cautious optimism. Children with hearing loss should be encouraged to participate in group-based musical activities.
Publisher: University of California Press
Date: 09-2006
Abstract: The rigors of establishing innateness and domain specificity pose challenges to adaptationist models of music evolution. In articulating a series of constraints, the authors of the target articles provide strategies for investigating the potential origins of music. We propose additional approaches for exploring theories based on exaptation. We discuss a view of music as a multimodal system of engaging with affect, enabled by capacities of symbolism and a theory of mind.
Publisher: University of California Press
Date: 1989
DOI: 10.2307/40285446
Abstract: This report examines Clynes's theory of "pulse" for performances of music by Mozart and Beethoven (e.g., Clynes, 1983, 1987). In three experiments that used a total of seven different compositions, an analysis-by-synthesis approach was used to examine the repetitive patterns of timing and loudness thought to be associated with performances of Mozart and Beethoven. Across performances, judgments by trained musicians provided support for some of the basic claims made by Clynes. However, judgments of individual performances were not always consistent with predictions. In Experiment 1, melodies were judged to be more musical if they were played with the pulse than if they were played with an altered version of the pulse or if they were played without expression. In Experiment 2, listeners were asked to judge whether performances of Mozart were "Mozartian" and whether performances of Beethoven were "Beethovenian." Ratings were highest if the pulse of the composer was implemented, and significantly lower if the pulse of another composer was implemented (e.g., the Mozart pulse in the Beethoven piece) in all or part of each piece. In Experiment 3, a Beethoven piece was played with each of three pulses: Beethoven, Haydn, and Schubert. Listeners judged the version with the Beethoven pulse as most Beethovenian, but the version with the Haydn pulse as most "musical." Although the overall results were encouraging, it is suggested that there are significant difficulties in evaluating Clynes's theory and that much more research is needed before his ideas can be assessed adequately. The need for clarification of some theoretical issues surrounding the concept of pulse is emphasized.
Publisher: American Psychological Association (APA)
Date: 12-2012
DOI: 10.1037/A0029409
Publisher: SAGE Publications
Date: 18-07-2019
Abstract: Concerns have been raised that persistent exposure to violent media can lead to negative outcomes such as reduced empathy for the plight of others. The present study investigated whether fans of aggressive heavy or death metal music show reduced empathic reactions to aggression, relative to fans of non-aggressive music. 108 participants who self-identified as fans of heavy or death metal, classical or jazz music (n = 36 per group) were presented with vignettes that described a primary character’s reaction (the ‘aggressor’) in response to a secondary character’s irritating action (the ‘instigator’). The aggressor’s reaction was either non-aggressive, mildly aggressive or strongly aggressive. After each vignette, participants provided ratings of state empathic concern (other-oriented empathy) and personal distress (self-oriented distress). They also completed measures of trait empathy, passion for music and its psychosocial functions. Fans of heavy or death metal exhibited lower trait empathic concern compared with classical and jazz fans. However, only male heavy or death metal fans exhibited lower state empathic concern than male classical and jazz fans. Finally, social bonding was a stronger motivation for heavy or death metal fans to listen to music than for classical fans. Results are discussed in light of research and public concern regarding the effects of long-term exposure to media violence.
Publisher: SAGE Publications
Date: 10-1998
Abstract: We illustrate a technique for eliciting and exploring the constructs involved in the adjudication of music performance. Five trained musicians with extensive experience in performance adjudication evaluated six expert performances of Chopin's Etude, Opus 25, No. 6. First, we elicited from each adjudicator the personal constructs that they used to evaluate performance expression. Next, adjudicators rated each performance on each of their constructs. Adjudicators also rated each performance for overall preference. Constructs most strongly associated with overall preference related to right-hand expression and phrasing. Other constructs, such as tempo, were important for distinguishing performances, but were not strongly associated with overall preference. We discuss benefits of the technique for researchers, adjudicators and performers.
Publisher: American Speech Language Hearing Association
Date: 22-06-2020
DOI: 10.1044/2020_JSLHR-19-00391
Abstract: A growing body of evidence suggests that long-term music training provides benefits to auditory abilities for typical-hearing adults and children. The purpose of this study was to evaluate how music training may provide perceptual benefits (such as speech-in-noise, spectral resolution, and prosody) for children with hearing loss. Fourteen children aged 6–9 years with prelingual sensorineural hearing loss using bilateral cochlear implants, bilateral hearing aids, or bimodal configuration participated in a 12-week music training program, with nine participants completing the full testing requirements of the music training. Activities included weekly group-based music therapy and take-home music apps three times a week. The design was a pseudorandomized, longitudinal study (half the cohort was wait-listed, initially serving as a passive control group prior to music training). The test battery consisted of tasks related to music perception, music appreciation, and speech perception. As a comparison, 16 age-matched children with typical hearing also completed this test battery, but without participation in the music training. There were no changes for any outcomes for the passive control group. After music training, perception of speech-in-noise, question/statement prosody, musical timbre, and spectral resolution improved significantly, as did measures of music appreciation. There were no benefits for emotional prosody or pitch perception. The findings suggest even a modest amount of music training has benefits for music and speech outcomes. These preliminary results provide further evidence that music training is a suitable complementary means of habilitation to improve the outcomes for children with hearing loss.
Publisher: IGI Global
Date: 07-2012
Abstract: Most people communicate emotion through their voice, facial expressions, and gestures. However, it is assumed that only “experts” can communicate emotions in music. The authors have developed a computer-based system that enables musically untrained users to select relevant acoustic attributes to compose emotional melodies. Nonmusicians (Experiment 1) and musicians (Experiment 3) were progressively presented with pairs of melodies that each differed in an acoustic attribute (e.g., intensity: loud vs. soft). For each pair, participants chose the melody that most strongly conveyed a target emotion (anger, fear, happiness, sadness or tenderness). Once all decisions were made, a final melody containing all choices was generated. The system allowed both untrained and trained participants to compose a range of emotional melodies. New listeners successfully decoded the emotional melodies of nonmusicians (Experiment 2) and musicians (Experiment 4). Results indicate that human-computer interaction can facilitate the composition of emotional music by musically untrained and trained individuals.
Publisher: Wiley
Date: 11-2003
Abstract: In two experiments, musically trained and untrained adults were tested on their ability to match spoken utterances with their tonal analogues (tone sequences that retained the pitch and temporal patterns of the utterances). In both cases, musical training was associated with superior performance, indicating an enhanced ability to extract prosodic information from spoken phrases.
Publisher: The Royal Society
Date: 03-2019
DOI: 10.1098/RSOS.181580
Abstract: It is suggested that long-term exposure to violent media may decrease sensitivity to depictions of violence. However, it is unknown whether persistent exposure to music with violent themes affects implicit violent imagery processing. Using a binocular rivalry paradigm, we investigated whether the presence of violent music influences conscious awareness of violent imagery among fans and non-fans of such music. Thirty-two fans and 48 non-fans participated in the study. Violent and neutral pictures were simultaneously presented one to each eye, and participants indicated which picture they perceived (i.e., violent percept, neutral percept, or a blend of the two) via key presses, while they heard Western popular music with lyrics that expressed happiness or Western extreme metal music with lyrics that expressed violence. We found both fans and non-fans of violent music exhibited a general negativity bias for violent imagery over neutral imagery regardless of the music genres. For non-fans, this bias was stronger while listening to music that expressed violence than while listening to music that expressed happiness. For fans of violent music, however, the bias was the same while listening to music that expressed either violence or happiness. We discuss these results in view of current debates on the impact of violent media.
Publisher: Springer Science and Business Media LLC
Date: 30-04-2020
DOI: 10.1007/S00426-020-01322-3
Abstract: The ability to silently hear music in the mind has been argued to be fundamental to musicality. Objective measurements of this subjective imagery experience are needed if this link between imagery ability and musicality is to be investigated. However, previous tests of musical imagery either rely on self-report, rely on melodic memory, or do not cater to a range of abilities. The Pitch Imagery Arrow Task (PIAT) was designed to address these shortcomings; however, it is impractically long. In this paper, we shorten the PIAT using adaptive testing and automatic item generation. We interrogate the cognitive processes underlying the PIAT through item response modelling. The result is an efficient online test of auditory mental imagery ability (adaptive Pitch Imagery Arrow Task: aPIAT) that takes 8 min to complete, is adaptive to each participant’s individual ability, and so can be used to test participants with a range of musical backgrounds. Performance on the aPIAT showed positive moderate-to-strong correlations with measures of non-musical and musical working memory, self-reported musical training, and general musical sophistication. Ability on the task was best predicted by the ability to maintain and manipulate tones in mental imagery, as well as to resist perceptual biases that can lead to incorrect responses. As such, the aPIAT is an ideal tool with which to investigate the relationship between pitch imagery ability and musicality.
Publisher: American Psychological Association (APA)
Date: 07-2019
DOI: 10.1037/PPM0000184
Publisher: Cambridge University Press (CUP)
Date: 10-2008
DOI: 10.1017/S0140525X08005529
Abstract: We propose that the six mechanisms identified by Juslin & Västfjäll (J&V) fall into two categories: signal detection and amplification. Signal detection mechanisms are unmediated and induce emotion by directly detecting emotive signals in music. Amplifiers act in conjunction with signal detection mechanisms. We also draw attention to theoretical and empirical challenges associated with the proposed mechanisms.
Publisher: Proceedings of the National Academy of Sciences
Date: 24-01-2013
Publisher: Informa UK Limited
Date: 04-07-2021
DOI: 10.1080/13554794.2021.1966045
Abstract: In five people with severe dementia, we measured behavioral and physiological responses to familiar/unfamiliar music and speech, and measured ERP responses to the subject's own name (SON) after exposure to familiar/unfamiliar music or noise. We observed more frequent behavioral responses to personally-significant stimuli than non-personally-significant stimuli, and higher skin temperatures for music than non-music conditions. The control group showed typical ERPs to SON, regardless of auditory exposure. ERP measures were unavailable for the dementia group given the challenges of measuring EEG in this population. The study highlights the potential of personally-significant auditory stimuli for enhancing responsiveness in people with severe dementia.
Publisher: University of California Press
Date: 02-2011
Abstract: In two experiments, we assessed the experiential and cognitive consequences of seven minutes' exposure to music (Experiment 1) and speech (Experiment 2). In Experiment 1, participants listened to music for seven minutes and reported their emotional experiences based on ratings of valence (pleasant-unpleasant) and two types of arousal: energy (energetic-boring) and tension (tense-calm). They were then assessed on two cognitive skills: speed of processing and creativity. Music varied in pitch height (high or low pitched), rate (fast or slow), and intensity (loud or soft). Experiment 2 replicated Experiment 1 using male and female speech. Experiential and cognitive consequences of stimulus manipulations were overlapping in the two experiments, suggesting that music and speech draw on a common emotional code. There were also divergent effects, however, implicating domain-specific influences on emotion induction. We discuss the results in view of a psychological framework for understanding auditory signals of emotion.
Publisher: Frontiers Media SA
Date: 16-07-2019
Publisher: Frontiers Media SA
Date: 10-10-2014
Publisher: Wiley
Date: 24-05-2020
DOI: 10.1111/PSYP.13598
Publisher: SAGE Publications
Date: 20-05-2012
Abstract: We examined the effect of background music on reading comprehension. Because the emotional consequences of music listening are affected by changes in tempo and intensity, we manipulated these variables to create four repeated-measures conditions: slow/low, slow/high, fast/low, fast/high. Tempo and intensity manipulations were selected to be psychologically equivalent in magnitude (pilot study 1). In each condition, 25 participants were given four minutes to read a passage, followed by three minutes to answer six multiple-choice questions. Baseline performance was established by having control participants complete the reading task in silence (pilot study 2). A significant tempo by intensity interaction was observed, with comprehension in the fast/high condition falling significantly below baseline. These findings reveal that listening to background instrumental music is most likely to disrupt reading comprehension when the music is fast and loud.
Publisher: SAGE Publications
Date: 04-1989
Abstract: Starting from a text-to-speech conversion programme (Carlson and Granstrom, 1975), a note-to-tone conversion programme has been developed (Sundberg and Fryden, 1985). It works with a set of ordered rules affecting the performance of melodies written into the computer. Depending on the musical context, each of these rules manipulates various tone parameters, such as intensity level, fundamental frequency, and duration. In the present study the musical effect of nine rules is tested. Ten melodies were played under several rule-implementation conditions, and musically trained listeners rated the musical quality of each performance. The results support the assumption that the musical quality of performances is improved by applying the rules.
Publisher: Walter de Gruyter GmbH
Date: 20-01-2006
DOI: 10.1515/SEM.2006.017
Publisher: Springer Science and Business Media LLC
Date: 08-2010
Publisher: SAGE Publications
Date: 2020
Abstract: Music evoked autobiographical memories (MEAMs) occur in people with Alzheimer’s dementia (AD), but there is limited study of such memories in people with other dementia types such as behavioral variant frontotemporal dementia (Bv-FTD). Furthermore, there has been no study of the integrity of such memories over time, and scarce comparison with other memory cues such as photos. Our aim was to address this current gap in our knowledge and to characterize MEAMs and photo-evoked autobiographical memories (PEAMs) in healthy elderly people and people with AD and Bv-FTD on two occasions, 6 months apart. Twenty-two participants (7 with AD, 6 with Bv-FTD, and 9 healthy elderly people) reported memories following exposure to two famous songs and two famous event photographs from each decade from 1930–2010 on two occasions. All people with AD and all healthy elderly controls reported at least one MEAM or PEAM at both times. In contrast, two people with Bv-FTD reported no memories at either time. The percentage of memories over time for songs and photos remained stable for the Healthy Elderly and AD groups, whilst the percentage of memories to songs increased over time for people with Bv-FTD. Songs elicited more positive memories than photos. The specific music and photo stimuli that triggered memories, and the topic of the memories that were evoked, remained stable over a 6-month period across all groups. Our results suggest that music and photos are efficient memory cues in people with AD and Bv-FTD. Future large-scale studies of people with different dementia types over a longer time period will provide insights into the integrity of music- and photo-evoked autobiographical memories as dementia progresses.
Publisher: Springer Science and Business Media LLC
Date: 25-05-2023
DOI: 10.1007/S12144-022-03108-9
Abstract: Concerns have been raised that prolonged exposure to heavy metal music with aggressive themes can increase the risk of aggression, anger, antisocial behaviour, substance use, suicidal ideation, anxiety and depression in community and psychiatric populations. Although research often relies on correlational evidence for which causal inferences are not possible, it is often claimed that music with aggressive themes can cause psychological and behavioural problems. This narrative review of theory and evidence suggests the issues are more complicated, and that fans typically derive a range of emotional and social benefits from listening to heavy metal music, including improved mood, identity formation, and peer affiliation. In contrast, non-fans of heavy metal music — who are often used as participants in experimental research on this topic — invariably report negative psychological experiences. Our review considers a comprehensive set of empirical findings that inform clinical strategies designed to identify fans for whom heavy metal music may confer psychological and behavioural risks, and those for whom this music may confer psychosocial benefits.
Publisher: Hindawi Limited
Date: 2015
DOI: 10.1155/2015/352869
Abstract: Cochlear implant (CI) recipients generally have good perception of speech in quiet environments but difficulty perceiving speech in noisy conditions, reduced sensitivity to speech prosody, and difficulty appreciating music. Auditory training has been proposed as a method of improving speech perception for CI recipients, and recent efforts have focussed on the potential benefits of music-based training. This study evaluated two melodic contour training programs and their relative efficacy as measured on a number of speech perception tasks. These melodic contours were simple 5-note sequences formed into 9 contour patterns, such as “rising” or “rising-falling.” One training program controlled difficulty by manipulating interval sizes, the other by note durations. Sixteen adult CI recipients (aged 26–86 years) and twelve normal hearing (NH) adult listeners (aged 21–42 years) were tested on a speech perception battery at baseline and then after 6 weeks of melodic contour training. Results indicated that there were some benefits for speech perception tasks for CI recipients after melodic contour training. Specifically, consonant perception in quiet and question/statement prosody was improved. In comparison, NH listeners performed at ceiling for these tasks. There was no significant difference between the posttraining results for either training program, suggesting that both conferred benefits for training CI recipients to better perceive speech.
Publisher: Oxford University Press
Date: 10-07-2019
DOI: 10.1093/OXFORDHB/9780190636234.013.20
Abstract: This chapter discusses evidence that musical pitch is conceived and represented spatially, and that bodily experience provides a rich source for conceptualizing music metaphorically. It also describes how bodily gestures may be combined with perceptual representations of music, focusing on music-related movements of performers, such as facial expressions and gestures. Such expressive bodily movements help to shape listeners’ perception of music structure and link perception to action. Furthermore, it describes the function of spatial representations of music, and discusses evidence that musical expertise affects the stability and reliability of these spatial representations. Finally, a cognitive-motor framework for understanding spatial representations of music is proposed, which makes predictions about how this representation is manifested, differentially relied on, and sometimes disrupted in individuals with varying levels of expertise.
Publisher: SAGE Publications
Date: 12-10-2023
Publisher: University of California Press
Date: 1989
DOI: 10.2307/40285455
Abstract: Two experiments examined sensitivity to key change in short sequences adapted from Bach chorales. In Experiment 1, musically trained listeners identified key changes in single-voice (i.e., soprano, alto, tenor, bass) and in four-voice presentations of the sequences. There were two main findings. First, listeners judged the distance and direction of key change in single voices and in four-voice harmony with approximately equal ease. Second, for four-voice harmony but not for single voices, the direction of key change on the cycle of fifths influenced perceived distance. For an equivalent number of steps on the cycle, greater distance was associated with modulations moving in the counterclockwise, rather than in the clockwise, direction. These findings were replicated in Experiment 2, in which musically untrained listeners rated perceived distance of key change. In addition, the directional asymmetry found for four-voice harmony also was found for individual bass voices. The evidence suggests that harmony and melody operate somewhat independently in the implication of key structure. Difficulties for a strictly hierarchical model of perceived musical pitch structure are discussed and a partially hierarchical model is considered.
Publisher: SAGE Publications
Date: 05-2015
DOI: 10.1080/17470218.2014.971034
Abstract: Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotion judgements were inaccurate for voice-only singing, yet accurate for all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production.
Publisher: Springer Science and Business Media LLC
Date: 17-12-2018
DOI: 10.1038/S41598-018-36076-X
Abstract: Music and language are complex hierarchical systems in which individual elements are systematically combined to form larger, syntactic structures. Suggestions that music and language share syntactic processing resources have relied on evidence that syntactic violations in music interfere with syntactic processing in language. However, syntactic violations may affect auditory processing in non-syntactic ways, accounting for reported interference effects. To investigate the factors contributing to interference effects, we assessed recall of visually presented sentences and word-lists when accompanied by background auditory stimuli differing in syntactic structure and auditory distraction: melodies without violations, scrambled melodies, melodies that alternate in timbre, and environmental sounds. In Experiment 1, one-timbre melodies interfered with sentence recall, and increasing both syntactic complexity and distraction by scrambling melodies increased this interference. In contrast, three-timbre melodies reduced interference on sentence recall, presumably because alternating instruments interrupted auditory streaming, reducing pressure on long-distance syntactic structure building. Experiment 2 confirmed that participants were better at discriminating syntactically coherent one-timbre melodies than three-timbre melodies. Together, these results illustrate that syntactic processing and auditory streaming interact to influence sentence recall, providing implications for theories of shared syntactic processing and auditory distraction.
Publisher: Springer Science and Business Media LLC
Date: 23-01-2014
DOI: 10.1007/S00221-014-3836-X
Abstract: The ability to predict the actions of other agents is vital for joint action tasks. Recent theory suggests that action prediction relies on an emulator system that permits observers to use a model of their own movement kinematics to predict the actions of other agents. If this is the case, then people should be more accurate at generating predictions about actions that are similar to their own. We tested this hypothesis in two experiments in which participants were required to predict the occurrence and timing of particular critical points in an observed action. In Experiment 1, we employed a self/other prediction paradigm in which prediction accuracy for recordings of self-generated movements was compared with prediction accuracy for recordings of other-generated movements. As expected, prediction was more accurate for recordings of self-generated actions because in this case the movement kinematics of the observer and observed stimuli are maximally similar. In Experiment 1, people were able to produce actions at their own tempo and, therefore, the results might be explained in terms of self-similarity in action production tempo rather than in terms of movement kinematics. To control for this possibility in Experiment 2, we compared prediction accuracy for stimuli that were matched in tempo but differed only in terms of kinematics. The results showed that participants were more accurate when predicting actions with a human kinematic profile than tempo-matched stimuli that moved with non-human kinematics. Finally, in Experiment 3, we confirmed that the results of Experiment 2 cannot be explained by human-like stimuli containing a slowing-down phase before the critical points. Taken together, these findings provide further support for the role of motor emulation in action prediction, and they suggest that the action prediction mechanism produces output that is rapidly available to drive action control, making it plausible that this mechanism supports joint action coordination.
Publisher: American Psychological Association (APA)
Date: 09-2013
DOI: 10.1037/A0034775
Publisher: Frontiers Media SA
Date: 06-2021
DOI: 10.3389/FPSYG.2021.647632
Abstract: The ancient practice of chanting typically takes place within a community as a part of a live ceremony or ritual. Research suggests that chanting leads to improved mood, reduced stress, and increased wellbeing. During the global pandemic, many chanting practices were moved online in order to adhere to social distancing recommendations. However, it is unclear whether the benefits of live chanting occur when practiced in an online format. The present study assessed the effects of a 10-min online chanting session on stress, mood, and connectedness, carried out either in a group or individually. The study employed a 2 (chanting vs. control) × 2 (group vs. individual) between-subjects design. Participants (N = 117) were pseudo-randomly allocated across the four conditions. Before and after participation, individuals completed Spielberger’s State-Trait Anxiety Inventory, the Positive and Negative Affect Schedule, the Social Connectedness Scale, and Aron’s Inclusion of Other in the Self Scale. Online chanting led to a significant reduction in stress and an increase in positive affect when compared to the online control task. Participants who took part in group chanting also felt more connected to members of their chanting group than participants in the control group. However, feelings of general connectedness to all people remained similar across conditions. The investigation provides evidence that online chanting may be a useful psychosocial intervention, whether practiced individually or in a group.
Publisher: Elsevier BV
Date: 11-2015
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2015.10.004
Abstract: Congenital amusia is a neurodevelopmental disorder characterized by impaired pitch processing. Although pitch simultaneities are among the fundamental building blocks of Western tonal music, affective responses to simultaneities such as isolated dyads varying in consonance/dissonance or chords varying in major/minor quality have rarely been studied in amusic individuals. Thirteen amusics and thirteen matched controls enculturated to Western tonal music provided pleasantness ratings of sine-tone dyads and complex-tone dyads in piano timbre as well as perceived happiness/sadness ratings of sine-tone triads and complex-tone triads in piano timbre. Acoustical analyses of roughness and harmonicity were conducted to determine whether similar acoustic information contributed to these evaluations in amusics and controls. Amusic individuals' pleasantness ratings indicated sensitivity to consonance and dissonance for complex-tone (piano timbre) dyads and, to a lesser degree, sine-tone dyads, whereas controls showed sensitivity when listening to both tone types. Furthermore, amusic individuals showed some sensitivity to the happiness-major association in the complex-tone condition, but not in the sine-tone condition. Controls rated major chords as happier than minor chords in both tone types. Linear regression analyses revealed that affective ratings of dyads and triads by amusic individuals were predicted by roughness but not harmonicity, whereas affective ratings by controls were predicted by both roughness and harmonicity. We discuss affective sensitivity in congenital amusia in view of theories of affective responses to isolated chords in Western listeners.
Publisher: American Psychological Association (APA)
Date: 2022
DOI: 10.1037/REV0000364
Abstract: Research has investigated psychological processes in an attempt to explain how and why people appreciate music. Three programs of research have shed light on these processes. The first focuses on the appreciation of musical structure. The second investigates self-oriented responses to music, including music-evoked autobiographical memories, the reinforcement of a sense of self, and benefits to individual health and wellbeing. The third seeks to explain how music listeners become sensitive to the causal and contextual sources of music making, including the biomechanics of performance, knowledge of musicians and their intentions, and the cultural and historical context of music making. To date, these programs of research have been carried out with little interaction, and the third program has been omitted from most psychological enquiries into music appreciation. In this paper, we review evidence for these three forms of appreciation. The evidence reviewed acknowledges the enormous diversity in antecedents and causes of music appreciation across contexts, individuals, cultures, and historical periods. We identify the inputs and outputs of appreciation, propose processes that influence the forms that appreciation can take, and make predictions for future research. Evidence for source sensitivity is emphasized because the topic has been largely unacknowledged in previous discussions. This evidence implicates a set of unexplored processes that bring to mind causal and contextual details associated with music, and that shape our appreciation of music in important ways.
Publisher: Proceedings of the National Academy of Sciences
Date: 29-10-2012
Abstract: A number of evolutionary theories assume that music and language have a common origin as an emotional protolanguage that remains evident in overlapping functions and shared neural circuitry. The most basic prediction of this hypothesis is that sensitivity to emotion in speech prosody derives from the capacity to process music. We examined sensitivity to emotion in speech prosody in a sample of individuals with congenital amusia, a neurodevelopmental disorder characterized by deficits in processing acoustic and structural attributes of music. Twelve individuals with congenital amusia and 12 matched control participants judged the emotional expressions of 96 spoken phrases. Phrases were semantically neutral but prosodic cues (tone of voice) communicated each of six emotional states: happy, tender, afraid, irritated, sad, and no emotion. Congenitally amusic individuals were significantly worse than matched controls at decoding emotional prosody, with decoding rates for some emotions up to 20% lower than those of matched controls. They also reported difficulty understanding emotional prosody in their daily lives, suggesting some awareness of this deficit. The findings support speculations that music and language share mechanisms that trigger emotional responses to acoustic attributes, as predicted by theories that propose a common evolutionary link between these domains.
Publisher: Proceedings of the National Academy of Sciences
Date: 09-11-2015
Abstract: Emotions function to optimize adaptive responses to biologically significant events. In the auditory channel, humans are highly attuned to emotional signals in speech and music that arise from shifts in the frequency spectrum, intensity, and rate of acoustic information. We found that changes in acoustic attributes that evoke emotional responses in speech and music also trigger emotions when perceived in environmental sounds, including sounds arising from human actions, animal calls, machinery, or natural phenomena, such as wind and rain. The findings align with Darwin’s hypothesis that speech and music originated from a common emotional signal system based on the imitation and modification of sounds in the environment.
Publisher: MIT Press - Journals
Date: 03-2010
Publisher: University of California Press
Date: 12-2011
Abstract: Although people generally avoid negative emotional experiences, they often enjoy sadness portrayed in music and other arts. The present study investigated what kinds of subjective emotional experiences are induced in listeners by sad music, and whether the tendency to enjoy sad music is associated with particular personality traits. One hundred forty-eight participants listened to 16 music excerpts and rated their emotional responses. As expected, sadness was the most salient emotion experienced in response to sad excerpts. However, other more positive and complex emotions such as nostalgia, peacefulness, and wonder were also evident. Furthermore, two personality traits – Openness to Experience and Empathy – were associated with liking for sad music and with the intensity of emotional responses induced by sad music, suggesting that aesthetic appreciation and empathetic engagement play a role in the enjoyment of sad music.
Publisher: Public Library of Science (PLoS)
Date: 25-03-2015
Publisher: Frontiers Media SA
Date: 23-12-2014
Publisher: Oxford University Press
Date: 21-11-2012
DOI: 10.1093/OXFORDHB/9780199734689.013.0039
Abstract: Listening to music entails processes in which auditory input is automatically analyzed and classified, and conscious processes in which listeners interpret and evaluate the music. Performing music involves engaging in rehearsed movements that reflect procedural (embodied) knowledge of music, along with conscious efforts to guide and refine these movements through online monitoring of the sounded output. Composing music balances the use of intuition that reflects implicit knowledge of music with conscious and deliberate efforts to invent musical textures and devices that are innovative and consistent with individual aesthetic goals. Listeners and musicians also interact with one another in ways that blur the boundary between them: Listeners tap or clap in time with music, monitor the facial expressions and gestures of performers, and empathize emotionally with musicians; musicians, in turn, attend to their audience and perform differently depending on the perceived energy and attitude of their listeners. Musicians and listeners are roped together through shared cognitive, emotional, and motor experiences, exhibiting remarkable synchrony in behavior and thought. In this chapter, we describe the forms of musical thought for musicians and listeners, and we discuss the implications of implicit and explicit thought processes for musical understanding and emotional experience.
Publisher: University of California Press
Date: 02-2023
Abstract: Two experiments investigated perceptual and emotional consequences of note articulation in music by examining the degree to which participants perceived notes to be separated from each other in a musical phrase. Seven-note piano melodies were synthesized with staccato notes (short decay) or legato notes (gradual/sustained decay). Experiment 1 (n = 64) addressed the impact of articulation on perceived melodic cohesion and perceived emotion expressed through melodies. Participants rated melodic cohesion and perceived emotions conveyed by 32 legato and 32 staccato melodies. Legato melodies were rated more cohesive than staccato melodies and perceived as emotionally calmer and sadder than staccato melodies. Staccato melodies were perceived as having greater tension and energy. Experiment 2 (n = 60) addressed whether articulation is associated with humor and fear in music, and whether the impact of articulation depends on major vs. minor mode. For both modes, legato melodies were scarier than staccato melodies, whereas staccato melodies were more amusing and surprising. The effect of articulation on perceived happiness and sadness was dependent on mode: staccato enhanced perceived happiness for minor melodies; legato enhanced perceived sadness for minor melodies. Findings are discussed in relation to theories of music processing, with implications for music composition, performance, and pedagogy.
Publisher: SAGE Publications
Date: 15-08-2014
Abstract: In this investigation, eight highly-trained musicians communicated emotions through composition, performance expression, or the combination of the two. In the performance condition, they performed melodies with the intention of expressing six target emotions: anger, fear, happiness, neutral, sadness, and tenderness. In the composition condition, they composed melodies to express the same six emotions. The notated compositions were then played digitally without performance expression. In the combined condition, musicians performed the melodies they composed to convey the target emotions. Forty-two listeners heard the stimuli and attempted to decode the emotions in a forced-choice paradigm. Decoding accuracy varied significantly as a function of the channel of communication. Fear was comparatively well-decoded in the composition condition, whereas anger was comparatively well-decoded in the performance condition. Happiness and sadness were comparatively well-decoded in all three channels of communication. A principal component analysis of cues used by musicians clarified the distinct approaches adopted in composition and performance to differentiate emotional intentions. The results confirm that composition and performance involve the manipulation of distinct cues and have different emotional capabilities.
Publisher: Cold Spring Harbor Laboratory
Date: 13-12-2018
DOI: 10.1101/494294
Abstract: Tonal music the world over is characterized by a hierarchical structuring of pitch, whereby certain tones appear stable and others unstable within their musical context. Despite its prevalence, the cortical mechanisms supporting such a percept remain poorly understood. The current study probed the neural processing dynamics underlying the representation of pitch in Western tonal music. Listeners were presented with tones comprising all twelve pitch-classes embedded within a musical context whilst having their magnetoencephalographic (MEG) activity recorded. Using multivariate pattern analysis (MVPA), decoders attempted to classify the identity of tones from their corresponding MEG activity at each peristimulus time sample, providing a dynamic measure of their cortical dissimilarity. Time-evolving dissimilarities between tones were then compared with the predictions of several acoustic and perceptual models. Following tone onset, we observed a temporal evolution in the neural representation. Dissimilarities between tones initially reflected their fundamental frequency separation, but beyond 200 ms reflected their status within the tonal hierarchy of perceived stability. Furthermore, when the dissimilarities corresponding to this latter period were transposed into different keys, cortical relations between keys correlated with the well-known circle of fifths. Convergent with fundamental principles of music theory and perception, the current results detail the dynamics with which the complex perceptual structure of Western tonal music emerges in human cortex within the timescale of an individual tone.
Publisher: Frontiers Media SA
Date: 2014
Publisher: MDPI AG
Date: 13-01-2021
Abstract: Chanting is a form of rhythmic, repetitive vocalization practiced in a wide range of cultures. It is used in spiritual practice to strengthen community, heal illness, and overcome psychological and emotional difficulties. In many traditions, chanting is used to induce mystical states, an altered state of consciousness characterised by a profound sense of peace. Despite the global prevalence of chanting, its psychological effects are poorly understood. This investigation examined the psychological and contextual factors associated with mystical states during chanting. Data were analyzed from 464 participants across 33 countries who regularly engaged in chanting. Results showed that 60% of participants experienced mystical states during chanting. Absorption, altruism, and religiosity were higher among people who reported mystical states while chanting compared to those who did not report mystical states. There was no difference in mystical experience scores between vocal, silent, group or individual chanting and no difference in the prevalence of mystical states across chanting traditions. However, an analysis of subscales suggested that mystical experiences were especially characterised by positive mood and feelings of ineffability. The research sheds new light on factors that impact upon chanting experiences. A framework for understanding mystical states during chanting is proposed.
Publisher: Springer Science and Business Media LLC
Date: 13-08-2021
DOI: 10.1186/S12913-021-06463-8
Abstract: Process evaluations have been recommended alongside clinical and economic evaluations to enable an in-depth understanding of factors impacting results. My Therapy is a self-management program designed to augment usual care inpatient rehabilitation through the provision of additional occupational therapy and physiotherapy exercises and activities, for the patient to complete outside of supervised therapy. The aims of the process evaluation are to assess the implementation process by investigating fidelity, quality of implementation, acceptability, adoption, appropriateness, feasibility and adaptation of the My Therapy intervention and identify contextual factors associated with variations in outcomes, including the perspectives and experiences of patients and therapists. The process evaluation will be conducted alongside the clinical and economic evaluation of My Therapy, within eight rehabilitation wards across two public and two private Australian health networks. All participants of the stepped wedge cluster randomised trial (2,160 rehabilitation patients) will be included in the process evaluation (e.g., ward audit) with a subset of 120 participants undergoing more intensive evaluation (e.g., surveys and activity logs). In addition, 24 staff (occupational therapists and physiotherapists) from participating wards will participate in the process evaluation. The mixed-methods study design will adopt a range of quantitative and qualitative research approaches. Data will be collected via a service profile survey and audits of clinical practice across the participating wards (considering areas such as staffing profiles and prescription of self-management programs). The intensive patient participant data collection will involve structured therapy participation and self-management program audits, Exercise Self Efficacy Scale, patient activity logs, patient surveys, and patient-worn activity monitors. Staff data collection will include surveys and focus groups. 
The process evaluation will provide context to the clinical and economic outcomes associated with the My Therapy clinical trial. It considers how clinical and economic outcomes were achieved, and how to sustain the outcomes within the participating health networks. It will also provide context to inform future scaling of My Therapy to other health networks, and influence future models of rehabilitation and related policy. This study was prospectively registered with the Australian and New Zealand Clinical Trials Registry (ACTRN12621000313831, registered 22/03/2021, www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=380828&isReview=true).
Publisher: Springer Science and Business Media LLC
Date: 06-2010
DOI: 10.3758/PBR.17.3.317
Publisher: Frontiers Media SA
Date: 2013
Publisher: SAGE Publications
Date: 09-2007
Publisher: Walter de Gruyter GmbH
Date: 20-03-2019
Abstract: How did human vocalizations come to acquire meaning in the evolution of our species? Charles Darwin proposed that language and music originated from a common emotional signal system based on the imitation and modification of sounds in nature. This protolanguage is thought to have diverged into two separate systems, with speech prioritizing referential functionality and music prioritizing emotional functionality. However, there has never been an attempt to empirically evaluate the hypothesis that a single communication system can split into two functionally distinct systems that are characterized by music- and language-like properties. Here, we demonstrate that when referential and emotional functions are introduced into an artificial communication system, that system will diverge into vocalization forms with speech- and music-like properties, respectively. Participants heard novel vocalizations as part of a learning task. Half referred to physical entities and half functioned to communicate emotional states. Participants then reproduced each sound with the defined communicative intention in mind. Each recorded vocalization was used as the input for another participant in a serial reproduction paradigm, and this procedure was iterated to create 15 chains of five participants each. Referential vocalizations were rated as more speech-like, whereas emotional vocalizations were rated as more music-like, and this association was observed cross-culturally. In addition, a stable separation of the acoustic profiles of referential and emotional vocalizations emerged, with some attributes diverging immediately and others diverging gradually across iterations. The findings align with Darwin’s hypothesis and provide insight into the roles of biological and cultural evolution in the divergence of language and music.
Publisher: Informa UK Limited
Date: 02-01-2017
DOI: 10.1080/13554794.2017.1287278
Abstract: The hallmark symptom of Alzheimer's Dementia (AD) is impaired memory, but memory for familiar music can be preserved. We explored whether a non-musician with severe AD could learn a new song. A 91-year-old woman (NC) with severe AD was taught an unfamiliar song. We assessed her delayed song recall (24 hours and 2 weeks), music cognition, two-word recall (presented within a familiar song lyric, a famous proverb, or as a word stem completion task), and lyrics and proverb completion. NC's music cognition (pitch and rhythm perception, recognition of familiar music, completion of lyrics) was relatively preserved. She recalled 0/2 words presented in song lyrics or proverbs, but 2/2 word stems, suggesting intact implicit memory function. She could sing along to the newly learnt song on immediate and delayed recall (24 hours and 2 weeks later), and with intermittent prompting could sing it alone. This is the first detailed study of preserved ability to learn a new song in a non-musician with severe AD, and contributes to observations of relatively preserved musical abilities in people with dementia.
Publisher: Routledge
Date: 10-11-2023
Publisher: Frontiers Media SA
Date: 2020
Publisher: Wiley
Date: 10-03-2021
DOI: 10.1111/NYAS.14587
Abstract: Pain is essential for our survival because it helps to protect us from severe injuries. Nociceptive signals may be exacerbated by continued physical activities but can also be interrupted or overridden by physical movements, a process called movement-induced hypoalgesia. Several neural mechanisms have been proposed to account for this effect, including the reafference principle, non-nociceptive interference, and top-down descending modulation. Given that the hypoalgesic effects of these mechanisms temporally overlap during movement execution, it is unclear whether movement-induced hypoalgesia results from a single neural mechanism or from the joint action of multiple neural mechanisms. To address this question, we conducted five experiments on 129 healthy humans by assessing the hypoalgesic effect after movement execution. Combining psychophysics and electroencephalographic recordings, we quantified the relationship between the strength of voluntary movement and the hypoalgesic effect, as well as the temporal and spatial characteristics of the hypoalgesic effect. Our findings demonstrated that movement-induced hypoalgesia results from the joint action of multiple neural mechanisms. This investigation is the first to disentangle the distinct contributions of different neural mechanisms to the hypoalgesic effect of voluntary movement, which extends our understanding of sensory attenuation arising from voluntary movement and may prove instrumental in developing new strategies for pain management.
Publisher: Informa UK Limited
Date: 09-2008
Publisher: Informa UK Limited
Date: 04-2011
Publisher: University of California Press
Date: 09-2015
Abstract: Four experiments assessed the influence of emergent-level structure on melodic processing difficulty. Emergent-level structure was manipulated across experiments and defined with reference to the Implication-Realization model of melodic expectancy (Narmour, 1990, 1992, 2000). Two measures of melodic processing difficulty were used to assess the influence of emergent-level structure: serial-reconstruction and cohesion ratings. In the serial-reconstruction experiment (Experiment 1), reconstruction was more efficient for melodies with simple emergent-level structure. In the cohesion experiments (Experiments 2-4), ratings were higher for melodies with simple emergent-level structure, and the advantage was generally greater in the presence of simple surface-level structure. Results indicate that emergent-level structure as defined by the model can influence melodic processing difficulty.
Publisher: Elsevier BV
Date: 2004
DOI: 10.1016/J.NEUROPSYCHOLOGIA.2003.07.005
Abstract: We examined whether facial emotion perception was compromised in adults with recent traumatic brain injury (TBI). Few studies have examined emotion perception in TBI; those that have examined chronic patients only. Recent and chronic TBI populations differ according to degree of functional reorganization of the brain, use of compensatory strategies, and severity of cognitive impairments--any of which might differentially affect presentation of emotion perception deficits. A secondary aim of the study was to utilize the TBI population--in whom diffuse axonal injury (DAI) is a cardinal neurological feature--to examine the suggestion of Adolphs et al. [Journal of Neuroscience 20(7) (2000) 2683] that damage to white matter tracts should give rise to emotion perception deficits. Thirty TBI participants and 30 age-matched controls were tested. A 2 x 3 mixed design was employed. The dependent variable was accuracy on neutral and emotional face perception tests. (1) The TBI group performed significantly less accurately than the matched controls on the facial emotion perception tasks, whereas the groups performed equivalently on a non-emotional face perception control task. (2) A sub-group of TBI participants without evidence of focal injury to areas of the brain most commonly implicated in facial emotion perception was as impaired on the emotion perception tasks as a second sub-group who had sustained focal lesions to these areas. This suggests an alternative neurological mechanism for deficits in the first sub-group, such as DAI. Patients with recently acquired TBI are impaired in their ability to perceive emotions in faces. DAI alone may cause facial emotion perception deficits.
Publisher: SAGE Publications
Date: 09-10-2021
DOI: 10.1177/03057356211044200
Abstract: Fans of extreme metal and rap music with violent themes, hereafter termed “violently themed music,” predominantly experience positive emotional and psychosocial outcomes in response to this music. However, negative emotional responses to preferred music are reported to a greater extent by such fans than by fans of non-violently themed music. We investigated negative emotional responses to violently themed music among fans by assessing their experience of depressive symptoms, and whether violently themed music functions to regulate negative moods through two common mood regulation strategies: discharge and diversion. Fans of violent rap (n = 49), violent extreme metal (n = 46), and non-violent classical music (n = 50) reported depressive symptoms and use of music to regulate moods. Participants listened to four one-minute excerpts of music in their preferred genres and rated negative emotional responses to each excerpt (sadness, tension, anger, fear). There were no significant differences in depression ratings between groups, but depressive symptoms predicted negative emotional responses to music across all groups. Furthermore, depression ratings predicted the use of the mood regulation strategy of discharge in all groups. The discharge strategy did not reduce (or exacerbate) fans’ negative emotional responses, but may nevertheless confer other benefits. We discuss implications for the psychosocial well-being of fans of violently themed music.
Publisher: SAGE Publications
Date: 29-12-2037
DOI: 10.1177/10298649231157099
Abstract: Regional conflict, growing technological developments, and climate change have seen high migration rates, which are likely to rise. Discrimination and violence at the hands of host societies continue to threaten the well-being of immigrant communities, as well as wider social cohesion in migration destinations. The urgency of the situation has been highlighted in several international policy documents released since 2020 by the United Nations (UN) and related agencies. In response, we have seen a global movement of intercultural music ensembles intended to break down cultural barriers and explore sites of cultural intersection, yet the real-world benefits of such initiatives remain unclear. There is a need to further explore and understand how and when music can be used as an instrument or site for fostering inclusion, understanding, and cohesion between migrants and their host communities. On appraising the evidence, we propose a conceptual framework for explaining how different cultures can interact with each other through musical participation.
Publisher: Elsevier BV
Date: 09-2010
DOI: 10.1016/J.NEULET.2010.07.010
Abstract: The mismatch negativity (MMN) component of the auditory event-related potential (ERP) reflects the process of change detection in the auditory system. The present study investigated the effect of deviance direction (increment vs. decrement) and calculation method (traditional vs. same-stimulus) on the amplitude of MMN. MMN was recorded for increments and decrements in frequency and duration in 20 adults. The stimuli (standard/deviant) were 250 Hz/350 Hz (frequency MMN) and 200 ms/300 ms (duration MMN) for increment MMN and vice versa for decrement MMN. Amplitude of MMN was calculated in two ways: the traditional method (subtracting ERP to the standard from the deviant presented in the same block) and the same-stimulus method (subtracting ERP to identical stimuli presented as standard in one block and deviant in another block). We found that increments in frequency produced higher MMN amplitudes compared to decrements for both methods of calculation. For duration deviance, the decrement MMN was absent in the traditional method, while the decrement and increment MMN did not differ for the same-stimulus method. These findings suggest that the brain processes frequency increments and decrements in different ways. The results also suggest the use of the same-stimulus method for the calculation of duration MMN when long duration stimuli are used.
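The two calculation methods in the abstract amount to simple difference waves between averaged ERPs. A minimal sketch with invented waveforms; the function names and the toy ERP shapes are hypothetical, for illustration only:

```python
import numpy as np

def mmn_traditional(erp_deviant, erp_standard):
    """Traditional method: deviant ERP minus standard ERP, both
    recorded within the same block (different physical stimuli)."""
    return erp_deviant - erp_standard

def mmn_same_stimulus(erp_as_deviant, erp_as_standard):
    """Same-stimulus method: ERP to a stimulus presented as deviant in
    one block minus the ERP to the physically identical stimulus when
    it served as standard in another block, so acoustic differences
    cancel out of the difference wave."""
    return erp_as_deviant - erp_as_standard

# Toy demo: a negative deflection around 200 ms marks the "MMN"
t = np.linspace(0.0, 0.4, 401)                  # time in seconds
standard = 2.0 * np.sin(2 * np.pi * 3 * t)      # invented baseline ERP
deviant = standard - 1.5 * np.exp(-((t - 0.2) ** 2) / 0.001)
diff = mmn_traditional(deviant, standard)
peak_time = t[np.argmin(diff)]                  # falls in the MMN window
```

The same-stimulus subtraction is the same arithmetic; what changes is which averaged waveforms go in, which is exactly the methodological contrast the study evaluates.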
Publisher: University of California Press
Date: 09-2019
Abstract: Note-to-note changes in brightness are able to influence the perception of interval size. Changes that are congruent with pitch tend to expand interval size, whereas changes that are incongruent tend to contract. In the case of singing, brightness of notes can vary as a function of vowel content. In the present study, we investigated whether note-to-note changes in brightness arising from vowel content influence perception of relative pitch. In Experiment 1, three-note sequences were synthesized so that they varied with regard to the brightness of vowels from note to note. As expected, brightness influenced judgments of interval size. Changes in brightness that were congruent with changes in pitch led to an expansion of perceived interval size. A follow-up experiment confirmed that the results of Experiment 1 were not due to pitch distortions. In Experiment 2, the final note of three-note sequences was removed, and participants were asked to make speeded judgments of the pitch contour. An analysis of response times revealed that brightness of vowels influenced contour judgments. Changes in brightness that were congruent with changes in pitch led to faster response times than did incongruent changes. These findings show that the brightness of vowels yields an extra-pitch influence on the perception of relative pitch in song.
Publisher: Zenodo
Date: 2018
Publisher: Springer Science and Business Media LLC
Date: 14-11-2019
DOI: 10.1038/S41598-019-53260-9
Abstract: Recent magnetoencephalography (MEG) studies have established that sensorimotor brain rhythms are strongly modulated during mental imagery of musical beat and rhythm, suggesting that motor regions of the brain are important for temporal aspects of musical imagery. The present study examined whether these rhythms also play a role in non-temporal aspects of musical imagery including musical pitch. Brain function was measured with MEG from 19 healthy adults while they performed a validated musical pitch imagery task and two non-imagery control tasks with identical temporal characteristics. A 4-dipole source model probed activity in bilateral auditory and sensorimotor cortices. Significantly greater β-band modulation was found during imagery compared to control tasks of auditory perception and mental arithmetic. Imagery-induced β-modulation showed no significant differences between auditory and sensorimotor regions, which may reflect a tightly coordinated mode of communication between these areas. Directed connectivity analysis in the θ-band revealed that the left sensorimotor region drove the left auditory region during imagery onset. These results add to the growing evidence that motor regions of the brain are involved in the top-down generation of musical imagery, and that imagery-like processes may be involved in musical perception.
Publisher: SAGE Publications
Date: 05-2007
DOI: 10.1068/P5435
Abstract: Striking changes in sensitivity to tonality across the pitch range are reported. Participants were presented a key-defining context (do-mi-do-sol) followed by one of the 12 chromatic tones of the octave, and rated the goodness of fit of the probe tone to the context. The set of ratings, called the probe-tone profile, was compared to an established standardised profile for the Western tonal hierarchy. The presentation of context and probe tones at low and high pitch registers resulted in significantly reduced sensitivity to tonality. Sensitivity was especially poor for presentations in the lowest octaves where inharmonicity levels were substantially above the threshold for detection. We propose that sensitivity to tonality may be influenced by pitch salience (or a co-varying factor such as exposure to pitch distributional information) as well as suprathreshold inharmonicity.
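The comparison with a standardised profile described above is typically a correlation of the 12 probe-tone ratings with reference values. A sketch under assumptions: the reference numbers below are the commonly cited Krumhansl-Kessler major-key profile, and the listener ratings are invented for illustration; this is not the authors' exact analysis.

```python
import math

# Commonly cited Krumhansl-Kessler major-key profile: goodness-of-fit
# ratings for the 12 chromatic tones, tonic first
KK_MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19,
            2.39, 3.66, 2.29, 2.88]

def pearson(x, y):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def tonality_sensitivity(probe_tone_profile):
    """Correlate a listener's 12 probe-tone ratings with the standard
    profile; higher r indicates greater sensitivity to the tonal
    hierarchy, lower r the reduced sensitivity found at extreme
    registers."""
    return pearson(probe_tone_profile, KK_MAJOR)

# Hypothetical mid-register listener who tracks the hierarchy closely
ratings = [6.1, 2.5, 3.2, 2.6, 4.5, 4.0, 2.4, 5.3, 2.2, 3.5, 2.5, 3.0]
r = tonality_sensitivity(ratings)   # high r for this invented listener
```

On this logic, profiles collected in the lowest octaves would yield markedly lower r values than mid-register profiles.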
Publisher: Oxford University Press
Date: 13-10-2011
Publisher: SAGE Publications
Date: 06-1998
DOI: 10.1177/1321103X9801000102
Abstract: Assessing musical performance is common across many types of music education practice, yet research clarifying the range of factors which impact on a judge's assessment is relatively scarce. This article attempts to provide focus to the current literature, by proposing a process model of assessing musical performance that identifies some of the main elements that affect a judge's assessment in formal performance settings such as competitions, auditions, recitals, Eisteddfods and graded examinations. The article includes a review of the literature according to the categories defined in the model and suggestions which are intended to form the basis for further research in the area.
Publisher: Frontiers Media SA
Date: 09-04-2015
Publisher: Cambridge University Press (CUP)
Date: 18-03-2013
DOI: 10.1017/S0140525X1200180X
Abstract: Art appreciation often involves contemplation beyond immediate perceptual experience. However, there are challenges to incorporating such processes into a comprehensive theory of art appreciation. Can appreciation be captured in the responses to individual artworks? Can all forms of contemplation be defined? What properties of artworks trigger contemplation? We argue that such questions are fundamental to a psycho-historical framework for the science of art appreciation, and we suggest research that may assist in refining this framework.
Publisher: Elsevier BV
Date: 10-2022
Publisher: Public Library of Science (PLoS)
Date: 08-02-2012
Publisher: Springer Science and Business Media LLC
Date: 04-2009
DOI: 10.3758/MC.37.3.368
Publisher: Elsevier
Date: 2013
Publisher: SAGE Publications
Date: 07-2020
Abstract: Music has been argued to contribute to well-being in multiple ways, through its links to identity, social relationships, emotion, and memory. We investigated the phenomenon of “couple-defining songs (CDSs),” in which members of a couple come to jointly identify their relationship with a particular song. Two hundred participants who were currently in a romantic relationship, diverse in age and relationship length and status, reported whether they had a CDS. Those who reported a CDS described its origins and meaning, and any memories and emotions elicited by thinking about their song. In addition, participants completed measures of music appreciation and relationship intimacy. We found that CDSs were common, relatively unique to romantic relationships, and associated with higher music appreciation and higher intimacy. CDSs tended to be acquired early in relationships, and they cued positive emotions and specific memories. These findings suggest that CDSs represent a common and understudied phenomenon. We propose that the multifaceted nature of music may contribute to the prevalence of CDSs in intimate relationships.
Publisher: Acoustical Society of America (ASA)
Date: 06-2001
DOI: 10.1121/1.1367254
Abstract: In five experiments, we investigated the speed of pitch resolution in a musical context. In experiments 1–3, listeners were presented an incomplete scale (doh, re, mi, fa, sol, la, ti) and then a probe tone. Listeners were instructed to make a rapid key-press response to probe tones that were relatively proximal in pitch to the last note of the scale (valid trials), and to ignore other probe tones (invalid trials). Reaction times were slower if the pitch of the probe tone was dissonant with the expected pitch (i.e., the completion of the scale, or doh) or if the probe tone was nondiatonic to the key implied by the scale. In experiments 4 and 5, listeners were presented a two-octave incomplete arpeggio, and then a probe tone. In this case, listeners were asked to make a rapid key-press response to probe tones that were relatively distant in pitch from the last note of the arpeggio. Under these conditions, registral direction and pitch proximity were the dominant influences on reaction time. Results are discussed in view of research on auditory attention and models of musical pitch.
Publisher: SAGE Publications
Date: 10-2012
DOI: 10.1080/17470218.2012.678369
Abstract: In two experiments, we examined the effect of intensity and intensity change on judgements of pitch differences or interval size. In Experiment 1, 39 musically untrained participants rated the size of the interval spanned by two pitches within individual gliding tones. Tones were presented at high intensity, low intensity, looming intensity (up-ramp), and fading intensity (down-ramp) and glided between two pitches spanning either 6 or 7 semitones (a tritone or a perfect fifth interval). The pitch shift occurred in either ascending or descending directions. Experiment 2 repeated the conditions of Experiment 1 but the shifts in pitch and intensity occurred across two discrete tones (i.e., a melodic interval). Results indicated that participants were sensitive to the differences in interval size presented: Ratings were significantly higher when two pitches differed by 7 semitones than when they differed by 6 semitones. However, ratings were also dependent on whether the interval was high or low in intensity, whether it increased or decreased in intensity across the two pitches, and whether the interval was ascending or descending in pitch. Such influences illustrate that the perception of pitch relations does not always adhere to a logarithmic function as implied by their musical labels, but that identical intervals are perceived as substantially different in size depending on other attributes of the sound source.
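The logarithmic mapping the abstract refers to, in which equal musical intervals correspond to equal frequency ratios, can be written as n = 12·log2(f2/f1). A minimal sketch of the 6- versus 7-semitone intervals used in the experiments (the 440 Hz starting pitch is an assumption for illustration):

```python
import math

def semitones(f1, f2):
    """Interval size in equal-tempered semitones implied by the
    logarithmic pitch scale: n = 12 * log2(f2 / f1)."""
    return 12 * math.log2(f2 / f1)

def freq_after(f1, n):
    """Frequency reached after moving n semitones from f1."""
    return f1 * 2 ** (n / 12)

# The two interval sizes from the experiments, ascending from 440 Hz
tritone_top = freq_after(440.0, 6)    # ~622.25 Hz (tritone)
fifth_top = freq_after(440.0, 7)      # ~659.26 Hz (perfect fifth)
```

The study's point is that perceived size departs from this nominal mapping: the same 6- or 7-semitone ratio is judged larger or smaller depending on intensity and direction.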
Publisher: University of California Press
Date: 06-2009
Abstract: Facial expressions are used in music performance to communicate structural and emotional intentions. Exposure to emotional facial expressions also may lead to subtle facial movements that mirror those expressions. Seven participants were recorded with motion capture as they watched and imitated phrases of emotional singing. Four different participants were recorded using facial electromyography (EMG) while performing the same task. Participants saw and heard recordings of musical phrases sung with happy, sad, and neutral emotional connotations. They then imitated the target stimulus, paying close attention to the emotion expressed. Facial expressions were monitored during four epochs: (a) during the target, (b) prior to their imitation, (c) during their imitation, and (d) after their imitation. Expressive activity was observed in all epochs, implicating a role of facial expressions in the perception, planning, production, and post-production of emotional singing.
Publisher: University of California Press
Date: 04-2006
Abstract: Using a three-dimensional model of affect, we compared the affective consequences of manipulating intensity, rate, and pitch height in music and speech. Participants rated 64 music and 64 speech excerpts on valence (pleasant-unpleasant), energy arousal (awake-tired), and tension arousal (tense-relaxed). For music and speech, loud excerpts were judged as more pleasant, energetic, and tense than soft excerpts. Manipulations of rate had overlapping effects on music and speech. Fast music and speech were judged as having greater energy than slow music and speech. However, whereas fast speech was judged as less pleasant than slow speech, fast music was judged as having greater tension than slow music. Pitch height had opposite consequences for music and speech, with high-pitched speech but low-pitched music associated with higher ratings of valence (more pleasant). Interactive effects on judgments were also observed. We discuss similarities and differences between vocal and musical communication of affect, and the need to distinguish between two types of arousal: energy and tension.
Publisher: Springer Science and Business Media LLC
Date: 25-10-2012
DOI: 10.1007/S00221-011-2907-5
Abstract: Common Coding theory predicts that perceived action should resonate in produced action to which it bears some resemblance. Here we show that the qualities of motion commonly attributed to melodies are instantiated in motor plans that control timed movements. Participants attempted to tap a steady beat. Each tap triggered a sounded tone, and successive tones were systematically varied in pitch to form short melodies. Tapping behavior was monitored with motion capture. Although instructed to ignore them, triggered tones systematically affected timing and finger movement. When slower melodic motion was implied by a contour change or a smaller pitch displacement, the inter-tap interval (ITI) was longer. When faster melodic motion was implied by a preserved pitch contour or a larger pitch displacement, ITI was shorter. Kinematic recordings suggested that ITI error arose from an initial failure to disambiguate perception (i.e., velocity implied by melodic motion) from action (i.e., finger velocity [FV]). Early in the tap trajectory, slower FV was associated with longer ITI and faster FV was associated with shorter ITI. These associations were reversed near mid-trajectory, suggesting a transition from execution of motor planning to online control (Glover et al. in Exp Brain Res 154:103-108, 2004).
Publisher: MDPI AG
Date: 20-01-2023
Abstract: Rich intercultural music engagement (RIME) is an embodied form of engagement whereby individuals immerse themselves in foreign musical practice, for example, by learning a traditional instrument from that culture. The present investigation evaluated whether RIME with Chinese or Middle Eastern music can nurture intercultural understanding. White Australian participants were randomly assigned to one of two plucked-string groups: Chinese pipa (n = 29) or Middle Eastern oud (n = 29). Before and after the RIME intervention, participants completed measures of ethnocultural empathy, tolerance, social connectedness, explicit and implicit attitudes towards ethnocultural groups, and open-ended questions about their experience. Following RIME, White Australian participants reported a significant increase in ethnocultural empathy, tolerance, feelings of social connection, and improved explicit and implicit attitudes towards Chinese and Middle Eastern people. However, these benefits differed between groups. Participants who learned Chinese pipa reported reduced bias and increased social connectedness towards Chinese people, but not towards Middle Eastern people. Conversely, participants who learned Middle Eastern oud reported a significant increase in social connectedness towards Middle Eastern people, but not towards Chinese people. This is the first experimental evidence that participatory RIME is an effective tool for understanding a culture other than one’s own, with the added potential to reduce cultural bias.
Publisher: No publisher found
Date: 2022
DOI: 10.1037/EMO0001054
Abstract: It is well established that adults can interpret emotional speech prosody independent of word meaning comprehension, even for emotional speech prosody in an unfamiliar language. However, the acquisition of this ability remains unclear. This study examined the decoding of four emotions (happy, sad, surprise, angry) conveyed with speech prosody in four languages (English, Chinese, French, Spanish) by American and Chinese children at 3 to 5 years of age, an age range when the ability to decode emotional prosody in one's native language emerges but remains fragile. Chinese and American children could decode the emotional meaning of speech prosody in both familiar and unfamiliar languages as young as 3 years old. Performance did not differ across the four languages used, a finding observed in both American and Chinese children. Thus, the in-group advantage of emotional prosody decoding reported for adults may not be evident by 5 years of age. Furthermore, emotional prosody decoding skills improved with age.
Publisher: Springer Science and Business Media LLC
Date: 12-02-2015
Publisher: SAGE Publications
Date: 15-03-2023
DOI: 10.1177/10298649231157404
Abstract: Passionate music engagement is a defining feature of music fans worldwide. Although benefits to psychosocial well-being are often experienced by fans of music, some fans experience maladaptive outcomes from their music engagement. The Dualistic Model of Passion proposes that two types of passion—harmonious and obsessive—are associated with positive and negative outcomes of passionate engagement, respectively. This model has been employed in research on passion for a wide range of pursuits, including music performance, but not for passionate listening. The present study employed this model to investigate whether (1) harmonious passion for music is associated with positive music listening experiences and/or psychological well-being and (2) obsessive passion for music is associated with negative music listening experiences and/or psychological ill-being. Passionate fans (n = 197) of 40 different musical genres were surveyed about their experiences when listening to their favorite music. Measures included the passion scale, affective experiences with music, and psychological well-being and ill-being. Results supported the Dualistic Model of Passion. Structural equation modeling revealed that harmonious passion for music predicted positive affective experiences which, in turn, predicted psychological well-being. Conversely, obsessive passion for music predicted negative affective experiences which, in turn, predicted psychological ill-being. The findings suggest that the nature of passionate engagement with music has an integral role in the psychological impact of music engagement and implications for the well-being of music fans.
Publisher: Elsevier BV
Date: 2012
Publisher: MDPI AG
Date: 30-11-2022
DOI: 10.3390/BS12120486
Abstract: While the benefits to mood and well-being from passionate engagement with music are well-established, far less is known about the relationship between passion for explicitly violently themed music and psychological well-being. The present study employed the Dualistic Model of Passion to investigate whether harmonious passion (i.e., passionate engagement that is healthily balanced with other life activities) predicts positive music listening experiences and/or psychological well-being in fans of violently themed music. We also investigated whether obsessive passion (i.e., uncontrollable passionate engagement with an activity) predicts negative music listening experiences and/or psychological ill-being. Fans of violently themed music (N = 177) completed the passion scale, scale of positive and negative affective experiences, and various psychological well- and ill-being measures. As hypothesised, harmonious passion for violently themed music significantly predicted positive affective experiences which, in turn, predicted psychological well-being. Obsessive passion for violently themed music significantly predicted negative affective experiences which, in turn, predicted ill-being. Findings support the Dualistic Model of Passion, and suggest that even when music engagement includes violent content, adaptive outcomes are often experienced. We propose that the nature of one’s passion for music is more influential in predicting well-being than the content or valence of the lyrical themes.
Publisher: American Psychological Association (APA)
Date: 2001
Publisher: Frontiers Media SA
Date: 29-04-2014
Publisher: Informa UK Limited
Date: 12-2008
Publisher: Springer Science and Business Media LLC
Date: 05-1994
DOI: 10.3758/BF03209770
Abstract: In four experiments, listeners' sensitivity to combinations of pitch and duration was investigated. Experiments 1-3 involved "textures" of notes, which were created by repeatedly sounding one of two notes (e.g., C4 quarter note D4 eighth note), so that each note had an equal chance of occurring at any point within a texture. Experiment 1 showed that if a texture change was effected by introducing a pitch or duration that was not in the initial texture, the change was perceived by both attentive and distracted listeners. If a texture change was effected by combining the pitch of one note with the duration of the other note in the initial texture, and vice versa, it was perceived only if the listeners were attentive. Sensitivity to pitch/duration combinations was poorer when the pitch difference between component notes of textures was increased (Experiment 2), but it was better when the difference in duration between component notes was increased (Experiment 3). In Experiment 4, listeners' sensitivity to combinations of pitch pattern and durational pattern in brief sequences was examined. Listeners were sensitive to the manner in which parameter patterns were combined when they were attentive, but not when they were distracted. The results are discussed in view of feature-integration theory and its application to music cognition.
Publisher: Springer Science and Business Media LLC
Date: 1992
DOI: 10.1007/BF00937134
Publisher: Springer Science and Business Media LLC
Date: 1993
DOI: 10.3758/BF03211711
Abstract: Perceptual relationships between four-voice harmonic sequences and single voices were examined in three experiments. In Experiment 1, listeners rated the extent to which single voices were musically consistent with harmonic sequences. When harmonic sequences did not change key, judgments were influenced by three sources of congruency: melody (whether the single voice was the same as the soprano voice of the harmonic sequence), chord progression (whether the single voice could be harmonized to give rise to the chord progression of the harmonic sequence), and key structure (whether or not the single voice implied modulation). When key changes occurred, sensitivity to sources of congruency was reduced. In Experiment 2, another interpretation of the results was examined: that consistency ratings were based on congruency in well-formedness. Listeners provided well-formedness ratings of the single voices and harmonic sequences. A multiple regression analysis suggested that consistency ratings were based not merely on well-formedness but on congruency in melody, chord progression, and key structure. In Experiment 3, listeners rated the extent of modulation in harmonic sequences and in each voice of the sequences. Discrimination between modulation conditions was greater for single voices than for harmonic sequences, suggesting that abstraction of key from melody may occur without reference to implied harmony. A partially hierarchical system for processing melody, harmony, and key is proposed.
Publisher: Springer Science and Business Media LLC
Date: 1992
DOI: 10.1007/BF00937133
Publisher: Macquarie Centre for Cognitive Science
Date: 2010
DOI: 10.5096/ASCS20098
Publisher: Oxford University Press
Date: 08-09-2016
Publisher: SAGE Publications
Date: 02-2011
DOI: 10.1080/17470218.2010.495408
Abstract: The ideomotor principle predicts that perception will modulate action where overlap exists between perceptual and motor representations of action. This effect is demonstrated with auditory stimuli. Previous perceptual evidence suggests that pitch contour and pitch distance in tone sequences may elicit tonal motion effects consistent with listeners’ implicit awareness of the lawful dynamics of locomotive bodies. To examine modulating effects of perception on action, participants in a continuation tapping task produced a steady tempo. Auditory tones were triggered by each tap. Pitch contour randomly and persistently varied within trials. Pitch distance between successive tones varied between trials. Although participants were instructed to ignore them, tones systematically affected finger dynamics and timing. Where pitch contour implied positive acceleration, the following tap and the intertap interval (ITI) that it completed were faster. Where pitch contour implied negative acceleration, the following tap and the ITI that it completed were slower. Tempo was faster with greater pitch distance. Musical training did not predict the magnitude of these effects. There were no generalized effects on timing variability. Pitch contour findings demonstrate how tonal motion may elicit the spontaneous production of accents found in expressive music performance.
Publisher: Frontiers Media SA
Date: 18-09-2019
Publisher: Oxford University Press
Date: 08-09-2016
Publisher: No publisher found
Date: 2001
Publisher: Springer Science and Business Media LLC
Date: 28-12-2022
Publisher: Informa UK Limited
Date: 06-2011
DOI: 10.1080/02699931.2010.500159
Abstract: Several studies have used a visual search task to demonstrate that schematic negative-face targets are found faster and/or more efficiently than positive ones, with these findings taken as evidence that negative emotional expression is capable of guiding attentional allocation in visual search. A common hypothesis is that these effects should be disrupted by face inversion; however, this has not been consistently demonstrated, and raises the possibility of a perceptual confound. One candidate confound is the feature of "closure" (see Wolfe & Horowitz, 2004) caused by the down-turned mouth adjacent to the edge of the face. This was investigated in the present series of experiments. In Experiment 1, the speed advantage for upright negative faces was replicated. In Experiment 2, the effect was not disrupted by inversion, and an efficiency advantage emerged, suggesting that perceptual features could be causing the advantage. In Experiment 3, speed and efficiency effects were seen when this perceptual characteristic remained but face features were scrambled. Taken together, these findings suggest that visual search using schematic faces containing a curved-line mouth feature cannot provide a valid test of guided search by negative facial emotion unless this confound is controlled.
Publisher: Oxford University Press
Date: 02-03-2006
Start Date: 2013
End Date: 12-2016
Amount: $330,000.00
Funder: Australian Research Council
Start Date: 2009
End Date: 06-2013
Amount: $295,000.00
Funder: Australian Research Council
Start Date: 12-2021
End Date: 12-2026
Amount: $416,369.00
Funder: Australian Research Council
Start Date: 06-2019
End Date: 12-2023
Amount: $425,291.00
Funder: Australian Research Council
Start Date: 06-2007
End Date: 12-2010
Amount: $181,000.00
Funder: Australian Research Council
Start Date: 02-2016
End Date: 06-2021
Amount: $400,000.00
Funder: Australian Research Council
Start Date: 04-2011
End Date: 12-2018
Amount: $21,000,000.00
Funder: Australian Research Council