ORCID Profile
0000-0002-3933-9752
Current Organisations
Los Alamos National Lab, University of Melbourne
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Psychology | Psychological Methodology, Design and Analysis | History and Philosophy of the Social Sciences | Social and Community Psychology | Behaviour and Health | Expanding Knowledge in Psychology and Cognitive Sciences | Criminal Justice
Publisher: American Psychological Association (APA)
Date: 08-2018
DOI: 10.1037/PSPP0000136
Abstract: Personality traits are most often assessed using global self-reports of one's general patterns of thoughts, feelings, and behavior. However, recent theories have challenged the idea that global self-reports are the best way to assess traits. Whole Trait Theory postulates that repeated measures of a person's self-reported personality states (i.e., the average of many state self-reports) can be an alternative and potentially superior way of measuring a person's trait level (Fleeson & Jayawickreme, 2015). Our goal is to examine the validity of average state self-reports of personality for measuring between-person differences in what people are typically like. In order to validate average states as a measure of personality, we examine whether they are incrementally valid in predicting informant reports above and beyond global self-reports. In two samples, we find that average state self-reports tend to correlate with informant reports, although this relationship is weaker than the relationship between global self-reports and informant reports. Further, using structural equation modeling, we find that average state self-reports do not significantly predict informant reports independently of global self-reports. Our results suggest that average state self-reports may not contain information about between-person differences in personality traits beyond what is captured by global self-reports, and that average state self-reports may contain more self-bias than is commonly believed. We discuss the implications of these findings for research on daily manifestations of personality and the accuracy of self-reports.
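The incremental-validity logic described in this abstract can be sketched as a hierarchical regression on synthetic data. Everything below (variable names, effect sizes, sample size) is an illustrative assumption, not the study's data; the paper itself used structural equation modeling, so this OLS version only illustrates the idea of testing whether a second predictor adds explained variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Synthetic data: informant reports driven mainly by global self-reports,
# with average state reports overlapping heavily with the global reports.
global_self = rng.normal(size=n)
avg_state = 0.8 * global_self + 0.6 * rng.normal(size=n)
informant = 0.5 * global_self + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept column."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_global = r_squared(global_self, informant)
r2_both = r_squared(np.column_stack([global_self, avg_state]), informant)
delta_r2 = r2_both - r2_global  # incremental validity of average states
print(f"R2 global only: {r2_global:.3f}, added by avg states: {delta_r2:.3f}")
```

A small `delta_r2` corresponds to the abstract's finding that average states add little beyond global self-reports; with real data the comparison would be made with latent-variable models rather than raw OLS.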
Publisher: Elsevier BV
Date: 10-2019
DOI: 10.1016/J.TICS.2019.07.009
Abstract: Preregistration clarifies the distinction between planned and unplanned research by reducing unnoticed flexibility. This improves credibility of findings and calibration of uncertainty. However, making decisions before conducting analyses requires practice. During report writing, respecting both what was planned and what actually happened requires good judgment and humility in making claims.
Publisher: Center for Open Science
Date: 18-01-2021
Abstract: Objectives. Questionable research practices (QRPs) lead to incorrect research results and contribute to irreproducibility in science. Researchers and institutions have proposed open science practices (OSPs) to improve the detectability of QRPs and the credibility of science. We examine the prevalence of QRPs and OSPs in criminology, and researchers' opinions of those practices. Methods. We administered an anonymous survey to authors of articles published in criminology journals. Respondents self-reported their own use of 10 QRPs and 5 OSPs. They also estimated the prevalence of use by others, and reported their attitudes toward the practices. Results. QRPs and OSPs are both common in quantitative criminology, about as common as they are in other fields. Criminologists who responded to our survey support using QRPs in some circumstances, but are even more supportive of using OSPs. We did not detect a significant relationship between methodological training and either QRP or OSP use. Support for QRPs is negatively and significantly associated with support for OSPs. Perceived prevalence estimates for some practices resembled a uniform distribution, suggesting criminologists have little knowledge of the proportion of researchers that engage in certain questionable practices. Conclusions. Most quantitative criminologists in our sample use QRPs, and many use multiple QRPs. The substantial prevalence of QRPs raises questions about the validity and reproducibility of published criminological research. We found promising levels of OSP use, albeit at levels lagging what researchers endorse. The findings thus suggest that additional reforms are needed to decrease QRP use and increase the use of OSPs.
Publisher: SAGE Publications
Date: 21-12-2021
Abstract: Participants in experience sampling method (ESM) studies are “beeped” several times per day to report on their momentary experiences—but participants do not always answer the beep. Knowing whether there are systematic predictors of missing a report is critical for understanding the extent to which missing data threatens the validity of inferences from ESM studies. Here, 228 university students completed up to four ESM reports per day while wearing the Electronically Activated Recorder (EAR)—an unobtrusive audio recording device—for a week. These audio recordings provided an alternative source of information about what participants were doing when they missed or completed reports (3,678 observations). We predicted missing ESM reports from 46 variables coded from the EAR recordings, and found very little evidence that missing an ESM report was correlated with constructs typically of interest to ESM researchers. These findings provide reassuring evidence for the validity of ESM research among relatively healthy university student samples.
Publisher: SAGE Publications
Date: 07-2023
Publisher: American Association for the Advancement of Science (AAAS)
Date: 26-06-2015
Abstract: Author guidelines for journals could help to promote transparency, openness, and reproducibility.
Publisher: Elsevier BV
Date: 08-2017
Publisher: Center for Open Science
Date: 10-02-2019
Abstract: The words that people use have been found to reflect stable psychological traits, but less is known about the extent to which everyday fluctuations in spoken language reflect transient psychological states. We explored within-person associations between spoken words and self-reported state emotion among 185 participants who wore the Electronically Activated Recorder (EAR; an unobtrusive audio recording device) and completed experience sampling reports of their positive and negative emotions four times per day for seven days (1,579 observations). We examined language using the Linguistic Inquiry and Word Count program (LIWC; theoretically created dictionaries) and open-vocabulary themes (clusters of data-driven semantically related words). Although some studies give the impression that LIWC’s positive and negative emotion dictionaries can be used as indicators of emotion experience, we found that when computed on spoken language, LIWC emotion scores were not significantly associated with self-reports of state emotion experience. Exploration of other categories of language variables suggests a number of hypotheses about substantive everyday correlates of momentary positive and negative emotion that can be tested in future studies. These findings (1) suggest that LIWC positive and negative emotion dictionaries may not capture self-reported subjective emotion experience when applied to everyday speech, (2) emphasize the importance of establishing the validity of language-based measures within one’s target domain, (3) demonstrate the potential for developing new hypotheses about personality processes from the open-ended words that occur in everyday speech, and (4) extend perspectives on intra-individual variability to the domain of spoken language.
Publisher: Center for Open Science
Date: 08-10-2016
Abstract: Here, we provide you with supplemental material (additional tables, data, R code) and a preprint of the manuscript "Zooming into Real-Life Extraversion – How Personality and Context Shape Sociability in Social Interactions" by Breil et al. (under review). Abstract: What predicts sociable behavior? While main effects of personality and situation characteristics on sociability are well established, the determinants of sociable behavior within real-life social interactions are understudied. Moreover, although such effects are often hypothesized, there is to date little evidence of person-situation interaction effects. Finally, previous research focused on self-reported behavior ratings, and less is known about the partner’s social perspective, i.e., how partners perceive and influence an actor’s behavior. In the current research we investigated predictors of sociable behavior in real-life social interactions across social perspectives, including person and situation main effects as well as person-situation interaction effects. In two experience-sampling studies (Study 1: N = 394, US, time-based; Study 2: N = 124, Germany, event-based), we assessed personality traits with self- and informant reports, self-reported sociable behavior during real-life social interaction, and corresponding information on the situation (dimensional ratings of situation characteristics and categorical situation classifications). In Study 2, we additionally assessed interaction partner-reported behavior. Multilevel analyses provided consistent evidence for main effects of personality and situation features, and for person-situation interaction effects. First, extraverts acted more sociably in general. Second, individuals behaved more sociably in hedonic/positive/low-duty situations (vs. eudaimonic/negative/high-duty situations). Third, the latter was particularly true for extraverts. Further specific interaction effects were found for the other social perspectives. These results are discussed regarding the complex interplay of persons and situations in shaping human behavior.
Publisher: The Royal Society
Date: 04-2022
DOI: 10.1098/RSOS.200048
Abstract: What research practices should be considered acceptable? Historically, scientists have set the standards for what constitutes acceptable research practices. However, there is value in considering non-scientists’ perspectives, including research participants'. We surveyed 1,873 participants from MTurk and university subject pools after their participation in one of eight minimal-risk studies. We asked participants how they would feel if (mostly) common research practices were applied to their data: p-hacking/cherry-picking results, selective reporting of studies, Hypothesizing After Results are Known (HARKing), committing fraud, conducting direct replications, sharing data, sharing methods, and open access publishing. An overwhelming majority of psychology research participants think questionable research practices (e.g. p-hacking, HARKing) are unacceptable (68.3–81.3%), and were supportive of practices to increase transparency and replicability (71.4–80.1%). A surprising number of participants expressed positive or neutral views toward scientific fraud (18.7%), raising concerns about data quality. We grapple with this concern and interpret our results in light of the limitations of our study. Despite the ambiguity in our results, we argue that there is evidence (from our study and others’) that researchers may be violating participants' expectations and should be transparent with participants about how their data will be used.
Publisher: Center for Open Science
Date: 03-01-2022
Abstract: Scientists are increasingly concerned with making their work easy to verify and build upon. Associated practices include sharing data, materials, and analytic scripts, and preregistering protocols. This has been referred to as a “credibility revolution”. The credibility of empirical legal research has been questioned in the past due to its distinctive peer review system and because the legal background of its researchers means that many often are not trained in study design or statistics. Still, there has been no systematic study of transparency and credibility-related characteristics of published empirical legal research. To fill this gap and provide an estimate of current practices that can be tracked as the field evolves, we assessed 300 empirical articles from highly ranked law journals including both faculty-edited journals and student-edited journals. We found high levels of article accessibility (86% could be accessed without a subscription, 95% CI = [82%, 90%]), especially among student-edited journals (100% accessibility). Few articles stated that a study’s data are available (19%, 95% CI = [15%, 23%]), and only about half of those datasets are reportedly available without contacting the author. Preregistration (3%, 95% CI = [1%, 5%]) and availability of analytic scripts (6%, 95% CI = [4%, 9%]) were very uncommon. We suggest that empirical legal researchers and the journals that publish their work cultivate norms and practices to encourage research credibility.
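As a sanity check, the reported accessibility interval can be approximated with a standard normal-approximation (Wald) confidence interval for a proportion. The abstract does not say which interval method the authors used, so this is only an assumed reconstruction:

```python
import math

# 86% of the 300 assessed articles were accessible without a subscription.
p_hat, n = 0.86, 300

# Normal-approximation 95% CI: p_hat +/- 1.96 * sqrt(p_hat * (1 - p_hat) / n)
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"95% CI: [{lo:.0%}, {hi:.0%}]")  # prints 95% CI: [82%, 90%]
```

The result lands on the reported [82%, 90%], so the abstract's interval is at least consistent with this simple method.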
Publisher: University of California Press
Date: 2017
DOI: 10.1525/COLLABRA.74
Abstract: When consumers of science (readers and reviewers) lack relevant details about the study design, data, and analyses, they cannot adequately evaluate the strength of a scientific study. Lack of transparency is common in science, and is encouraged by journals that place more emphasis on the aesthetic appeal of a manuscript than the robustness of its scientific claims. In doing this, journals are implicitly encouraging authors to do whatever it takes to obtain eye-catching results. To achieve this, researchers can use common research practices that beautify results at the expense of the robustness of those results (e.g., p-hacking). The problem is not engaging in these practices, but failing to disclose them. A car whose carburetor is duct-taped to the rest of the car might work perfectly fine, but the buyer has a right to know about the duct-taping. Without high levels of transparency in scientific publications, consumers of scientific manuscripts are in a similar position as buyers of used cars – they cannot reliably tell the difference between lemons and high quality findings. This phenomenon – quality uncertainty – has been shown to erode trust in economic markets, such as the used car market. The same problem threatens to erode trust in science. The solution is to increase transparency and give consumers of scientific research the information they need to accurately evaluate research. Transparency would also encourage researchers to be more careful in how they conduct their studies and write up their results. To make this happen, we must tie journals’ reputations to their practices regarding transparency. Reviewers hold a great deal of power to make this happen, by demanding the transparency needed to rigorously evaluate scientific manuscripts. The public expects transparency from science, and appropriately so – we should be held to a higher standard than used car salespeople.
Publisher: Center for Open Science
Date: 23-12-2016
Abstract: People fluctuate in their behavior as they go about their daily lives, but little is known about the processes underlying these fluctuations. In two ecological momentary assessment studies (Ns = 124, 415), we examined the extent to which negative and positive affect accounted for the within-person variance in Big Five states. Participants were prompted six times a day over six days (Study 1) or four times a day over two weeks (Study 2) to report their recent thoughts, feelings, and behaviors. Multilevel modeling results indicated that negative and positive affect account for most, but not all, of the within-person variance in personality states. Importantly, situation variables predicted variance in some personality states even after accounting for fluctuations in affect, indicating that fluctuations in personality states may be more than fluctuations in state affect.
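The variance-partitioning question in this abstract (how much within-person fluctuation in a personality state is accounted for by affect) can be sketched with person-mean centering on synthetic data. The numbers and structure below are invented for illustration and are deliberately simpler than the multilevel models the study used:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_obs = 100, 20

# Synthetic momentary data: each person's state fluctuates partly with affect.
person_mean = rng.normal(size=(n_people, 1))
affect = rng.normal(size=(n_people, n_obs))
state = person_mean + 0.7 * affect + 0.3 * rng.normal(size=(n_people, n_obs))

# Person-mean centering removes between-person differences, isolating
# within-person fluctuations.
state_w = state - state.mean(axis=1, keepdims=True)
affect_w = affect - affect.mean(axis=1, keepdims=True)

# Share of within-person state variance accounted for by within-person affect.
slope = (state_w * affect_w).sum() / (affect_w ** 2).sum()
resid = state_w - slope * affect_w
share = 1 - resid.var() / state_w.var()
print(f"within-person variance explained by affect: {share:.2f}")
```

In this toy setup affect explains most, but not all, of the within-person variance, mirroring the qualitative pattern the abstract reports; a real analysis would use multilevel modeling with separate slopes per person.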
Publisher: Center for Open Science
Date: 26-02-2018
Abstract: The present study aimed to replicate and extend findings by Mehl, Vazire, Holleran and Clark (2010) that individuals with higher well-being tend to spend less time alone and more time interacting with others (e.g., greater conversation quantity), and engage in less small talk and more substantive conversations (e.g., greater conversation quality). To test the robustness of these effects in a larger and more diverse sample, we used Bayesian integrative data analysis to pool data on subjective life satisfaction and observed daily conversations from three heterogeneous adult samples, in addition to the original sample (N = 486). We found moderate associations between life satisfaction and amount of alone time, conversation time, and substantive conversations, but no reliable association with small talk. Personality did not substantially moderate these associations. The failure to replicate the original small talk effect is theoretically and practically important as it has garnered considerable scientific and lay interest.
Publisher: SAGE Publications
Date: 03-07-2019
Abstract: Readers of peer-reviewed research may assume that the reported statistical analyses supporting scientific claims have been closely scrutinized and surpass a high-quality threshold. However, widespread misunderstanding and misuse of statistical concepts and methods suggests that suboptimal or erroneous statistical practice is routinely overlooked during peer review in psychology. Here, we explore whether psychology journals could ameliorate some of the field’s statistical ailments by adopting specialized statistical review: a focused technical assessment, performed by statistical experts, that addresses the analysis and presentation of quantitative information and supplements regular peer review. We discuss evidence from a recent survey of journal editors suggesting that specialized statistical review may be unusual in psychology journals and is regarded by many editors as unnecessary. We contrast these views with those in the biomedical domain, where statistical review has been considered a partial preventive measure against the improper use of statistics since the late 1970s. We suggest that the current “credibility revolution” presents an opportune occasion for psychology journals to consider adopting specialized statistical review.
Publisher: Center for Open Science
Date: 14-08-2019
Abstract: Preregistration clarifies the distinction between planned and unplanned research by reducing unnoticed flexibility. This improves credibility of findings and calibration of uncertainty. However, making decisions before conducting analyses requires practice. During report writing, respecting both what was planned and what actually happened requires good judgment and humility in making claims.
Publisher: SAGE Publications
Date: 21-02-2018
Abstract: Dijksterhuis and van Knippenberg (1998) reported that participants primed with a category associated with intelligence (“professor”) subsequently performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence (“soccer hooligans”). In two unpublished replications of this study designed to verify the appropriate testing procedures, Dijksterhuis, van Knippenberg, and Holland observed a smaller difference between conditions (2%–3%) as well as a gender difference: Men showed the effect (9.3% and 7.6%), but women did not (0.3% and −0.3%). The procedure used in those replications served as the basis for this multilab Registered Replication Report. A total of 40 laboratories collected data for this project, and 23 of these laboratories met all inclusion criteria. Here we report the meta-analytic results for those 23 direct replications (total N = 4,493), which tested whether performance on a 30-item general-knowledge trivia task differed between these two priming conditions (results of supplementary analyses of the data from all 40 labs, N = 6,454, are also reported). We observed no overall difference in trivia performance between participants primed with the “professor” category and those primed with the “hooligan” category (0.14%) and no moderation by gender.
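The meta-analytic pooling step used in Registered Replication Reports like this one can be illustrated with a fixed-effect inverse-variance average. The per-lab numbers below are hypothetical, not the project's data:

```python
import numpy as np

# Hypothetical per-lab effect estimates (% difference between priming
# conditions) and their standard errors; purely illustrative values.
effects = np.array([1.2, -0.5, 0.3, 0.9, -1.1, 0.1])
ses = np.array([0.8, 0.6, 0.7, 0.9, 0.5, 0.6])

# Fixed-effect meta-analysis: weight each lab by the inverse of its
# sampling variance, then average.
w = 1 / ses ** 2
pooled = (w * effects).sum() / w.sum()
pooled_se = (1 / w.sum()) ** 0.5
print(f"pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f}")
```

Pooling shrinks the standard error below that of any single lab, which is how a multilab report can rule out even small effects around zero, as in the 0.14% overall difference reported above.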
Publisher: Center for Open Science
Date: 23-01-2018
Abstract: In this essay, I explore what the credibility revolution means for productivity, creativity, and progress in psychology. I begin by reviewing the most salient changes that have been brought about by the credibility revolution. Then I review some common concerns about the implications of these changes for psychological science, and consider each of these concerns in turn. I conclude that the changes brought about by the credibility revolution are likely to hamper the rate of individual researchers’ productivity, could have a negative or positive impact on creativity depending on how the changes are implemented and what is meant by creativity, and are likely to increase the rate of scientific progress.
Publisher: Springer Science and Business Media LLC
Date: 24-06-2021
DOI: 10.1038/S41562-021-01142-4
Abstract: In registered reports (RRs), initial peer review and in-principle acceptance occur before knowing the research outcomes. This combats publication bias and distinguishes planned from unplanned research. How RRs could improve the credibility of research findings is straightforward, but there is little empirical evidence. Also, there could be unintended costs such as reducing novelty. Here, 353 researchers peer reviewed a pair of papers from 29 published RRs from psychology and neuroscience and 57 non-RR comparison papers. RRs numerically outperformed comparison papers on all 19 criteria (mean difference 0.46, scale range -4 to +4) with effects ranging from RRs being statistically indistinguishable from comparison papers in novelty (0.13, 95% credible interval [-0.24, 0.49]) and creativity (0.22, [-0.14, 0.58]) to sizeable improvements in rigour of methodology (0.99, [0.62, 1.35]) and analysis (0.97, [0.60, 1.34]) and overall paper quality (0.66, [0.30, 1.02]). RRs could improve research quality while reducing publication bias and ultimately improve the credibility of the published literature.
Publisher: American Psychological Association (APA)
Date: 04-2022
DOI: 10.1037/PSPP0000388
Abstract: What do people think their best and worst personality traits are? Do their friends agree? Across three samples, 463 college students ("targets") and their friends freely described two traits they most liked and two traits they most disliked about the target. Coders categorized these open-ended trait descriptors into high or low poles of six trait domains (extraversion, agreeableness, conscientiousness, emotional stability, openness, and honesty-humility) and judged whether targets and friends reported the same specific best and worst traits. Best traits almost exclusively reflected high levels of the major trait domains (especially high agreeableness and extraversion). In contrast, although worst traits typically reflected low levels of these traits (especially low emotional stability), they sometimes also revealed the downsides of having high levels of these traits (e.g., high extraversion: "loud"; high agreeableness: "people-pleaser"). Overall, targets and friends mentioned similar kinds of best traits; however, targets emphasized low emotional stability worst traits more than friends did, whereas friends emphasized low prosociality worst traits more than targets did. Targets and friends also showed a moderate amount of self-other agreement on what the targets' best and worst traits were. These results (a) shed light on the traits that people consider to be most important in themselves and their friends, (b) suggest that the desirability of some traits may be in the eye of the beholder, (c) reveal the mixed blessings of different traits, and, ultimately, (d) provide a nuanced perspective on what it means for a trait to be "good" or "bad." (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Publisher: American Psychological Association (APA)
Date: 2011
DOI: 10.1037/A0023781
Publisher: SAGE Publications
Date: 04-2011
Abstract: Most people believe that they know themselves better than anyone else knows them. However, a complete picture of what a person is like requires both the person’s own perspective and the perspective of others who know him or her well. People’s perceptions of their own personalities, while largely accurate, contain important omissions. Some of these blind spots are likely due to a simple lack of information, whereas others are due to motivated distortions in our self-perceptions. Perhaps for these reasons, others can perceive some aspects of personality better than the self can. This is especially true for traits that are very desirable or undesirable, when motivational factors are most likely to distort self-perceptions. Therefore, much can be learned about a person’s personality from how he or she is seen by others. Future research should examine how people can tap into others' knowledge to improve self-knowledge.
Publisher: SAGE Publications
Date: 03-07-2018
Abstract: In the present study, we aimed to replicate and extend findings by Mehl, Vazire, Holleran, and Clark (2010) that individuals with higher well-being tend to spend less time alone and more time interacting with others (e.g., greater conversation quantity) and engage in less small talk and more substantive conversations (e.g., greater conversation quality). To test the robustness of these effects in a larger and more diverse sample, we used Bayesian integrative data analysis to pool data on subjective life satisfaction and observed daily conversations from three heterogeneous adult samples, in addition to the original sample (N = 486). We found moderate associations between life satisfaction and amount of alone time, conversation time, and substantive conversations, but no reliable association with small talk. Personality did not substantially moderate these associations. The failure to replicate the original small-talk effect is theoretically and practically important, as it has garnered considerable scientific and lay interest.
Publisher: American Psychological Association (APA)
Date: 11-2008
DOI: 10.1037/A0013314
Abstract: Many people assume that they know themselves better than anyone else knows them. Recent research on inaccuracies in self-perception, however, suggests that self-knowledge may be more limited than people typically assume. In this article, the authors examine the possibility that people may know a person as well as (or better than) that person knows himself or herself. In Study 1, the authors document the strength of laypeople's beliefs that the self is the best expert. In Study 2, the authors provide a direct test of self- and other-accuracy using an objective and representative behavioral criterion. To do this, the authors compared self- and other-ratings of daily behavior to real-life measures of act frequencies assessed unobtrusively over 4 days. Our results show that close others are as accurate as the self in predicting daily behavior. Furthermore, accuracy varies across behaviors for both the self and for others, and the two perspectives often independently predict behavior. These findings suggest that there is no single perspective from which a person is known best and that both the self and others possess unique insight into how a person typically behaves.
Publisher: Center for Open Science
Date: 15-09-2021
Abstract: The credibility revolution in social science has highlighted the importance of conducting replication studies. Despite this growing awareness, the value of direct replications is still hotly debated. In this article, we identify three main functions served by replication. We argue that replications are valuable when they target important or influential studies, when they provide a general estimate of the replicability rate of a population of published articles, and when they create incentives favoring replicable research. We therefore argue that the scientific community should organize systematic large-scale replication audits of two subsets of journals’ published articles: a subset of the most-cited articles, and a subset of randomly selected articles that would provide an estimate of the replicability of the journals' articles. These replicability audits should pave the way for more general quality audits of scientific journals.
Publisher: Center for Open Science
Date: 02-2021
Abstract: Self-correction—a key feature distinguishing science from pseudoscience—requires that scientists update their beliefs in light of new evidence. However, people are often reluctant to change their beliefs. We examined self-correction in action, tracking research psychologists’ beliefs in psychological effects before and after the completion of four large-scale replication projects. We found that psychologists did update their beliefs; they updated as much as they predicted they would, but not as much as our Bayesian model suggests they should if they trust the results. We found no evidence that psychologists became more critical of replications when it would have preserved their pre-existing beliefs. We also found no evidence that personal investment or lack of expertise discouraged belief updating, but people higher on intellectual humility updated their beliefs slightly more. Overall, our results suggest that replication studies can contribute to self-correction within psychology, but psychologists may underweight their evidentiary value.
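The Bayesian benchmark mentioned in this abstract amounts to updating a prior belief by the likelihood of the replication outcome. A minimal numeric illustration, with all probabilities invented for the example (they are not the study's model parameters):

```python
# Illustrative Bayesian update after a successful replication.
prior = 0.60               # belief the effect is real, before the replication
p_success_if_real = 0.80   # chance of a successful replication if real
p_success_if_null = 0.10   # chance of a "successful" replication if not real

# Bayes' rule: P(real | success) = P(success | real) P(real) / P(success)
evidence = prior * p_success_if_real + (1 - prior) * p_success_if_null
posterior = prior * p_success_if_real / evidence
print(f"posterior belief after success: {posterior:.3f}")
```

Under these made-up numbers a successful replication should push belief from 0.60 to about 0.92; the abstract's point is that researchers' actual updates fall short of what such a calculation implies.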
Publisher: Springer Science and Business Media LLC
Date: 30-06-2021
Publisher: Center for Open Science
Date: 08-06-2023
Abstract: Limitations are an inherent part of the research process. Looking these limitations in the eye is no easy task, but it is important if the field of psychology wishes to be considered a credible science. Current practices for reporting limitations in psychology leave much room for improvement. Concrete guidance for discussing specific limitations is lacking. The aim of this tutorial is to enable psychology researchers to “own” their research limitations (inspired by Whitcomb et al., 2017). We provide general recommendations, such as the ‘steel-person principle’ (reflecting on what the best argument is against your conclusions), and specific advice for different types of limitations. We assembled a team with expertise in assessing various aspects of validity, and structured this tutorial around recommendations for discussing common threats to construct, internal, external, and statistical conclusion validity (Shadish et al., 2002). Our goal is to prompt psychologists to write more deeply and clearly about the limitations of their research, and to hold each other to higher standards when reviewing each other’s work. A major limitation of this tutorial is that our advice risks being applied formulaically, and as a substitute for critical thinking about limitations. Further, this tutorial should not replace efforts to prevent or reduce research limitations in the first place. Instead, readers should use this tutorial as a starting point for reflecting on their limitations which should be thoughtfully incorporated in all relevant conclusions throughout their paper.
Publisher: Center for Open Science
Date: 12-12-2018
Abstract: Recent popular claims surrounding virtual assistants suggest that computers will soon be able to hear our emotions. Supporting this possibility, promising work has harnessed big data and emergent technologies to automatically predict stable levels of one specific emotion, happiness, at the community (e.g., counties) and trait (i.e., people) levels. Furthermore, research in affective science has shown that non-verbal vocal bursts (e.g., sighs, gasps) and specific acoustic features (e.g., pitch, energy) can differentiate between distinct emotions (e.g., anger, happiness), and that machine-learning algorithms can detect these differences. Yet, to our knowledge, no work has tested whether computers can automatically detect normal, everyday within-person fluctuations in one emotional state from acoustic analysis. To address this issue in the context of happy mood, across three studies (total N = 20,197), we asked participants to repeatedly report their state happy mood, and to provide audio recordings—including both direct speech and ambient sounds—from which we extracted acoustic features. Using three different machine learning algorithms (neural networks, random forests, and support vector machines) and two sets of acoustic features, we found that acoustic features yielded minimal predictive insight into happy mood above chance. Neither multilevel modeling analyses nor human coders provided additional insight into state happy mood. These findings suggest that it is not yet possible to automatically assess fluctuations in one emotional state (i.e., happy mood) from acoustic analysis, pointing to a critical future direction for affective scientists interested in acoustic analysis of emotion and automated emotion detection.
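The "above chance" benchmark in this abstract can be sketched by comparing a regression trained on uninformative features against a mean-only baseline on held-out data. The setup below is purely synthetic and uses ordinary least squares rather than the studies' algorithms or acoustic features:

```python
import numpy as np

rng = np.random.default_rng(42)
n_train, n_test, n_feat = 500, 500, 10

# Synthetic setup: "acoustic" features that carry no information about mood.
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
y_train = rng.normal(size=n_train)  # state happy mood (standardized)
y_test = rng.normal(size=n_test)

# Closed-form OLS fit on the training half, evaluated on the held-out half.
Xt = np.column_stack([np.ones(n_train), X_train])
beta, *_ = np.linalg.lstsq(Xt, y_train, rcond=None)
pred = np.column_stack([np.ones(n_test), X_test]) @ beta

rmse_model = np.sqrt(np.mean((y_test - pred) ** 2))
rmse_baseline = np.sqrt(np.mean((y_test - y_train.mean()) ** 2))
print(f"model RMSE: {rmse_model:.3f}, mean-baseline RMSE: {rmse_baseline:.3f}")
```

When the features are uninformative, the model's held-out error is no better than simply predicting the training mean, which is the pattern the abstract describes as "minimal predictive insight above chance."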
Publisher: SAGE Publications
Date: 17-09-2009
Abstract: Despite the crucial role of physical appearance in forming first impressions, little research has examined the accuracy of personality impressions based on appearance alone. This study examined the accuracy of observers’ impressions on 10 personality traits based on full-body photographs using criterion measures based on self and peer reports. When targets’ posture and expression were constrained (standardized condition), observers’ judgments were accurate for extraversion, self-esteem, and religiosity. When targets were photographed with a spontaneous pose and facial expression (spontaneous condition), observers’ judgments were accurate for almost all of the traits examined. Lens model analyses demonstrated that both static cues (e.g., clothing style) and dynamic cues (e.g., facial expression, posture) offered valuable personality-relevant information. These results suggest that personality is manifested through both static and expressive channels of appearance, and observers use this information to form accurate judgments for a variety of traits.
Publisher: American Psychological Association (APA)
Date: 1
DOI: 10.1037/EMO0000571
Abstract: Recent popular claims surrounding virtual assistants suggest that computers will soon be able to hear our emotions. Supporting this possibility, promising work has harnessed big data and emergent technologies to automatically predict stable levels of one specific emotion, happiness, at the community (e.g., counties) and trait (i.e., people) levels. Furthermore, research in affective science has shown that nonverbal vocal bursts (e.g., sighs, gasps) and specific acoustic features (e.g., pitch, energy) can differentiate between distinct emotions (e.g., anger, happiness) and that machine-learning algorithms can detect these differences. Yet, to our knowledge, no work has tested whether computers can automatically detect normal, everyday, within-person fluctuations in one emotional state from acoustic analysis. To address this issue in the context of happy mood, across 3 studies (total N = 20,197), we asked participants to repeatedly report their state happy mood and to provide audio recordings-including both direct speech and ambient sounds-from which we extracted acoustic features. Using three different machine learning algorithms (neural networks, random forests, and support vector machines) and two sets of acoustic features, we found that acoustic features yielded minimal predictive insight into happy mood above chance. Neither multilevel modeling analyses nor human coders provided additional insight into state happy mood. These findings suggest that it is not yet possible to automatically assess fluctuations in one emotional state (i.e., happy mood) from acoustic analysis, pointing to a critical future direction for affective scientists interested in acoustic analysis of emotion and automated emotion detection. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
Publisher: Springer Science and Business Media LLC
Date: 19-08-2021
Publisher: Elsevier BV
Date: 12-2008
Publisher: Center for Open Science
Date: 05-10-2016
Abstract: The Transparency and Openness Promotion (TOP) Committee met in November 2014 to address one important element of the incentive systems - journals’ procedures and policies for publication. The outcome of the effort is the TOP Guidelines. There are eight standards in the TOP Guidelines; each moves scientific communication toward greater openness. These standards are modular, facilitating adoption in whole or in part. However, they also complement each other, in that commitment to one standard may facilitate adoption of others. Moreover, the guidelines are sensitive to barriers to openness by articulating, for example, a process for exceptions to sharing because of ethical issues, intellectual property concerns, or availability of necessary resources.
Publisher: SAGE Publications
Date: 18-02-2010
Publisher: Center for Open Science
Date: 07-2022
Abstract: Every research project has limitations. The limitations that authors acknowledge in their articles offer a glimpse into some of the concerns that occupy a field’s attention. We examine the types of limitations authors discuss in their published articles by categorizing them according to the four validities framework and investigate whether the field’s attention to each of the four validities has shifted from 2010 to 2020. We selected one journal in social and personality psychology (Social Psychological and Personality Science; SPPS), the subfield most in the crosshairs of psychology’s replication crisis. We sampled 440 articles (with half of those articles containing a subsection explicitly addressing limitations) and we identified and categorized 831 limitations across the 440 articles. Articles with limitations sections reported more limitations than those without (avg. 2.6 vs. 1.2 limitations per article). Threats to external validity were the most common type of reported limitation (est. 52% of articles) and threats to statistical conclusion validity were the least common (est. 17% of articles). Authors reported slightly more limitations over time. Despite the extensive attention paid to statistical conclusion validity in the scientific discourse throughout psychology’s credibility revolution, our results suggest that concerns about statistics-related issues were not reflected in social and personality psychologists’ reported limitations. The high prevalence of limitations concerning external validity might suggest it is time that we improve our practices in this area, rather than apologizing for these limitations after the fact.
Publisher: SAGE Publications
Date: 2010
Abstract: Can we trust our beliefs about the first impressions we make? The current article addresses this question by assessing “idiographic” meta-accuracy, or people’s ability to detect how another person views their characteristic pattern of traits, and people’s awareness of their level of meta-accuracy. Results from two samples suggest that people do achieve idiographic meta-accuracy (i.e., they know which traits a new acquaintance perceives as particularly characteristic of them) and that people’s beliefs about the first impression they make are well calibrated (i.e., the people who are relatively more confident in the accuracy of their metaperceptions are in fact more accurate). Implications of idiographic meta-accuracy and the calibration of meta-accuracy are discussed, as are the ways in which future research can improve our understanding of the process of metaperception formation and the interpersonal consequences of meta-accuracy.
Publisher: SAGE Publications
Date: 08-11-2013
Abstract: In this article, the Society for Personality and Social Psychology (SPSP) Task Force on Publication and Research Practices offers a brief statistical primer and recommendations for improving the dependability of research. Recommendations for research practice include (a) describing and addressing the choice of N (sample size) and consequent issues of statistical power, (b) reporting effect sizes and 95% confidence intervals (CIs), (c) avoiding “questionable research practices” that can inflate the probability of Type I error, (d) making available research materials necessary to replicate reported results, (e) adhering to SPSP’s data sharing policy, (f) encouraging publication of high-quality replication studies, and (g) maintaining flexibility and openness to alternative standards and methods. Recommendations for educational practice include (a) encouraging a culture of “getting it right,” (b) teaching and encouraging transparency of data reporting, (c) improving methodological instruction, and (d) modeling sound science and supporting junior researchers who seek to “get it right.”
Publisher: Wiley
Date: 11-2016
DOI: 10.1111/SPC3.12287
Publisher: Queensland University of Technology
Date: 28-07-2021
DOI: 10.5204/LTHJ.1875
Abstract: Fields closely related to empirical legal research (ELR) are enhancing their methods to improve the credibility of their findings. This includes making data, analysis codes and other materials openly available on digital repositories and preregistering studies. There are numerous benefits to these practices, such as research being easier to find and access through digital research methods. However, ELR appears to be lagging behind cognate fields. This may be partly due to a lack of field-specific meta-research and guidance. We sought to fill that gap by first evaluating credibility indicators in ELR, including a review of guidelines for legal journals. This review finds considerable room for improvement in how law journals regulate ELR. The remainder of the article provides practical guidance for the field. We start with general recommendations for empirical legal researchers and then turn to recommendations aimed at three commonly used empirical legal methods: content analyses of judicial decisions, surveys and qualitative studies. We end with suggestions for journals and law schools.
Publisher: Elsevier BV
Date: 08-2009
Publisher: American Psychological Association (APA)
Date: 2004
Publisher: Wiley
Date: 20-03-2013
Publisher: SAGE Publications
Date: 27-10-2023
DOI: 10.1177/17456916221101060
Abstract: The replication crisis and credibility revolution in the 2010s brought a wave of doubts about the credibility of social and personality psychology. We argue that as a field, we must reckon with the concerns brought to light during this critical decade. How the field responds to this crisis will reveal our commitment to self-correction. If we do not take the steps necessary to address our problems and simply declare the crisis to be over or the problems to be fixed without evidence, we risk further undermining our credibility. To fully reckon with this crisis, we must empirically assess the state of the field to take stock of how credible our science actually is and whether it is improving. We propose an agenda for metascientific research, and we review approaches to empirically evaluate and track where we are as a field (e.g., analyzing the published literature, surveying researchers). We describe one such project (Surveying the Past and Present State of Published Studies in Social and Personality Psychology) underway in our research group. Empirical evidence about the state of our field is necessary if we are to take self-correction seriously and if we hope to avert future crises.
Publisher: SAGE Publications
Date: 05-2006
DOI: 10.1207/S15327957PSPR1002_4
Abstract: Currently prominent models of narcissism (e.g., Morf & Rhodewalt, 2001) primarily explain narcissists' self-defeating behaviors in terms of conscious cognitive and affective processes. We propose that the disposition of impulsivity may also play an important role. We offer 2 forms of evidence. First, we present a meta-analysis demonstrating a strong positive relationship between narcissism and impulsivity. Second, we review and reinterpret the literature on 3 hallmarks of narcissism: self-enhancement, aggression, and negative long-term outcomes. Our reinterpretation argues that impulsivity provides a more parsimonious explanation for at least some of narcissists' self-defeating behavior than do existing models. These 2 sources of evidence suggest that narcissists' quest for the status and recognition they so intensely desire is thwarted, in part, by their lack of the self-control necessary to achieve those goals.
Publisher: University of Arizona
Date: 11-2021
DOI: 10.2458/JMMSS.3062
Abstract: For several decades, leading behavioral scientists have offered strong criticisms of the common practice of null hypothesis significance testing as producing spurious findings without strong theoretical or empirical support. But only in the past decade has this manifested as a full-scale replication crisis. We consider some possible reasons why, on or about December 2010, the behavioral sciences changed.
Publisher: Public Library of Science (PLoS)
Date: 31-07-2013
Publisher: Center for Open Science
Date: 26-09-2020
Abstract: What do people think their best and worst personality traits are? Do their friends agree? Across three samples, 463 college students (“targets”) and their friends freely described two traits they most liked and two traits they most disliked about the target. Coders categorized these open-ended trait descriptors into high or low poles of six trait domains (extraversion, agreeableness, conscientiousness, emotional stability, openness, and honesty-humility) and judged whether targets and friends reported the same specific best and worst traits. Best traits almost exclusively reflected high levels of the major trait domains (especially high agreeableness and extraversion). In contrast, although worst traits typically reflected low levels of these traits (especially low emotional stability), they sometimes also revealed the downsides of having high levels of these traits (e.g., high extraversion: “loud”; high agreeableness: “people-pleaser”). Overall, targets and friends mentioned similar kinds of best traits; however, targets emphasized low emotional stability worst traits more than friends did, whereas friends emphasized low prosociality worst traits more than targets did. Targets and friends also showed a moderate amount of self–other agreement on what the targets’ best and worst traits were. These results (a) shed light on the traits that people consider to be most important in themselves and their friends, (b) suggest that the desirability of some traits may be in the eye of the beholder, (c) reveal the mixed blessings of different traits, and, ultimately, (d) provide a nuanced perspective on what it means for a trait to be “good” or “bad.”
Publisher: Center for Open Science
Date: 24-09-2020
Abstract: Fields closely related to empirical legal research are enhancing their methods to improve the credibility of their findings. This includes making data, analysis code, and other materials openly available, and preregistering studies. Empirical legal research appears to be lagging behind other fields. This may be due, in part, to a lack of meta-research and guidance on empirical legal studies. The authors seek to fill that gap by evaluating some indicators of credibility in empirical legal research, including a review of guidelines at legal journals. They then provide both general recommendations for researchers, and more specific recommendations aimed at three commonly used empirical legal methods: case law analysis, surveys, and qualitative studies. They end with suggestions for policies and incentive systems that may be implemented by journals and law schools.
Publisher: SAGE Publications
Date: 17-01-2019
Abstract: Knowing yourself requires knowing not only what you are like in general (trait self-knowledge) but also how your personality fluctuates from moment to moment (state self-knowledge). We examined this latter form of self-knowledge. Participants (248 people; 2,938 observations) wore the Electronically Activated Recorder (EAR), an unobtrusive audio recorder, and completed experience-sampling self-reports of their personality states four times each day for 1 week. We estimated state self-knowledge by comparing self-reported personality states with consensual observer ratings of personality states coded from the EAR files, which formed the criterion for what participants were “actually” like in the moment. People had self-insight into their momentary extraversion, conscientiousness, and likely neuroticism, suggesting that people can accurately detect fluctuations in some aspects of their personality. However, the evidence for self-insight was weaker for agreeableness. This apparent self-ignorance may be partly responsible for interpersonal problems and for blind spots in trait self-knowledge.
Publisher: SAGE Publications
Date: 04-2022
DOI: 10.1177/09637214211067779
Abstract: Psychological science’s “credibility revolution” has produced an explosion of metascientific work on improving research practices. Although much attention has been paid to replicability (reducing false positives), improving credibility depends on addressing a wide range of problems afflicting psychological science, beyond simply making psychology research more replicable. Here we focus on the “four validities” and highlight recent developments—many of which have been led by early-career researchers—aimed at improving these four validities in psychology research. We propose that the credibility revolution in psychology, which has its roots in replicability, can be harnessed to improve psychology’s validity more broadly.
Publisher: Center for Open Science
Date: 24-08-2019
Abstract: Participants in experience sampling method (ESM) studies are “beeped” several times per day to report on their momentary experiences—but participants do not always answer the beep. Knowing whether there are systematic predictors of missing a report is critical for understanding the extent to which missing data threatens the validity of inferences from ESM studies. Here, 228 university students completed up to four ESM reports per day while wearing an unobtrusive audio recording device for a week. These audio recordings provided an alternative source of information about what participants were doing when they missed or completed reports (3,678 observations). We predicted missing ESM reports from 46 variables coded from the EAR recordings, and found very little evidence that missing an ESM report was correlated with constructs typically of interest to ESM researchers. These findings provide reassuring evidence for the validity of ESM research among relatively healthy university student samples.
Publisher: Elsevier BV
Date: 08-2017
Publisher: Center for Open Science
Date: 09-09-2018
Abstract: Knowing yourself requires knowing not just what you are like in general (trait self-knowledge), but also how your personality fluctuates from moment to moment (state self-knowledge). We examined this latter form of self-knowledge. Participants (248 people; 2,938 observations) wore the Electronically Activated Recorder (EAR), an unobtrusive audio recorder, and completed experience sampling (ESM) self-reports of their personality states four times each day for one week. We estimated state self-knowledge by comparing self-reported personality states to consensual observer ratings of personality states coded from the EAR files, which formed the criterion for what participants were “actually” like in the moment. People had self-insight into their momentary extraversion, conscientiousness, and likely neuroticism, suggesting that people can accurately detect fluctuations in some aspects of their personality. However, the evidence for self-insight was weaker for agreeableness. This apparent self-ignorance may be partly responsible for interpersonal problems and for blind spots in trait self-knowledge.
Publisher: American Psychological Association (APA)
Date: 02-2020
DOI: 10.1037/PSPP0000244
Abstract: The words that people use have been found to reflect stable psychological traits, but less is known about the extent to which everyday fluctuations in spoken language reflect transient psychological states. We explored within-person associations between spoken words and self-reported state emotion among 185 participants who wore the Electronically Activated Recorder (EAR; an unobtrusive audio recording device) and completed experience sampling reports of their positive and negative emotions 4 times per day for 7 days (1,579 observations). We examined language using the Linguistic Inquiry and Word Count program (LIWC; theoretically created dictionaries) and open-vocabulary themes (clusters of data-driven semantically related words). Although some studies give the impression that LIWC's positive and negative emotion dictionaries can be used as indicators of emotion experience, we found that when computed on spoken language, LIWC emotion scores were not significantly associated with self-reports of state emotion experience. Exploration of other categories of language variables suggests a number of hypotheses about substantive everyday correlates of momentary positive and negative emotion that can be tested in future studies. These findings (a) suggest that LIWC positive and negative emotion dictionaries may not capture self-reported subjective emotion experience when applied to everyday speech, (b) emphasize the importance of establishing the validity of language-based measures within one's target domain, (c) demonstrate the potential for developing new hypotheses about personality processes from the open-ended words that are used in everyday speech, and (d) extend perspectives on intraindividual variability to the domain of spoken language. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
Publisher: No publisher found
Date: 2006
Publisher: American Psychological Association (APA)
Date: 2011
DOI: 10.1037/A0024297
Abstract: Although people can accurately guess how others see them, many studies have suggested that this may only be because people generally assume that others see them as they see themselves. These findings raise the question: In their everyday lives, do people understand the distinction between how they see their own personality and how others see their personality? We examined whether people make this distinction, or whether people possess what we call meta-insight. In 3 studies, we assessed meta-insight for a broad range of traits (e.g., Big Five, intelligent, funny) across several naturalistic social contexts (e.g., first impression, friends). Our findings suggest that people can make valid distinctions between how they see themselves and how others see them. Thus, people seem to have some genuine insight into their reputation and do not achieve meta-accuracy only by capitalizing on the fact that others see them similarly to how they see themselves.
Publisher: Proceedings of the National Academy of Sciences
Date: 21-12-2021
Abstract: While the social sciences have made impressive progress in adopting transparent research practices that facilitate verification, replication, and reuse of materials, the problem of publication bias persists. Bias on the part of peer reviewers and journal editors, as well as the use of outdated research practices by authors, continues to skew literature toward statistically significant effects, many of which may be false positives. To mitigate this bias, we propose a framework to enable authors to report all results efficiently (RARE), with an initial focus on experimental and other prospective empirical social science research that utilizes public study registries. This framework depicts an integrated system that leverages the capacities of existing infrastructure in the form of public registries, institutional review boards, journals, and granting agencies, as well as investigators themselves, to efficiently incentivize full reporting and thereby, improve confidence in social science findings. In addition to increasing access to the results of scientific endeavors, a well-coordinated research ecosystem can prevent scholars from wasting time investigating the same questions in ways that have not worked in the past and reduce wasted funds on the part of granting agencies.
Publisher: Wiley
Date: 12-2008
Publisher: American Psychological Association
Date: 2015
DOI: 10.1037/14343-012
Publisher: AIP Publishing
Date: 10-2021
DOI: 10.1063/5.0057878
Abstract: We developed tools and a workflow for real-time analysis of data from dynamic diamond anvil cell experiments performed at user light sources. These tools allow users to determine the phases of matter observed during the compression of materials in order to make decisions during an experiment to improve the quality of experimental results and maximize the use of scarce experimental facility time. The tools fill a gap in dynamic compression data analysis tools that are real-time, are flexible to the needs of high-pressure scientists, connect to automated processing of results, can be easily incorporated into workflows with existing tools and data formats, and support remote experimental data analysis workflows. Specific analytics developed include novel automated two-peak analysis for overlapping peaks and multiple phases, coordinated views of pressure and temperature values, full-compression contour plots, and configurable views of integrated x-ray diffraction. We present an experimental use case to show how the tools produce real-time analytics that help the scientists revise parameters for the next compression.
Publisher: SAGE Publications
Date: 09-2017
DOI: 10.1002/PER.2128
Publisher: SAGE Publications
Date: 05-2015
DOI: 10.1002/PER.2005
Publisher: Springer Science and Business Media LLC
Date: 28-10-2021
DOI: 10.1038/S41562-021-01203-8
Abstract: The replication crisis in the social, behavioural and life sciences has spurred a reform movement aimed at increasing the credibility of scientific studies. Many of these credibility-enhancing reforms focus, appropriately, on specific research and publication practices. A less often mentioned aspect of credibility is the need for intellectual humility or being transparent about and owning the limitations of our work. Although intellectual humility is presented as a widely accepted scientific norm, we argue that current research practice does not incentivize intellectual humility. We provide a set of recommendations on how to increase intellectual humility in research articles and highlight the central role peer reviewers can play in incentivizing authors to foreground the flaws and uncertainty in their work, thus enabling full and transparent evaluation of the validity of research.
Publisher: The Japan Association for Philosophy of Science
Date: 2015
Publisher: American Psychological Association (APA)
Date: 10-2023
DOI: 10.1037/PSPP0000458
Publisher: Center for Open Science
Date: 08-03-2023
Abstract: We introduce a tool to aid reviewers in identifying potential threats to the validity of empirical research. This tool was developed through consensus-based expert feedback. Reviewers can visit seaboat.io to identify relevant validity threats and generate reports to share alongside traditional peer review reports or in post-publication peer review.
Publisher: Cambridge University Press
Date: 02-04-2020
Publisher: Frontiers Media SA
Date: 2011
Publisher: Oxford University Press
Date: 11-2015
Publisher: Public Library of Science (PLoS)
Date: 23-02-2021
DOI: 10.1371/JOURNAL.PONE.0246675
Abstract: Academic journals provide a key quality-control mechanism in science. Yet, information asymmetries and conflicts of interests incentivize scientists to deceive journals about the quality of their research. How can honesty be ensured, despite incentives for deception? Here, we address this question by applying the theory of honest signaling to the publication process. Our models demonstrate that several mechanisms can ensure honest journal submission, including differential benefits, differential costs, and costs to resubmitting rejected papers. Without submission costs, scientists benefit from submitting all papers to high-ranking journals, unless papers can only be submitted a limited number of times. Counterintuitively, our analysis implies that inefficiencies in academic publishing (e.g., arbitrary formatting requirements, long review times) can serve a function by disincentivizing scientists from submitting low-quality work to high-ranking journals. Our models provide simple, powerful tools for understanding how to promote honest paper submission in academic publishing.
Publisher: University of California Press
Date: 2019
DOI: 10.1525/COLLABRA.170
Abstract: What predicts sociable behavior? While main effects of personality and situation characteristics on sociability are well established, there is little evidence for the existence of person-situation interaction effects within real-life social interactions. Moreover, previous research has focused on self-reported behavior ratings, and less is known about the partner’s social perspective, i.e. how partners perceive and influence an actor’s behavior. In the current research, we investigated predictors of sociable behavior in real-life social interactions across social perspectives, including person and situation main effects as well as person-situation interaction effects. In two experience-sampling studies (Study 1: N = 394, US, time-based; Study 2: N = 124, Germany, event-based), we assessed personality traits with self- and informant-reports, self-reported sociable behavior during real-life social interactions, and corresponding information on the situation (categorical situation classifications and dimensional ratings of situation characteristics). In Study 2, we additionally assessed interaction partner-reported actor behavior. Multilevel analyses provided evidence for main effects of personality and situation features, as well as small but consistent evidence for person-situation interaction effects. First, extraverts acted more sociable in general. Second, individuals behaved more sociably in low-effort/positive/low-duty situations (vs. high-effort/negative/high-duty situations). Third, the latter was particularly true for extraverts. Further specific interaction effects were found for the partner’s social perspective. These results are discussed regarding their accordance with different behavioral models (e.g., Trait Activation Theory) and their transferability to other behavioral domains.
Publisher: Elsevier BV
Date: 10-2006
Publisher: Center for Open Science
Date: 16-11-2020
Abstract: In Registered Reports (RRs), initial peer review and in-principle acceptance occurs before knowing the research outcomes. This combats publication bias and distinguishes planned and unplanned research. How RRs could improve the credibility of research findings is straightforward, but there is little empirical evidence. Also, there could be unintended costs such as reducing novelty. 353 researchers peer reviewed a pair of papers from 29 published RRs from psychology and neuroscience and 57 non-RR comparison papers. RRs outperformed comparison papers on all 19 criteria (mean difference = 0.46; scale range −4 to +4), with effects ranging from little improvement in novelty (0.13, 95% credible interval [-0.24, 0.49]) and creativity (0.22, [-0.14, 0.58]) to larger improvements in rigor of methodology (0.99, [0.62, 1.35]) and analysis (0.97, [0.60, 1.34]) and overall paper quality (0.66, [0.30, 1.02]). RRs could improve research quality while reducing publication bias and ultimately improve the credibility of the published literature.
Publisher: Elsevier BV
Date: 12-2008
Publisher: Center for Open Science
Date: 12-04-2017
Abstract: The drive for eminence is inherently at odds with scientific values, and insufficient attention to this problem is partly responsible for the recent crisis of confidence in psychology and other sciences. The replicability crisis has shown that a system without transparency doesn’t work. The lack of transparency in science is a direct consequence of the corrupting influence of eminence-seeking. If journals and societies are primarily motivated by boosting their impact, their most effective strategy will be to publish the sexiest findings by the most famous authors. Humans will always care about eminence. Scientific institutions and gatekeepers should be a bulwark against the corrupting influence of the drive for eminence, and help researchers maintain integrity and uphold scientific values in the face of internal and external pressures to compromise. One implication for evaluating scientific merit is that gatekeepers should attempt to reward all scientists whose work reaches a more objective threshold of scientific rigor or soundness, rather than attempting to select the cream of the crop (i.e., identify the most “eminent”).
Publisher: SAGE Publications
Date: 03-2021
Abstract: Science is often perceived to be a self-correcting enterprise. In principle, the assessment of scientific claims is supposed to proceed in a cumulative fashion, with the reigning theories of the day progressively approximating truth more accurately over time. In practice, however, cumulative self-correction tends to proceed less efficiently than one might naively suppose. Far from evaluating new evidence dispassionately and infallibly, individual scientists often cling stubbornly to prior findings. Here we explore the dynamics of scientific self-correction at an individual rather than collective level. In 13 written statements, researchers from diverse branches of psychology share why and how they have lost confidence in one of their own published findings. We qualitatively characterize these disclosures and explore their implications. A cross-disciplinary survey suggests that such loss-of-confidence sentiments are surprisingly common among members of the broader scientific population yet rarely become part of the public record. We argue that removing barriers to self-correction at the individual level is imperative if the scientific community as a whole is to achieve the ideal of efficient self-correction.
Publisher: Center for Open Science
Date: 19-07-2022
Abstract: What information do science journalists use when evaluating psychology findings? We examined this in a preregistered, controlled experiment by manipulating four factors in descriptions of fictitious behavioral psychology studies: (1) the study’s sample size, (2) the representativeness of the study’s sample, (3) the p-value associated with the finding, and (4) institutional prestige of the researcher who conducted the study. We investigated the effects of these manipulations on 181 real journalists’ perceptions of each study’s trustworthiness and newsworthiness. Sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a finding was, with larger sample sizes leading to an increase of about two thirds of one point on a 7-point scale. University prestige had no effect in this controlled setting, and the effects of sample representativeness and of p-values were inconclusive, but any effects in this setting are likely quite small. Exploratory analyses suggest that other types of prestige might be more important (i.e., journal prestige), and that study design (experimental vs. correlational) may also impact trustworthiness and newsworthiness.
Publisher: No publisher found
Date: 2006
Publisher: Center for Open Science
Date: 16-01-2018
Abstract: Background and Perspective: This is an exciting time to be a psychological scientist. There is a major new movement that seeks to promote the credibility and replicability of psychological research by enhancing its transparency, with scholarly societies promoting the principles (publications/open-science) and groups formed specifically to advance that mission (see improvingpsych.org/ and cos.io for two examples). While relatively low rates of replicability among scientific findings (Begley & Ellis, 2012; OSC, 2015; Chang & Li, 2015) inspired the existence of these groups, in this chapter we describe how striving to maximize transparency in your research can benefit both science and your career.
Publisher: Springer Science and Business Media LLC
Date: 02-12-2019
Publisher: Elsevier BV
Date: 10-2017
Publisher: Cambridge University Press (CUP)
Date: 10-2009
DOI: 10.1017/S0140525X09991026
Abstract: We present evidence that smiling is positively associated with positive affect in women and negatively associated with negative affect in men. In line with Vigil's model, we propose that, in women, smiling signals warmth (trustworthiness cues), which attracts fewer and more intimate relationships, whereas in men, smiling signals confidence and lack of self-doubt (capacity cues), which attracts numerous, less-intimate relationships.
Publisher: Center for Open Science
Date: 20-12-2016
Abstract: Personality traits are most often assessed using global self-reports of one’s general patterns of thoughts, feelings, and behavior. However, recent theories have challenged the idea that global self-reports are the best way to assess traits. Whole Trait Theory postulates that repeated measures of a person’s self-reported personality states (i.e., the average of many state self-reports) can be an alternative and potentially superior way of measuring a person’s trait level (Fleeson & Jayawickreme, 2015). Our goal is to examine the validity of average state self-reports of personality for measuring between-person differences in what people are typically like. In order to validate average states as a measure of personality, we examine whether they are incrementally valid in predicting informant reports above and beyond global self-reports. In two samples, we find that average state self-reports tend to correlate with informant reports, although this relationship is weaker than the relationship between global self-reports and informant reports. Further, using structural equation modeling, we find that average state self-reports do not significantly predict informant reports independently of global self-reports. Our results suggest that average state self-reports may not contain information about between-person differences in personality traits beyond what is captured by global self-reports, and that average state self-reports may contain more self-bias than is commonly believed. We discuss the implications of these findings for research on daily manifestations of personality and the accuracy of self-reports.
Publisher: American Psychological Association (APA)
Date: 2014
DOI: 10.1037/A0036899
Abstract: Do romantic partners see each other realistically, or do they have overly positive perceptions of each other? Research has shown that realism and positivity co-exist in romantic partners' perceptions (Boyes & Fletcher, 2007). The current study takes a novel approach to explaining this seemingly paradoxical effect when it comes to physical attractiveness--a highly evaluative trait that is especially relevant to romantic relationships. Specifically, we argue that people are aware that others do not see their partners as positively as they do. Using both mean differences and correlational approaches, we test the hypothesis that despite their own biased and idiosyncratic perceptions, people have 2 types of partner-knowledge: insight into how their partners see themselves (i.e., identity accuracy) and insight into how others see their partners (i.e., reputation accuracy). Our results suggest that romantic partners have some awareness of each other's identity and reputation for physical attractiveness, supporting theories that couple members' perceptions are driven by motives to fulfill both esteem- and epistemic-related needs (i.e., to see their partners positively and realistically).
Publisher: Center for Open Science
Date: 03-09-2018
Abstract: Discusses issues of open science, transparency, and reproducibility as they pertain to clinical psychology research.
Publisher: Elsevier BV
Date: 04-2009
Publisher: Springer Science and Business Media LLC
Date: 09-2017
Publisher: Center for Open Science
Date: 09-02-2021
Abstract: Replication, an important, uncommon, and misunderstood practice, is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understanding to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understanding and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges, such as disincentives to conduct replications, framing of replication as personal attack rather than healthy scientific practice, and headwinds that limit replication's contribution to self-correction. Nevertheless, innovation in doing and understanding replication, and its cousins, reproducibility and robustness, have positioned psychology to improve research practices and accelerate progress.
Publisher: American Psychological Association (APA)
Date: 02-2010
DOI: 10.1037/A0017908
Abstract: This article tests a new model for predicting which aspects of personality are best judged by the self and which are best judged by others. Previous research suggests an asymmetry in the accuracy of personality judgments: Some aspects of personality are known better to the self than others and vice versa. According to the self-other knowledge asymmetry (SOKA) model presented here, the self should be more accurate than others for traits low in observability (e.g., neuroticism), whereas others should be more accurate than the self for traits high in evaluativeness (e.g., intellect). In the present study, 165 participants provided self-ratings and were rated by 4 friends and up to 4 strangers in a round-robin design. Participants then completed a battery of behavioral tests from which criterion measures were derived. Consistent with SOKA model predictions, the self was the best judge of neuroticism-related traits, friends were the best judges of intellect-related traits, and people of all perspectives were equally good at judging extraversion-related traits. The theoretical and practical value of articulating this asymmetry is discussed.
Publisher: American Psychological Association (APA)
Date: 2004
Publisher: Elsevier BV
Date: 12-2002
Publisher: Public Library of Science (PLoS)
Date: 08-10-2014
Publisher: Center for Open Science
Date: 15-02-2018
Abstract: In order to increase the replicability of scientific work, the scientific community has called for practices designed to increase the transparency of research (McNutt, 2014; Nosek et al., 2015). The validity of a scientific claim depends not on the reputation of those making the claim, the venue in which the claim is made, or the novelty of the result, but rather on the empirical evidence provided by the underlying data and methods. Proper evaluation of the merits of scientific findings requires availability of the methods, materials, and data and the reasoned argument that serve as the basis for the published conclusions (Claerbout & Karrenbach, 1992; Donoho et al., 2009; Stodden et al., 2013; Borwein et al., 2013; Munafò et al., 2017). Wide and growing support for these principles (see, for example, signatories to Declaration on Research Assessment, DORA, sfdora.org/, and the Transparency and Openness Promotion Guidelines cos.io/our-services/top-guidelines/) must be coupled with guidelines to increase open sharing of data and research materials, use of reporting guidelines, preregistration, and replication. We propose that, going forward, authors of all scientific articles disclose the availability and location of all research items, including data, materials, and code, related to their published articles in what we will refer to as a TOP Statement.
Publisher: Elsevier BV
Date: 04-2016
Publisher: Center for Open Science
Date: 05-08-2019
Abstract: Social relationships are often touted as critical for well-being. However, the vast majority of studies on social relationships have relied on self-report measures of both social interactions and well-being, which makes it difficult to disentangle true associations from shared method variance. To address this gap, we assessed the quantity and quality of social interactions using both self-report and observer-based measures in everyday life. Participants (N = 256, 3,206 observations) wore the Electronically Activated Recorder (EAR), an unobtrusive audio recorder, and completed experience sampling method (ESM) self-reports of their momentary social interactions, happiness, and feelings of social connectedness, four times each day for one week. Observers rated the quantity and quality of participants’ social interactions based on the EAR recordings from the same time points. Quantity of social interactions was robustly associated with greater well-being in the moment and on average, whether they were measured with self-reports or observer reports. Conversational (conversational depth and self-disclosure) and relational (knowing and liking one’s interaction partners) aspects of social interaction quality were also generally associated with greater well-being, but the effects were larger and more consistent for self-reported (vs. observer-reported) quality variables, within-person (vs. between-person) associations, and for predicting social connectedness (vs. happiness). Finally, although most associations were similar for introverts and extraverts, our exploratory results suggest that introverts may experience greater boosts in social connectedness, relative to extraverts, when engaging in deeper conversations. This study provides compelling multi-method evidence supporting the link between more frequent and deeper social interactions and well-being.
Publisher: SAGE Publications
Date: 07-2018
Abstract: The credibility revolution (sometimes referred to as the “replicability crisis”) in psychology has brought about many changes in the standards by which psychological science is evaluated. These changes include (a) greater emphasis on transparency and openness, (b) a move toward preregistration of research, (c) more direct-replication studies, and (d) higher standards for the quality and quantity of evidence needed to make strong scientific claims. What are the implications of these changes for productivity, creativity, and progress in psychological science? These questions can and should be studied empirically, and I present my predictions here. The productivity of individual researchers is likely to decline, although some changes (e.g., greater collaboration, data sharing) may mitigate this effect. The effects of these changes on creativity are likely to be mixed: Researchers will be less likely to pursue risky questions, more likely to use a broad range of methods, designs, and populations, and less free to define their own best practices and standards of evidence. Finally, the rate of scientific progress—the most important shared goal of scientists—is likely to increase as a result of these changes, although one’s subjective experience of making progress will likely become rarer.
Publisher: Springer Science and Business Media LLC
Date: 23-12-2019
DOI: 10.1038/S41562-019-0812-2
Abstract: An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Publisher: Annual Reviews
Date: 04-01-2022
DOI: 10.1146/ANNUREV-PSYCH-020821-114157
Abstract: Replication—an important, uncommon, and misunderstood practice—is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understandings and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.
Publisher: Center for Open Science
Date: 23-12-2016
Abstract: Who are the people who maintain satisfying friendships? And, what are the behaviours that might explain why those people achieve high friendship satisfaction? We examined the associations between personality (self-reports and peer-reports) and friendship satisfaction (self-reports) among 434 students. We also examined whether role personality (how people act with their friends) and quantity and quality of social interactions using ecological momentary assessment mediate the associations between personality and friendship satisfaction. Extraversion, agreeableness, conscientiousness and (low) neuroticism were associated with higher levels of friendship satisfaction. These associations could not be accounted for by individual differences in role personality. In addition, our results suggest that quantity of time spent with friends and quality of friend interactions (depth of conversation, self-disclosure and lack of emotion suppression), although associated with friendship satisfaction, do not account for the associations between trait personality and friendship satisfaction. Future research should examine other potential interpersonal processes that explain why some people are more satisfied with their friendships than others and the consequences of friendship satisfaction (e.g. for well-being).
Publisher: American Psychological Association
Date: 2010
DOI: 10.1037/12076-011
Publisher: SAGE Publications
Date: 12-08-2022
DOI: 10.1177/10892680211033912
Abstract: It is often said that science is self-correcting, but the replication crisis suggests that self-correction mechanisms have fallen short. How can we know whether a particular scientific field has effective self-correction mechanisms, that is, whether its findings are credible? The usual processes that supposedly provide mechanisms for scientific self-correction, such as journal-based peer review and institutional committees, have been inadequate. We describe more verifiable indicators of a field’s commitment to self-correction. These fall under the broad headings of 1) transparency, which is already the subject of many reform efforts, and 2) critical appraisal, which has received less attention and which we focus on here. Only by obtaining Observable Self-Correction Indicators (OSCIs) can we begin to evaluate the claim that “science is self-correcting.” We expect that the veracity of this claim varies across fields and subfields, and suggest that some fields, such as psychology and biomedicine, fall far short of an appropriate level of transparency and, especially, critical appraisal. Fields without robust, verifiable mechanisms for transparency and critical appraisal cannot reasonably be said to be self-correcting, and thus do not warrant the credibility often imputed to science as a whole.
Publisher: Leibniz Institute for Psychology (ZPID)
Date: 12-08-2021
DOI: 10.5964/PS.6001
Abstract: Personality is not the most popular subfield of psychology. But, in one way or another, personality psychologists have played an outsized role in the ongoing “credibility revolution” in psychology. Not only have individual personality psychologists taken on visible roles in the movement, but our field’s practices and norms have now become models for other fields to emulate (or, for those who share Baumeister’s (2016, 10.1016/j.jesp.2016.02.003) skeptical view of the consequences of increasing rigor, a model for what to avoid). In this article we discuss some unique features of our field that may have placed us in an ideal position to be leaders in this movement. We do so from a subjective perspective, describing our impressions and opinions about possible explanations for personality psychology’s disproportionate role in the credibility revolution. We also discuss some ways in which personality psychology remains less-than-optimal, and how we can address these flaws.
Publisher: Center for Open Science
Date: 29-07-2020
Abstract: The replication crisis in the social, behavioural, and life sciences has spurred a reform movement aimed at increasing the credibility of scientific studies. Many of these credibility-enhancing reforms focus, appropriately, on specific research and publication practices. A less often mentioned aspect of credibility is the need for intellectual humility, or being transparent about and owning the limitations of our work. Although intellectual humility is presented as a widely accepted scientific norm, we argue that current research practice does not incentivize intellectual humility. We provide a set of recommendations on how to increase intellectual humility in research articles and highlight the central role peer reviewers can play in incentivizing authors to foreground the flaws and uncertainty in their work, thus enabling full and transparent evaluation of the validity of research.
Publisher: Center for Open Science
Date: 23-12-2016
Abstract: What does it mean to know a person? In his famous article, McAdams (1995) addresses this question from the perspective of personality psychology and concludes that personality traits are “the psychology of the stranger.” To really know someone, you need to know more than just how they typically think, feel, and behave on average (a common definition of traits). You need to know how their thoughts, feelings, and behaviors change depending on their role and context, why those fluctuations occur (the underlying motives and causes of those patterns), and how they make sense of their own patterns over time (their life narrative). In this essay, we argue that although there has been little empirical work on within-person fluctuations in personality, the time is ripe to examine these patterns. New technology has made it possible to quantify momentary thoughts, feelings, and behaviors, and to track the contextual factors that underlie these fluctuations (i.e., “personality signatures”). By capturing individual differences at this dynamic level, we can gain a better understanding of how people differ from one another. This will also open the door to new research questions, such as investigating the amount of insight people have into their own and others' personality signatures.
Publisher: Wiley
Date: 20-07-2011
Publisher: American Association for the Advancement of Science (AAAS)
Date: 06-07-2007
Abstract: Women are generally assumed to be more talkative than men. Data were analyzed from 396 participants who wore a voice recorder that sampled ambient sounds for several days. Participants' daily word use was extrapolated from the number of recorded words. Women and men both spoke about 16,000 words per day.
Publisher: American Psychological Association (APA)
Date: 12-2020
DOI: 10.1037/PSPP0000272
Abstract: Social relationships are often touted as critical for well-being. However, the vast majority of studies on social relationships have relied on self-report measures of both social interactions and well-being, which makes it difficult to disentangle true associations from shared method variance. To address this gap, we assessed the quantity and quality of social interactions using both self-report and observer-based measures in everyday life. Participants (N = 256; 3,206 observations) wore the Electronically Activated Recorder (EAR), an unobtrusive audio recorder, and completed experience sampling method self-reports of their momentary social interactions, happiness, and feelings of social connectedness, 4 times each day for 1 week. Observers rated the quantity and quality of participants' social interactions based on the EAR recordings from the same time points. Quantity of social interactions was robustly associated with greater well-being in the moment and on average, whether they were measured with self-reports or observer reports. Conversational (conversational depth and self-disclosure) and relational (knowing and liking one's interaction partners) aspects of social interaction quality were also generally associated with greater well-being, but the effects were larger and more consistent for self-reported (vs. observer-reported) quality variables, within-person (vs. between-person) associations, and for predicting social connectedness (vs. happiness). Finally, although most associations were similar for introverts and extraverts, our exploratory results suggest that introverts may experience greater boosts in social connectedness, relative to extraverts, when engaging in deeper conversations. This study provides compelling multimethod evidence supporting the link between more frequent and deeper social interactions and well-being. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
Publisher: Springer Science and Business Media LLC
Date: 30-12-2019
Publisher: American Psychological Association (APA)
Date: 2003
Publisher: Wiley
Date: 08-2010
Publisher: Cambridge University Press (CUP)
Date: 2022
DOI: 10.1017/S0140525X21000546
Abstract: Improvements to the validity of psychological science depend upon more than the actions of individual researchers. Editors, journals, and publishers wield considerable power in shaping the incentives that have ushered in the generalizability crisis. These gatekeepers must raise their standards to ensure authors' claims are supported by evidence. Unless gatekeepers change, changes made by individual scientists will not be sustainable.
Publisher: Center for Open Science
Date: 28-12-2021
Abstract: What research practices should be considered acceptable? Historically, scientists have set the standards for what constitutes acceptable research practices. However, there is value in considering non-scientists’ perspectives, including research participants’. 1,873 participants from MTurk and university subject pools were surveyed after their participation in one of eight minimal-risk studies. We asked participants how they would feel if (mostly) common research practices were applied to their data: p-hacking/cherry-picking results, selective reporting of studies, Hypothesizing After Results are Known (HARKing), committing fraud, conducting direct replications, sharing data, sharing methods, and open access publishing. An overwhelming majority of psychology research participants think questionable research practices (e.g., p-hacking, HARKing) are unacceptable (68.3--81.3%), and were supportive of practices to increase transparency and replicability (71.4--80.1%). A surprising number of participants expressed positive or neutral views toward scientific fraud (18.7%), raising concerns about data quality. We grapple with this concern and interpret our results in light of the limitations of our study. Despite ambiguity in our results, we argue that there is evidence (from our study and others’) that researchers may be violating participants’ expectations and should be transparent with participants about how their data will be used.
Publisher: SAGE Publications
Date: 10-2022
DOI: 10.1177/25152459221120217
Abstract: Scholars and institutions commonly use impact factors to evaluate the quality of empirical research. However, a number of findings published in journals with high impact factors have failed to replicate, suggesting that impact alone may not be an accurate indicator of quality. Fraley and Vazire proposed an alternative index, the N-pact factor, which indexes the median sample size of published studies, providing a narrow but relevant indicator of research quality. In the present research, we expand on the original report by examining the N-pact factor of social-personality-psychology journals between 2011 and 2019, incorporating additional journals and accounting for study design (i.e., between persons, repeated measures, and mixed). There was substantial variation in the sample sizes used in studies published in different journals. Journals that emphasized personality processes and individual differences had larger N-pact factors than journals that emphasized social-psychological processes. Moreover, N-pact factors were largely independent of traditional markers of impact. Although the majority of journals in 2011 published studies that were not well powered to detect an effect of ρ = .20, this situation had improved considerably by 2019. In 2019, eight of the nine journals we sampled published studies that were, on average, powered at 80% or higher to detect such an effect. After decades of unheeded warnings from methodologists about the dangers of small-sample designs, the field of social-personality psychology has begun to use larger samples. We hope the N-pact factor will be supplemented by other indices that can be used as alternatives to improve further the evaluation of research.
Publisher: Center for Open Science
Date: 16-08-2017
Abstract: We outline an array of journal policies that JPSP:ASC could adopt to further promote transparent and responsible research practices; in turn, these practices will increase the reliability of research findings published in JPSP:ASC.
Publisher: Center for Open Science
Date: 11-2018
Abstract: This report outlines: a) a need for objective, transparent and usable criteria for judging the decision-readiness of published research evidence and b) the many important research challenges associated with producing such criteria and ensuring their uptake in the scientific community and beyond. It was produced by Focus Group 2 at TECSS.
Publisher: Mary Ann Liebert Inc
Date: 09-2011
Publisher: Center for Open Science
Date: 13-08-2020
Abstract: It is often said that science is self-correcting, but the replication crisis suggests that, at least in some fields, self-correction mechanisms have fallen short of what we might hope for. How can we know whether a particular scientific field has effective self-correction mechanisms, that is, whether its findings are credible? The usual processes that supposedly provide mechanisms for scientific self-correction – mainly peer review and disciplinary committees – have been inadequate. We argue for more verifiable indicators of a field’s commitment to self-correction. These include transparency, which is already a target of many reform efforts, and critical appraisal, which has received less attention. Only by obtaining Measurements of Observable Self-Correction (MOSCs) can we begin to evaluate the claim that “science is self-correcting.” We expect the validity of this claim to vary across fields and subfields, and suggest that some fields, such as psychology and biomedicine, fall far short of an appropriate level of transparency and, especially, critical appraisal. Fields without robust, verifiable mechanisms for transparency and critical appraisal cannot reasonably be said to be self-correcting, and thus do not warrant the credibility often imputed to science as a whole.
Publisher: Wiley
Date: 06-10-2004
DOI: 10.1002/JCLP.20072
Abstract: We evaluate Henriques' Justification Hypothesis (JH; this issue) and argue that his explanation for the evolution of self-consciousness is overly narrow and the evolutionary sequence of events is backwards. Instead, we propose a broader theory of the evolution of self-consciousness, with four categories of adaptive functions: (a) self-regulation, (b) selective information processing, (c) understanding others, and (d) identity formation.
Publisher: Springer Science and Business Media LLC
Date: 07-2017
DOI: 10.1038/547007A
Publisher: Center for Open Science
Date: 28-06-2021
Abstract: Personality is not the most popular subfield of psychology. But, in one way or another, personality psychologists have played an outsized role in the ongoing “credibility revolution” in psychology. Not only have individual personality psychologists taken on visible roles in the movement, but our field’s practices and norms have now become models for other fields to emulate (or, for those who share Baumeister’s (2016) skeptical view of the consequences of increasing rigor, a model for what to avoid). In this article we discuss some unique features of our field that may have placed us in an ideal position to be leaders in this movement. We do so from a subjective perspective, describing our impressions and opinions about possible explanations for personality psychology’s disproportionate role in the credibility revolution. We also discuss some ways in which personality psychology remains less-than-optimal, and how we can address these flaws.
Publisher: Springer Science and Business Media LLC
Date: 09-10-2020
Publisher: SAGE Publications
Date: 03-2015
DOI: 10.1002/PER.1998
Abstract: Historically, personality psychology has not focused on the social realm, and social psychology has mostly neglected the influence of individual differences. This has, however, begun to change in the past two decades. Recent years have brought an explosion in creative research programmes on the social consequences of personality. In this paper, we offer a (highly subjective) view on how research on the social consequences of personality should move forward. We note that the existing literature is focused heavily on: traits (at the expense of other personality characteristics), a narrow set of social outcomes (e.g. romantic relationship satisfaction) and effects of personality on one's own outcomes (rather than taking a dyadic/interpersonal perspective). In addition, little attention has been paid to the complex dynamic processes that might account for the links between personality and social outcomes. Based on this, we outline six suggestions for future research on the social consequences of personality: (1) examine a wide range of personality variables and integrate findings across domains; (2) take a broader and more integrative view on social outcomes, including different relationship types, phases and transitions; (3) analyse personality effects on social outcomes from different social perspectives (e.g. self, other and dyad); (4) search for processes that explain the associations between personality and social outcomes; (5) collect rich, multi–method, longitudinal, behavioural datasets with large samples; and (6) carefully evaluate the implications of personality effects on social outcomes. We invite researchers to embrace a more collaborative and slower scientific approach to answer the many open questions about the social consequences of personality. Copyright © 2015 European Association of Personality Psychology
Publisher: Center for Open Science
Date: 29-05-2018
Abstract: We contest the “building a wall” analogy of scientific progress. We argue that this analogy unfairly privileges original research (which is perceived as laying bricks, and therefore constructive) over replication research (which is perceived as testing and removing bricks, and therefore destructive). We propose an alternative analogy for scientific progress: solving a jigsaw puzzle.
Publisher: SAGE Publications
Date: 03-2015
DOI: 10.1002/PER.1996
Abstract: Who are the people who maintain satisfying friendships? And what are the behaviours that might explain why those people achieve high friendship satisfaction? We examined the associations between personality (self–reports and peer–reports) and friendship satisfaction (self–reports) among 434 students. We also examined whether role personality (how people act with their friends) and the quantity and quality of social interactions, assessed using ecological momentary assessment, mediate the associations between personality and friendship satisfaction. Extraversion, agreeableness, conscientiousness and (low) neuroticism were associated with higher levels of friendship satisfaction. These associations could not be accounted for by individual differences in role personality. In addition, our results suggest that quantity of time spent with friends and quality of friend interactions (depth of conversation, self–disclosure and lack of emotion suppression), although associated with friendship satisfaction, do not account for the associations between trait personality and friendship satisfaction. Future research should examine other potential interpersonal processes that explain why some people are more satisfied with their friendships than others and the consequences of friendship satisfaction (e.g. for well–being). Copyright © 2015 European Association of Personality Psychology
Publisher: SAGE Publications
Date: 29-01-2010
Publisher: Springer Science and Business Media LLC
Date: 22-11-2021
DOI: 10.1038/S41562-021-01220-7
Abstract: Self-correction, a key feature distinguishing science from pseudoscience, requires that scientists update their beliefs in light of new evidence. However, people are often reluctant to change their beliefs. We examined belief updating in action by tracking research psychologists' beliefs in psychological effects before and after the completion of four large-scale replication projects. We found that psychologists did update their beliefs: they updated as much as they predicted they would, but not as much as our Bayesian model suggests they should if they trust the results. We found no evidence that psychologists became more critical of replications when doing so would have preserved their pre-existing beliefs. We also found no evidence that personal investment or lack of expertise discouraged belief updating, but people higher on intellectual humility updated their beliefs slightly more. Overall, our results suggest that replication studies can contribute to self-correction within psychology, but psychologists may underweight their evidentiary value.
Publisher: American Psychological Association (APA)
Date: 09-2016
DOI: 10.1037/PSPI0000061
Abstract: It may be important to know when our impressions of someone differ from how that person sees him/herself and how others see that same person. We investigated whether people are aware of how their friends see themselves (knowledge of identity) and are seen by others (knowledge of reputation). Previous research indicates that, for physical attractiveness, romantic partners do have such knowledge of others' perceptions, but it is unknown whether people in platonic relationships also detect such discrepancies between their own perceptions and others'. We examined this phenomenon for a new set of characteristics: the Big Five personality traits. Our primary research questions pertained to identity accuracy and reputation accuracy (i.e., knowledge of a target's self-views and how others view the target, respectively) and identity insight and reputation insight (i.e., identity accuracy and reputation accuracy that cannot be accounted for by a potential artifact: perceivers assuming that others share their own views of targets). However, after a series of preliminary tests, we did not examine reputation insight, as several necessary conditions were not met, indicating that any effects would likely be spurious. We did find that perceivers can accurately infer a target's identity and reputation on global personality traits (identity and reputation accuracy), and that perceivers can sometimes accurately distinguish between their own perceptions of targets and targets' self-views, but not others' views of targets (i.e., identity, but not reputation, insight). Finally, we explored boundary conditions for knowledge of others' perceptions and whether knowledge of identity is correlated with knowledge of reputation.
Publisher: Elsevier BV
Date: 08-2010
Publisher: Elsevier BV
Date: 2021
Location: United States of America
Location: United States of America
Start Date: 06-2022
End Date: 06-2026
Amount: $1,059,797.00
Funder: Australian Research Council
View Funded Activity