ORCID Profile
0000-0001-9122-885X
Current Organisation
University of South Australia
Publisher: Springer US
Date: 2013
Publisher: Elsevier BV
Date: 07-2005
Publisher: SAGE Publications
Date: 12-2007
DOI: 10.1260/095830507782616887
Abstract: In 2007, the Intergovernmental Panel on Climate Change's Working Group One, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme, issued its Fourth Assessment Report. The Report included predictions of dramatic increases in average world temperatures over the next 92 years and serious harm resulting from the predicted temperature increases. Using forecasting principles as our guide we asked: Are these forecasts a good basis for developing public policy? Our answer is “no”. To provide forecasts of climate change that are useful for policy-making, one would need to forecast (1) global temperature, (2) the effects of any temperature changes, and (3) the effects of feasible alternative policies. Proper forecasts of all three are necessary for rational policy making. The IPCC WG1 Report was regarded as providing the most credible long-term forecasts of global average temperatures by 31 of the 51 scientists and others involved in forecasting climate change who responded to our survey. We found no references in the 1056-page Report to the primary sources of information on forecasting methods despite the fact that these are conveniently available in books, articles, and websites. We audited the forecasting processes described in Chapter 8 of the IPCC's WG1 Report to assess the extent to which they complied with forecasting principles. We found enough information to make judgments on 89 out of a total of 140 forecasting principles. The forecasting procedures that were described violated 72 principles. Many of the violations were, by themselves, critical. The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts' predictions are not useful in situations involving uncertainty and complexity. 
We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.
Publisher: Institute for Operations Research and the Management Sciences (INFORMS)
Date: 10-2008
Abstract: Calls to list polar bears as a threatened species under the United States Endangered Species Act are based on forecasts of substantial long-term declines in their population. Nine government reports were written to help US Fish and Wildlife Service managers decide whether or not to list polar bears as a threatened species. We assessed these reports based on evidence-based (scientific) forecasting principles. None of the reports referred to sources of scientific forecasting methodology. Of the nine, Amstrup et al. [Amstrup, S. C., B. G. Marcot, D. C. Douglas. 2007. Forecasting the rangewide status of polar bears at selected times in the 21st century. Administrative Report, USGS Alaska Science Center, Anchorage, AK.] and Hunter et al. [Hunter, C. M., H. Caswell, M. C. Runge, S. C. Amstrup, E. V. Regehr, I. Stirling. 2007. Polar bears in the Southern Beaufort Sea II: Demography and population growth in relation to sea ice conditions. Administrative Report, USGS Alaska Science Center, Anchorage, AK.] were the most relevant to the listing decision, and we devoted our attention to them. Their forecasting procedures depended on a complex set of assumptions, including the erroneous assumption that general circulation models provide valid forecasts of summer sea ice in the regions that polar bears inhabit. Nevertheless, we audited their conditional forecasts of what would happen to the polar bear population assuming, as the authors did, that the extent of summer sea ice would decrease substantially during the coming decades. We found that Amstrup et al. properly applied 15 percent of relevant forecasting principles and Hunter et al. 10 percent. Averaging across the two papers, 46 percent of the principles were clearly contravened and 23 percent were apparently contravened. Consequently, their forecasts are unscientific and inconsequential to decision makers. 
We recommend that researchers apply all relevant principles properly when important public-policy decisions depend on their forecasts.
Publisher: Cambridge University Press
Date: 23-06-2022
Abstract: The scientific method delivers prosperity, yet scientific practice has become subject to corrupting influences from within and without the scientific community. This essential reference is intended to help remedy those threats. The authors identify eight essential criteria for the practice of science and provide checklists to help avoid costly failures in scientific practice. Not only for scientists, this book is for all stakeholders of the broad enterprise of science. Science administrators, research funders, journal editors, and policymakers alike will find practical guidance on how they can encourage scientific research that produces useful discoveries. Journalists, commentators, and lawyers can turn to this text for help with assessing the validity and usefulness of scientific claims. The book provides practical guidance and makes important recommendations for reforms in science policy and science administration. The message of the book is complemented by Nobel Laureate Vernon L. Smith's foreword, and an afterword by Terence Kealey.
Publisher: Elsevier BV
Date: 10-2013
Publisher: Institute for Operations Research and the Management Sciences (INFORMS)
Date: 06-2007
Abstract: In important conflicts such as wars and labor-management disputes, people typically rely on the judgment of experts to predict the decisions that will be made. We compared the accuracy of 106 forecasts by experts and 169 forecasts by novices about eight real conflicts. The forecasts of experts who used their unaided judgment were little better than those of novices. Moreover, neither group’s forecasts were much more accurate than simply guessing. The forecasts of experienced experts were no more accurate than the forecasts of those with less experience. The experts were nevertheless confident in the accuracy of their forecasts. Speculating that consideration of the relative frequency of decisions across similar conflicts might improve accuracy, we obtained 89 sets of frequencies from novices instructed to assume there were 100 similar situations. Forecasts based on the frequencies were no more accurate than 96 forecasts from novices asked to pick the single most likely decision. We conclude that expert judgment should not be used for predicting decisions that people will make in conflicts. When decision makers ask experts for their opinions, they are likely to overlook other, more useful, approaches.
Publisher: Public Library of Science (PLoS)
Date: 10-01-2019
Publisher: SAGE Publications
Date: 09-2012
DOI: 10.1509/JPPM.12.053
Abstract: The authors find no evidence that consumers benefit from government-mandated disclaimers in advertising. Experiments and common experience show that admonishments to change or avoid behaviors often have effects opposite to those intended. The authors examine 18 experimental studies that provide evidence relevant to mandatory disclaimers. Mandated messages increased confusion for consumers and were ineffective or harmful in the 15 studies that examined perceptions, attitudes, or decisions. The authors conduct an experiment on the effects of a government-mandated disclaimer for a Florida court case, showing two advertisements for dentists offering implant dentistry to 317 participants. Only one advertiser had implant dentistry credentials. Participants exposed to the disclaimer recommended the advertiser who lacked credentials more often, and women and less-educated participants were particularly prone to this error. In addition, participants drew false and damaging inferences about the credentialed dentist.
Publisher: Elsevier BV
Date: 2011
Publisher: Elsevier BV
Date: 08-2015
Publisher: Elsevier BV
Date: 07-2007
Publisher: Elsevier BV
Date: 10-2009
Publisher: SAGE Publications
Date: 09-2012
DOI: 10.1509/JPPM.12.112
Publisher: Emerald
Date: 08-02-2016
Abstract: Purpose – This paper aims to respond to issues posed in the four commentaries on Armstrong, Du, Green and Graefe (2016, this issue) regarding the immediate usefulness of that paper’s test of advertisements’ compliance with persuasion principles, and regarding the need for further research. Design/methodology/approach – This paper addresses commentators’ concerns using logic, prior research findings and further analyses of the data. Findings – The superiority of the index method remains when a simple, theory-based, alternative weighting scheme is used in the index model. Combinations of three unaided experts’ forecasts were more accurate than the individual forecasts, but the gain was only one-third of the gain achieved by using the Persuasion Principles Index (PPI). Research limitations/implications – Replications and extensions using behavioral data and alternative implementations of the index method would help to better assess the effects of judging conformity with principles as a means of predicting relative advertising effectiveness. Practical implications – Advertisers can expect more accurate pretest results if they combine the predictions of three experts or, even better, if they use tests of compliance with persuasion principles, such as the PPI. The PPI software is copyrighted, but is available now and is free to use. Originality/value – New analysis and findings provide further support for the claim that advertisers who use the PPI approach proposed by Armstrong, Du, Green and Graefe (2016, this issue) to choose among alternative advertisements will be more profitable than those who do not.
Publisher: Oxford University Press (OUP)
Date: 28-10-2010
DOI: 10.1093/IJPOR/EDQ038
Publisher: Elsevier BV
Date: 08-2015
Publisher: Elsevier BV
Date: 08-2015
Publisher: Informa UK Limited
Date: 14-03-2018
Publisher: Elsevier BV
Date: 07-2002
Publisher: Emerald
Date: 08-02-2016
Abstract: Purpose – This paper aims to test whether a structured application of persuasion principles might help improve advertising decisions. Evidence-based principles are currently used to improve decisions in other complex situations, such as those faced in engineering and medicine. Design/methodology/approach – Scores were calculated from the ratings of 17 self-trained novices who rated 96 matched pairs of print advertisements for adherence to evidence-based persuasion principles. Predictions from traditional methods – 10,809 unaided judgments from novices, 2,764 judgments from people with some expertise in advertising and 288 copy-testing predictions – provided benchmarks. Findings – A higher adherence-to-principles score correctly predicted the more effective advertisement for 75 per cent of the pairs. Copy testing was correct for 59 per cent, and expert judgment was correct for 55 per cent. Guessing would provide 50 per cent accurate predictions. Combining judgmental predictions led to substantial improvements in accuracy. Research limitations/implications – Advertisements for high-involvement utilitarian products were tested on the assumption that persuasion principles would be more effective for such products. The measure of effectiveness that was available – day-after-recall – is a proxy for persuasion or behavioral measures. Practical implications – Pretesting advertisements by assessing adherence to evidence-based persuasion principles in a structured way helps in deciding which advertisements would be best to run. That procedure also identifies how to make an advertisement more effective. Originality/value – This is the first study in marketing, and in advertising specifically, to test the predictive validity of evidence-based principles. In addition, the study provides the first test of the predictive validity of the index method for a marketing problem.
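The index method described in the abstract above can be illustrated with a minimal sketch: each advertisement receives one point per persuasion principle a rater judges it to follow (unit weights), and the higher-scoring advertisement of a matched pair is predicted to be the more effective one. The principle names and ratings below are hypothetical, not taken from the study.

```python
# Unit-weighted index method: count the principles each ad adheres to,
# then predict that the higher-scoring ad of a pair is more effective.
# Principle names and ratings are illustrative only.

def index_score(ratings):
    """Unit-weighted index: number of principles rated as followed (True)."""
    return sum(ratings.values())

ad_a = {"clear_headline": True, "benefit_stated": True, "credible_source": False}
ad_b = {"clear_headline": True, "benefit_stated": False, "credible_source": False}

predicted_winner = "A" if index_score(ad_a) > index_score(ad_b) else "B"
print(predicted_winner)  # the ad adhering to more principles is predicted to win
```

Unit weighting is deliberate: with many variables and limited data, equal weights tend to predict out-of-sample about as well as, or better than, estimated weights.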
Publisher: SAGE Publications
Date: 05-07-2018
Abstract: The contribution of regression analysis (econometrics) to advertising and media decision-making is questioned and found wanting. Econometrics cannot be expected to estimate valid and reliable forecasting models unless it is based on extensive experimental data on important variables, across varied conditions. This article canvasses alternative, evidence-based methods that have been shown to be useful for forecasting problems. These methods are described with the hope that they will be more widely used for marketing forecasting. The approaches include media and copy experiments, analyses of individual-level single-source data, and structured expert judgment.
Publisher: SAGE Publications
Date: 12-2011
DOI: 10.1260/0958-305X.22.8.1091
Abstract: The validity of the manmade global warming alarm requires the support of scientific forecasts of (1) a substantive long-term rise in global mean temperatures in the absence of regulations, (2) serious net harmful effects due to global warming, and (3) cost-effective regulations that would produce net beneficial effects versus alternative policies, including doing nothing. Without scientific forecasts for all three aspects of the alarm, there is no scientific basis to enact regulations. In effect, the warming alarm is like a three-legged stool: Each leg needs to be strong. Despite repeated appeals to global warming alarmists, we have been unable to find scientific forecasts for any of the three legs. We drew upon scientific (evidence-based) forecasting principles to audit the forecasting procedures used to forecast global mean temperatures by the Intergovernmental Panel on Climate Change (IPCC) — leg “1” of the stool. This audit found that the IPCC procedures violated 81% of the 89 relevant forecasting principles. We also audited forecasting procedures, used in two papers, that were written to support regulation regarding the protection of polar bears from global warming — leg “3” of the stool. On average, the forecasting procedures violated 85% of the 90 relevant principles. The warming alarmists have not demonstrated the predictive validity of their procedures. Instead, their argument for predictive validity is based on their claim that nearly all scientists agree with the forecasts. This count of “votes” by scientists is not only an incorrect tally of scientific opinion, it is also, and most importantly, contrary to the scientific method. We conducted a validation test of the IPCC forecasts that were based on the assumption that there would be no regulations. The errors for the IPCC model long-term forecasts (for 91 to 100 years in the future) were 12.6 times larger than those from an evidence-based “no change” model. 
Based on our own analyses and the documented unscientific behavior of global warming alarmists, we concluded that the global warming alarm is the product of an anti-scientific political movement. Having come to this conclusion, we turned to the “structured analogies” method to forecast the likely outcomes of the warming alarmist movement. In our ongoing study we have, to date, identified 26 similar historical alarmist movements. None of the forecasts behind the analogous alarms proved correct. Twenty-five of the alarms involved calls for government intervention, and governments imposed regulations in 23 cases. None of the 23 interventions was effective, and harm was caused by 20 of them. Our findings on the scientific evidence related to global warming forecasts lead to the following recommendations: End government funding for climate change research. End government funding for research predicated on global warming (e.g., alternative energy, CO2 reduction, habitat loss). End government programs and repeal regulations predicated on global warming. End government support for organizations that lobby or campaign predicated on global warming.
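The validation test described in the abstract above compares a model’s forecast errors against a simple “no change” (naive) benchmark. A minimal sketch of that comparison, using the cumulative relative absolute error, is below; all numbers are illustrative, not data from the study.

```python
# Compare a model's forecast errors with a naive "no change" benchmark
# using the cumulative relative absolute error (CumRAE).
# All values below are illustrative only.

def cum_rae(model_forecasts, naive_forecasts, actuals):
    """Ratio of the model's summed absolute errors to the naive model's.
    Values below 1.0 mean the model beat the no-change benchmark."""
    model_err = sum(abs(f - a) for f, a in zip(model_forecasts, actuals))
    naive_err = sum(abs(f - a) for f, a in zip(naive_forecasts, actuals))
    return model_err / naive_err

actuals     = [14.1, 14.0, 14.2, 14.1, 14.3]   # observed values
no_change   = [14.0] * 5                       # forecast: last known value persists
trend_model = [14.3, 14.6, 14.9, 15.2, 15.5]   # forecast: strong upward trend

ratio = cum_rae(trend_model, no_change, actuals)
print(f"CumRAE = {ratio:.2f}")  # > 1.0 means the trend model did worse here
```

A ratio such as the 12.6 reported in the abstract would mean the audited model’s cumulative errors were 12.6 times those of the no-change benchmark over the validation horizon.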
Publisher: Elsevier BV
Date: 07-2002
No related grants have been discovered for Kesten Green.