ORCID Profile
0000-0001-7536-0308
Current Organisations
The University of Auckland, Institute of Environmental Science and Research Ltd
Publisher: Elsevier BV
Date: 07-2018
DOI: 10.1016/J.FSIGEN.2018.03.016
Abstract: Modern probabilistic genotyping (PG) software is capable of modeling stutter as part of the profile weighting statistic. This allows peaks in stutter positions to be considered as allelic, stutter, or both. However, prior to running any sample through a PG calculator, the examiner must first interpret the sample, considering such things as artifacts and the number of contributors (NOC or N). Stutter can play a major role both during the assignment of the number of contributors and during the assessment of inclusion and exclusion. If stutter peaks are not filtered when they should be, an additional contributor may be assigned, causing N contributors to be assigned as N + 1. If peaks in the stutter position of a major contributor are filtered using a threshold that is too high, true alleles of minor contributors can be lost. Until now, the stutter filters in software used to view electropherograms have been based on a locus specific model. Combined stutter peaks occur when a peak could be the result of both back stutter (stutter one repeat unit shorter than the allele) and forward stutter (stutter one repeat unit larger than the allele); these can challenge existing filters. We present here a novel stutter filter model in the ArmedXpert™ software package that uses a linear model based on allele for back stutter and applies an additive filter for combined stutter. We term this the allele specific stutter model (AM). We compared AM with a traditional model based on locus specific stutter filters (termed LM). The improved stutter model has two benefits: instances of over-filtering were reduced 78%, from 101 (LM) to 22 (AM), when the models were scored against each other, and instances of under-filtering were reduced 80%, from 85 (LM) to 17 (AM), when scored against ground-truth mixtures.
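The additive logic of a combined-stutter filter can be sketched as follows. This is an illustrative reconstruction, not the ArmedXpert™ implementation; the slope, intercept, and forward-stutter ratio are hypothetical values, and a real filter would be calibrated per kit and locus.

```python
# Hypothetical allele-specific stutter filter sketch. The back-stutter
# expectation is linear in the allele designation (the AM idea); the
# forward-stutter expectation is a flat ratio; a peak in a combined-stutter
# position gets the sum of both expectations as its filter height.

def back_stutter_threshold(parent_height, allele, slope=0.0025, intercept=0.01):
    """Filter height for the N-1 position: linear in allele (assumed constants)."""
    expected_ratio = intercept + slope * allele
    return expected_ratio * parent_height

def forward_stutter_threshold(parent_height, ratio=0.02):
    """Filter height for the N+1 position: flat ratio (assumed constant)."""
    return ratio * parent_height

def combined_stutter_threshold(upper_parent_height, upper_allele, lower_parent_height):
    """Additive filter: the peak could be back stutter of the allele one repeat
    larger AND forward stutter of the allele one repeat smaller."""
    return (back_stutter_threshold(upper_parent_height, upper_allele)
            + forward_stutter_threshold(lower_parent_height))
```

The additive form is what distinguishes the combined-stutter case: neither single-position filter alone would pass a peak that is the sum of two stutter products.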
Publisher: Elsevier BV
Date: 2021
Publisher: Elsevier BV
Date: 2019
DOI: 10.1016/J.FSIGEN.2018.11.011
Abstract: Using a simplified model, we examine the effect of varying the number of contributors in the prosecution and alternate propositions for a number of simulated examples. We compare the Slooten and Caliebe [1] solution with several existing practices. Our own experience is that most laboratories, and ourselves, assign the number of contributors, N = n, by allele count and a manual examination of peak heights. The LR
Publisher: Informa UK Limited
Date: 21-12-2015
Publisher: Informa UK Limited
Date: 07-01-2020
Publisher: Elsevier BV
Date: 05-2019
DOI: 10.1016/J.FSIGEN.2019.01.006
Abstract: An intra- and inter-laboratory study using the probabilistic genotyping (PG) software STRmix™ is reported. Two complex mixtures from the PROVEDIt set, analysed on an Applied Biosystems™ 3500 Series Genetic Analyzer, were selected. 174 participants responded. For Sample 1 (low template, in the order of 200 rfu for the major contributors), five participants described the comparison as inconclusive with respect to the POI or excluded him. Where LRs were assigned, the point estimates ranged from 2 × 10
Publisher: Elsevier BV
Date: 2019
Publisher: Elsevier BV
Date: 2015
DOI: 10.1016/J.FSIGEN.2014.09.019
Abstract: There has been a recent push from many jurisdictions for the standardisation of forensic DNA interpretation methods. Current research is moving from threshold-based interpretation strategies towards continuous interpretation strategies. However, laboratory uptake of software employing probabilistic models is slow. Some of this reluctance could be due to the perceived intimidating calculations required to replicate the software answers and the lack of formal internal validation requirements for interpretation software. In this paper we describe a set of experiments which may be used to internally validate, in part, probabilistic interpretation software. These experiments include both single-source and mixed profiles calculated with and without dropout and drop-in, and studies to determine the reproducibility of the software with replicate analyses. We do this by way of example using three software packages: STRmix™, LRmix, and Lab Retriever. We outline and demonstrate the profile examples where the expected answer may be calculated and provide all calculations.
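For the simplest of these validation experiments, a single-source profile with no dropout or drop-in, the expected answer can be computed by hand: the LR reduces to the inverse of the genotype probability at each locus. A minimal sketch, with θ = 0 and illustrative allele frequencies (not values from the paper):

```python
# Hand-checkable single-source LR with no dropout/drop-in and theta = 0:
# LR = product over loci of 1 / Pr(genotype), where Pr(genotype) is
# p^2 for a homozygote and 2*p_a*p_b for a heterozygote.

def single_source_lr(genotypes, freqs):
    """genotypes: list of (a, b) allele pairs, one per locus.
    freqs: list of {allele: frequency} dicts, matched by position."""
    lr = 1.0
    for (a, b), f in zip(genotypes, freqs):
        gp = f[a] ** 2 if a == b else 2 * f[a] * f[b]
        lr *= 1.0 / gp
    return lr
```

A heterozygote 10,12 with frequencies 0.1 and 0.2 gives 1 / (2 × 0.1 × 0.2) = 25, the kind of value an analyst can verify with a calculator.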
Publisher: Elsevier BV
Date: 11-2017
DOI: 10.1016/J.FSIGEN.2017.09.002
Abstract: The introduction of probabilistic DNA interpretation systems has made it possible to evaluate many profiles that previously (under a manual interpretation system) could not be evaluated. These probabilistic systems have been around for a number of years and it is becoming more common that their use within a laboratory has spanned at least one technology change. This may be a change in laboratory hardware, the DNA profiling kit used, or the manner in which the profile is generated. Until now, when replicate DNA profiles that span a technological change are generated, the ability to utilise all the information in all replicates has been limited or non-existent. In this work we explain and derive the models required to evaluate what we term multi-kit analysis problems. We demonstrate the use of the multi-kit feature on a number of scenarios where such an analysis would be desired within a laboratory. Allowing the combination of profiling data that spans a technological change will further increase the amount of DNA profile information produced in a laboratory that can be evaluated.
Publisher: Informa UK Limited
Date: 11-02-2014
Publisher: Elsevier BV
Date: 02-2013
DOI: 10.1016/J.FSIGEN.2012.11.013
Abstract: Traditional forensic DNA interpretation methods are restricted in that they are unable to deal completely with complex low-level or mixed DNA profiles. This type of data has become more prevalent as DNA typing technologies become more sensitive. In addition, these methods do not make full use of the information available in peak heights. Existing methods of interpretation are often described as binary, which refers to the fact that the probability of the evidence is assigned as 0 or 1 (hence binary) (see for example [1] at 7.3.3). These methods are being replaced by more advanced interpretation methods such as continuous models. In this paper we describe a series of models that can be used to calculate expected values for allele and stutter peak heights, and their ratio SR. This model could inform methods which implement a continuous method for the interpretation of DNA profiling data.
Publisher: Elsevier BV
Date: 11-2020
Publisher: Wiley
Date: 02-10-2014
Abstract: Forward stutter, or over stutter, one repeat unit length larger than the parent allele (N + 1 stutter), is a relatively rare product of the PCR amplification of STRs used in forensic DNA analysis. We have investigated possible explanatory variables for the occurrence and size of forward stutter for four different autosomal multiplexes. In addition, we have investigated models used to predict the expected heights of forward stutter. For all tetra- and penta-nucleotide repeats we find no correlation with allelic peak height, marker, or the longest uninterrupted sequence in the allele. The data fit a gamma distribution with no explanatory variables. For the single trinucleotide repeat present in two of the four multiplexes (D22S1045), forward stutter is much more common and the best explanatory variable appears to be back stutter height. This suggests some fundamental co-causation of high backward and forward stutter at this locus.
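A gamma fit with no explanatory variables can be reproduced with a simple method-of-moments estimator. The shape and scale values below are simulated stand-ins, not the paper's fitted parameters, and a production fit would more likely use maximum likelihood.

```python
import random

# Sketch: summarise forward-stutter ratios with a gamma distribution via a
# method-of-moments fit (shape k = mean^2/var, scale theta = var/mean).

def fit_gamma_moments(ratios):
    n = len(ratios)
    mean = sum(ratios) / n
    var = sum((r - mean) ** 2 for r in ratios) / (n - 1)
    return mean ** 2 / var, var / mean   # (shape, scale)

random.seed(1)
# simulated forward-stutter ratios from gamma(k=4, theta=0.005), i.e. ~2% mean
sim = [random.gammavariate(4.0, 0.005) for _ in range(5000)]
shape, scale = fit_gamma_moments(sim)
```

With 5,000 simulated ratios the recovered shape and scale land close to the generating values, which is the sanity check one would want before fitting real stutter data.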
Publisher: Elsevier BV
Date: 09-2020
Publisher: Elsevier BV
Date: 09-2020
Publisher: Elsevier BV
Date: 07-2018
DOI: 10.1016/J.FSIGEN.2018.04.009
Abstract: STRmix™ uses several laboratory specific parameters to calibrate the stochastic model for peak heights. These are modelled on empirical observations specific to the instruments and protocol used in the analysis. The extent to which these parameters can be borrowed from laboratories with similar technology and protocols without affecting the accuracy of the system is investigated using a sensitivity analysis. Parameters are first calibrated to a publicly available dataset, after which a large number of likelihood ratios are computed for true contributors and non-contributors using both the calibrated parameters and several borrowed parameters. Differences in the LR caused by using different sets of parameter values are found to be negligible.
Publisher: Elsevier BV
Date: 05-2020
Publisher: Elsevier BV
Date: 07-2017
DOI: 10.1016/J.FSIGEN.2017.04.004
Abstract: The interpretation of DNA evidence can entail analysis of challenging STR typing results. Genotypes inferred from low quality or quantity specimens, or mixed DNA samples originating from multiple contributors, can result in weak or inconclusive match probabilities when a binary interpretation method and the necessary thresholds (such as a stochastic threshold) are employed. Probabilistic genotyping approaches, such as fully continuous methods that incorporate empirically determined biological parameter models, enable usage of more of the profile information and reduce subjectivity in interpretation. As a result, software-based probabilistic analyses tend to produce more consistent and more informative results regarding potential contributors to DNA evidence. Studies to assess and internally validate the probabilistic genotyping software STRmix™ for casework usage at the Federal Bureau of Investigation Laboratory were conducted using lab-specific parameters and more than 300 single-source and mixed contributor profiles. Simulated forensic specimens, including constructed mixtures that included DNA from two to five donors across a broad range of template amounts and contributor proportions, were used to examine the sensitivity and specificity of the system via more than 60,000 tests comparing hundreds of known contributors and non-contributors to the specimens. Conditioned analyses, concurrent interpretation of amplification replicates, and application of an incorrect contributor number were also performed to further investigate software performance and probe the limitations of the system. In addition, the results from manual and probabilistic interpretation of both prepared and evidentiary mixtures were compared.
The findings support that STRmix™ is sufficiently robust for implementation in forensic laboratories, offering numerous advantages over historical methods of DNA profile analysis and greater statistical power for the estimation of evidentiary weight, and can be used reliably in human identification testing. With few exceptions, likelihood ratio results reflected intuitively correct estimates of the weight of the genotype possibilities and known contributor genotypes. This comprehensive evaluation provides a model in accordance with SWGDAM recommendations for internal validation of a probabilistic genotyping system for DNA evidence interpretation.
Publisher: Elsevier BV
Date: 03-2016
DOI: 10.1016/J.FSIGEN.2015.11.010
Abstract: Y-STR profiling makes up a small but important proportion of forensic DNA casework. Often Y-STR profiles are used when autosomal profiling has failed to yield an informative result; consequently, Y-STR profiles are often from the most challenging samples. In addition, Y-STR loci are linked, meaning that evaluations of haplotype probabilities are either based on overly simplified counting methods or on computationally costly genetic models, neither of which extends well to the evaluation of mixed Y-STR data. For all of these reasons Y-STR data analysis has not seen the same advances as autosomal STR data. We present here a probabilistic model for the interpretation of Y-STR data. Because probabilistic systems for Y-STR data are still some way from reaching active casework, we also describe how data can be analysed in a continuous way to generate interpretational thresholds and guidelines.
Publisher: CRC Press
Date: 03-09-2018
Publisher: Elsevier BV
Date: 11-2017
Publisher: Elsevier BV
Date: 03-2014
DOI: 10.1016/J.FSIGEN.2013.12.001
Abstract: DNA databases have revolutionised forensic science. They are a powerful investigative tool as they have the potential to identify persons of interest in criminal investigations. Routinely, a DNA profile generated from a crime sample could only be searched against a database of individuals if the stain was from a single contributor (single source) or if a contributor could unambiguously be determined from a mixed DNA profile. This meant that a significant number of samples were unsuitable for database searching. The advent of continuous methods for the interpretation of DNA profiles offers an advanced way to draw inferential power from the considerable investment made in DNA databases. Using these methods, each profile on the database may be considered a possible contributor to a mixture and a likelihood ratio (LR) can be formed. Those profiles which produce a sufficiently large LR can serve as an investigative lead. In this paper, empirical studies are described to determine what constitutes a large LR. We investigate the effect on a database search of complex mixed DNA profiles with contributors in equal proportions, with dropout as a consideration, and also the effect of an incorrect assignment of the number of contributors to a profile. In addition, we give, as a demonstration of the method, the results from two crime samples that were previously unsuitable for database comparison. We show that effective management of the selection of samples for searching and of the interpretation of the output can be highly informative.
Publisher: Elsevier BV
Date: 03-2016
DOI: 10.1016/J.FSIGEN.2015.12.009
Abstract: In forensic DNA analysis a DNA extract is amplified using the polymerase chain reaction (PCR), separated using capillary electrophoresis, and the resulting DNA products are detected using fluorescence. Sampling variation occurs when the DNA molecules are aliquoted during the PCR setup stage, and this translates to variability in peak heights within the resultant electropherogram or between electropherograms generated from a DNA extract. Beyond the variability caused by sampling variation, it has been observed that there are factors in generating the DNA profile that can contribute to the magnitude of variability observed, most notably the number of PCR cycles. In this study we investigate a number of factors in the generation of a DNA profile to determine which contribute to levels of peak height variability.
Publisher: Elsevier BV
Date: 12-2014
DOI: 10.1016/J.JTBI.2014.08.021
Abstract: A commonly used idea in forensic fields is known as the 'hierarchy of propositions'. DNA analysts commonly report at the sub-source level in the hierarchy. This means that they comment only on the probability of the evidence given propositions that consider the contributors to a DNA profile, and not on the source of specific biological components, the activity that led to the transfer, or the offence that is reported to have occurred. However, DNA analysts also commonly report at a level even lower than the sub-source level. At this 'sub-sub-source' level, only reference comparisons to components of a mixture are reported. The difference between the sub-source level and the sub-sub-source level is the difference between comparing an individual to a mixture as a whole and comparing them to only one component of a mixture. This idea has been expressed in the past as the 'two trace' problem or the 'factor of two' problem. With the advent of expert systems that can provide a measure of weight of evidence in the form of a likelihood ratio (LR) for any mixture, resolvable or not, the distinction between these two levels becomes more important. In this paper we explore how the LR can be constructed to report correctly at the sub-source level, by taking contributor orders and genotype set orders into account. We include worked examples of the LR calculation to help explain this confusing issue.
Publisher: Wiley
Date: 06-08-2015
Abstract: The interpretation of complex DNA profiles is facilitated by a Bayesian approach. This approach requires the development of a pair of propositions: one aligned to the prosecution case and one to the defense case. This note explores the issue of proposition setting in an adversarial environment through a series of examples. A set of guidelines generalize how to formulate propositions when there is a single person of interest and when there are multiple individuals of interest. Additional explanations cover how to handle multiple defense propositions, relatives, and the transition from sub-source level to activity level propositions. The propositions depend on the case information and the allegations of each of the parties. The prosecution proposition is usually known. The authors suggest selecting a sensible defense proposition that is consistent with the defense's stance, if available, and with a realistic defense if their position is not known.
Publisher: Informa UK Limited
Date: 18-11-2019
Publisher: Wiley
Date: 22-09-2021
Abstract: Likelihood ratio (LR) differences between the probabilistic genotyping software EuroForMix and STRmix™ are examined. After considering differences in the allele probabilities, the LRs from both software for an unambiguous single-source profile were identical (four significant figures). LRs from both software for an unambiguous single-source profile with alleles previously unseen in the allele frequency database (rare alleles) were the same (three significant figures) for θ = 0.01. Due to differences in the minimum allele frequencies, the LRs differed by three orders of magnitude when θ = 0. For both software, the LRs for a single-source dilution series decreased as the input amount decreased. The LRs from both software were within an order of magnitude for known contributors. The largest difference was where the target input amount was 0.0156 ng: the EuroForMix LR was 2.1 × 10²⁵ and the STRmix™ LR was 8.0 × 10²⁴. Both software show similar LR behaviour with respect to mixture ratio. For two-person mixtures the LR increases for both the major and the minor contributor as the ratio moves away from 1:1. The LR for the major stabilizes at about 3:1, whereas the LR for the minor reaches its maximum at about 3:1 and then declines. Greater differences in LR were observed between EuroForMix and STRmix™ for mixtures. One hundred and twenty-nine mixtures from the PROVEDIt dataset were compared. LRs for 84% of the comparisons for known contributors without rare alleles were within two orders of magnitude. Five divergent results were investigated, and a manual intervention approach was applied where appropriate.
Publisher: Elsevier BV
Date: 10-2019
Publisher: Frontiers Media SA
Date: 15-03-2017
Publisher: Cold Spring Harbor Laboratory
Date: 26-06-2021
DOI: 10.1101/2021.06.25.450000
Abstract: Two methods for applying a lower bound to the variation induced by the Monte Carlo effect are trialled. One of these is implemented in the widely used probabilistic genotyping system STRmix™. Neither approach gives the desired 99% coverage. In some cases the coverage is much lower than the desired 99%. The discrepancy (i.e., the distance between the LR corresponding to the desired coverage and the LR at the observed 99% coverage) is not large. For example, the discrepancy of 0.23 (in log₁₀(LR)) for approach 1 suggests the lower bounds should be moved downwards by a factor of 1.7 to achieve the desired 99% coverage. Although less effective than desired, these methods provide a layer of conservatism that is additional to the other layers. These other layers arise from factors such as the conservatism within the sub-population model, the choice of conservative measures of co-ancestry, the consideration of relatives within the population, and the resampling method used for allele probabilities, all of which tend to understate the strength of the findings. In summary: two methods for quantifying Monte Carlo variability are tested; both give less than the desired 99% coverage; the magnitude of the possible discrepancy is small. For example, an LR of 4.3 × 10¹¹ could be reported as 1.8 × 10¹², and an LR of 18 could be reported as 22.
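The coverage idea being tested can be sketched as follows: repeatedly form a lower bound on log₁₀(LR) from a small batch of noisy runs, then count how often the bound falls at or below the known truth. The scatter, batch size, and normal quantile below are assumptions for illustration, not the STRmix™ procedure.

```python
import random
import statistics

# Sketch of an empirical coverage check for a Monte Carlo lower bound.
random.seed(7)
TRUE_LOG10_LR = 12.0   # assumed ground-truth log10(LR)
SIGMA = 0.15           # assumed run-to-run MCMC scatter, log10 units

def lower_bound(estimates, z=2.326):
    """Mean minus z * sample standard deviation (one-sided 99% normal quantile)."""
    return statistics.mean(estimates) - z * statistics.stdev(estimates)

trials = 2000
covered = 0
for _ in range(trials):
    runs = [random.gauss(TRUE_LOG10_LR, SIGMA) for _ in range(5)]  # 5 repeat runs
    if lower_bound(runs) <= TRUE_LOG10_LR:
        covered += 1
coverage = covered / trials   # fraction of trials where the bound held
```

Comparing `coverage` against the target (0.99 here) is the kind of check the abstract describes; with small batches, whether the bound achieves its nominal coverage depends heavily on the quantile used.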
Publisher: Elsevier BV
Date: 03-2014
DOI: 10.1016/J.FSIGEN.2013.07.001
Abstract: Some advanced methods for DNA profile interpretation require a probability for the event of dropout. Methods have been suggested based on logistic regression. Two of these use a proxy for template: one a proxy that is constant across loci, the other a proxy modelled using an exponential curve. Both of these methods allow different modelling constants for each locus. A variant of the model using an exponential curve is discussed. This variant constrains the constants to be the same for every locus. We test these two methods and the variant by developing the constants (training) on one set of data and testing them on another. This mimics the likely use in casework. We find that the new variant appears to be the most useful in that it performs better than the other two options when trained on one data set and used on another. The hypothesised reason is that locus-to-locus variation in amplification efficiency varies with time, multimix batch, or from sample to sample.
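The constrained variant can be sketched as a single logistic curve shared by all loci, with the template proxy entering on a log scale. The constants below are illustrative placeholders, not trained values from the paper.

```python
import math

# Sketch of a constrained logistic dropout model: one (b0, b1) pair shared
# across every locus, rather than locus-specific constants.

def p_dropout(template_proxy, b0=6.0, b1=-1.2):
    """Probability of allele dropout as a logistic function of log(template).
    b0, b1 are hypothetical constants; in practice they are trained on
    one data set and tested on another."""
    z = b0 + b1 * math.log(template_proxy)
    return 1.0 / (1.0 + math.exp(-z))
```

The negative slope on log-template encodes the expected behaviour: more template, less dropout.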
Publisher: Cold Spring Harbor Laboratory
Date: 25-06-2021
DOI: 10.1101/2021.06.25.449960
Abstract: In previously reported work, a method for applying a lower bound to the variation induced by the Monte Carlo effect was trialled. This is implemented in the widely used probabilistic genotyping system STRmix™. The approach did not give the desired 99% coverage. However, the method for assigning the lower bound to the MCMC variability is only one of a number of layers of conservatism applied in a typical application. We tested all but one of these sources of variability collectively and term the result the near global coverage. The near global coverage for all tested samples was greater than 99.5% for inclusionary average LRs of known donors. This suggests that, when included in the probability interval method, the other layers of conservatism are more than adequate to compensate for the intermittent underperformance of the MCMC variability component. Running for extended MCMC accepts was also shown to result in improved precision.
Publisher: Elsevier BV
Date: 09-2013
DOI: 10.1016/J.FSIGEN.2013.05.011
Abstract: A method for interpreting autosomal mixed DNA profiles based on continuous modelling of peak heights is described. MCMC is applied with a model for allelic and stutter heights to produce a probability for the data given a specified genotype combination. The theory extends to handle any number of contributors and replicates, although practical implementation limits analyses to four contributors. The probability of the peak data given a genotype combination has proven to be a highly intuitive probability that may be assessed subjectively by experienced caseworkers. Whilst caseworkers will not assess the probabilities per se, they can broadly judge genotypes that fit the observed data well, and those that fit relatively less well. These probabilities are used when calculating a subsequent likelihood ratio. The method has been trialled on a number of mixed DNA profiles constructed from known contributors. The results have been assessed against a binary approach and also compared with the subjective judgement of an analyst.
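The notion of a probability for the peak data given a genotype combination can be illustrated with a lognormal peak-height model. This is a simplified stand-in for the full continuous model, chosen only to show why genotypes whose expected heights fit the observed peaks score higher.

```python
import math

# Sketch: score observed peak heights against the expectations implied by a
# proposed genotype combination, using a lognormal density per peak
# (sigma is an assumed variance parameter, not a calibrated value).

def lognormal_logpdf(x, mu, sigma):
    return (-math.log(x * sigma * math.sqrt(2 * math.pi))
            - (math.log(x) - mu) ** 2 / (2 * sigma ** 2))

def log_prob_peaks(observed, expected, sigma=0.2):
    """observed, expected: dicts mapping allele -> height (rfu).
    Returns the summed log-density of the observed peaks."""
    return sum(lognormal_logpdf(observed[a], math.log(expected[a]), sigma)
               for a in observed)

# A genotype set whose expected heights match the data scores higher than
# one that forces an implausible 1:1 split of the same total signal.
good = log_prob_peaks({10: 1000, 12: 500}, {10: 1000, 12: 500})
worse = log_prob_peaks({10: 1000, 12: 500}, {10: 750, 12: 750})
```

This mirrors the intuition described in the abstract: caseworkers can broadly judge which genotypes fit the observed data well, and the model turns that judgement into a number.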
Publisher: Informa UK Limited
Date: 12-2013
Publisher: Informa UK Limited
Date: 24-05-2018
Publisher: Elsevier BV
Date: 07-2014
DOI: 10.1016/J.SCIJUS.2014.02.007
Abstract: The Bayesian paradigm is the preferred approach to evidence interpretation. It requires the evaluation of the probability of the evidence under at least two propositions. The value of the findings (i.e., our LR) will depend on these propositions and the case information, so it is crucial to identify which propositions are useful for the case at hand. Previously, a number of principles have been advanced and largely accepted for the evaluation of evidence. In the evaluation of traces involving DNA mixtures there may be more than two propositions possible. We apply these principles to some exemplar situations. We also show that in some cases, when there are no clear propositions or no defendant, a forensic scientist may be able to generate explanations to account for observations. In that case, the scientist plays a role of investigator, rather than evaluator. We believe that it is helpful for the scientist to distinguish those two roles.
Publisher: Elsevier BV
Date: 05-2021
Publisher: Elsevier BV
Date: 07-2019
DOI: 10.1016/J.FORSCIINT.2019.04.037
Abstract: Biological samples submitted for sexual assault investigation typically involve mixtures of DNA from the victim and the assailant/s. Providing a statistical weight to such evidence may be mathematically complex and may be affected by the subjective judgment of a human analyst. Software tools have been developed to address these issues. To contribute towards improving the system for routine DNA testing of sexual assault cases, we evaluated two likelihood ratio (LR) approaches for interpreting two-person DNA mixtures: a semi-continuous model using LRmix Studio and a fully continuous approach employed in STRmix™. LRs conditioned on the presence of the receptive partner's DNA were calculated for a total of 102 two-person DNA samples from simulated mixtures and various post-coital samples. Our results highlight the importance of maximising the information provided to the LR calculation to generate strong support for the true hypothesis. This can be achieved by recovering sufficient DNA from a sample to minimise the risk of drop-out and increase peak intensities, and by implementing a statistical model that utilises as much of the electropherogram information as possible. LRmix is open-source and can handle profiles with allelic drop-out and drop-in; however, stutter is not modelled and requires manual removal by a DNA analyst, especially for mixtures with low-template components. STRmix™ makes effective use of all available information by incorporating into its biological model complicating aspects of a DNA profile such as degradation, allele drop-out and drop-in, stutter, and peak height variability.
Publisher: Wiley
Date: 18-02-2021
Abstract: We describe an adaptation of Bright et al.'s work modeling peak height variability in CE-DNA profiles to the modeling of allelic aSTR (autosomal short tandem repeat) read counts from NGS-DNA profiles, specifically for profiles generated from the ForenSeq™ DNA Signature Prep Kit, DNA Primer Mix B. Bright et al.'s model consists of three key components within the estimation of total allelic product: template, locus-specific amplification efficiencies, and degradation. In this work, we investigated the two mass parameters (template and locus-specific amplification efficiencies) and used MLE (maximum likelihood estimation) and MCMC (Markov chain Monte Carlo) methods to obtain point estimates to calculate the total allelic product. The expected read counts for alleles were then calculated after proportioning some of the expected stutter product from the total allelic product. Due to preferential amplicon selection introduced by the sample purification beads, degradation is difficult to model from the aSTR outputs alone. Improved modeling of the locus-specific amplification efficiencies may mask the effects of degradation. Whilst this model could be improved by introducing locus-specific variances in addition to locus-specific priors, our results demonstrate the suitability of adapting Bright et al.'s allele peak height model for NGS-DNA profiles. This model could be incorporated into continuous probabilistic interpretation approaches for mixed DNA profiles.
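The two mass parameters can be sketched as a simple expected-read-count calculation: total allelic product as template times a locus-specific amplification efficiency, with a stutter proportion carved out of the allelic total. The efficiency and stutter proportion below are illustrative, not kit-calibrated values.

```python
# Sketch of expected NGS read counts from the two mass parameters
# (template, locus-specific amplification efficiency), with a fixed
# stutter proportion removed from the total allelic product.

def expected_reads(template, locus_efficiency, stutter_proportion=0.05):
    """Returns (expected allele reads, expected stutter reads).
    All parameter values here are hypothetical."""
    total = template * locus_efficiency
    allele_reads = total * (1 - stutter_proportion)
    stutter_reads = total * stutter_proportion
    return allele_reads, stutter_reads
```

Because the allele and stutter expectations always sum to the total allelic product, any improvement in the efficiency estimate shifts both together, which is one reason degradation is hard to separate from efficiency in this framework.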
Publisher: Elsevier BV
Date: 11-2014
DOI: 10.1016/J.FSIGEN.2014.08.015
Abstract: Sophisticated methods of DNA profile interpretation have enabled scientists to calculate weights for genotype sets proposed to explain some observed data. Using standard formulae, these weights can be incorporated into an LR calculation that considers two competing propositions. We demonstrate here how consideration of relatedness to the person of interest can be incorporated into an LR calculation, and how the same calculation can be used for familial searches of complex mixtures. We provide a general formula that can be used in semi- or fully automated methods of calculation and demonstrate its use by working through an example.
Publisher: Elsevier BV
Date: 11-2014
DOI: 10.1016/J.FSIGEN.2014.08.014
Abstract: DNA profile interpretation has benefitted from recent improvements that use semi-continuous or fully continuous methods to interpret information within an electropherogram. These methods are likelihood ratio based and currently require that a number of contributors be assigned prior to analysis. Often there is ambiguity in the choice of number of contributors, and an analyst is left with the task of determining what they believe to be the most probable number. The choice can be particularly important when the difference between two possible contributor numbers means the difference between excluding a person of interest as being a possible contributor, and producing a statistic that favours their inclusion. Presenting both options in a court of law places the decision with the court. We demonstrate here an MCMC method of correctly weighting analyses of DNA profile data spanning a range of contributors. We explore the theoretical behaviour of such a weight and demonstrate these theories using practical ex les. We also highlight the issues with omitting this weight term from the LR calculation when considering different numbers of contributors in the one calculation.
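Weighting analyses over a range of contributor numbers amounts to averaging the per-N probabilities of the data under each proposition against a prior on N, rather than fixing a single N. A minimal sketch with hypothetical numbers (not the paper's MCMC weighting, which estimates these quantities rather than taking them as given):

```python
# Sketch of an LR that spans several contributor numbers:
#   LR = sum_n Pr(E | Hp, N=n) Pr(N=n)  /  sum_n Pr(E | Hd, N=n) Pr(N=n)
# Omitting the Pr(N=n) weight term and picking one n is what the paper
# warns against when different N are considered in the one calculation.

def noc_weighted_lr(p_e_hp, p_e_hd, prior):
    """p_e_hp, p_e_hd: Pr(data | hypothesis, N=n) keyed by n.
    prior: Pr(N=n) keyed by n."""
    num = sum(p_e_hp[n] * prior[n] for n in prior)
    den = sum(p_e_hd[n] * prior[n] for n in prior)
    return num / den

# Illustrative two-vs-three-contributor ambiguity:
lr = noc_weighted_lr({2: 1e-10, 3: 5e-11},
                     {2: 1e-13, 3: 2e-13},
                     {2: 0.5, 3: 0.5})
```

Presenting this single weighted LR avoids handing the court two conflicting statistics, one per assumed contributor number.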
Publisher: Elsevier BV
Date: 11-2018
DOI: 10.1016/J.FSIGEN.2018.08.014
Abstract: MIX13 was an interlaboratory exercise directed by NIST in 2013. The goal of the exercise was to evaluate the general state of interpretation methods in use at the time across the forensic community within the US and Canada and to measure the consistency in mixture interpretation. The findings were that there was large variation in analysts' interpretations between and within laboratories. Within this work, we sought to evaluate the same mock mixture cases analyzed in MIX13, but with a more current view of the state of the science. Each of the five cases was analyzed using the Identifiler™ multiplex and interpreted with the combined probability of inclusion (CPI) and four different modern probabilistic genotyping (PG) systems. Cases 1-4 can be interpreted without difficulty by any of the four PG systems examined. Cases 1 and 4 could also be interpreted successfully with the CPI by assuming two donors. Cases 2 and 3 cannot be interpreted successfully with the CPI because of the potential for allele dropout. Case 3 demonstrated the need to consider relevant background information before interpretation of the profile. This case does not show that there is some barrier to interpretation caused by relatedness beyond the increased allelic overlap that can occur. Had this profile been of better template it might have been interpreted using the CPI despite the (potential) relatedness of contributors. Case 5 suffers from over-engineering. It is unclear whether reference 5C, a non-donor, can be excluded by manual methods. Inclusion of reference 5C should be termed an adventitious match, not a false inclusion. Beyond this statement, this case does not contribute to the interlaboratory study of analyst/laboratory interpretation method performance; instead, it explores the limits of DNA analysis. Taken collectively, the analysis of these five cases demonstrates the benefits of changing from the CPI to a PG system.
Publisher: Elsevier BV
Date: 07-2014
DOI: 10.1016/J.FSIGEN.2014.02.003
Abstract: A typical assessment of the strength of forensic DNA evidence is based on a population genetic model and estimated allele frequencies determined from a population database. Some experts provide a confidence or credible interval which takes into account the sampling variation inherent in deriving these estimates from only a sample of a total population. This interval is given in conjunction with the statistic of interest, be it a likelihood ratio (LR), match probability, or cumulative probability of inclusion. Bayesian methods of addressing database sampling variation produce a distribution for the statistic from which the bound(s) of the desired interval can be determined. Population database sampling uncertainty represents only one of the sources of uncertainty that affects estimation of the strength of DNA evidence. There are other uncertainties which can potentially have a much larger effect on the statistic, such as those inherent in the value of Fst, the weights given to genotype combinations in a continuous interpretation model, and the composition of the relevant population. In this paper we model the effect of each of these sources of uncertainty on a likelihood ratio (LR) calculation and demonstrate how changes in the distribution of these parameters affect the reported value. In addition, we illustrate the impact that different approaches to accounting for sampling uncertainty have on the LR for a four-person mixture.
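The effect of database sampling variation described in this abstract can be illustrated with a minimal sketch: bootstrap-resample a (hypothetical) single-locus allele database, recompute the LR for each resample, and take a lower percentile as a conservative bound. The database counts, allele names, and percentile choice here are illustrative assumptions, not values from the paper.

```python
import random

random.seed(1)

# Hypothetical single-locus database of 200 sampled alleles.
database = ["12"] * 60 + ["13"] * 90 + ["14"] * 50

def match_probability(db, allele_a, allele_b):
    """Heterozygote match probability 2*p_a*p_b from database frequencies."""
    n = len(db)
    pa = db.count(allele_a) / n
    pb = db.count(allele_b) / n
    return 2 * pa * pb

# Point estimate of the LR for a single-source heterozygote "12,13":
# 1 / match probability under the defence proposition.
lr_point = 1.0 / match_probability(database, "12", "13")

# Bootstrap the database to obtain a distribution for the LR,
# then report a lower 1st-percentile bound as a conservative value.
lrs = []
for _ in range(2000):
    resampled = random.choices(database, k=len(database))
    lrs.append(1.0 / match_probability(resampled, "12", "13"))
lrs.sort()
lr_lower = lrs[int(0.01 * len(lrs))]
print(round(lr_point, 2), round(lr_lower, 2))
```

This only captures the database-sampling component; as the abstract notes, uncertainty in Fst, genotype weights, and the relevant population can have a larger effect and would need to be modelled separately.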
Publisher: Elsevier BV
Date: 07-2019
DOI: 10.1016/J.FSIGEN.2019.02.020
Abstract: Standard practice in forensic science is to compare a person of interest's (POI) reference DNA profile with an evidence DNA profile and calculate a likelihood ratio that considers propositions including and excluding the POI as a DNA donor. A method has recently been published that provides the ability to compare two evidence profiles (of any number of contributors and of any level of resolution), comparing propositions that consider the profiles either have a common contributor, or do not have any common contributors. Using this method, forensic analysts can provide intelligence to law enforcement by linking crime scenes when no suspects may be available. The method could also be used as a quality assurance measure to identify potential sample-to-sample contamination. In this work we analyse a number of constructed mixtures, ranging from two to five contributors, and with known numbers of common contributors, in order to investigate the performance of using likelihood ratios for mixture-to-mixture comparisons. Our findings demonstrate the ability to identify common donors in DNA mixtures, with the power of discrimination depending largely on the least informative mixture of the pair being considered. The ability to match mixtures to mixtures may provide intelligence information to investigators by identifying possible links between cases which otherwise may not have been considered connected.
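A mixture-to-mixture comparison of the kind described can be sketched, per locus, by summing over the genotypes a putative common donor could have: the deconvolution weight each mixture assigns to that genotype, divided by its population probability. The weights, genotype frequencies, and the simplified single-locus form below are illustrative assumptions, not the full published method.

```python
# Hypothetical genotype weights from the probabilistic deconvolution
# of two mixed profiles at one locus, and illustrative population
# genotype probabilities. Under simplifying assumptions the per-locus
# LR for "the mixtures share a donor" vs "no common donor" is
# sum over genotypes g of  w1(g) * w2(g) / Pr(g).

w1 = {("12", "13"): 0.7, ("12", "14"): 0.3}   # mixture 1 weights
w2 = {("12", "13"): 0.6, ("13", "14"): 0.4}   # mixture 2 weights
freq = {("12", "13"): 0.10, ("12", "14"): 0.08, ("13", "14"): 0.12}

def common_donor_lr(wa, wb, pr):
    """Per-locus LR for a common contributor between two deconvolutions."""
    return sum(wa[g] * wb[g] / pr[g] for g in wa if g in wb)

lr = common_donor_lr(w1, w2, freq)
print(round(lr, 2))
```

As the abstract notes, the discriminating power is bounded by the less informative mixture: if either deconvolution spreads its weight thinly over many genotypes, the products w1(g)*w2(g) stay small and the LR stays close to neutral.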
Publisher: Elsevier BV
Date: 05-2019
DOI: 10.1016/J.FSIGEN.2019.02.021
Abstract: A recent publication has provided the ability to compare two mixed DNA profiles and consider their probability of occurrence if they do, compared to if they do not, have a common contributor. This ability has applications to both quality assurance (to test for sample-to-sample contamination) and for intelligence gathering purposes (did the same unknown offender donate DNA to multiple samples). We use a mixture-to-mixture comparison tool to investigate the prevalence of sample-to-sample contamination that could occur from two laboratory mechanisms, one during DNA extraction and one during electrophoresis. By carrying out pairwise comparisons of all samples (deconvoluted using the probabilistic genotyping software STRmix™) within extraction or run batches, we identify any potential common DNA donors and investigate these with respect to their risk of contamination from the two proposed mechanisms. While not identifying any contamination, we inadvertently find a potential intelligence link between samples, showing the use of a mixture-to-mixture comparison tool for investigative purposes.
Publisher: Elsevier BV
Date: 2021
Publisher: Springer Science and Business Media LLC
Date: 25-03-2019
Publisher: Elsevier BV
Date: 05-2019
Location: New Zealand
No related grants have been discovered for Jo-Anne Bright.