ORCID Profile
0000-0003-1959-012X
Current Organisation
University of Oxford
Publisher: American Geophysical Union (AGU)
Date: 04-2019
DOI: 10.1029/2018JC014471
Publisher: JMIR Publications Inc.
Date: 21-06-2018
Abstract: Recent advances in technology have reopened an old debate on which sectors will be most affected by automation. This debate is ill served by the current lack of detailed data on the exact capabilities of new machines and how they are influencing work. Although recent debates about the future of jobs have focused on whether they are at risk of automation, our research focuses on a more fine-grained and transparent method to model task automation, specifically in the domain of primary health care. This protocol describes a new wave of intelligent automation, focusing on the specific pressures faced by primary care within the National Health Service (NHS) in England. These pressures include staff shortages, increased service demand, and reduced budgets. A critical part of the problem we propose to address is a formal framework for measuring automation, which is lacking in the literature. The health care domain offers a further challenge in measuring automation because of a general lack of detailed, health care–specific occupation and task observational data to provide good insights on this misunderstood topic. This project utilizes a multimethod research design comprising two phases: a qualitative observational phase and a quantitative data analysis phase; each phase addresses one of the two project aims. Our first aim is to address the lack of task data by collecting high-quality, detailed task-specific data from UK primary health care practices. This phase employs ethnography, observation, interviews, document collection, and focus groups. The second aim is to propose a formal machine learning approach for probabilistic inference of task- and occupation-level automation to gain valuable insights. Sensitivity analysis is then used to present the occupational attributes that most increase or decrease automatability, which is vital for establishing effective training and staffing policy.
Our detailed fieldwork includes observing and documenting 16 unique occupations performing over 130 tasks across six primary care centers. Preliminary results on the current state of automation and the potential for further automation in primary care are discussed. Our initial findings are that tasks are often shared amongst staff and can include convoluted workflows that often vary between practices. The single most used technology in primary health care is the desktop computer. In addition, we have conducted a large-scale survey of over 156 machine learning and robotics experts to assess what tasks are susceptible to automation, given the state-of-the-art technology available today. Further results and detailed analysis will be published toward the end of the project in early 2019. We believe our analysis will identify many tasks currently performed manually within primary care that can be automated using currently available technology. Given the proper implementation of such automating technologies, we expect considerable staff resources to be saved, alleviating some pressures on NHS primary care staff.
DOI: 10.2196/11232
Publisher: AIP
Date: 2009
DOI: 10.1063/1.3275635
Publisher: American Chemical Society (ACS)
Date: 12-2006
DOI: 10.1021/CM052839P
Publisher: Elsevier BV
Date: 06-2019
Publisher: IEEE
Date: 04-2008
DOI: 10.1109/IPSN.2008.25
Publisher: Springer Science and Business Media LLC
Date: 26-09-2019
DOI: 10.1038/S41534-019-0193-4
Abstract: Scalable quantum technologies such as quantum computers will require very large numbers of quantum devices to be characterised and tuned. As the number of devices on chip increases, this task becomes ever more time-consuming, and will be intractable on a large scale without efficient automation. We present measurements on a quantum dot device performed by a machine learning algorithm in real time. The algorithm selects the most informative measurements to perform next by combining information theory with a probabilistic deep-generative model that can generate full-resolution reconstructions from scattered partial measurements. We demonstrate, for two different current map configurations, that the algorithm outperforms standard grid scan techniques, reducing the number of measurements required by up to 4 times and the measurement time by 3.7 times. Our contribution goes beyond the use of machine learning for data search and analysis, and instead demonstrates the use of algorithms to automate measurements. This work lays the foundation for learning-based automated measurement of quantum devices.
Publisher: MDPI AG
Date: 15-06-2022
DOI: 10.3390/A15060209
Abstract: We propose an alternative maximum entropy approach to learning the spectra of massive graphs. In contrast to the state-of-the-art Lanczos algorithm for spectral density estimation and applications thereof, our approach does not require kernel smoothing. As the choice of kernel function and associated bandwidth heavily affect the resulting output, our approach mitigates these issues. Furthermore, we prove that kernel smoothing biases the moments of the spectral density. Our approach can be seen as an information-theoretically optimal approach to learning a smooth graph spectral density that fully respects moment information. The proposed method has a computational cost linear in the number of edges, and hence can be applied even to large networks with millions of nodes. We showcase the approach on problems of graph similarity learning and counting the number of clusters in a graph, where the proposed method outperforms existing iterative spectral approaches on both synthetic and real-world graphs.
Publisher: Oxford University Press (OUP)
Date: 02-2010
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2018
Publisher: Association for Computing Machinery (ACM)
Date: 11-2012
Abstract: In this article, we consider the problem faced by a sensor network operator who must infer, in real time, the value of some environmental parameter that is being monitored at discrete points in space and time by a sensor network. We describe a powerful and generic approach built upon an efficient multi-output Gaussian process that facilitates this information acquisition and processing. Our algorithm allows effective inference even with minimal domain knowledge, and we further introduce a formulation of Bayesian Monte Carlo to permit the principled management of the hyperparameters introduced by our flexible models. We demonstrate how our methods can be applied in cases where the data is delayed, intermittently missing, censored, and/or correlated. We validate our approach using data collected from three networks of weather sensors and show that it yields better inference performance than both conventional independent Gaussian processes and the Kalman filter. Finally, we show that our formalism efficiently reuses previous computations by following an online update procedure as new data sequentially arrives, and that this results in a four-fold increase in computational speed in the largest cases considered.
Publisher: The Royal Society
Date: 13-02-2013
Abstract: In this paper, we offer a gentle introduction to Gaussian processes for time-series data analysis. The conceptual framework of Bayesian modelling for time-series data is discussed, and the foundations of Bayesian non-parametric modelling are presented for Gaussian processes. We discuss how domain knowledge influences the design of Gaussian process models and provide case examples to highlight the approaches.
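The central computation in such models, the Gaussian process posterior prediction, can be sketched in a few lines of pure Python. This is an illustrative toy with an assumed RBF kernel and invented data, not code from the paper:

```python
import math

def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return variance * math.exp(-0.5 * ((x1 - x2) / lengthscale) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_mean(xs, ys, x_star, noise=1e-6):
    """Posterior mean of a zero-mean GP at x_star, given observations (xs, ys)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)                      # alpha = K^{-1} y
    k_star = [rbf(x, x_star) for x in xs]
    return sum(k_star[i] * alpha[i] for i in range(n))

# Near-noise-free observations of sin(x): the posterior mean interpolates them.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.sin(x) for x in xs]
print(gp_posterior_mean(xs, ys, 1.0))         # close to sin(1.0)
```

At a training input the posterior mean reproduces the observed value (up to the small noise term); between training inputs it interpolates smoothly, at a rate governed by the kernel's lengthscale.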
Publisher: Elsevier BV
Date: 07-2017
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2021
Publisher: Institute of Mathematical Statistics
Date: 02-2019
DOI: 10.1214/18-STS683
Publisher: International Union of Crystallography (IUCr)
Date: 27-09-2014
DOI: 10.1107/S1399004714017581
Abstract: The visual inspection of crystallization experiments is an important yet time-consuming and subjective step in X-ray crystallography. Previously published studies have focused on automatically classifying crystallization droplets into distinct but ultimately arbitrary experiment outcomes; here, a method is described that instead ranks droplets by their likelihood of containing crystals or microcrystals, thereby prioritizing for visual inspection those images that are most likely to contain useful information. The use of textons is introduced to describe crystallization droplets objectively, allowing them to be scored with the posterior probability of a random forest classifier trained against droplets manually annotated for the presence or absence of crystals or microcrystals. Unlike multi-class classification, this two-class system lends itself naturally to unidirectional ranking, which is most useful for assisting sequential viewing because images can be arranged simply by using these scores: this places droplets with probable crystalline behaviour early in the viewing order. Using this approach, the top ten wells included at least one human-annotated crystal or microcrystal for 94% of the plates in a data set of 196 plates imaged with a Minstrel HT system. The algorithm is robustly transferable to at least one other imaging system: when the parameters trained from Minstrel HT images are applied to a data set imaged by the Rock Imager system, human-annotated crystals ranked in the top ten wells for 90% of the plates. Because rearranging images is fundamental to the approach, a custom viewer was written to seamlessly support such ranked viewing, along with another important output of the algorithm, namely the shape of the curve of scores, which is itself a useful overview of the behaviour of the plate; additional features with known usefulness were adopted from existing viewers. Evidence is presented that such ranked viewing of images allows faster but more accurate evaluation of drops, in particular for the identification of microcrystals.
Publisher: Institute of Mathematical Statistics
Date: 02-2019
DOI: 10.1214/18-STS660
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2019
Publisher: The Royal Society
Date: 23-07-2010
Abstract: Pigeons home along idiosyncratic habitual routes from familiar locations. It has been suggested that memorized visual landmarks underpin this route learning. However, the inability to experimentally alter the landscape on large scales has hindered the discovery of the particular features to which birds attend. Here, we present a method for objectively classifying the most informative regions of animal paths. We apply this method to flight trajectories from homing pigeons to identify probable locations of salient visual landmarks. We construct and apply a Gaussian process model of flight trajectory generation for pigeons trained to home from specific release sites. The model shows increasing predictive power as the birds become familiar with the sites, mirroring the animal's learning process. We subsequently find that the most informative elements of the flight trajectories coincide with landscape features that have previously been suggested as important components of the homing task.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2018
Publisher: The Royal Society
Date: 07-2015
Abstract: We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
Publisher: Elsevier BV
Date: 06-2018
Publisher: American Society of Mechanical Engineers
Date: 11-10-2017
Abstract: Accurate on-board capacity estimation is of critical importance in lithium-ion battery applications. Battery charging/discharging often occurs under a constant current load, and hence voltage vs. time measurements under this condition may be accessible in practice. This paper presents a novel diagnostic technique, Gaussian Process regression for In-situ Capacity Estimation (GP-ICE), which is capable of estimating the battery capacity using voltage vs. time measurements over short periods of galvanostatic operation. The approach uses Gaussian process regression to map from voltage values at a selection of uniformly distributed times, to cell capacity. Unlike previous works, GP-ICE does not rely on interpreting the voltage-time data through the lens of Incremental Capacity (IC) or Differential Voltage (DV) analysis. This overcomes both the need to differentiate the voltage-time data (a process which amplifies measurement noise), and the requirement that the range of voltage measurements encompasses the peaks in the IC/DV curves. Rather, GP-ICE gives insight into which portions of the voltage range are most informative about the capacity for a particular cell. We apply GP-ICE to a dataset of 8 cells, which were aged by repeated application of an ARTEMIS urban drive cycle. Within certain voltage ranges, as little as 10 seconds of charge data is sufficient to enable capacity estimates with ∼2% RMSE.
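The regression idea, mapping voltage measured after a fixed period of constant-current operation to cell capacity, can be illustrated with a toy sketch. Here ordinary least squares stands in for the paper's Gaussian process, and the discharge model, constants, and cell capacities are all invented for illustration:

```python
def synth_voltage(capacity, t, v0=4.2, k=0.002):
    """Toy constant-current discharge model (assumed, not from the paper):
    voltage after t seconds falls faster for lower-capacity cells."""
    return v0 - k * t / capacity

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

train_caps = [0.80, 0.85, 0.90, 0.95, 1.00]   # synthetic "aged" cells (Ah)
t_probe = 30.0                                 # seconds of galvanostatic data used
features = [synth_voltage(c, t_probe) for c in train_caps]
a, b = fit_line(features, train_caps)          # voltage -> capacity

v_new = synth_voltage(0.875, t_probe)          # held-out cell
print(a + b * v_new)                           # near the true capacity, 0.875
```

The point of the sketch is the feature construction: a short window of voltage-time data under constant current, sampled at fixed times, is already informative about capacity, without differentiating the curve into IC/DV form.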
Publisher: Public Library of Science (PLoS)
Date: 09-09-2021
DOI: 10.1371/JOURNAL.PCBI.1008886
Abstract: Accumulating evidence from human-based research has highlighted that the prevalent one-size-fits-all approach for neural and behavioral interventions is inefficient. This approach can benefit one individual, but be ineffective or even detrimental for another. Studying the efficacy of the large range of different parameters for different individuals is costly, time-consuming and requires a large sample size that makes such research impractical and hinders effective interventions. Here an active machine learning technique is presented across participants—personalized Bayesian optimization (pBO)—that searches available parameter combinations to optimize an intervention as a function of an individual’s ability. This novel technique was utilized to identify transcranial alternating current stimulation (tACS) frequency and current strength combinations most likely to improve arithmetic performance, based on a subject’s baseline arithmetic abilities. The pBO was performed across all subjects tested, building a model of subject performance, capable of recommending parameters for future subjects based on their baseline arithmetic ability. pBO successfully searches, learns, and recommends parameters for an effective neurointervention as supported by behavioral, simulation, and neural data. The application of pBO in human-based research opens up new avenues for personalized and more effective interventions, as well as discoveries of protocols for treatment and translation to other clinical and non-clinical domains.
Publisher: JMIR Publications Inc.
Date: 09-04-2019
DOI: 10.2196/11232
Publisher: American Physical Society (APS)
Date: 08-07-2019
Publisher: IEEE
Date: 06-2016
Publisher: MDPI AG
Date: 29-07-2019
DOI: 10.3390/E21080741
Abstract: Fairness, through its many forms and definitions, has become an important issue facing the machine learning community. In this work, we consider how to incorporate group fairness constraints into kernel regression methods, applicable to Gaussian processes, support vector machines, neural network regression and decision tree regression. Further, we focus on examining the effect of incorporating these constraints in decision tree regression, with direct applications to random forests and boosted trees amongst other widespread popular inference techniques. We show that the order of complexity of memory and computation is preserved for such models, and we tightly bound the expected perturbation to the model in terms of the number of leaves of the trees. Importantly, the approach works on trained models and hence can be easily applied to models in current use; group labels are required only on training data.
Publisher: MDPI AG
Date: 31-05-2019
DOI: 10.3390/E21060551
Abstract: Efficient approximation lies at the heart of large-scale machine learning problems. In this paper, we propose a novel, robust maximum entropy algorithm, which is capable of dealing with hundreds of moments and allows for computationally efficient approximations. We showcase the usefulness of the proposed method, its equivalence to constrained Bayesian variational inference and demonstrate its superiority over existing approaches in two applications, namely, fast log determinant estimation and information-theoretic Bayesian optimisation.
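The underlying principle, choosing the maximum-entropy distribution subject to moment constraints, can be illustrated with a toy single-moment example (this is not the paper's algorithm, which handles hundreds of moments): the solution is an exponential-family tilt of the uniform distribution, whose Lagrange multiplier can be found by bisection because the tilted mean is monotone in it.

```python
import math

def maxent_dist(support, target_mean, lo=-10.0, hi=10.0, iters=100):
    """Max-entropy distribution on `support` matching a single moment (the
    mean): p_i proportional to exp(lam * x_i), with lam found by bisection."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z
    # mean_for is increasing in lam, so bisect on the bracket [lo, hi].
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

support = list(range(11))              # {0, 1, ..., 10}
p = maxent_dist(support, 3.0)
mean = sum(x * pi for x, pi in zip(support, p))
print(mean)                            # ≈ 3.0, as constrained
```

With more moment constraints, each adds a term lam_k * x^k to the exponent and the one-dimensional bisection becomes a multivariate optimisation, which is where the paper's robust algorithm comes in.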
Publisher: ACM
Date: 12-04-2010
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Michael Osborne.