ORCID Profile
0000-0001-5547-9213
Current Organisations
University of Oxford, University of Adelaide
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2009
Publisher: IEEE
Date: 06-2015
Publisher: IEEE
Date: 06-2008
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2009
Publisher: ICST
Date: 2009
Publisher: IEEE
Date: 09-2016
Publisher: Springer Science and Business Media LLC
Date: 18-06-2021
DOI: 10.1038/S41534-021-00434-X
Abstract: Deep reinforcement learning is an emerging machine-learning approach that can teach a computer to learn from its actions and rewards, much as humans learn from experience. It offers many advantages for automating decision processes that must navigate large parameter spaces. This paper proposes an approach to the efficient measurement of quantum devices based on deep reinforcement learning. We focus on double quantum dot devices, demonstrating the fully automatic identification of specific transport features called bias triangles. Measurements targeting these features are difficult to automate, since bias triangles are found in otherwise featureless regions of the parameter space. Our algorithm identifies bias triangles in a mean time of min, and sometimes in as little as 1 min. This approach, based on dueling deep Q-networks, can be adapted to a broad range of devices and target transport features, and is a crucial demonstration of the utility of deep reinforcement learning for decision making in the measurement and operation of quantum devices.
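The dueling deep Q-network mentioned in this abstract splits Q-value estimation into a state-value head and an action-advantage head. A minimal numpy sketch of that value decomposition follows; the linear heads, feature sizes, and random weights are illustrative assumptions, not the authors' trained network.

```python
import numpy as np

def dueling_q_values(features, w_value, w_advantage):
    """Combine a state-value head and an advantage head into Q-values:

        Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)

    Subtracting the mean advantage keeps the decomposition identifiable.
    """
    v = features @ w_value            # scalar state value, shape (1,)
    a = features @ w_advantage        # one advantage per action
    return v + a - a.mean()

rng = np.random.default_rng(0)
features = rng.normal(size=4)          # toy state features (e.g. a measurement patch)
w_value = rng.normal(size=(4, 1))
w_advantage = rng.normal(size=(4, 3))  # 3 candidate measurement actions

q = dueling_q_values(features, w_value, w_advantage)
greedy_action = int(np.argmax(q))      # the action a greedy agent would take next
```

In a measurement setting, each "action" would correspond to a choice of where to measure next in gate-voltage space.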
Publisher: IOP Publishing
Date: 09-2020
Abstract: Quantum devices with a large number of gate electrodes allow for precise control of device parameters. This capability is hard to fully exploit because these parameters depend on the applied gate voltages in complex ways. We experimentally demonstrate an algorithm capable of fine-tuning several device parameters at once. The algorithm acquires a measurement and assigns it a score using a variational auto-encoder, and gate voltages are then adjusted in real time, in an unsupervised fashion, to optimize this score. We report fine-tuning of a double quantum dot device within approximately 40 min.
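The tuning loop described here alternates between acquiring a measurement, scoring it, and adjusting gate voltages to improve the score. A minimal sketch of that loop, assuming a black-box scorer in place of the paper's variational auto-encoder and simple random search in place of its optimizer:

```python
import numpy as np

def tune_gates(score_fn, bounds, n_iters=200, seed=0):
    """Unsupervised tuning loop: sample gate-voltage settings within bounds
    and keep whichever setting maximises the measurement score.
    `score_fn` is a stand-in for the paper's variational auto-encoder scorer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    best_v, best_s = None, -np.inf
    for _ in range(n_iters):
        v = rng.uniform(lo, hi)   # candidate gate voltages
        s = score_fn(v)           # acquire a measurement and score it
        if s > best_s:
            best_v, best_s = v, s
    return best_v, best_s

# Toy stand-in scorer: peaks when both gates sit near -0.5 V.
target = np.array([-0.5, -0.5])
score = lambda v: -np.sum((v - target) ** 2)
v_opt, s_opt = tune_gates(score, bounds=(np.array([-1.0, -1.0]),
                                         np.array([0.0, 0.0])))
```

The real algorithm uses a more sample-efficient optimizer, but the interface (score in, voltages out) is the same.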
Publisher: IEEE
Date: 09-2010
Publisher: UiT The Arctic University of Norway
Date: 19-04-2021
DOI: 10.7557/18.5708
Abstract: The problem of interpretability for binary image classification is considered through the lens of kernel two-sample tests and generative modeling. A feature extraction framework coined Deep Interpretable Features (DIF) is developed, which is used in combination with IntroVAE, a generative model capable of high-resolution image synthesis. Experimental results on a variety of datasets, including COVID-19 chest x-rays, demonstrate the benefits of combining deep generative models with ideas from kernel-based hypothesis testing in moving towards more robust, interpretable deep generative models.
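The kernel two-sample tests referenced here are typically built on the maximum mean discrepancy (MMD) statistic. A minimal numpy sketch of the biased Gaussian-kernel MMD estimator, with illustrative toy data rather than the paper's DIF features:

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of squared maximum mean discrepancy (MMD) with a
    Gaussian kernel -- the statistic underlying kernel two-sample tests."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    # ||mean embedding of x - mean embedding of y||^2 in the kernel's RKHS
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(1)
same = mmd2(rng.normal(size=(100, 2)), rng.normal(size=(100, 2)))
shifted = mmd2(rng.normal(size=(100, 2)), rng.normal(loc=2.0, size=(100, 2)))
```

When the two samples come from the same distribution the statistic stays near zero; a distribution shift drives it up, which is what makes it usable as a test statistic on learned features.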
Publisher: IEEE
Date: 11-2007
Publisher: Institute of Mathematical Statistics
Date: 10-2013
DOI: 10.1214/13-AOS1140
Publisher: Elsevier BV
Date: 12-2018
Publisher: Institute of Mathematical Statistics
Date: 2017
DOI: 10.1214/17-EJS1339SI
Publisher: Verein zur Förderung des Open Access Publizierens in den Quantenwissenschaften
Date: 08-08-2023
DOI: 10.22331/Q-2023-08-08-1077
Abstract: Pauli spin blockade (PSB) is a valuable resource for spin-qubit initialisation and readout even at elevated temperatures, but it can be difficult to identify. We present a machine learning algorithm capable of automatically identifying PSB using charge transport measurements. The scarcity of PSB data is circumvented by training the algorithm with simulated data and by using cross-device validation. We demonstrate our approach on a silicon field-effect transistor device and report an accuracy of 96% on different test devices, giving evidence that the approach is robust to device variability. Our algorithm, an essential step towards fully automatic qubit tuning, is expected to be employable across all types of quantum dot devices.
Publisher: Springer International Publishing
Date: 2017
Publisher: Springer Science and Business Media LLC
Date: 19-08-2021
DOI: 10.1038/S41467-020-17835-9
Abstract: Variability is a problem for the scalability of semiconductor quantum devices. The parameter space is large, and the operating range is small. Our statistical tuning algorithm searches for specific electron transport features in gate-defined quantum dot devices with a gate voltage space of up to eight dimensions. Starting from the full range of each gate voltage, our machine learning algorithm can tune each device to optimal performance in a median time of under 70 minutes. This performance surpassed our best human benchmark (although both human and machine performance can be improved). The algorithm is approximately 180 times faster than an automated random search of the parameter space, and is suitable for different material systems and device architectures. Our results yield a quantitative measurement of device variability, from one device to another and after thermal cycling. Our machine learning algorithm can be extended to higher dimensions and other technologies.
Publisher: ICST
Date: 2009
Publisher: IEEE
Date: 09-2016
Publisher: IEEE
Date: 03-2012
Publisher: IEEE
Date: 2008
Publisher: Institute of Mathematical Statistics
Date: 02-2019
DOI: 10.1214/18-STS660
Publisher: IOP Publishing
Date: 05-2022
Abstract: Process understanding and modeling are at the core of scientific reasoning. Principled parametric and mechanistic modeling dominated science and engineering until the recent emergence of machine learning (ML). Despite great success in many areas, ML algorithms in the Earth and climate sciences, and more broadly in the physical sciences, are not explicitly designed to be physically consistent and may, therefore, violate the most basic laws of physics. In this work, motivated by the field of algorithmic fairness, we reconcile data-driven ML with physics modeling by illustrating a nonparametric and nonlinear physics-aware regression method. By incorporating a dependence-based regularizer, the method leads to models that are consistent with domain knowledge, as reflected by either simulations from physical models or ancillary data. The idea can conversely encourage independence of model predictions from other variables that are known to be uncertain in either their representation or magnitude. The method is computationally efficient and comes with a closed-form analytic solution. Through a consistency-vs-accuracy path diagram, one can assess the consistency between data-driven models and physical models. We demonstrate in three examples on simulations and measurement data in Earth and climate studies that the proposed ML framework allows us to trade off physical consistency and accuracy.
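Because the dependence-based regularizer keeps the objective quadratic, the regularized fit retains a closed-form solution. The following is a deliberately simplified linear sketch of that idea, not the paper's kernelised method: the "dependence" penalty here is the squared sample covariance between the predictions and an ancillary variable z, and all data are synthetic.

```python
import numpy as np

def consistent_regression(X, y, z, lam=1e-2, gamma=0.0):
    """Linear least squares with (i) a ridge penalty and (ii) a penalty on the
    squared sample covariance between the predictions X @ w and an ancillary
    variable z. The objective stays quadratic in w, so the minimiser is closed
    form -- a linear stand-in for the paper's kernel dependence regulariser."""
    n, d = X.shape
    zc = z - z.mean()
    c = X.T @ zc / n  # cov(X @ w, z) is approximately c @ w
    A = X.T @ X + lam * np.eye(d) + gamma * np.outer(c, c)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
z = X[:, 0] + 0.1 * rng.normal(size=200)   # ancillary variable tracks feature 0
y = X @ np.array([1.0, 2.0, 0.0]) + 0.1 * rng.normal(size=200)

w_plain = consistent_regression(X, y, z)
w_indep = consistent_regression(X, y, z, gamma=1e4)  # discourage dependence on z
```

Raising `gamma` trades accuracy for independence of the predictions from z, tracing out the consistency-vs-accuracy path the abstract describes.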
Publisher: IEEE
Date: 2008
DOI: 10.1109/ICC.2008.840
Publisher: Research Square Platform LLC
Date: 13-06-2022
DOI: 10.21203/RS.3.RS-1340093/V1
Abstract: Pauli spin blockade (PSB) is a valuable resource for spin-qubit initialisation and readout even at elevated temperatures, but it can be difficult to identify. We present a machine learning algorithm capable of automatically identifying PSB using charge transport measurements. The scarcity of PSB data is circumvented by training the algorithm with simulated data and by using cross-device validation. We demonstrate our approach on a silicon field-effect transistor device and report an accuracy of 96% on different test devices, giving evidence that the approach is robust to device variability. The approach is expected to be employable across all types of quantum dot devices.
Publisher: Oxford University Press (OUP)
Date: 04-03-2019
DOI: 10.1093/NSR/NWZ028
Publisher: Springer International Publishing
Date: 2020
Publisher: Informa UK Limited
Date: 29-06-2022
Publisher: Springer Science and Business Media LLC
Date: 17-07-2014
Publisher: IEEE
Date: 06-2012
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 10-2009
Publisher: Authorea, Inc.
Date: 23-07-2023
DOI: 10.22541/ESSOAR.169008319.96252512/V1
Abstract: Emulators, or reduced complexity climate models, are surrogate Earth system models that produce projections of key climate quantities with minimal computational resources. Using time-series modelling or more advanced machine learning techniques, data-driven emulators have emerged as a promising avenue of research, producing spatially resolved climate responses that are visually indistinguishable from state-of-the-art Earth system models. Yet, their lack of physical interpretability limits their wider adoption. In this work, we introduce FaIRGP, a data-driven emulator that satisfies the physical temperature response equations of an energy balance model. The result is an emulator that (i) enjoys the flexibility of statistical machine learning models and can learn from observations, and (ii) has a robust physical grounding with interpretable parameters that can be used to make inference about the climate system. Further, our Bayesian approach allows a principled and mathematically tractable uncertainty quantification. Our model demonstrates skillful emulation of global mean surface temperature and spatial surface temperatures across realistic future scenarios. Its ability to learn from data allows it to outperform energy balance models, while its robust physical foundation safeguards against the pitfalls of purely data-driven models. We also illustrate how FaIRGP can be used to obtain estimates of top-of-atmosphere radiative forcing and discuss the benefits of its mathematical tractability for applications such as detection and attribution or precipitation emulation. We hope that this work will contribute to widening the adoption of data-driven methods in climate emulation.
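The physical grounding described here comes from the temperature response equations of an energy balance model. A minimal sketch of a FaIR-style two-box temperature response follows; the box parameters (response amplitudes q and timescales d, in years) are illustrative placeholders, not calibrated values from the paper.

```python
import numpy as np

def two_box_temperature_response(forcing, q=(0.33, 0.41), d=(239.0, 4.1)):
    """FaIR-style two-box energy balance step: each box j relaxes toward
    q[j] * F (its equilibrium share of the forcing F) with timescale d[j]
    years; total surface warming is the sum of the boxes."""
    boxes = np.zeros(2)
    temps = []
    for f in forcing:
        for j in range(2):
            decay = np.exp(-1.0 / d[j])
            boxes[j] = boxes[j] * decay + q[j] * f * (1.0 - decay)
        temps.append(boxes.sum())
    return np.array(temps)

# Constant 4 W/m^2 forcing: warming rises monotonically toward equilibrium.
T = two_box_temperature_response(np.full(100, 4.0))
```

An emulator like the one described would place a statistical model on top of this deterministic backbone, so that deviations learned from data remain anchored to the energy-balance dynamics.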
Publisher: IEEE
Date: 09-2010
Publisher: American Association for the Advancement of Science (AAAS)
Date: 11-2019
Abstract: A novel causal discovery method for estimating nonlinear interdependency networks from large time series datasets.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2009
Publisher: eLife Sciences Publications, Ltd
Date: 23-01-2015
DOI: 10.7554/ELIFE.04919
Abstract: Electrophysiological data disclose rich dynamics in patterns of neural activity evoked by sensory objects. Retrieving objects from memory reinstates components of this activity. In humans, the temporal structure of this retrieved activity remains largely unexplored, and here we address this gap using the spatiotemporal precision of magnetoencephalography (MEG). In a sensory preconditioning paradigm, 'indirect' objects were paired with 'direct' objects to form associative links, and the latter were then paired with rewards. Using multivariate analysis methods we examined the short-time evolution of neural representations of indirect objects retrieved during reward-learning about direct objects. We found two components of the evoked representation of the indirect stimulus, 200 ms apart. The strength of retrieval of one, but not the other, representational component correlated with generalization of reward learning from direct to indirect stimuli. We suggest the temporal structure within retrieved neural representations may be key to their function.
Publisher: IEEE
Date: 06-2009
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2016
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Dino Sejdinovic.