ORCID Profile
0000-0003-2718-7680
Current Organisation
University of Melbourne
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Artificial Intelligence and Image Processing | Neural Networks, Genetic Algorithms and Fuzzy Logic | Operations Research | Applied Mathematics | Applied Statistics | Optimisation | Pattern Recognition and Data Mining | Statistics | Electrical and Electronic Engineering | Signal Processing | Stochastic Analysis and Modelling | Power and Energy Systems Engineering (excl. Renewable Power) | Financial Econometrics | Software Engineering | Statistical Mechanics, Physical Combinatorics and Mathematical Aspects of Condensed Matter | Engineering/Technology Instrumentation | Neurobiology | Decision Support and Group Support Systems | Global Information Systems | Dynamical Systems | Numerical and Computational Mathematics | Simulation and Modelling | Computer Communications Networks | Environmental Engineering | Environmental Engineering Modelling | Text Processing | Statistical Theory | Computer Vision | Pattern Recognition
Information processing services | Expanding Knowledge in the Mathematical Sciences | Combined operations | Finance and investment services | Diagnostic methods | Application tools and system utilities | Computer Software and Services not elsewhere classified | Emerging Defence Technologies | Studies in human society | National Security | Biological sciences | Mathematical sciences | Energy Services and Utilities | Information services not elsewhere classified | Solid Oxide Fuel Cells | Integrated (ecosystem) assessment and management | Forestry not elsewhere classified | Library and Archival Services | Air Terminal Infrastructure and Management | Commercial security services | Technological and Organisational Innovation | Integrated circuits and devices | Expanding Knowledge in the Medical and Health Sciences | Scientific instrumentation | Urban and Industrial Water Management | Industrial Energy Conservation and Efficiency | Residential Energy Conservation and Efficiency | Expanding Knowledge in the Environmental Sciences | Medical Instruments | Other
Publisher: Elsevier BV
Date: 09-2000
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 1998
DOI: 10.1109/72.728380
Abstract: After more than a decade of research, there now exist several neural-network techniques for solving NP-hard combinatorial optimization problems. Hopfield networks and self-organizing maps are the two main categories into which most of the approaches can be divided. Criticism of these approaches includes the tendency of the Hopfield network to produce infeasible solutions, and the lack of generalizability of the self-organizing approaches (being only applicable to Euclidean problems). This paper proposes two new techniques which have overcome these pitfalls: a Hopfield network which enables feasibility of the solutions to be ensured and improved solution quality through escape from local minima, and a self-organizing neural network which generalizes to solve a broad class of combinatorial optimization problems. Two sample practical optimization problems from Australian industry are then used to test the performances of the neural techniques against more traditional heuristic solutions.
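As a rough illustration of the Hopfield family of methods described above, the sketch below minimizes a generic quadratic energy by asynchronous binary updates. It is a minimal generic form, not the paper's feasibility-preserving variant; the random W and b stand in for a real problem encoding.

```python
import numpy as np

# Minimal sketch of a discrete Hopfield network for combinatorial
# optimization. The problem would be encoded in weights W and biases b
# so that low energy E(x) = -0.5 x'Wx - b'x corresponds to good solutions;
# here W and b are random placeholders.

rng = np.random.default_rng(0)
n = 16
W = rng.standard_normal((n, n))
W = (W + W.T) / 2          # symmetric weights
np.fill_diagonal(W, 0.0)   # no self-coupling in the basic model
b = rng.standard_normal(n)

def energy(x):
    return -0.5 * x @ W @ x - b @ x

x = rng.integers(0, 2, n).astype(float)
for sweep in range(50):
    for i in rng.permutation(n):       # asynchronous updates
        x[i] = 1.0 if W[i] @ x + b[i] > 0 else 0.0

print("final energy:", energy(x))
```

With symmetric weights and zero diagonal, each asynchronous update cannot increase the energy, which is why the basic model converges to a (possibly local) minimum, the pitfall the paper's escape mechanism addresses.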
Publisher: Springer Science and Business Media LLC
Date: 06-11-2014
Publisher: Institute for Operations Research and the Management Sciences (INFORMS)
Date: 11-2022
Abstract: In recent years, multifidelity expensive black-box (Mf-EBB) methods have received increasing attention due to their strong applicability to industrial design problems. The challenge, however, is that knowledge of the relationship between decisions and objective values is limited to a small set of sample observations of variable quality. In the field of Mf-EBB, a problem instance consists of an expensive yet accurate source of information, and one or more cheap yet less accurate sources of information. The field aims to provide techniques either to accurately explain how decisions affect design outcome, or to find the best decisions to optimise design outcomes. Many techniques that use surrogate models have been developed to provide solutions to both aims. Only in recent years, however, have researchers begun to explore the conditions under which these new techniques are reliable, often focusing on problems with a single low-fidelity function, known as bifidelity expensive black-box (Bf-EBB) problems. This study extends the existing Bf-EBB test instances found in the literature, as well as the features used to determine when the low-fidelity information source should be used. A literature test suite is constructed and augmented with new instances to demonstrate the potentially misleading results that could be reached using only the instances currently found in the literature, and to expose the criticality of a more heterogeneous test suite for algorithm assessment. Addressing the shortcomings of the existing literature, a new set of features is presented, as well as a new instance creation procedure, and a study of their impact on algorithm assessment is conducted. The low-fidelity information source is shown to be valuable if it is often locally accurate, even when its overall accuracy is relatively low. This contradicts the existing literature guidelines, which indicate the low-fidelity information is only useful if it has a high overall accuracy. History: Accepted by Antonio Frangioni, Area Editor for Design & Analysis of Algorithms – Continuous. Funding: This work was supported by Australian Research Council [Grant IC200100009] for the ARC Training Centre in Optimisation Technologies, Integrated Methodologies and Applications (OPTIMA), and the University of Melbourne Research Computing Services and Petascale Campus Initiative. N. Andrés-Thió is also supported by a Research Training Program scholarship from the University of Melbourne. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplementary Information [ doi/suppl/10.1287/ijoc.2022.1217 ] or is available from the IJOC GitHub software repository ( github.com/INFORMSJoC ) at [ 10.5281/zenodo.6578060 ].
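The headline finding, that local accuracy of the low-fidelity source matters more than overall accuracy, can be illustrated with a toy feature computation. Everything below (the functions f_high and f_low, the windowed correlation) is an assumed stand-in, not the paper's actual feature set:

```python
import numpy as np

# Compare a global accuracy measure of a low-fidelity source against a
# "local" one averaged over small neighbourhoods of the sample.

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 10, 200))
f_high = np.sin(X)           # expensive, accurate source (toy)
f_low = np.sin(X) + 0.3 * X  # cheap source: locally similar shape,
                             # globally biased by a drift term

global_corr = np.corrcoef(f_high, f_low)[0, 1]

window = 20
local_corrs = [np.corrcoef(f_high[i:i + window], f_low[i:i + window])[0, 1]
               for i in range(0, len(X) - window, window)]

print("global correlation:", round(global_corr, 3))
print("mean local correlation:", round(float(np.mean(local_corrs)), 3))
```

The drifted low-fidelity source scores modestly on global correlation yet near-perfectly on the windowed measure, the kind of instance the paper argues existing guidelines would wrongly dismiss.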
Publisher: IEEE
Date: 04-2013
Publisher: Elsevier BV
Date: 09-1996
Publisher: IGI Global
Date: 2007
DOI: 10.4018/978-1-59904-265-7.CH011
Abstract: With the advancement of storage, retrieval, and network technologies today, the amount of information available to each organization is literally exploding. Although data are widely recognized as an organizational asset, they often become a liability, because the cost to acquire and manage them can far exceed the value derived from them. Thus, the success of modern organizations relies not only on their capability to acquire and manage their data but also on their ability to efficiently derive useful, actionable knowledge from it. To explore and analyze large data repositories and discover useful actionable knowledge from them, modern organizations have used a technique known as data mining, which analyzes voluminous digital data and discovers hidden but useful patterns from such massive digital data. However, discovery of hidden patterns has statistical meaning and may often disclose some sensitive information. As a result, privacy becomes one of the prime concerns in the data-mining research community. Since distributed data mining discovers rules by combining local models from various distributed sites, breaching data privacy happens more often than it does in centralized environments.
Publisher: IGI Global
Date: 2007
DOI: 10.4018/978-1-59904-528-3.CH006
Abstract: The most critical component of kernel-based learning algorithms is the choice of an appropriate kernel and its optimal parameters. In this paper we propose a rule-based meta-learning approach for automatic radial basis function (RBF) kernel and its parameter selection for Support Vector Machine (SVM) classification. First, the best parameter selection is considered on the basis of prior information of the data with the help of Maximum Likelihood (ML) method and Nelder-Mead (N-M) simplex method. Then the new rule-based meta-learning approach is constructed and tested on different sizes of 112 datasets with binary class as well as multi-class classification problems. We observe that our rule-based methodology provides significant improvement of computational time as well as accuracy in some specific cases.
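A minimal sketch of the parameter-search step only, assuming scikit-learn's SVC and cross_val_score together with SciPy's Nelder-Mead; the meta-learning rules themselves are not reproduced:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Tune log(C) and log(gamma) of an RBF-kernel SVM with the Nelder-Mead
# simplex method, using cross-validated accuracy as the objective.

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def neg_cv_accuracy(log_params):
    C, gamma = np.exp(log_params)       # optimize in log space
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    return -cross_val_score(clf, X, y, cv=5).mean()

res = minimize(neg_cv_accuracy, x0=[0.0, -2.0], method="Nelder-Mead")
print("best C, gamma:", np.exp(res.x), "cv accuracy:", -res.fun)
```

Searching in log space keeps the simplex well scaled, since useful C and gamma values span several orders of magnitude.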
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2015
Publisher: Elsevier BV
Date: 10-2021
Publisher: Elsevier BV
Date: 05-2012
Publisher: IEEE
Date: 2008
Publisher: Elsevier BV
Date: 2007
Publisher: IEEE
Date: 12-2006
Publisher: IEEE
Date: 07-2010
Publisher: Springer Berlin Heidelberg
Date: 2005
DOI: 10.1007/11589990_28
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-1998
DOI: 10.1109/72.701185
Abstract: Chen and Aihara recently proposed a chaotic simulated annealing approach to solving optimization problems. By adding a negative self-coupling to a network model proposed earlier by Aihara et al. and gradually removing this negative self-coupling, they used the transient chaos for searching and self-organizing, thereby achieving remarkable improvement over other neural-network approaches to optimization problems with or without simulated annealing. In this paper we suggest a new approach to chaotic simulated annealing with guaranteed convergence and minimization of the energy function by gradually reducing the time step in the Euler approximation of the differential equations that describe the continuous Hopfield neural network. This approach eliminates the need to carefully select other system parameters. We also generalize the convergence theorems of Chen and Aihara to arbitrarily increasing neuronal input-output functions and to less restrictive and yet more compact forms.
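The decaying-timestep idea lends itself to a compact sketch: Euler-integrate continuous Hopfield dynamics with a deliberately large initial step (which induces the transient chaotic search) and shrink the step each iteration so the network settles. The weights below are a toy symmetric instance, not a real problem encoding:

```python
import numpy as np

# Chaotic simulated annealing by timestep decay (sketch): a large Euler
# step destabilizes the continuous Hopfield dynamics; gradually reducing
# it recovers convergent gradient-like behaviour.

rng = np.random.default_rng(2)
n = 10
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.standard_normal(n)

u = rng.standard_normal(n) * 0.1   # internal neuron states
dt = 2.0                           # large initial timestep -> transient chaos
for t in range(500):
    x = 1.0 / (1.0 + np.exp(-u))   # sigmoid input-output function
    u = u + dt * (-u + W @ x + b)  # Euler step of Hopfield dynamics
    dt *= 0.99                     # gradually reduce the timestep

x = 1.0 / (1.0 + np.exp(-u))
print("energy:", -0.5 * x @ W @ x - b @ x)
```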
Publisher: MIT Press - Journals
Date: 11-2005
Abstract: One of the major obstacles in using neural networks to solve combinatorial optimization problems is the convergence toward one of the many local minima instead of the global minima. In this letter, we propose a technique that enables a self-organizing neural network to escape from local minima by virtue of the intermittency phenomenon. It gives rise to novel search dynamics that allow the system to visit multiple global minima as meta-stable states. Numerical experiments performed suggest that the phenomenon is a combined effect of Kohonen-type competitive learning and the iterated softmax function operating near bifurcation. The resultant intermittent search exhibits fractal characteristics when the optimization performance is at its peak in the form of 1/f signals in the time evolution of the cost, as well as power law distributions in the meta-stable solution states. The N-Queens problem is used as an example to illustrate the meta-stable convergence process that sequentially generates, in a single run, 92 solutions to the 8-Queens problem and 4024 solutions to the 17-Queens problem.
Publisher: Springer Berlin Heidelberg
Date: 2001
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 09-2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2023
Publisher: IEEE
Date: 06-2008
Publisher: Elsevier BV
Date: 10-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 06-2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2007
Publisher: Elsevier BV
Date: 11-2015
Publisher: IGI Global
Date: 2006
DOI: 10.4018/978-1-59904-271-8.CH006
Abstract: Association rule mining is one of the most widely used data mining techniques. To achieve a better performance, many efficient algorithms have been proposed. Despite these efforts, many of these algorithms require a large amount of main memory to enumerate all frequent itemsets, especially when the dataset is large or the user-specified support is low. Thus, it becomes apparent that we need to have an efficient main memory handling technique, which allows association rule mining algorithms to handle larger datasets in the main memory. To achieve this goal, in this chapter we propose an algorithm for vertical association rule mining that compresses a vertical dataset in an efficient manner, using bit vectors. Our performance evaluations show that the compression ratio attained by our proposed technique is better than those of the other well-known techniques.
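The vertical bit-vector representation can be sketched in a few lines: each item maps to a bit vector over transaction ids, and support counting becomes a bitwise AND plus a popcount. The chapter's specific compression scheme is not reproduced here:

```python
# Vertical association-rule-mining representation (sketch): one bit
# vector per item, with bit t set when transaction t contains the item.

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]

bitvec = {}
for t, items in enumerate(transactions):
    for item in items:
        bitvec[item] = bitvec.get(item, 0) | (1 << t)

def support(itemset):
    v = ~0                       # all-ones mask
    for item in itemset:
        v &= bitvec[item]        # intersect tid bit vectors
    return bin(v & ((1 << len(transactions)) - 1)).count("1")

print(support({"bread", "milk"}))   # -> 2 (transactions 0 and 3)
```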
Publisher: IEEE Comput. Soc
Date: 2002
Publisher: Elsevier BV
Date: 10-2013
Publisher: Elsevier BV
Date: 09-2000
DOI: 10.1016/S0893-6080(00)00047-2
Abstract: The aim of this paper is to study both the theoretical and experimental properties of chaotic neural network (CNN) models for solving combinatorial optimization problems. Previously we have proposed a unifying framework which encompasses the three main model types, namely, Chen and Aihara's chaotic simulated annealing (CSA) with decaying self-coupling, Wang and Smith's CSA with decaying timestep, and the Hopfield network with chaotic noise. Each of these models can be represented as a special case under the framework for certain conditions. This paper combines the framework with experimental results to provide new insights into the effect of the chaotic neurodynamics of each model. By solving the N-queen problem of various sizes with computer simulations, the CNN models are compared in different parameter spaces, with optimization performance measured in terms of feasibility, efficiency, robustness and scalability. Furthermore, characteristic chaotic neurodynamics crucial to effective optimization are identified, together with a guide to choosing the corresponding model parameters.
Publisher: Springer Science and Business Media LLC
Date: 02-04-2012
DOI: 10.1007/S00285-011-0419-3
Abstract: The transcription factors PU.1 and GATA-1 are known to be important in the development of blood progenitor cells. Specifically they are thought to regulate the differentiation of progenitor cells into the granulocyte/macrophage lineage and the erythrocyte/megakaryocyte lineage. While several mathematical models have been proposed to investigate the interaction between the transcription factors in recent years, there is still debate about the nature of the progenitor state in the dynamical system, and whether the existing models adequately capture new knowledge about the interactions gleaned from experimental data. Further, the models utilise different formalisms to represent the genetic regulation, and it appears that the resulting dynamical system depends upon which formalism is adopted. In this paper we analyse the four existing models, and propose an alternative model which is shown to demonstrate a rich variety of dynamical systems behaviours found across the existing models, including both bistability and tristability required for modelling the undifferentiated progenitors.
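A commonly used Hill-function form for two mutually antagonistic, self-activating transcription factors gives the flavour of the models being compared; the equations and parameters below are illustrative, not the paper's proposed variant:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic two-factor switch: each factor activates itself and represses
# the other. With these parameters the system is tristable: two
# differentiated states plus a balanced "progenitor-like" state.

a, b, k = 1.0, 1.0, 1.0   # autoactivation, cross-inhibition, decay rates
n = 4                      # Hill coefficient

def rhs(t, z):
    x, y = z   # x ~ PU.1 level, y ~ GATA-1 level (toy units)
    dx = a * x**n / (1 + x**n) + b / (1 + y**n) - k * x
    dy = a * y**n / (1 + y**n) + b / (1 + x**n) - k * y
    return [dx, dy]

for x0, y0 in [(2.0, 0.1), (0.1, 2.0), (1.0, 1.0)]:
    sol = solve_ivp(rhs, (0, 100), [x0, y0], rtol=1e-8)
    print((x0, y0), "->", np.round(sol.y[:, -1], 3))
```

The three initial conditions settle into three distinct attractors, which is exactly the bistability-versus-tristability question the paper examines across formalisms.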
Publisher: IEEE
Date: 2005
DOI: 10.1109/IAT.2005.57
Publisher: Elsevier BV
Date: 06-2009
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 05-2022
Publisher: Springer Science and Business Media LLC
Date: 10-2013
Publisher: World Scientific Pub Co Pte Lt
Date: 07-2016
DOI: 10.1142/S0219467816500145
Abstract: Projectors are deployed in increasingly demanding environments. The fidelity of the projected image as seen by a user is compromised when projectors are deployed in dual-planar environments (e.g., the corner of a room or an office cubicle), thereby diminishing the richness of the user experience. There are many reasons for this. The focus of this paper is to compensate for the global illumination effects due to inter-reflection of light. In the process we also correct geometry distortion. Our system is built from off-the-shelf components and easily deployable without any elaborate setup. In this paper, we describe two complementary methods to compensate for global illumination effects in dual-planar environments. Our methods are based on the systematic adaptation and interpretation of the classical radiosity equation in the image domain. The technique neither assumes nor computes 3D scene geometry, relying instead on an implicit inference. The system is calibrated once in an off-line mode; in our first method, corrected images and video are then computed in real time, while in our second method, a richer scene is offered with a modest increase in computational time. The corrected images when projected have better contrast and are more appealing to the user.
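In the image domain, the compensation idea reduces to a linear solve: if the observed image is (I + F)p for projected image p and an inter-reflection operator F estimated at calibration time, then projecting p = (I + F)^{-1} d yields the desired image d. The toy random F below is an assumption for illustration, not the paper's estimated transport:

```python
import numpy as np

# Image-domain compensation sketch: invert a linear light-transport
# model so that the camera-observed image matches the desired one.

rng = np.random.default_rng(3)
npix = 100
F = 0.1 * rng.random((npix, npix)) / npix   # weak inter-reflection (toy)
desired = rng.random(npix)                  # image we want the user to see

p = np.linalg.solve(np.eye(npix) + F, desired)   # compensated projection
observed = (np.eye(npix) + F) @ p                # what the camera would see

print("max error:", np.abs(observed - desired).max())
```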
Publisher: IEEE
Date: 06-2012
Publisher: Springer International Publishing
Date: 2014
Publisher: Springer Berlin Heidelberg
Date: 2009
Publisher: ACM
Date: 12-08-2007
Publisher: Springer Berlin Heidelberg
Date: 2001
Publisher: Elsevier BV
Date: 04-2002
Publisher: Informa UK Limited
Date: 2003
Publisher: Inderscience Publishers
Date: 2007
Publisher: Springer Science and Business Media LLC
Date: 02-2011
Publisher: Springer International Publishing
Date: 2016
Publisher: MDPI AG
Date: 19-03-2021
DOI: 10.3390/A14030095
Abstract: Various criteria and algorithms can be used for clustering, leading to very distinct outcomes and potential biases towards datasets with certain structures. More generally, the selection of the most effective algorithm to be applied for a given dataset, based on its characteristics, is a problem that has been largely studied in the field of meta-learning. Recent advances in the form of a new methodology known as Instance Space Analysis provide an opportunity to extend such meta-analyses to gain greater visual insights of the relationship between datasets’ characteristics and the performance of different algorithms. The aim of this study is to perform an Instance Space Analysis for the first time for clustering problems and algorithms. As a result, we are able to analyze the impact of the choice of the test instances employed, and the strengths and weaknesses of some popular clustering algorithms, for datasets with different structures.
Publisher: MIT Press
Date: 12-2017
DOI: 10.1162/EVCO_A_00194
Abstract: This article presents a method for the objective assessment of an algorithm’s strengths and weaknesses. Instead of examining the performance of only one or more algorithms on a benchmark set, or generating custom problems that maximize the performance difference between two algorithms, our method quantifies both the nature of the test instances and the algorithm performance. Our aim is to gather information about possible phase transitions in performance, that is, the points in which a small change in problem structure produces algorithm failure. The method is based on the accurate estimation and characterization of the algorithm footprints, that is, the regions of instance space in which good or exceptional performance is expected from an algorithm. A footprint can be estimated for each algorithm and for the overall portfolio. Therefore, we select a set of features to generate a common instance space, which we validate by constructing a sufficiently accurate prediction model. We characterize the footprints by their area and density. Our method identifies complementary performance between algorithms, quantifies the common features of hard problems, and locates regions where a phase transition may lie.
Publisher: Elsevier BV
Date: 05-2017
Publisher: ACM
Date: 15-07-2017
Publisher: IEEE
Date: 12-2015
Publisher: AIP Publishing
Date: 06-05-2014
DOI: 10.1063/1.4875260
Abstract: Various definitions of coherent structures exist in turbulence research, but a common assumption is that coherent structures have correlated spectral phases. As a result, randomization of phases is believed, generally, to remove coherent structures from the measured data. Here, we reexamine these assumptions using atmospheric turbulence measurements. Small-scale coherent structures are detected in the usual way using the wavelet transform. A considerable percentage of the detected structures are not phase correlated, although some of them are clearly organized in space and time. At larger scales, structures have an even higher degree of spatiotemporal coherence but are also associated with weak phase correlation. A series of specific examples is shown to demonstrate this. These results warn about the vague terminology and assumptions around coherent structures, particularly for complex real-world turbulence.
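The phase-randomization operation whose assumptions the paper reexamines is standard and easy to sketch: keep the Fourier amplitudes of a series but scramble its spectral phases, preserving the power spectrum while destroying phase correlations:

```python
import numpy as np

# Phase-randomized surrogate of a time series: same Fourier amplitudes,
# uniform random phases.

rng = np.random.default_rng(4)
x = np.cumsum(rng.standard_normal(1024))      # toy "turbulence" signal

spec = np.fft.rfft(x)
phases = rng.uniform(0, 2 * np.pi, spec.size)
phases[0] = 0.0      # keep the DC component real
phases[-1] = 0.0     # keep the Nyquist bin real (even-length series)
surrogate = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

# Same power spectrum, scrambled phases:
print(np.allclose(np.abs(np.fft.rfft(surrogate)), np.abs(spec)))
```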
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 03-2012
Publisher: Springer Science and Business Media LLC
Date: 16-05-2006
Publisher: Elsevier BV
Date: 08-2014
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 1996
DOI: 10.1109/72.548187
Abstract: In this paper, a distinction is drawn between research which assesses the suitability of the Hopfield network for solving the travelling salesman problem (TSP) and research which attempts to determine the effectiveness of the Hopfield network as an optimization technique. It is argued that the TSP is generally misused as a benchmark for the latter goal, with the existence of an alternative linear formulation giving rise to unreasonable comparisons.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 1997
DOI: 10.1109/49.552073
Publisher: IEEE
Date: 04-2009
Publisher: ACM
Date: 26-10-2008
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 04-2017
Publisher: American Geophysical Union (AGU)
Date: 02-2022
DOI: 10.1029/2021WR030266
Abstract: Studies in Real-Time Control (RTC) of Rainwater Harvesting (RWH) systems have to date been limited to the control of single storages, leaving the potential benefits of operating multiple storages in a coordinated manner largely untested. In this study, we aimed to design an optimization-based RTC strategy that can operate multiple storages in a coordinated manner to achieve multiple objectives. We modeled the long-term performance of this coordinated approach (termed coordinated control) across a range of storage sizes and compared it with a strategy that optimized the operation of each storage individually, ignoring the state of other storages within the system. Our results show that coordinated control delivered a synergy benefit in achieving better baseflow restoration, with almost no detriment to the water supply and flood protection (overflow reduction) performance. The efficiency achieved through coordinated control allows large storages to compensate for smaller, underperforming systems, to achieve higher overall performance. Such a finding suggests a general control principle in building coordination among multiple storages, which can potentially be adapted to mitigate flooding risks, and also applied to other stormwater control measures. This also opens up a new opportunity for practitioners to construct a future “smart rainwater grid” using a network of distributed storages, in combination with centralized large storages, to manage urban stormwater in a range of contexts and for a range of environmental objectives.
Publisher: Elsevier BV
Date: 09-2018
Publisher: IGI Global
Date: 2005
Abstract: Data mining is a process that analyzes voluminous digital data in order to discover hidden but useful patterns. However, the discovery of such hidden patterns has statistical meaning and may often disclose some sensitive information. As a result, privacy becomes one of the prime concerns in the data-mining research community. Since distributed association mining discovers association rules by combining local models from various distributed sites, breaching data privacy happens more often than it does in centralized environments. In this work, we present a methodology that generates association rules without revealing confidential inputs such as statistical properties of individual sites, and yet retains a high level of accuracy in the resultant rules. One of the important outcomes of the proposed technique is that it reduces the overall communication costs. Performance evaluation of our proposed method shows that it reduces the communication cost significantly when we compare it with other well-known, distributed association-rule-mining algorithms. Moreover, the global rule model generated by the proposed method is based on the exact global support of each itemset and hence diminishes the inconsistency that occurs when global models are generated from partial support counts of an itemset.
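One classic building block for computing exact global supports without revealing per-site counts is a secure-sum round, sketched below; the chapter's actual protocol may well differ in detail, so this is illustration only:

```python
import random

# Secure-sum round (sketch): the initiator adds a random mask, each site
# adds its private support count in turn, and the initiator removes the
# mask at the end. Only the exact global support is revealed.

local_supports = [42, 17, 8]            # private per-site counts (toy)
MOD = 10**9

mask = random.randrange(MOD)            # initiator's random mask
total = mask
for s in local_supports:                # each site adds its own count
    total = (total + s) % MOD

global_support = (total - mask) % MOD   # initiator removes the mask
print(global_support)                   # -> 67, no individual count exposed
```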
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2004
Publisher: Elsevier BV
Date: 10-2023
Publisher: IEEE
Date: 05-2015
Publisher: Informa UK Limited
Date: 02-10-2022
Publisher: IGI Global
Date: 10-2005
Abstract: The most critical component of kernel-based learning algorithms is the choice of an appropriate kernel and its optimal parameters. In this paper, we propose a rule-based meta-learning approach for automatic radial basis function (RBF) kernel and its parameter selection for Support Vector Machine (SVM) classification. First, the best parameter selection is considered on the basis of prior information of the data with the help of Maximum Likelihood (ML) method and Nelder-Mead (N-M) simplex method. Then, the new rule-based meta-learning approach is constructed and tested on different sizes of 112 datasets with binary class as well as multi-class classification problems. We observe that our rule-based methodology provides significant improvement of computational time as well as accuracy in some specific cases.
Publisher: IEEE
Date: 10-2013
Publisher: Springer International Publishing
Date: 2018
Publisher: Elsevier BV
Date: 04-2013
Publisher: IEEE
Date: 2005
DOI: 10.1109/WI.2005.70
Publisher: Springer Science and Business Media LLC
Date: 2014
Publisher: Elsevier BV
Date: 2023
Publisher: IEEE
Date: 06-2012
Publisher: Springer Berlin Heidelberg
Date: 2011
Publisher: Elsevier BV
Date: 07-2007
Publisher: IEEE
Date: 06-2008
Publisher: IEEE
Date: 12-2013
Publisher: Informa UK Limited
Date: 11-2002
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2022
Publisher: American Chemical Society (ACS)
Date: 20-07-2011
DOI: 10.1021/AC201337E
Abstract: In industry as well as many areas of scientific research, data collected often contain a number of responses of interest for a chosen set of exploratory variables. Optimization of such multivariable multiresponse systems is a challenge well suited to genetic algorithms as global optimization tools. One such example is the optimization of coating surfaces with the required absolute and relative sensitivity for detecting analytes using devices such as sensor arrays. High-throughput synthesis and screening methods can be used to accelerate materials discovery and optimization; however, an important practical consideration for successful optimization of materials for arrays and other applications is the ability to generate adequate information from a minimum number of experiments. Here we present a case study to evaluate the efficiency of a novel evolutionary model-based multiresponse approach (EMMA) that enables the optimization of a coating while minimizing the number of experiments. EMMA plans the experiments and simultaneously models the material properties. We illustrate this novel procedure for materials optimization by testing the algorithm on a sol-gel synthetic route for production and optimization of a well-studied amino-methyl-silane coating. The response variables of the coating have been optimized based on application criteria for micro- and macro-array surfaces. Spotting performance has been monitored using a fluorescent dye molecule for demonstration purposes and measured using a laser scanner. Optimization is achieved by exploring less than 2% of the possible experiments, resulting in identification of the most influential compositional variables. Use of EMMA to optimize control factors of a product or process is illustrated, and the proposed approach is shown to be a promising tool for simultaneously optimizing and modeling multivariable multiresponse systems.
Publisher: Elsevier BV
Date: 05-2014
Publisher: Elsevier BV
Date: 10-2017
Publisher: American Meteorological Society
Date: 27-02-2014
Abstract: Time series are characterized by a myriad of different shapes and structures. A number of events that appear in atmospheric time series result from as yet unidentified physical mechanisms. This is particularly the case for stable boundary layers, where the usual statistical turbulence approaches do not work well and increasing evidence relates the bulk of their dynamics to generally unknown individual events. This study explores the possibility of extracting and classifying events from time series without previous knowledge of their generating mechanisms. The goal is to group large numbers of events in a useful way that will open a pathway for the detailed study of their characteristics, and help to gain understanding of events with previously unknown origin. A two-step method is developed that extracts events from background fluctuations and groups dynamically similar events into clusters. The method is tested on artificial time series with different levels of complexity and on atmospheric turbulence time series. The results indicate that the method successfully recognizes and classifies various events of unknown origin and even distinguishes different physical characteristics based only on a single-variable time series. The method is simple and highly flexible, and it does not assume any knowledge about the shape geometries, amplitudes, or underlying physical mechanisms. Therefore, with proper modifications, it can be applied to time series from a wider range of research areas.
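A highly simplified version of the two-step idea (threshold-based extraction, then clustering of rescaled event shapes) can be sketched as follows; the injected events, the amplitude threshold, and the k-means grouping are toy stand-ins for the paper's detection and similarity machinery:

```python
import numpy as np
from sklearn.cluster import KMeans

# Step 0: synthetic series with toy "events" buried in noise.
rng = np.random.default_rng(5)
x = rng.standard_normal(5000) * 0.2
for start in rng.choice(4900, 30, replace=False):
    x[start:start + 50] += np.hanning(50) * rng.choice([2.0, -2.0])

# Step 1: extract fixed-width windows around threshold exceedances.
idx = np.flatnonzero(np.abs(x) > 1.0)
events, last = [], -100
for i in idx:
    if i - last > 50 and 25 <= i < len(x) - 25:
        events.append(x[i - 25:i + 25])
        last = i

# Step 2: normalize the shapes and group them into clusters.
E = np.array([e / np.max(np.abs(e)) for e in events])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(E)
print(len(events), "events, cluster sizes:", np.bincount(labels))
```

Even this crude version separates upward from downward events by shape alone, with no assumptions about their generating mechanism, which is the property the paper exploits.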
Publisher: Springer Berlin Heidelberg
Date: 2008
Publisher: Springer Berlin Heidelberg
Date: 2011
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 08-2008
Publisher: Elsevier BV
Date: 2015
Publisher: Elsevier BV
Date: 06-2009
Publisher: IEEE
Date: 06-2011
Publisher: Elsevier BV
Date: 05-2003
Publisher: IEEE Comput. Soc
Date: 2002
Publisher: Springer Science and Business Media LLC
Date: 26-02-2017
Publisher: Association for Computing Machinery (ACM)
Date: 02-03-2023
DOI: 10.1145/3572895
Abstract: Instance Space Analysis (ISA) is a recently developed methodology to (a) support objective testing of algorithms and (b) assess the diversity of test instances. Representing test instances as feature vectors, the ISA methodology extends Rice’s 1976 Algorithm Selection Problem framework to enable visualization of the entire space of possible test instances, and gain insights into how algorithm performance is affected by instance properties. Rather than reporting algorithm performance on average across a chosen set of test problems, as is standard practice, the ISA methodology offers a more nuanced understanding of the unique strengths and weaknesses of algorithms across different regions of the instance space that may otherwise be hidden on average. It also facilitates objective assessment of any bias in the chosen test instances and provides guidance about the adequacy of benchmark test suites. This article is a comprehensive tutorial on the ISA methodology that has been evolving over several years, and includes details of all algorithms and software tools that are enabling its worldwide adoption in many disciplines. A case study comparing algorithms for university timetabling is presented to illustrate the methodology and tools.
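The flavour of the ISA pipeline, reduced to its bare bones: embed instance feature vectors in a 2-D space and examine where each algorithm wins. The real methodology uses a purpose-built optimal projection, prediction models, and footprint area/density measures; plain PCA and the synthetic data here are only stand-ins:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
features = rng.standard_normal((300, 8))            # instances x features
# Toy performance: algorithm 1 wins where the first feature is large.
winner = (features[:, 0] + 0.3 * rng.standard_normal(300) > 0).astype(int)

space = PCA(n_components=2).fit_transform(features)  # 2-D instance space
for algo in (0, 1):
    region = space[winner == algo]
    print(f"algorithm {algo}: {len(region)} instances, "
          f"centroid {np.round(region.mean(axis=0), 2)}")
```

Plotting `space` coloured by `winner` would show the two regions directly; the separated centroids above are the numeric trace of the same structure.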
Publisher: IEEE
Date: 03-2009
Publisher: Elsevier BV
Date: 02-2003
Publisher: Informa UK Limited
Date: 22-05-2018
Publisher: Wiley
Date: 07-01-2015
DOI: 10.1002/QJ.2501
Publisher: Elsevier BV
Date: 04-2017
Publisher: Inderscience Publishers
Date: 2005
Publisher: Elsevier BV
Date: 10-2010
Publisher: IEEE
Date: 11-2011
Publisher: Springer Berlin Heidelberg
Date: 2006
DOI: 10.1007/11734628_14
Publisher: Springer Science and Business Media LLC
Date: 21-11-2020
Publisher: Elsevier BV
Date: 10-1996
Publisher: Nanyang Technol. Univ
Date: 2002
Publisher: Oxford University Press (OUP)
Date: 20-09-2004
Publisher: Informa UK Limited
Date: 05-2014
Publisher: IEEE
Date: 10-2012
Publisher: Springer Science and Business Media LLC
Date: 28-12-2017
Publisher: Elsevier BV
Date: 06-1996
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2008
DOI: 10.1109/TPAMI.2008.8
Publisher: IEEE
Date: 2003
Publisher: Springer Science and Business Media LLC
Date: 16-12-2012
Publisher: ACM
Date: 19-10-2009
Publisher: Elsevier BV
Date: 04-2005
Publisher: Springer Berlin Heidelberg
Date: 2010
Publisher: Elsevier BV
Date: 2023
Publisher: Association for Computing Machinery (ACM)
Date: 15-01-2008
Abstract: The algorithm selection problem [Rice 1976] seeks to answer the question: Which algorithm is likely to perform best for my problem? Recognizing the problem as a learning task in the early 1990s, the machine learning community developed the field of meta-learning, focused on learning about learning algorithm performance on classification problems. But there has been only limited generalization of these ideas beyond classification, and many related attempts have been made in other disciplines (such as AI and operations research) to tackle the algorithm selection problem in different ways, introducing different terminology, and overlooking the similarities of approaches. In this sense, there is much to be gained from a greater awareness of developments in meta-learning, and how these ideas can be generalized to learn about the behaviors of other (nonlearning) algorithms. In this article we present a unified framework for considering the algorithm selection problem as a learning problem, and use this framework to tie together the cross-disciplinary developments in tackling the algorithm selection problem. We discuss the generalization of meta-learning concepts to algorithms focused on tasks including sorting, forecasting, constraint satisfaction, and optimization, and the extension of these ideas to bioinformatics, cryptography, and other fields.
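Reduced to code, the framework amounts to learning a mapping from problem-instance meta-features to the best-performing algorithm. The features and labels below are synthetic placeholders, not any benchmark from the article:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
features = rng.standard_normal((500, 6))     # instance meta-features
# Toy label: which of two algorithms "wins" on each instance.
best_algo = (features[:, 0] * features[:, 1] > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(features, best_algo, random_state=0)
selector = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print("selection accuracy:", selector.score(Xte, yte))
```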
Publisher: Elsevier BV
Date: 10-2005
Publisher: Elsevier BV
Date: 2013
Publisher: Springer International Publishing
Date: 2022
Publisher: Wiley
Date: 26-05-2009
DOI: 10.1111/J.1365-2753.2009.01167.X
Abstract: A literature review revealed that little is known about the systems context of general practice consultations and their outcomes. To describe the systems context and resulting underlying patterns of primary care consultations in a local area. Cross-sectional multi-practice study based on a three-part questionnaire. Cluster analysis of data. Stratified random sample of general practices and general practitioners, NSW Central Coast, Australia. A total of 1104 adults attending 12 general practitioners between February and November 1999. The study identified seven subgroups within the study population uniquely defined by variables from the health system, individual doctor and patient, consultation and consultation outcomes domains. A systems approach provides a framework in which to track and consider the important variables and their known and/or expected workings and thus offer a contextual framework to guide primary care reform.
Publisher: Elsevier BV
Date: 03-2012
DOI: 10.1016/J.SCR.2011.11.001
Abstract: Pluripotency is a cellular state of multiple options. Here, we highlight the potential for self-organization to contribute to stem cell fate computation. A new way of considering regulatory circuitry is presented that describes the expression of each transcription factor (TF) as a branching process that propagates through time, interacting and competing with others. In a single cell, the interactions between multiple branching processes generate a collective process called 'critical-like self-organization'. We explain how this phenomenon provides a valid description of whole genome regulatory circuit dynamics. The hypothesis of exploratory stem cell decision-making proposes that critical-like self-organization (also called rapid self-organized criticality) provides the backbone for cell fate computation in regulative embryos and pluripotent stem cells. Unspecific amplification of TF expression is predicted to initiate this self-organizing circuitry, where cascades of gene expression propagate and may interact either synergistically or antagonistically. The emergent and highly dynamic circuitry is affected by various sources of selection pressure, such as the expression of TFs with disproportionate influence over other genes, and extrinsic biological and physical stimuli that differentially modulate particular gene expression cascades. Extrinsic conditions continuously trigger waves of transcription that ripple throughout regulatory networks on multiple spatiotemporal scales, providing the context within which circuitry self-organizes. In this framework, a distinction between instructive and selective mechanisms of fate determination is misleading because it is the 'interference pattern', rather than any single instructing or selecting factor, that is ultimately responsible for computing and directing cell fate. Using this framework, we consider whether the idea of a naïve ground state of pluripotency and that of a fluctuating transcriptome are compatible, and whether a ground state like that captured in vitro could exist in vivo.
Publisher: IEEE
Date: 07-2010
Publisher: Nanyang Technol. Univ
Date: 2002
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 07-1999
DOI: 10.1109/72.774279
Abstract: As an attempt to provide an organized way to study the chaotic structures and their effects in solving combinatorial optimization with chaotic neural networks (CNNs), a unifying framework is proposed to serve as a basis where the existing CNN models can be placed and compared. The key of this proposed framework is the introduction of an extra energy term into the computational energy of the Hopfield model, which takes on different forms for different CNN models, and modifies the original Hopfield energy landscape in various manners. Three CNN models, namely the Chen and Aihara model with self-feedback chaotic simulated annealing (CSA), the Wang and Smith model with timestep CSA, and the chaotic noise model, are chosen as examples to show how they can be classified and compared within the proposed framework.
Publisher: Springer Berlin Heidelberg
Date: 2010
Publisher: IEEE
Date: 2013
Publisher: Institute for Operations Research and the Management Sciences (INFORMS)
Date: 06-2013
Abstract: Generating valid synthetic instances for branch problems—those that contain a core problem like knapsack or graph coloring, but add several complications—is hard. It is even harder to generate instances that are applicable to the specific goals of an experiment and help to support the claims made. This paper presents a methodology for tuning instance generators of branch problems so that synthetic instances are similar to real ones and are capable of eliciting different behaviors from solvers. A statistic is proposed to summarize the applicability of instances for drawing a valid conclusion. The methodology is demonstrated on the Udine timetabling problem. Examples and the necessary cyberinfrastructure are available as a project from Computational Infrastructure for Operations Research (COIN-OR).
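The tuning loop at the heart of the methodology can be sketched generically: adjust a generator's parameters until the feature vector of its synthetic instances matches that of real ones. The generator, the features, and the target below are toy placeholders for the paper's Udine-timetabling machinery:

```python
import numpy as np
from scipy.optimize import minimize

# Generator-tuning sketch: minimize the distance between the feature
# vector of generated instances and a target measured on real instances.

rng = np.random.default_rng(8)
target_features = np.array([0.6, 2.0])   # stand-in for real-instance features

def generate(params, n=200):
    density, scale = params
    inst = rng.random(n) < density        # a fake "instance"
    return np.array([inst.mean(), scale * inst.std()])

def mismatch(params):
    return np.linalg.norm(generate(params) - target_features)

res = minimize(mismatch, x0=[0.5, 1.0], method="Nelder-Mead")
print("tuned generator parameters:", np.round(res.x, 3))
```

Note the objective is stochastic (a fresh instance per evaluation), so in practice one would average several generations per parameter setting before comparing features.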
Start Date: 07-2009
End Date: 07-2012
Amount: $245,000.00
Funder: Australian Research Council
Start Date: 01-2012
End Date: 09-2017
Amount: $365,000.00
Funder: Australian Research Council
Start Date: 2004
End Date: 04-2007
Amount: $237,466.00
Funder: Australian Research Council
Start Date: 07-2013
End Date: 12-2016
Amount: $240,000.00
Funder: Australian Research Council
Start Date: 12-2004
End Date: 08-2008
Amount: $70,668.00
Funder: Australian Research Council
Start Date: 09-2017
End Date: 09-2019
Amount: $204,000.00
Funder: Australian Research Council
Start Date: 2002
End Date: 12-2005
Amount: $234,967.00
Funder: Australian Research Council
Start Date: 12-2004
End Date: 06-2008
Amount: $70,668.00
Funder: Australian Research Council
Start Date: 2008
End Date: 12-2008
Amount: $200,000.00
Funder: Australian Research Council
Start Date: 04-2015
End Date: 12-2019
Amount: $421,276.00
Funder: Australian Research Council
Start Date: 09-2021
End Date: 09-2027
Amount: $4,861,236.00
Funder: Australian Research Council
Start Date: 06-2014
End Date: 12-2021
Amount: $20,000,000.00
Funder: Australian Research Council
Start Date: 01-2004
End Date: 12-2003
Amount: $20,000.00
Funder: Australian Research Council
Start Date: 12-2004
End Date: 12-2010
Amount: $2,250,000.00
Funder: Australian Research Council
Start Date: 2003
End Date: 06-2009
Amount: $5,208,295.00
Funder: Australian Research Council
Start Date: 12-2014
End Date: 12-2020
Amount: $2,830,000.00
Funder: Australian Research Council