ORCID Profile
0000-0002-4487-6923
Current Organisation
Australian National University
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Programming Languages | Computer Software | Software Engineering | Computer System Architecture | Computer System Security | Concurrent Programming
Expanding Knowledge in the Information and Computing Sciences | Computer Software and Services not elsewhere classified | Expanding Knowledge in Technology | Application Tools and System Utilities
Publisher: Wiley
Date: 10-04-2000
DOI: 10.1002/(SICI)1097-024X(20000410)30:4<293::AID-SPE300>3.0.CO;2-Y
Publisher: Association for Computing Machinery (ACM)
Date: 31-10-1992
Publisher: ACM
Date: 08-04-2017
Publisher: Association for Computing Machinery (ACM)
Date: 05-06-2010
Abstract: Managed languages such as Java and C# are being considered for use in hard real-time systems. A hurdle to their widespread adoption is the lack of garbage collection algorithms that offer predictable space-and-time performance in the face of fragmentation. We introduce SCHISM/CMR, a new concurrent and real-time garbage collector that is fragmentation tolerant and guarantees time-and-space worst-case bounds while providing good throughput. SCHISM/CMR combines mark-region collection of fragmented objects and arrays (arraylets) with separate replication-copying collection of immutable arraylet spines, so as to cope with external fragmentation when running in small heaps. We present an implementation of SCHISM/CMR in the Fiji VM, a high-performance Java virtual machine for mission-critical systems, along with a thorough experimental evaluation on a wide variety of architectures, including server-class and embedded systems. The results show that SCHISM/CMR tolerates fragmentation better than previous schemes, with a much more acceptable throughput penalty.
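The arraylet layout described in this abstract can be illustrated with a small sketch: a large array is split into fixed-size chunks referenced from a spine, so the chunks need not be contiguous in the heap. The class name, chunk size, and int element type below are assumptions for illustration, not the actual SCHISM/CMR or Fiji VM representation.

```java
// Illustrative arraylet-style indexing (assumed names and sizes, not the
// real SCHISM/CMR layout). The spine holds references to fixed-size
// chunks ("arraylets"), tolerating external fragmentation because the
// chunks can live anywhere in the heap.
public class ArrayletSketch {
    static final int ARRAYLET_SIZE = 4; // elements per chunk (tiny for demo)

    final int[][] spine;  // immutable after construction
    final int length;

    ArrayletSketch(int length) {
        this.length = length;
        int chunks = (length + ARRAYLET_SIZE - 1) / ARRAYLET_SIZE;
        spine = new int[chunks][ARRAYLET_SIZE];
    }

    int get(int i) { return spine[i / ARRAYLET_SIZE][i % ARRAYLET_SIZE]; }

    void set(int i, int v) { spine[i / ARRAYLET_SIZE][i % ARRAYLET_SIZE] = v; }

    public static void main(String[] args) {
        ArrayletSketch a = new ArrayletSketch(10);
        for (int i = 0; i < 10; i++) a.set(i, i * i);
        System.out.println(a.get(9)); // prints 81; element 9 lives in spine[2][1]
    }
}
```

Because the spine is immutable once built, a replicating collector can copy it without synchronizing with mutator writes — which is the property the abstract's separate spine collection relies on.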
Publisher: ACM
Date: 19-10-2016
Publisher: ACM
Date: 04-06-2011
Publisher: ACM
Date: 03-06-2015
Publisher: Association for Computing Machinery (ACM)
Date: 10-1999
Abstract: We describe how reachability-based orthogonal persistence can be supported even in uncooperative implementations of languages such as C++ and Modula-3, and without modification to the compiler. Our scheme extends Bartlett's mostly-copying garbage collector to manage both transient objects and resident persistent objects, and to compute the reachability closure necessary for stabilization of the persistent heap. It has been implemented in our prototype of reachability-based persistence for Modula-3, yielding performance competitive with that of comparable, but non-orthogonal, persistent variants of C++. Experimental results, using the OO7 object database benchmarks, reveal that the mostly-copying approach offers a straightforward path to efficient orthogonal persistence in these uncooperative environments. The results also characterize the performance of persistence implementations based on virtual memory protection primitives.
Publisher: CRC Press
Date: 19-08-2011
Publisher: ACM
Date: 13-06-2007
Publisher: Association for Computing Machinery (ACM)
Date: 04-10-2009
Abstract: The indirection of object accesses is a common theme for target domains as diverse as transparent distribution, persistence, and program instrumentation. Virtualizing accesses to fields and methods (by redirecting calls through accessor and indirection methods) allows interposition of arbitrary code, extending the functionality of an application beyond that intended by the original developer. We present class modifications performed by our RuggedJ transparent distribution platform for standard Java virtual machines. RuggedJ abstracts over the location of objects by implementing a single object model for local and remote objects. However, the implementation of this model is complicated by the presence of native and system code: classes loaded by Java's bootstrap class loader can be rewritten only in a limited manner, and so cannot be modified to conform to RuggedJ's complex object model. We observe that system code comprises the majority of a given Java application: an average of 76% in the applications we study. We consider the constraints imposed upon pervasive class transformation within Java, and present a framework for systematically rewriting arbitrary applications. Our system accommodates all system classes, allowing both user and system classes alike to be referenced using a single object model.
Publisher: Association for Computing Machinery (ACM)
Date: 12-1993
Abstract: Many operating systems allow user programs to specify the protection level (inaccessible, read-only, read-write) of pages in their virtual memory address space, and to handle any protection violations that may occur. Such page-protection techniques have been exploited by several user-level algorithms for applications including generational garbage collection and persistent stores. Unfortunately, modern hardware has made efficient handling of page protection faults more difficult. Moreover, page-sized granularity may not match the natural granularity of a given application. In light of these problems, we reevaluate the usefulness of page-protection primitives in such applications, by comparing the performance of implementations that make use of the primitives with others that do not. Our results show that for certain applications software solutions outperform solutions that rely on page-protection or other related virtual memory primitives.
Publisher: Elsevier BV
Date: 08-2005
Publisher: ACM
Date: 24-08-2011
Publisher: Association for Computing Machinery (ACM)
Date: 07-10-2004
Abstract: Tracing garbage collectors traverse references from live program variables, transitively tracing out the closure of live objects. Memory accesses incurred during tracing are essentially random: a given object may contain references to any other object. Since application heaps are typically much larger than hardware caches, tracing results in many cache misses. Technology trends will make cache misses more important, so tracing is a prime target for prefetching. Simulation of Java benchmarks running with the Boehm-Demers-Weiser mark-sweep garbage collector for a projected hardware platform reveals high tracing overhead (up to 65% of elapsed time), and that cache misses are a problem. Applying Boehm's default prefetching strategy yields improvements in execution time (16% on average with incremental/generational collection for GC-intensive benchmarks), but analysis shows that his strategy suffers from significant timing problems: prefetches that occur too early or too late relative to their matching loads. This analysis drives development of a new prefetching strategy that yields up to three times the performance improvement of Boehm's strategy for GC-intensive benchmarks (27% average speedup), and achieves performance close to that of perfect timing (i.e., few misses for tracing accesses) on some benchmarks. Validating these simulation results with live runs on current hardware produces an average speedup of 6% for the new strategy on GC-intensive benchmarks with a GC configuration that tightly controls heap growth. In contrast, Boehm's default prefetching strategy is ineffective on this platform.
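The prefetch-timing idea in this abstract can be sketched as a tracing loop with a FIFO prefetch buffer: a prefetch is issued when an object is discovered, and the object is scanned only after it has sat in the buffer for a while, giving the prefetch time to complete. Java has no prefetch instruction, so `issuePrefetch` below is a hypothetical stand-in, and the string-keyed heap graph is an illustrative assumption, not the paper's implementation.

```java
// Sketch of FIFO-buffered prefetching during GC tracing (hypothetical
// names; issuePrefetch stands in for a hardware prefetch hint).
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class PrefetchTrace {
    static void issuePrefetch(String obj) { /* hardware prefetch hint here */ }

    // graph: object -> outgoing references; roots: starting objects
    static Set<String> trace(Map<String, List<String>> graph, List<String> roots) {
        Set<String> marked = new HashSet<>();
        ArrayDeque<String> fifo = new ArrayDeque<>(); // prefetch buffer
        for (String r : roots)
            if (marked.add(r)) { issuePrefetch(r); fifo.add(r); }
        while (!fifo.isEmpty()) {
            // By dequeue time, the prefetch issued at enqueue time has
            // (ideally) already brought this object into cache.
            String obj = fifo.poll();
            for (String ref : graph.getOrDefault(obj, List.of()))
                if (marked.add(ref)) { issuePrefetch(ref); fifo.add(ref); }
        }
        return marked;
    }

    public static void main(String[] args) {
        Map<String, List<String>> heap = Map.of(
            "a", List.of("b", "c"),
            "b", List.of("c"),
            "c", List.of());
        System.out.println(trace(heap, List.of("a"))); // a, b, c in some order
    }
}
```

The timing problem the abstract describes is visible in this structure: if the buffer is too short the prefetch arrives late; if too long, the line may be evicted before use.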
Publisher: Elsevier BV
Date: 12-2006
Publisher: Association for Computing Machinery (ACM)
Date: 12-10-2005
Abstract: A future is a simple and elegant abstraction that allows concurrency to be expressed, often through a relatively small rewrite of a sequential program. In the absence of side-effects, futures serve as benign annotations that mark potentially concurrent regions of code. Unfortunately, when computation relies heavily on mutation, as is the case in Java, their meaning is less clear, and much of their intended simplicity is lost. This paper explores the definition and implementation of safe futures for Java. One can think of safe futures as truly transparent annotations on method calls, which designate opportunities for concurrency. Serial programs can be made concurrent simply by replacing standard method calls with future invocations. Most significantly, even though some parts of the program are executed concurrently and may indeed operate on shared data, the semblance of serial execution is nonetheless preserved. Thus, program reasoning is simplified since data dependencies present in a sequential program are not violated in a version augmented with safe futures. Besides presenting a programming model and API for safe futures, we formalize the safety conditions that must be satisfied to ensure equivalence between a sequential Java program and its future-annotated counterpart. A detailed implementation study is also provided. Our implementation exploits techniques such as object versioning and task revocation to guarantee necessary safety conditions. We also present an extensive experimental evaluation of our implementation to quantify overheads and limitations. Our experiments indicate that for programs with modest mutation rates on shared data, applications can use futures to profitably exploit parallelism, without sacrificing safety.
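The programming model described here — making a serial program concurrent by replacing a standard call with a future invocation — can be illustrated with ordinary java.util.concurrent futures. Note that standard futures do not provide the paper's safety guarantees in the presence of shared mutation; this sketch only shows the annotation style, with an illustrative `expensive` method standing in for a long-running call.

```java
// Illustration of the future-invocation rewrite using standard
// java.util.concurrent (NOT the paper's safe-futures API, which
// additionally preserves serial semantics under shared mutation).
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureSketch {
    static int expensive(int n) { // stand-in for a long-running computation
        int s = 0;
        for (int i = 1; i <= n; i++) s += i;
        return s;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Serial version:           int x = expensive(1000);
        // Future-annotated version:
        Future<Integer> x = pool.submit(() -> expensive(1000));
        int other = expensive(10);            // runs concurrently with x
        System.out.println(x.get() + other);  // prints 500555
        pool.shutdown();
    }
}
```

With side-effect-free calls like this one, the rewrite is benign; the paper's contribution is making the same rewrite safe when the futurized call and its continuation both mutate shared objects.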
Publisher: ACM
Date: 10-06-2006
Publisher: Oxford University Press (OUP)
Date: 03-02-2023
Abstract: The majority of NSTEMI burden resides outside high-income countries (HICs). We describe presentation, care, and outcomes of NSTEMI by country income classification. Prospective cohort study including 2947 patients with NSTEMI from 287 centres in 59 countries, stratified by World Bank country income classification. Quality of care was evaluated based on 12 guideline-recommended care interventions. The all-or-none scoring composite performance measure was used to define receipt of optimal care. Outcomes included in-hospital acute heart failure, stroke/transient ischaemic attack, and death, and 30-day mortality. Patients admitted with NSTEMI in low to lower-middle-income countries (LLMICs), compared with patients in HICs, were younger, more commonly diabetic, and current smokers, but with a lower burden of other comorbidities, and 76.7% met very high risk criteria for an immediate invasive strategy. Invasive coronary angiography use increased with ascending income classification (LLMICs, 79.2%; upper-middle-income countries [UMICs], 83.7%; HICs, 91.0%), but overall care quality did not (≥80% of eligible interventions achieved: LLMICs, 64.8%; UMICs, 69.6%; HICs, 55.1%). Rates of acute heart failure (LLMICs, 21.3%; UMICs, 12.1%; HICs, 6.8%; P < 0.001), stroke/transient ischaemic attack (LLMICs, 2.5%; UMICs, 1.5%; HICs, 0.9%; P = 0.04), in-hospital mortality (LLMICs, 3.6%; UMICs, 2.8%; HICs, 1.0%; P < 0.001), and 30-day mortality (LLMICs, 4.9%; UMICs, 3.9%; HICs, 1.5%; P < 0.001) exhibited an inverse economic gradient. Patients with NSTEMI in LLMICs present with fewer comorbidities but a more advanced stage of acute disease, and have worse outcomes compared with HICs. A cardiovascular health narrative is needed to address this inequity across economic boundaries.
Publisher: ACM
Date: 10-06-2006
Publisher: ACM
Date: 04-10-2009
Publisher: Wiley
Date: 2006
DOI: 10.1002/CPE.1008
Publisher: Association for Computing Machinery (ACM)
Date: 13-10-2016
DOI: 10.1145/2983574
Abstract: An unsound claim can misdirect a field, encouraging the pursuit of unworthy ideas and the abandonment of promising ideas. An inadequate description of a claim can make it difficult to reason about the claim, for example, to determine whether the claim is sound. Many practitioners will acknowledge the threat of unsound claims or inadequate descriptions of claims to their field. We believe that this situation is exacerbated, and even encouraged, by the lack of a systematic approach to exploring, exposing, and addressing the source of unsound claims and poor exposition. This article proposes a framework that identifies three sins of reasoning that lead to unsound claims and two sins of exposition that lead to poorly described claims and evaluations. Sins of exposition obfuscate the objective of determining whether or not a claim is sound, while sins of reasoning lead directly to unsound claims. Our framework provides practitioners with a principled way of critiquing the integrity of their own work and the work of others. We hope that this will help individuals conduct better science and encourage a cultural shift in our research community to identify and promulgate sound claims.
Publisher: Association for Computing Machinery (ACM)
Date: 15-06-2012
Abstract: Read and write barriers mediate access to the heap allowing the collector to control and monitor mutator actions. For this reason, barriers are a powerful tool in the design of any heap management algorithm, but the prevailing wisdom is that they impose significant costs. However, changes in hardware and workloads make these costs a moving target. Here, we measure the cost of a range of useful barriers on a range of modern hardware and workloads. We confirm some old results and overturn others. We evaluate the microarchitectural sensitivity of barrier performance and the differences among benchmark suites. We also consider barriers in context, focusing on their behavior when used in combination, and investigate a known pathology and evaluate solutions. Our results show that read and write barriers have average overheads as low as 5.4% and 0.9% respectively. We find that barrier overheads are more exposed on the workload provided by the modern DaCapo benchmarks than on old SPECjvm98 benchmarks. Moreover, there are differences in barrier behavior between in-order and out-of-order machines, and their respective memory subsystems, which indicate different barrier choices for different platforms. These changing costs mean that algorithm designers need to reconsider their design choices and the nature of their resulting algorithms in order to exploit the opportunities presented by modern hardware.
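As a rough illustration of what a write barrier is, here is a minimal card-marking sketch: every reference store also dirties the card covering the storing object, so the collector can later scan only dirty cards. The card size, table size, and use of explicit addresses are assumptions for illustration, not any particular barrier measured in the study.

```java
// Minimal card-marking write-barrier sketch (illustrative assumptions:
// 512-byte cards, a tiny card table, explicit fake addresses).
public class CardBarrier {
    static final int CARD_SHIFT = 9;                 // 512-byte cards
    static final byte[] cardTable = new byte[1 << 10];

    static void writeBarrier(long srcAddress) {
        cardTable[(int) (srcAddress >>> CARD_SHIFT)] = 1; // dirty the card
    }

    // A reference store obj.f = val compiles to the store plus the barrier:
    static void storeField(long objAddress, Object[] slot, int idx, Object val) {
        slot[idx] = val;           // the actual store
        writeBarrier(objAddress);  // the (unconditional) write barrier
    }

    public static void main(String[] args) {
        Object[] fields = new Object[1];
        storeField(0x1200, fields, 0, "child");
        System.out.println(cardTable[0x1200 >>> CARD_SHIFT]); // prints 1
    }
}
```

The per-store cost of these few extra instructions is exactly the kind of overhead the paper measures across hardware generations.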
Publisher: ACM
Date: 11-07-2021
Publisher: ACM
Date: 22-10-2019
Publisher: Wiley
Date: 2001
DOI: 10.1002/SPE.371
Publisher: Elsevier BV
Date: 12-2009
Publisher: Springer Science and Business Media LLC
Date: 21-05-2018
Publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Wadern/Saarbrücken, Germany
Date: 2015
Publisher: ACM
Date: 23-09-2014
Publisher: Wiley
Date: 2004
DOI: 10.1002/SPE.618
Publisher: Association for Computing Machinery (ACM)
Date: 21-07-2023
DOI: 10.1145/3587156
Abstract: Coverage-guided greybox fuzzers rely on control-flow coverage feedback to explore a target program and uncover bugs. Compared to control-flow coverage, data-flow coverage offers a more fine-grained approximation of program behavior. Data-flow coverage captures behaviors not visible as control flow and should intuitively discover more (or different) bugs. Despite this advantage, fuzzers guided by data-flow coverage have received relatively little attention, appearing mainly in combination with heavyweight program analyses (e.g., taint analysis, symbolic execution). Unfortunately, these more accurate analyses incur a high run-time penalty, impeding fuzzer throughput. Lightweight data-flow alternatives to control-flow fuzzing remain unexplored. We present datAFLow, a greybox fuzzer guided by lightweight data-flow profiling. We also establish a framework for reasoning about data-flow coverage, allowing the computational cost of exploration to be balanced with precision. Using this framework, we extensively evaluate datAFLow across different precisions, comparing it against state-of-the-art fuzzers guided by control flow, taint analysis, and data flow. Our results suggest that the ubiquity of control-flow-guided fuzzers is well-founded. The high run-time costs of data-flow-guided fuzzing (~10× higher than control-flow-guided fuzzing) significantly reduce fuzzer iteration rates, adversely affecting bug discovery and coverage expansion. Despite this, datAFLow uncovered bugs that state-of-the-art control-flow-guided fuzzers (notably, AFL++) failed to find. This was because data-flow coverage revealed states in the target not visible under control-flow coverage. Thus, we encourage the community to continue exploring lightweight data-flow profiling: specifically, to lower run-time costs and to combine this profiling with control-flow coverage to maximize bug-finding potential.
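A minimal sketch of data-flow (def-use) coverage as a feedback signal: each variable write records a definition site, and each read records the (def-site, use-site) pair as covered. This is a toy, source-level illustration under assumed names; datAFLow's real instrumentation is a set of LLVM passes, not this scheme.

```java
// Toy def-use coverage recording (illustrative only). Two inputs that
// take different branches produce different def-use pairs even though
// they reach the same use site.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class DefUseCoverage {
    static final Map<String, String> lastDef = new HashMap<>(); // var -> def site
    static final Set<String> covered = new HashSet<>();         // "def->use" pairs

    static void def(String var, String site) { lastDef.put(var, site); }

    static void use(String var, String site) {
        String d = lastDef.get(var);
        if (d != null) covered.add(d + "->" + site);
    }

    // Instrumented toy target: which definition reaches the use at L3
    // depends on the input, so data-flow coverage distinguishes the runs.
    static void target(int input) {
        if (input > 0) def("x", "L1"); else def("x", "L2");
        use("x", "L3");
    }

    public static void main(String[] args) {
        target(1);
        target(-1);
        System.out.println(covered); // L1->L3 and L2->L3, in some order
    }
}
```

A fuzzer using `covered` as feedback would keep both inputs, whereas pure edge coverage over this tiny example could already distinguish the branches; the interesting cases are data flows with no control-flow signature, as the abstract notes.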
Publisher: Association for Computing Machinery (ACM)
Date: 04-03-2009
Abstract: This paper describes the development and initial evaluation of a new course, "Introduction to Computational Thinking", taken by science majors to fulfill a college computing requirement. The course was developed by computer science faculty in collaboration with science faculty and it focuses on the role of computing and computational principles in scientific inquiry. It uses Python and Python libraries to teach computational thinking via basic programming concepts, data management concepts, simulation, and visualization. Problems with a computational aspect are drawn from different scientific disciplines and are complemented with lectures from faculty in those areas. Our initial evaluation indicates that the problem-driven approach focused on scientific discovery and computational principles increases students' interest in computing.
Publisher: ACM
Date: 25-03-2018
Publisher: Association for Computing Machinery (ACM)
Date: 21-07-2023
DOI: 10.1145/3587159
Abstract: This Replicating Computational Report (RCR) describes (a) our datAFLow fuzzer and (b) how to replicate the results in “datAFLow: Toward a Data-Flow-Guided Fuzzer.” Our primary artifact is the datAFLow fuzzer. Unlike traditional coverage-guided greybox fuzzers—which use control-flow coverage to drive program exploration—datAFLow uses data-flow coverage to drive exploration. This is achieved through a set of LLVM-based analyses and transformations. In addition to datAFLow, we also provide a set of tools, scripts, and patches for (a) statically analyzing data flows in a target program, (b) compiling a target program with the datAFLow instrumentation, (c) evaluating datAFLow on the Magma benchmark suite, and (d) evaluating datAFLow on the DDFuzz dataset. datAFLow is available at github.com/HexHive/datAFLow.
Publisher: ACM
Date: 14-03-2007
Publisher: Wiley
Date: 25-02-2019
DOI: 10.1111/GCB.14537
Publisher: ACM
Date: 26-05-2015
Publisher: ACM
Date: 14-06-2016
Publisher: ACM
Date: 14-06-2015
Publisher: ACM
Date: 14-06-2015
Publisher: ACM
Date: 31-10-2016
Publisher: IEEE
Date: 12-2009
DOI: 10.1109/RTSS.2009.40
Publisher: ACM
Date: 05-06-2010
Publisher: Association for Computing Machinery (ACM)
Date: 10-1993
Publisher: ACM Press
Date: 1993
Publisher: ACM
Date: 15-06-2012
Publisher: Association for Computing Machinery (ACM)
Date: 13-06-2007
Abstract: Memory management is a critical issue for correctness and performance in real-time embedded systems. Recent work on real-time garbage collectors has shown that it is possible to provide guarantees on worst-case pause times and minimum mutator utilization time. This paper presents a new hierarchical real-time garbage collection algorithm for mixed-priority and mixed-criticality environments. With hierarchical garbage collection, real-time programmers can partition the heap into a number of heaplets and for each partition choose to run a separate collector with a schedule that matches the allocation behavior and footprint of the real-time task using it. This approach lowers worst-case response times of real-time applications by 26%, while almost doubling mutator utilization -- all with only minimal changes to the application code.
Publisher: Association for Computing Machinery (ACM)
Date: 08-2011
Abstract: Managed languages such as Java and C# are increasingly being considered for hard real-time applications because of their productivity and software engineering advantages. Automatic memory management, or garbage collection, is a key enabler for robust, reusable libraries, yet remains a challenge for analysis and implementation of real-time execution environments. This article comprehensively compares leading approaches to hard real-time garbage collection. There are many design decisions involved in selecting a real-time garbage collection algorithm. For time-based garbage collectors on uniprocessors one must choose whether to use periodic, slack-based, or hybrid scheduling. A significant impediment to valid experimental comparison of such choices is that commercial implementations use completely different proprietary infrastructures. We present Minuteman, a framework for experimenting with real-time collection algorithms in the context of a high-performance execution environment for real-time Java. We provide the first comparison of the approaches, both experimentally using realistic workloads, and analytically in terms of schedulability.
Publisher: AITO - Association Internationale pour les Technologies Objets
Date: 2011
Start Date: 2012
End Date: 2018
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2014
End Date: 2018
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 1997
End Date: 2000
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2014
End Date: 2017
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2005
End Date: 2006
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2013
End Date: 2014
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2015
End Date: 2017
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2010
End Date: 2011
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2007
End Date: 2011
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2008
End Date: 2012
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2006
End Date: 2008
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2006
End Date: 2010
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2007
End Date: 2011
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2007
End Date: 2008
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2017
End Date: 2020
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2017
End Date: 2019
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2000
End Date: 2004
Funder: Directorate for Computer & Information Science & Engineering
Start Date: 2019
End Date: 2022
Funder: Australian Research Council
Start Date: 2014
End Date: 2017
Funder: Australian Research Council
Start Date: 05-2014
End Date: 11-2017
Amount: $300,000.00
Funder: Australian Research Council
Start Date: 06-2019
End Date: 12-2024
Amount: $480,000.00
Funder: Australian Research Council