ARDC Research Link Australia (BETA)
  • ARDC Newsletter Subscribe
  • Contact Us
  • Home
  • About
  • Feedback
  • Explore Collaborations
  • Researcher
  • Funded Activity
  • Organisation

Need help searching? View our Search Guide.

Advanced Search

Current Selection
Status : Active
Research Topic : Computer System Security
Socio-Economic Objective : Cybersecurity
Filter by Field of Research
Cybersecurity and privacy (9)
Data and information privacy (8)
Data security and protection (8)
Computer vision (2)
Computer vision and multimedia computation (2)
Data engineering and data science (2)
Software and application security (2)
Cryptography (1)
Data mining and knowledge discovery (1)
Distributed systems and algorithms (1)
Image processing (1)
Machine learning (1)
Neural networks (1)
Software engineering (1)
Software testing verification and validation (1)
System and network security (1)
Filter by Socio-Economic Objective
Cybersecurity (13)
Expanding Knowledge In the Information and Computing Sciences (4)
Application Software Packages (3)
Network Systems and Services (2)
Artificial Intelligence (1)
Autonomous and Robotic Systems (1)
Electronic Information Storage and Retrieval Services (1)
Information Services Not Elsewhere Classified (1)
Internet, Digital and Social Media (1)
Filter by Funding Provider
Australian Research Council (13)
Filter by Status
Active (13)
Filter by Scheme
Discovery Projects (7)
Discovery Early Career Researcher Award (4)
Linkage Projects (2)
Filter by Country
Australia (13)
Filter by Australian State/Territory
NSW (7)
VIC (6)
QLD (3)
WA (2)
ACT (1)
  • Researchers (15)
  • Funded Activities (13)
  • Organisations (0)
  • Active Funded Activity

    Linkage Projects - Grant ID: LP230100294

    Funder
    Australian Research Council
    Funding Amount
    $394,974.00
    Summary
    High Quality-of-Experience Real-time Video for Smart Online Shopping. This project aims to develop high quality-of-experience real-time video systems for smart shopping applications by devising new deep-neural-network-enhanced video delivery schemes. It will generate new knowledge of combined AI and network solutions to achieve high-quality, low-latency real-time video delivery, addressing the unsatisfactory user experience intrinsically caused by network delay and bandwidth constraints. Fundamental principles and an all-in-one platform will be developed to address research problems and the industrial partner’s practical problems. It will significantly benefit all shopping businesses and their customers in Australia, as well as all other video-related services (e.g., online education and video conferencing).
    More information
    Active Funded Activity

    Discovery Projects - Grant ID: DP230100246

    Funder
    Australian Research Council
    Funding Amount
    $482,610.00
    Summary
    Deep Learning Attacks and Active Defences: A Cybersecurity Perspective. The belief that deep learning technology is imperative for economic development, military control, and strategic competitiveness has accelerated its development across the globe. However, experience has revealed the disappointing fact that deep learning models are vulnerable to a range of security attacks. Hence, a series of methodologies and defence strategies will be devised to make deep learning systems robust to these attacks. The methodologies require analysing attack lifecycles to identify attacks in their early stages. With this knowledge, active defence methods and forensic strategies can be developed to ensure efficient defences and prevent further attacks. Moreover, the outputs will be generalisable to most deep learning services.
    More information
    Active Funded Activity

    Discovery Projects - Grant ID: DP240100955

    Funder
    Australian Research Council
    Funding Amount
    $485,000.00
    Summary
    Balance and reinforcement: privacy and fairness in high intelligence models. The aim of this project is to develop a series of privacy preservation methods to achieve a new balance between privacy and fairness in highly accurate intelligence models. The main issue in achieving this goal is that high-accuracy intelligence technologies have resulted in significant privacy violations and are very vulnerable to issues of unfairness. This project will analyse the privacy risks associated with intelligent systems and devise mechanisms to mutually reinforce both privacy and fairness based on the theoretical foundations laid by our analysis. These outcomes will enable model owners to effectively protect their intellectual property and offer services to users in a private, fair, and accurate manner.
    More information
    Active Funded Activity

    Discovery Early Career Researcher Award - Grant ID: DE230100477

    Funder
    Australian Research Council
    Funding Amount
    $421,554.00
    Summary
    Advancing Human Perception: Countering Evolving Malicious Fake Visual Data. The aim of this project is to provide new, effective, and generalisable deepfake detection methods for automatically detecting maliciously manipulated visual data generated by misused artificial intelligence (AI) techniques. It will present innovative computer vision and image processing knowledge and techniques, enabling the developed methods to advance human perception in recognising fake data, enhance cybersecurity, and protect privacy in AI applications. The anticipated outcomes should provide significant benefits to a wide range of applications, such as providing timely alerts to the media, government organisations, and industry about misleading fake visual data, and preventing financial crimes based on synthetic identity fraud.
    More information
    Active Funded Activity

    Discovery Projects - Grant ID: DP240101032

    Funder
    Australian Research Council
    Funding Amount
    $513,374.00
    Summary
    Preventing Exfiltration of Sensitive Data by Malicious Insiders or Malware. Data exfiltration is a serious threat, as highlighted by recent leaks of sensitive data that resulted in huge economic losses as well as unprecedented breaches of national security. The aim of this project is to develop a comprehensive and robust solution for the detection and prevention of sensitive data exfiltration attempts by malware and unauthorised human users. Expected outcomes include scalable monitoring methods and efficient algorithms that will be able to prevent real-time exfiltration and identify previously undetected exfiltration of sensitive data. This should provide significant benefits to governments and defence networks, as well as the business and health sectors, as it will protect them from sophisticated cyber attacks.
    More information
    Active Funded Activity

    Discovery Early Career Researcher Award - Grant ID: DE230100473

    Funder
    Australian Research Council
    Funding Amount
    $410,154.00
    Summary
    Effective integration of human and automated analyses for security testing. This DECRA project aims to significantly improve the performance of current state-of-the-art automated security testing approaches, enabling them to discover more security bugs under strict time constraints. The key innovation of the project is its novel way of embracing the human element to leverage the ingenuity of developers. This project will help companies improve the security and reliability of their products, thwarting cyberattacks that cost Australian business $29 billion each year. The knowledge from this project will be transferred and integrated into higher education subjects to train the next generation of software developers, who are responsible for building the security-critical systems that we all rely on now and in the future.
    More information
    Active Funded Activity

    Linkage Projects - Grant ID: LP230100083

    Funder
    Australian Research Council
    Funding Amount
    $445,009.00
    Summary
    Robust Defences against Adversarial Machine Learning for UAV Systems. This project aims to investigate robust defences for Unmanned Aerial Vehicle (UAV) systems to protect them against adversarial Machine Learning (ML) attacks. This project expects to generate new knowledge in the area of cybersecurity using innovative approaches to safeguard UAV systems from attacks that exploit vulnerabilities in ML models. The expected outcomes of this project include improved techniques for understanding and developing robust ML models and an enhanced capacity to design secure UAV systems. This should provide significant benefits, such as improving the security of UAV technology and increasing the reliable use of UAVs for transport and logistics services to support urban and regional communities in Australia.
    More information
    Active Funded Activity

    Discovery Projects - Grant ID: DP230100991

    Funder
    Australian Research Council
    Funding Amount
    $481,610.00
    Summary
    Efficient and secure data integrity auditing on cloud. Data auditing presents a promising way to verify user data integrity on the cloud, i.e., whether privacy-sensitive user data such as identity information held in the cloud has been modified or lost. Current auditing approaches lack sufficient efficiency and security, and as a result cannot provide timely warning and precaution against potential data loss threats. This project aims to systematically investigate this significant challenge and expects to establish innovative research and solutions for enabling efficient and secure data integrity auditing on the cloud. The project outcomes will help to safeguard the Australian community in a fast-growing cyber world and benefit the fast-growing hosting of privacy-sensitive user data and applications on the cloud.
    More information
    Active Funded Activity

    Discovery Projects - Grant ID: DP240102164

    Funder
    Australian Research Council
    Funding Amount
    $497,110.00
    Summary
    Attribution of Machine-generated Code for Accountability. Machine-generated (or neural) code is usually produced by AI tools to speed up software development. However, such code has recently raised serious security and privacy concerns. This project aims to attribute this code to its generative models for accountability purposes. In the process, a series of new techniques will be developed to differentiate between code generated by different models. The outcomes include analysis of neural code fingerprints, classification of neural code, and theories to verify the correctness of code attribution. These will provide significant benefits, ranging from copyright protection to privacy preservation. This project is timely, since the software community is now pervasively using neural code.
    More information
    Active Funded Activity

    Discovery Early Career Researcher Award - Grant ID: DE230101058

    Funder
    Australian Research Council
    Funding Amount
    $437,254.00
    Summary
    Glass-box Deep Machine Perception for Trustworthy Artificial Intelligence. Explainability and Transparency are the key values for development and deployment of Artificial Intelligence (AI) in Australia’s AI Ethics Framework for industry and governments. This project aims to build new tools to make the central technology of AI, deep learning, transparent and explainable. Its expected outputs are novel theory-driven algorithms and unconventional foundational blocks for deep learning that will allow humans to clearly interpret the reasoning process of this technology, which is currently not possible. It is expected to significantly advance our knowledge in machine intelligence and perception. Due to their fundamental nature, the project outcomes are likely to benefit industry and scientific frontiers alike.
    More information

    Showing 1-10 of 13 Funded Activities


    National Collaborative Research Infrastructure Strategy

    The Australian Research Data Commons is enabled by NCRIS.


    Quick Links

    • Home
    • About Research Link Australia
    • Product Roadmap
    • Documentation
    • Disclaimer
    • Contact ARDC

    We acknowledge and celebrate the First Australians on whose traditional lands we live and work, and we pay our respects to Elders past, present and emerging.

    Copyright © ARDC. ACN 633 798 857 Terms and Conditions Privacy Policy Accessibility Statement