Secure user authentication with continuous adaptive risk evaluation. Users typically authenticate to any given system only once - when they first access it (for example, by providing a password or fingerprint). The prevalence of single sign-on further allows this single authentication to be sufficient for access to multiple systems. Thus an adversary can gain a large degree of access by stealing a single password, hijacking a user's session, or even simply borrowing their phone. This project aims to develop a continuous authentication approach based on user behaviour - typical interactions plus biometrics (for example, keystroke dynamics) - combined with a risk-adaptive assessment of the resources being accessed, resulting in re-authentication requests in the event of a suspected compromise.
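The combination described above - a behavioural anomaly score weighted by the sensitivity of the resource being accessed - can be sketched as follows. This is a minimal illustration, not the project's method: the functions, the enrolled timing profile, and the `resource_sensitivity` weighting are all hypothetical simplifications.

```python
import statistics

def keystroke_risk(intervals, profile_mean, profile_std):
    """Behavioural anomaly score: mean absolute z-score of the observed
    inter-key timing intervals against the user's enrolled profile
    (profile_mean/profile_std are assumed to come from enrolment)."""
    if profile_std == 0:
        return 0.0
    return statistics.fmean(abs(t - profile_mean) / profile_std for t in intervals)

def should_reauthenticate(intervals, profile_mean, profile_std,
                          resource_sensitivity, threshold=2.0):
    """Risk-adaptive decision: the same behavioural anomaly triggers
    re-authentication sooner when the resource is more sensitive."""
    risk = keystroke_risk(intervals, profile_mean, profile_std) * resource_sensitivity
    return risk > threshold
```

For example, typing whose inter-key intervals sit close to the enrolled profile yields a low score and no challenge, while markedly slower or faster typing on a high-sensitivity resource crosses the threshold and prompts re-authentication.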
A fast and effective automated insider threat detection and prediction system. Threats from insiders directly compromise the security, privacy and integrity of Australian e-commerce, large databases and communication channels. This project will provide an essential step in combating this criminal activity by developing methods to detect such threats and secure the public's information against exposure and identity theft.
Machine learning in adversarial environments. Machine learning underpins the technologies driving the economies of both Silicon Valley and Wall Street, from web search and ad placement, to stock predictions and efforts in fighting cybercrime. This project aims to answer the question: How can machines learn from data when contributors act maliciously for personal gain?
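One classical answer to that question is robust aggregation: limit how much any single malicious contributor can shift a learned estimate. The sketch below illustrates the idea with a trimmed mean over contributed values; it is an illustrative example of the general technique, not the project's approach, and `trim_fraction` is an assumed parameter.

```python
def trimmed_mean(values, trim_fraction=0.2):
    """Robust aggregate of contributed values: sort, discard the largest
    and smallest trim_fraction of contributions, and average the rest.
    A single extreme (possibly malicious) contribution is thereby
    bounded in its influence on the result."""
    if not values:
        raise ValueError("no contributions to aggregate")
    k = int(len(values) * trim_fraction)
    ordered = sorted(values)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)
```

With honest contributions near 1.0 and one poisoned value of 100.0, the naive mean is dragged to roughly 20, while the trimmed mean stays near 1 - the poisoned report is discarded before averaging.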