ORCID Profile
0000-0001-8664-6117
Current Organisation
University of Queensland
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Research, Science and Technology Policy | Specialist Studies in Education | Learning Sciences | Information Systems | Educational Technology and Computing | Database Management | Business Information Systems | Teaching and Instruction Technologies | Learner and Learning Processes | Application Software Packages (excl. Computer Games) | Electronic Information Storage and Retrieval Services | Technological and Organisational Innovation | Expanding Knowledge in Technology
Publisher: Elsevier BV
Date: 2022
Publisher: Springer Berlin Heidelberg
Date: 2010
Publisher: Elsevier BV
Date: 2022
Publisher: KSI Research Inc. and Knowledge Systems Institute Graduate School
Date: 08-07-2019
Publisher: Springer Science and Business Media LLC
Date: 20-07-2012
Publisher: ACM
Date: 07-03-2018
Publisher: ACM
Date: 21-03-2022
Publisher: Society for Learning Analytics Research
Date: 30-08-2023
Publisher: Springer International Publishing
Date: 2020
Publisher: Springer Berlin Heidelberg
Date: 2008
Publisher: Elsevier BV
Date: 2023
Publisher: Springer Berlin Heidelberg
Date: 2010
Publisher: IEEE
Date: 03-2009
Publisher: Australasian Society for Computers in Learning in Tertiary Education
Date: 27-06-2021
DOI: 10.14742/AJET.6735
Abstract: Amid increasing calls for universities to transition to online learning, there is a need to explore how platforms and technology can provide positive student experiences and support learning. In this paper, we discuss the implementation of an online peer learning and recommender platform in a large, multi-campus, first-year health subject (n = 2095). The Recommendation in Personalised Peer Learning Environments (RiPPLE) platform supports students' co-creation of learning resources and allows students to provide feedback on and rate their peers' submissions. Our results indicated that both student engagement and academic performance were positively impacted for users by the introduction of the RiPPLE platform, but that academic preparedness, in the form of students' ATAR scores, strongly influenced their engagement and the benefits received. Implications for practice or policy: We explored whether students were willing to co-create learning resources online. Our study piloted an online platform known as Recommendation in Personalised Peer Learning Environments (RiPPLE). Critical analysis provides insights into fostering online engagement and peer learning. We further offer recommendations for future practice on how to embed online student co-creation of curriculum.
Publisher: Society for Learning Analytics Research
Date: 03-11-2021
Abstract: The value of students developing the capacity to accurately judge the quality of their work and that of others has been widely studied and recognized in higher education literature. To date, much of the research and commentary on evaluative judgment has been theoretical and speculative in nature, focusing on perceived benefits and proposing strategies seen to hold the potential to foster evaluative judgment. The efficacy of the strategies remains largely untested. The rise of educational tools and technologies that generate data on learning activities at an unprecedented scale, alongside insights from the learning sciences and learning analytics communities, provides new opportunities for fostering and supporting empirical research on evaluative judgment. Accordingly, this paper offers a conceptual framework and an instantiation of that framework in the form of an educational tool called RiPPLE for data-driven approaches to investigating the enhancement of evaluative judgment. Two case studies, demonstrating how RiPPLE can foster and support empirical research on evaluative judgment, are presented.
Publisher: University of South Australia Library
Date: 2023
DOI: 10.59453/XLUD7002
Publisher: Society for Learning Analytics Research
Date: 13-12-2019
Abstract: This paper presents a platform called RiPPLE (Recommendation in Personalised Peer-Learning Environments) that recommends personalized learning activities to students based on their knowledge state from a pool of crowdsourced learning activities that are generated by educators and the students themselves. RiPPLE integrates insights from crowdsourcing, learning sciences, and adaptive learning, aiming to narrow the gap between these large bodies of research while providing a practical platform-based implementation that instructors can easily use in their courses. This paper provides a design overview of RiPPLE, which can be employed as a standalone tool or embedded into any learning management system (LMS) or online platform that supports the Learning Tools Interoperability (LTI) standard. The platform has been evaluated based on a pilot in an introductory course with 453 students at The University of Queensland. Initial results suggest that the use of the RiPPLE platform led to measurable learning gains and that students perceived the platform as beneficially supporting their learning.
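The abstract above describes recommending activities to students based on their knowledge state. A minimal sketch of that idea follows; the topic names, mastery scores, and min-mastery ranking rule are illustrative assumptions, not RiPPLE's actual recommendation model:

```python
def recommend(activities, mastery, k=3):
    """Rank crowdsourced activities so those tagged with the
    student's least-mastered topics come first, returning the top k."""
    def priority(activity):
        # Lower mastery over the activity's topics -> higher priority.
        return min(mastery.get(t, 0.0) for t in activity["topics"])
    return sorted(activities, key=priority)[:k]

# Hypothetical knowledge state (0 = no mastery, 1 = full mastery).
mastery = {"sql": 0.9, "normalisation": 0.3, "indexing": 0.6}
activities = [
    {"id": 1, "topics": ["sql"]},
    {"id": 2, "topics": ["normalisation"]},
    {"id": 3, "topics": ["indexing", "sql"]},
]
print([a["id"] for a in recommend(activities, mastery)])  # [2, 3, 1]
```

A real adaptive system would estimate mastery from response data (e.g., with an Elo or IRT-style model) rather than take it as given.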
Publisher: University of South Australia Library
Date: 2023
DOI: 10.59453/XLUD7002
Abstract: The use of AI-powered educational technologies (AI-EdTech) offers a range of advantages to students, instructors, and educational institutions. While much has been achieved, several challenges in managing the data underpinning AI-EdTech are limiting progress in the field. This paper outlines some of these challenges and argues that data management research has the potential to provide solutions that can enable responsible and effective learner-supporting, teacher-supporting, and institution-supporting AI-EdTech. Our hope is to establish a common ground for collaboration and to foster partnerships among educational experts, AI developers and data management researchers in order to respond effectively to the rapidly evolving global educational landscape and drive the development of AI-EdTech.
Publisher: McMaster University Library
Date: 04-12-2018
Abstract: This case study was designed as one of many pilot projects to inform the scaling-up of Students as Partners (SaP) as a whole-of-institution strategy to enhance the student learning experience. It sought to evaluate the other pilots in order to understand the phenomena of partnerships and how students and staff perceive the experience of working in partnership. It also sought to explore the extent of benefits and challenges experienced by staff and students throughout the process and identify potential implications for future implementation.
Publisher: Springer Berlin Heidelberg
Date: 2009
Publisher: Informa UK Limited
Date: 07-12-2022
Publisher: Informa UK Limited
Date: 10-03-2021
Publisher: Springer International Publishing
Date: 2020
Publisher: ACM
Date: 26-02-2020
Publisher: IEEE
Date: 04-2013
Publisher: ACM
Date: 13-03-2023
Publisher: Society for Learning Analytics Research
Date: 24-11-2018
Abstract: Educational environments continue to evolve rapidly to address the needs of diverse, growing student populations while embracing advances in pedagogy and technology. In this changing landscape, ensuring consistency among the assessments for different offerings of a course (within or across terms), providing meaningful feedback about student achievements, and tracking student progress over time are all challenging tasks, particularly at scale. Here, a collection of visual Topic Dependency Models (TDMs) is proposed to help address these challenges. It uses statistical models to determine and visualize student achievements on one or more topics and their dependencies at a course-level reference TDM (e.g., CS 100) as well as assessment data at the classroom level (e.g., students in CS 100 Term 1 2016 Section 001), both at one point in time (static) and over time (dynamic). The collection of TDMs shares a common two-weighted graph foundation. Exemplar algorithms are presented for the creation of the course reference and selected class (static and dynamic) TDMs; the algorithms are illustrated using a common symbolic example. Studies on the application of the TDM collection to datasets from two university courses are presented; these case studies utilize the open-source, proof-of-concept tool under development.
Publisher: Springer Science and Business Media LLC
Date: 05-2014
Publisher: Springer Science and Business Media LLC
Date: 18-07-2012
Publisher: Zenodo
Date: 2017
Publisher: Springer Science and Business Media LLC
Date: 16-01-2022
DOI: 10.1007/S00778-021-00720-2
Abstract: The appetite for effective use of information assets has been steadily rising in both public and private sector organisations. However, whether the information is used for social good or commercial gain, there is a growing recognition of the complex socio-technical challenges associated with balancing the diverse demands of regulatory compliance and data privacy, social expectations and ethical use, business process agility and value creation, and scarcity of data science talent. In this vision paper, we present a series of case studies that highlight these interconnected challenges, across a range of application areas. We use the insights from the case studies to introduce Information Resilience, as a scaffold within which the competing requirements of responsible and agile approaches to information use can be positioned. The aim of this paper is to develop and present a manifesto for Information Resilience that can serve as a reference for future research and development in relevant areas of responsible data management.
Publisher: Society for Learning Analytics Research
Date: 03-11-2021
Abstract: Learning analytics dashboards commonly visualize data about students with the aim of helping students and educators understand and make informed decisions about the learning process. To assist with making sense of complex and multidimensional data, many learning analytics systems and dashboards have relied strongly on AI algorithms based on predictive analytics. While predictive models have been successful in many domains, there is an increasing realization of the inadequacies of using predictive models in decision-making tasks that affect individuals without human oversight. In this paper, we employ a suite of state-of-the-art algorithms, from the online analytical processing, data mining, and process mining domains, to present an alternative human-in-the-loop AI method to enable educators to identify, explore, and use appropriate interventions for subpopulations of students with the highest deviation in performance or learning process compared to the rest of the class. We demonstrate an application of our proposed approach in an existing learning analytics dashboard (LAD) and explore the recommended drill-downs in a course with 875 students. The demonstration provides an example of the recommendations from real course data and shows how recommendations can lead the user to interesting insights. Furthermore, we demonstrate how our approach can be employed to develop intelligent LADs.
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2023
Publisher: Springer Science and Business Media LLC
Date: 30-05-2012
Publisher: Wiley
Date: 31-08-2022
DOI: 10.1111/JCAL.12729
Abstract: The use of crowdsourcing in a pedagogically supported form to partner with learners in developing novel content is emerging as a viable approach for engaging students in higher‐order learning at scale. However, how students behave in this form of crowdsourcing, referred to as learnersourcing, is still insufficiently explored. To contribute to filling this gap, this study explores how students engage with learnersourcing tasks across a range of course and assessment designs. We conducted an exploratory study on trace data of 1279 students across three courses, originating from the use of a learnersourcing environment under different assessment designs. We employed a new methodology from the learning analytics (LA) field that aims to represent students' behaviour through two theoretically‐derived latent constructs: learning tactics and the learning strategies built upon them. The study's results demonstrate that students use different tactics and strategies, highlight the association of learnersourcing contexts with the identified learning tactics and strategies, indicate a significant association between the strategies and performance, and contribute to the employed method's generalisability by applying it to a new context. This study provides an example of how learning analytics methods can be employed towards the development of effective learnersourcing systems and, more broadly, technological educational solutions that support learner‐centred and data‐driven learning at scale. Findings should inform best practices for integrating learnersourcing activities into course design and shed light on the relevance of tactics and strategies to support teachers in making informed pedagogical decisions.
Publisher: ACM
Date: 06-2022
Publisher: Wiley
Date: 18-05-2022
DOI: 10.1111/BJET.13233
Abstract: Peer assessment has been recognised as a sustainable and scalable assessment method that promotes higher‐order learning and provides students with fast and detailed feedback on their work. Despite these benefits, some common concerns and criticisms are associated with the use of peer assessments (eg, scarcity of high‐quality feedback from peer student‐assessors and lack of accuracy in assigning a grade to the assessee) that raise questions about their trustworthiness. Consequently, many instructors and educational institutions have been anxious about incorporating peer assessment into their teaching. This paper aims to contribute to the growing literature on how AI and learning analytics may be incorporated to address some of the common concerns associated with peer assessment systems, which in turn can increase their trustworthiness and adoption. In particular, we present and evaluate our AI‐assisted and analytical approaches that aim to (1) offer guidelines and assistance to student‐assessors during individual reviews to provide better feedback, (2) integrate probabilistic and text analysis inference models to improve the accuracy of the assigned grades, (3) develop feedback-on-reviews strategies that enable peer assessors to review the work of each other, and (4) employ a spot‐checking mechanism to assist instructors in optimally overseeing the peer assessment process.
What is already known about this topic: Engaging students in peer assessment has been demonstrated to have various benefits. However, there are some common concerns associated with employing peer assessment that raise questions about their trustworthiness as an assessment item.
What this paper adds: Methods and processes on how AI and learning analytics may be incorporated to address some of the common concerns associated with peer assessment systems, which in turn, can increase their trustworthiness and adoption.
Implications for practice: Presentation of a systematic approach for development, deployment and evaluation of AI and analytics approaches in peer assessment systems.
Publisher: Springer International Publishing
Date: 2018
Publisher: IEEE
Date: 12-2009
Publisher: Springer Berlin Heidelberg
Date: 2012
Publisher: Springer International Publishing
Date: 2020
Publisher: KSI Research Inc.
Date: 11-2019
Publisher: Wiley
Date: 27-10-2023
DOI: 10.1111/BJET.13270
Abstract: Traditional item analyses such as classical test theory (CTT) use exam‐taker responses to assessment items to approximate their difficulty and discrimination. The increased adoption by educational institutions of electronic assessment platforms (EAPs) provides new avenues for assessment analytics by capturing detailed logs of an exam‐taker's journey through their exam. This paper explores how logs created by EAPs can be employed alongside exam‐taker responses and CTT to gain deeper insights into exam items. In particular, we propose an approach for deriving features from exam logs for approximating item difficulty and discrimination based on exam‐taker behaviour during an exam. Items for which difficulty and discrimination differ significantly between CTT analysis and our approach are flagged through outlier detection for independent academic review. We demonstrate our approach by analysing de‐identified exam logs and responses to assessment items of 463 medical students enrolled in a first‐year biomedical sciences course. The analysis shows that the number of times an exam‐taker visits an item before selecting a final response is a strong indicator of an item's difficulty and discrimination. Scrutiny by the course instructor of the seven items identified as outliers suggests our log‐based analysis can provide insights beyond what is captured by traditional item analyses.
What is already known about this topic: Traditional item analysis is based on exam‐taker responses to the items using mathematical and statistical models from classical test theory (CTT). The difficulty and discrimination indices thus calculated can be used to determine the effectiveness of each item and consequently the reliability of the entire exam.
What this paper adds: Data extracted from exam logs can be used to identify exam‐taker behaviours which complement classical test theory in approximating the difficulty and discrimination of an item and identifying items that may require instructor review.
Implications for practice and/or policy: Identifying the behaviours of successful exam‐takers may allow us to develop effective exam‐taking strategies and personal recommendations for students. Analysing exam logs may also provide an additional tool for identifying struggling students and items in need of revision.
Publisher: ACM
Date: 12-04-2021
Publisher: Elsevier BV
Date: 03-2010
Publisher: ACM
Date: 12-04-2021
Publisher: ACM
Date: 07-03-2018
Publisher: Springer International Publishing
Date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 02-2021
Publisher: ACM
Date: 08-03-2017
Publisher: Springer Science and Business Media LLC
Date: 16-06-2021
Start Date: 08-2022
End Date: 07-2025
Amount: $389,011.00
Funder: Australian Research Council
View Funded Activity
Start Date: 07-2021
End Date: 07-2026
Amount: $4,883,406.00
Funder: Australian Research Council
View Funded Activity