ORCID Profile
0000-0002-5971-8469
Current Organisations
University of Sydney, University of Melbourne
Publisher: Elsevier BV
Date: 2023
Publisher: Wiley
Date: 27-04-2023
DOI: 10.1111/BJET.13333
Abstract: Researchers have demonstrated that dialogue‐based intelligent tutoring systems (ITS) can be effective in assisting students in learning. However, little research has explored whether dialogue‐based ITS need one of the most important capabilities of human tutors: maintaining polite interactions with students, which is essential to providing students with a pleasant learning experience. In this study, we examined the role of politeness by analysing a large‐scale real‐world dataset of over 14K online human–human tutorial dialogues. Specifically, we employed linguistic theories of politeness to characterise the politeness levels of tutor‐ and student‐generated utterances, investigated the correlation between the politeness levels of tutors' utterances and students' problem‐solving performance, and quantified the power of politeness in predicting students' problem‐solving performance by applying Gradient Tree Boosting. The study results showed that: (i) in effective tutorial sessions (ie, sessions in which students successfully solved problems), tutors tended to be very polite at the start of a session and to become more direct in guiding students as the session progressed; (ii) students who performed better in solving problems tended to be more polite at the beginning and end of a session than their counterparts who failed to solve problems; (iii) the correlation between tutors' polite expressions and students' performance was not evident in non‐instructional communication; and (iv) politeness alone cannot adequately reveal students' problem‐solving performance, so other factors (eg, sentiment contained in utterances) should also be taken into account.

What is already known about this topic
Human–human tutoring is acknowledged as an effective instructional method.
Polite expression can help strengthen the relationship between tutors and students.
Polite expression can promote students' learning achievements in many educational contexts.

What this paper adds
By considering students' prior progress on a problem‐based learning task, we demonstrated the extent to which tutors and students display politeness in tutoring dialogues.
Tutors' polite expressions might not correlate with students' problem‐solving performance in online human–human tutoring dialogues.
Politeness alone was insufficient to predict students' performance.

Implications for practice
Tutors might consider using words with positive sentiment values to express politeness to students with prior progress, which might encourage those students to make further effort.
The polite strategy of expressing indirect requests can help tutors mitigate the sense of directness, but it should be used carefully when delivering instructional hints, especially for students without prior progress.
To better assist students without prior progress, tutors might consider using more direct expressions to explicitly guide students.
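The prediction step described in the abstract could be sketched as follows: train a gradient tree boosting classifier on per-session politeness features to predict whether the student solved the problem. This is a minimal, hypothetical illustration only; the feature names, the synthetic data, and the label construction are assumptions for demonstration, not the study's dataset or code.

```python
# Hypothetical sketch: predicting problem-solving success from
# session-level politeness features with gradient tree boosting.
# All features and data below are illustrative, not from the study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sessions = 500

# Assumed features per tutorial session: mean tutor politeness at the
# start, middle, and end of the session, plus mean student politeness.
X = rng.uniform(0.0, 1.0, size=(n_sessions, 4))

# Synthetic binary label (1 = problem solved), loosely tied to the
# features so the classifier has some signal to learn from.
y = (0.6 * X[:, 0] + 0.4 * X[:, 3]
     + rng.normal(0.0, 0.15, n_sessions) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

# Held-out accuracy and per-feature importances, which is roughly how
# one would quantify the predictive power of politeness features.
acc = model.score(X_te, y_te)
importances = model.feature_importances_
```

Inspecting `importances` alongside `acc` mirrors the paper's finding that politeness features alone carry some, but limited, predictive signal, which is why the authors suggest adding factors such as utterance sentiment.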
Publisher: Elsevier BV
Date: 2023
Publisher: ACM
Date: 13-03-2023
Publisher: Wiley
Date: 06-08-2023
DOI: 10.1111/BJET.13370
Abstract: Educational technology innovations leveraging large language models (LLMs) have shown the potential to automate the laborious process of generating and analysing textual content. While various innovations have been developed to automate a range of educational tasks (eg, question generation, feedback provision, and essay grading), there are concerns regarding the practicality and ethicality of these innovations. Such concerns may hinder future research and the adoption of LLM‐based innovations in authentic educational contexts. To address this, we conducted a systematic scoping review of 118 peer‐reviewed papers published since 2017 to pinpoint the current state of research on using LLMs to automate and support educational tasks. The findings revealed 53 use cases for LLMs in automating education tasks, grouped into nine main categories: profiling/labelling, detection, grading, teaching support, prediction, knowledge representation, feedback, content generation, and recommendation. We also identified several practical and ethical challenges, including low technological readiness, lack of replicability and transparency, and insufficient privacy and beneficence considerations. The findings were summarised into three recommendations for future studies: updating existing innovations with state‐of‐the‐art models (eg, GPT‐3/4), embracing the initiative of open‐sourcing models/systems, and adopting a human‐centred approach throughout the development process. As the intersection of AI and education continues to evolve, the findings of this study can serve as an essential reference point for researchers, allowing them to leverage the strengths, learn from the limitations, and uncover research opportunities enabled by ChatGPT and other generative AI models.

What is currently known about this topic
Generating and analysing text‐based content are time‐consuming and laborious tasks.
Large language models can efficiently analyse an unprecedented amount of textual content and complete complex natural language processing and generation tasks.
Large language models have been increasingly used to develop educational technologies that aim to automate the generation and analysis of textual content, such as automated question generation and essay scoring.

What this paper adds
A comprehensive list of educational tasks that could potentially benefit from LLM‐based innovations through automation.
A structured assessment of the practicality and ethicality of existing LLM‐based innovations across seven important aspects, using established frameworks.
Three recommendations that could support future studies in developing LLM‐based innovations that are practical and ethical to implement in authentic educational contexts.

Implications for practice and/or policy
Updating existing innovations with state‐of‐the‐art models may further reduce the manual effort required to adapt existing models to different educational tasks.
The reporting standards of empirical research that develops educational technologies using large language models need to be improved.
Adopting a human‐centred approach throughout the development process could help resolve the practical and ethical challenges of large language models in education.
No related grants have been discovered for Yuheng Li.