Bridging the meaning gap: A computational approach to semantic variation. This project aims to create and validate a new class of large language models that capture and partially explain semantic variation between people. We will (1) measure nuanced differences in word meaning and linguistic experience across individuals; (2) develop computational models that incorporate this variation; and (3) evaluate the extent to which the models capture behavioural and cognitive differences related to political affiliation, gender, and culture. This will advance our understanding of the nature and origin of individual differences as well as improve the calibration of AI systems for under-represented groups. These advances will support eventual applied outcomes in health, domestic security, and resilience to misinformation.
Small Scalable Natural Language Models using Explicit Memory. Deep neural networks have had spectacular success in natural language processing, seeing widespread deployment as part of automatic assistant devices in homes and cars, and across many valuable industries including finance, medicine and law. Fuelling this success is the use of ever larger models, with exponentially increasing training resources and accompanying hardware and energy demands. This project aims to develop more compact models, based on the incorporation of an explicit searchable memory, which will dramatically reduce model size, hardware requirements and energy usage. This will make modern natural language processing more accessible, while also providing greater flexibility, allowing for more adaptable and portable technologies.
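The "explicit searchable memory" idea can be pictured as a key-value store of (context, prediction) pairs that the model queries by similarity at inference time, rather than storing all knowledge in its weights. The toy Python sketch below illustrates only that lookup step; every name, vector, and token in it is hypothetical and not taken from the project itself.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ExplicitMemory:
    """A minimal searchable memory: stores (context_vector, token) pairs
    and answers queries by nearest-neighbour search over the contexts."""

    def __init__(self):
        self.entries = []  # list of (context_vector, token) pairs

    def add(self, context_vec, token):
        self.entries.append((context_vec, token))

    def lookup(self, query_vec, k=1):
        # Return the k stored tokens whose contexts best match the query.
        ranked = sorted(self.entries,
                        key=lambda e: cosine(query_vec, e[0]),
                        reverse=True)
        return [tok for _, tok in ranked[:k]]

# Illustrative usage with made-up 3-dimensional context vectors.
memory = ExplicitMemory()
memory.add([1.0, 0.0, 0.0], "finance")
memory.add([0.0, 1.0, 0.0], "medicine")
memory.add([0.0, 0.0, 1.0], "law")

print(memory.lookup([0.9, 0.1, 0.0], k=1))  # → ['finance']
```

Because the memory is external to the network, it can be grown, pruned, or swapped per domain without retraining, which is one route to the smaller, more adaptable models the abstract describes; production systems would use an approximate nearest-neighbour index rather than the exhaustive sort shown here.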