Research
AI literacy, AI in education, synthetic empathy, and what happens when machines start talking to each other.
My research asks how people interact with AI systems, and what happens when they do not evaluate the output critically. I work across three areas: how students actually use AI in education, how AI simulates human qualities such as empathy, and how to build genuine AI literacy that goes beyond training manuals.
I publish across disciplines. My work appears in education, science communication, geoscience, and AI journals. Full publication list on Google Scholar (125+ publications, 2,500+ citations, h-index 25).
Current projects
Generative AI and university students
Leverhulme Trust, 2024 – 2026
Investigating how students in UK universities interact with ChatGPT and other generative AI systems. The project examines output evaluation behaviours, trust calibration, and how students decide when AI-generated content is reliable. This is empirical research with real users, not opinion surveys about attitudes.
Four papers in development:
A systematic literature review of research on student interaction with generative AI
A qualitative interview study of student trust and evaluation behaviours
A survey of 7,000+ students across UK institutions, the largest study of its kind in the country
A perspective piece on conducting large-scale cross-institutional research
The AI literacy deficit model
2025 – present
A policy intervention arguing that current AI literacy programmes globally reproduce the discredited information-deficit model from science communication. Thirty years of evidence from climate, vaccines, GMOs, and nuclear risk shows that public responses are shaped by trust, values, and identity, not by information alone. The same mistake is being made with AI. The work proposes participatory alternatives: two-way dialogue, creative methods, and genuine public voice in AI governance.
Selected publications
Illingworth, S. (2026) ‘Using AI responsibly means knowing when not to use it’, The Conversation.
Illingworth, S. and Forsyth, R. (2026) GenAI in Higher Education: Redefining Teaching and Learning. London: Bloomsbury. Open access.
Eacersall, D., Pretorius, L., Smirnov, I., Spray, E., Illingworth, S., Chugh, R., Strydom, S., Stratton-Maher, D., Simmons, J., Jennings, I., Roux, R., Kamrowski, R., Downie, A., Thong, C.L. and Howell, K.A. (2025) ‘Navigating ethical challenges in generative AI-enhanced research: The ETHICAL framework for responsible generative AI use’, Journal of Applied Learning and Teaching, 8(2).
Illingworth, S. and Gow, S. (2025) ‘Generative AI literacy’, in Ng, D.T.K., Chu, S.K.W., O’Dea, X. and Leung, J.K.L. (eds.) From AI Literacy to Generative AI Literacy. Singapore: Springer, pp. 39–55.
Illingworth, S. (2025) ‘Take the time to ensure that AI is safe’, Nature, 646, p. 804.
Funding track record
Over £450,000 in external research funding secured, from funders including the Leverhulme Trust.
Books
GenAI in Higher Education: Redefining Teaching and Learning (Bloomsbury, 2026). Open access.
Bridging Scholarship and Practice in Higher Education (Routledge, 2025)
Poetry and Pedagogy in Higher Education (Policy Press, 2024)
Effective Science Communication, 3rd edition (IOP Publishing, 2024)
A Sonnet to Science: Scientists and Their Poetry (Manchester University Press, 2019)
10 books total. Full list and all publications on Google Scholar.