Artificial intelligence is increasingly present in higher education, including in the field of study programme accreditation. From organising large datasets to generating preliminary summaries, AI promises efficiency, speed and convenience. However, when it comes to preparing self-assessment reports for accreditation, a crucial principle must be emphasised: AI can assist in presenting evidence, but it cannot generate evidence itself.
Study programme accreditation is fundamentally evidence-based. Institutions must demonstrate compliance with defined quality standards through verifiable documentation: course syllabi, student feedback, research outputs, governance structures and more.
AI can help structure these materials, highlight patterns or point out potential gaps. Yet, no algorithm can replace the requirement for solid evidence. A report polished by AI, without substantiated data behind it, is merely an attractive format, not a guarantee of quality.
Accreditation also relies on human reflection and dialogue. Self-assessment is not only about collecting documents; it is a process in which faculty, students and administrators critically assess practices and engage in quality culture.
AI may summarise data or suggest improvements, but it cannot replicate the insights, judgment or discussions that emerge during site visits and peer review meetings. True quality culture requires active engagement and critical thinking: elements that cannot be automated.
At the same time, AI should not be dismissed. Used wisely, it can significantly support institutions in self-assessment. For example, AI can detect inconsistencies in documentation before submission, analyse large datasets on student outcomes or help simulate scenarios to anticipate accreditation risks. In these ways, AI functions as a valuable tool, one that enhances efficiency and clarity while the underlying evidence remains human-generated.