Tuesday, May 27, 2025

A short history of the development of a College-based AI policy


A colleague got in touch with me to ask:

" When the college was creating the guidance for GenAI use what informed this? I’m trying to collect some policy documents for my dissertation on GenAI policy and wondered if you might have any suggestions. I assume SQAs policy was considered but did you use any policies from the Scottish Government, EU, other governing bodies etc "
I thought it was worth sharing the response. I am still watching education ask some of the right questions while mainly circling the wagons.

This story starts in April 2021, pre-ChatGPT, when I was asked to respond to and shape a College position on an assumed rise in contract cheating across the College. At the time we knew and worked with colleagues in Scottish HE, where contract cheating was an established problem, and we knew how it generally manifested itself.

In short, what prompted the initial guidance on artificial intelligence (we were the first college in the UK to offer any) was in part frustration with a new member of SMT, arrived from HE, who insisted that a large number of students at City of Glasgow College were buying essays from essay mills at £50 or more a pop, while we knew that something else was actually going on. At this point the College kept no central records of instances of academic misconduct.

We had data to show that this was not happening. We knew from HE that most bought-in essays raised at least a few flags in plagiarism detection. We knew too that staff teaching the generally smaller classes in FE were reasonably vigilant and knew their students.

However, we did know learners were starting to use Microsoft, Google, Grammarly and other tools to ‘improve’ their essay writing. We wanted to do some work around this to support teachers and students.

For students, this would be guidance on when and how to acknowledge that they had used tools to support their essay writing. For teaching staff, it would raise awareness that these tools were in use, that their use was manageable and permissible, and that they actually supported learners' accessibility needs. We were rolling out Canvas, a new VLE, in this time frame too, and we were very focused on accessibility.

We spoke to students through the Students' Association, who confirmed that students used a range of tools. Awareness of essay mills was actually very low. The students highlighted that, while there were free tools, they were very unlikely to pay for essay creation. They also had legitimate fears that the existing plagiarism software would catch out learners who commissioned essays in this way.

More concerningly, they were worried about using some of the assistive tools that support their additional learning needs.

It was clear that the institution and its staff were being blindsided by some of these developments.

We wanted to change the focus from simply tackling 'academic misconduct' to promoting academic integrity by changing learners' and lecturers' practice. I think we achieved this in the end, but only to some degree. In the medium term this will only come with improved digital skills for lecturers and students and fundamental changes in the overall approach to assessment.

At that point ChatGPT appeared, things accelerated and a form of hysteria started.

UCL in London had early guidance on using and referencing AI, but it was framed in very Higher Education, University language. It has since been refined and is still on their website. We took this and adapted it for College staff and students, discussing it with learners as we went and giving UCL due attribution.

We then shared this internally and externally on the Learning and Teaching Academy website. The LTA website has since been updated and the supporting documents have disappeared onto the College intranet. I hope they have been refined to support this ever-dynamic landscape.


In the background we met some turbulence. A small but vocal number of staff wanted the College to ban any use of AI tools and/or wished for a foolproof AI detection engine.

We spent a year testing the Turnitin AI detection tool, found it generated too many false positives, and switched it off before Turnitin came back asking for another fee for this 'service'. We also highlighted that, on the occasions when Turnitin 'failed', it was often indicating that the same assessment had been used for more than five years and required updating.

On occasion we were asked to investigate a claim by academic staff that AI had been used, unattributed, in the creation of some work. In some cases we were able to show the academic how version history and tracked changes work in Word: it was all the learner's own work, authored over several days and many hours.

We worked with Jisc and were the only college to run a Jisc focus group with students around their use of AI. This helped further refine our guidance. The stats in the slide deck below reflect what students said they were already using in September 2023. The deck went through a number of iterations and versions but sums up the College's overall approach at the time.

[Embedded slide deck: what students said they were using, September 2023]

This led to our materials being shared more widely, and I was involved in helping SQA create their initial policy and guidance. We also shared our work with the QAA, at the BETT Conference, at EdTech Europe and at other conferences. We were indebted to colleagues in these organisations and to Donald Clark, who appreciated what we were doing and who we were doing it for: the learners.

'Delivery not delay' was a College mantra, and we led on it. We ran lots of workshops for staff and students around digital skills and literacy, including the use of AI, supported by the LT and Library teams.

At the time this guidance was created, while UCL had some guidance of its own, the work of SQA and the Scottish Government was just starting, and in many cases we were involved in shaping policy there.
The approach was informed by the work of Jisc, by research coming out of the Association for Learning Technology, and by European policy documents. The EU's work on AI was relatively new at the time; for us it meant aligning our AI work with European digital literacy standards for education. Through the LTA team's work on Open Educational Resources we also had the opportunity to see drafts of UNESCO's work in the AI space, and that helped inform what we were doing.

I think around this time the College worked out that we knew what we were doing, and we ran a workshop for the College Board around defining an appropriate risk for the College risk register. This in turn led to workshops and specific support for the College's professional services staff.

The guidance was also shaped by a concern to make sure that learners and teaching staff followed College guidance on using tools that were accessible and met GDPR standards.

One regret is that, while we were the first college to pilot Teachermatic, I could not get the internal support to roll it out across the College, as, for instance, Clyde College later did.

There is still heavier lifting required. The advent of AI demands deeper changes to assessment. If anything, it highlights that assessment of competence should be a practical demonstration of a particular skill, not a judgement of a candidate's essay-writing ability.

Where are things now, with a focus on AI and education?






