I / we have written about AI before. We have asked about disclaimers and acknowledgements in response to people saying things like "We might feel that AI is most usefully considered as another tool: in itself ethically neutral…" (it is not ethically neutral; I'm pretty sure ethical neutrality is not a real thing!). We have written about being "AIngry" and we have written about not letting the "AI pigeon drive the bus".
And now, prompted by the ever-increasing amount of AI marketing in my inbox (the promise of "sunlit uplands of AI-enabled education" and how "AI will let us take back control"), I decided to read one piece of that marketing in depth this morning, especially as the authors had backed it up with a "position paper on generative AI".
It was a standard "this is what we're doing, this is what we believe" article / advertorial about AI in education. Aren't they all? And I am not pointing fingers. They are all in their own "arms races" and they need to not be left behind. So in part I am a little sorry (just a little) that it is this piece that initiated my pre-breakfast rant.
At the end of the article, the company laid out its "principles that offer assurance to our community of our commitment to lead with ethical AI". These seem good, or at least reasonable on the surface. So with that in mind, I downloaded their "position paper on generative AI" to find out more about the details of those commitments. Well. At least they repeated them in the position paper….
Here are those principles / commitments.
- Students first: We are a platform powered by people. AI is a tool that can empower students, alumni, careers professionals, and employers and support every student to find a meaningful career
- Transparency & training: We educate users on AI through accessible information about AI and how we utilise AI
- Privacy & security: We implement rigorous safeguards to protect user data and regularly review our policies, vendors, and systems
- Partnership: We collaborate with students, careers professionals, and employers to shape a focused, ethical research agenda that informs our developments
- Equity & inclusion: We build tools to increase opportunities and remove unfair bias, evaluating new developments to understand capabilities, limitations, and impacts
- Responsible innovation: We deploy robust governance to advance AI responsibly, keeping stakeholders informed through regular reporting and discussion
What are the questions we should be asking?
I decided that I would approach this as someone sitting in a meeting listening to a pitch from the company. I put on three hats: an Academic, a Chief Information Officer, and a Student. And I admit these are my best guesses – not my evidence-based expertise.
Questions from the Academic
Students First
Can you provide evidence to back up the claim that your AI tools are empowering students and aiding them in their career paths?
What pedagogical theories or frameworks are your AI tools based on, and how do they align with current best practices in higher education?
Transparency & Training
Is the educational material on AI peer-reviewed, and does it adhere to academic standards for educational content?
Privacy & Security
Could you provide a detailed outline of your data protection process and how it protects users such as students?
Partnership
How do you ensure that the ethical considerations in your partnerships are in alignment with established academic codes of conduct?
Equity & Inclusion
Could you talk about the methodologies you employ to assess and remove bias from your AI algorithms?
What steps are you taking to ensure that your tools are accessible to students with disabilities or those from disadvantaged backgrounds?
Responsible Innovation
What ethical review processes are in place for your AI initiatives, and are these reviews conducted by an ethics committee with representation from academia?
Questions from the Chief Information Officer
Students First
Can you describe the technical architecture and how it integrates with existing educational systems?
Transparency & Training
Can the training be customised to fit within our existing educational frameworks and compliance requirements?
Privacy & Security
How does your platform comply with data protection regulations such as GDPR?
Partnership
What are the technical requirements for partnerships, and how easily can your AI solutions be integrated into existing educational infrastructures?
Equity & Inclusion
How does your platform ensure accessibility in compliance with standards such as the Web Content Accessibility Guidelines (WCAG)?
Responsible Innovation
What is the governance model for overseeing AI development, and does it align with industry best practices? Which best practices, specifically?
How do you manage version control and updates to the AI algorithms, and what is the rollback plan in case of failures?
And finally, some questions from the student on our procurement panel…
Students First
Can you provide examples of how the AI features have directly benefited students in their academic work and careers?
Transparency & Training
How transparent is the AI in terms of explaining its recommendations or decisions to students? Can I see why it made the suggestions it made?
Privacy & Security
What control will I have over my own data, and can I opt out of any data collection?
Partnership
How can we (students) be involved in the ongoing development and improvement of the platform?
Is there a channel for us to provide feedback or report issues directly to the developers? And what response can we expect?
Equity & Inclusion
What are you doing to ensure that the AI does not reinforce existing biases or stereotypes?
Responsible Innovation
How are ethical considerations, particularly those that affect us as students, integrated into your innovation process?