AI at Western

Generative AI Guidance for Western University

As advancements in Artificial Intelligence (AI) continue to accelerate, Western University recognizes the need for guidance on the ethical and responsible use of generative AI technologies, specifically Large Language Models (LLMs) like GPT-4. 

This information aims to provide initial guidance to our community, considering the expanding presence of these technologies in various sectors including education, research, and administration. The guidance provided here does not replace or supersede any existing Western policies.

Western trusts the community to innovate and experiment with generative AI responsibly and ethically.

  • Instructors have autonomy in how AI is integrated into their courses.
  • Students should act with honesty, ask about uncertainty, and follow course rules.
  • Researchers must follow the existing policies of funders, publication venues and Western.
  • Employees must respect Western’s existing policies and think carefully about privacy.

What is AI?

Artificial Intelligence (AI) is a broad field encompassing a wide range of theories, technologies, and practices. When someone says ‘AI’ in 2023, it is more likely than not that they mean generative deep neural networks like GPT-4. We will restrict the scope of our advice to these generative AI tools.

We are accustomed to thinking of software as a tool. Microsoft Excel is like a (very fancy) digital wrench: it does exactly, and only, what you instruct it to do, step-by-step. To get proficient with this type of tool you need to read the manual and invest time in training. Almost all the software that we’ve worked with up to this point in our lives is of a similar nature.

The most productive mental framework for interacting with current-generation LLM-based chatbots like Bing, ChatGPT, and Claude is to imagine that you are speaking with a very knowledgeable, enthusiastic, and naïve intern. If you expect software to do only exactly what you tell it, that expectation will be subverted. Likewise, if you expect software never to offer its own opinions, or occasionally challenge your instructions, you may be surprised!

How should I use AI?

Experiment and share! The reality is that nobody knows how to optimally use generative AI. Each use case for AI is likely specialized to the role, and person, using AI. The more time you spend using this technology, and experimenting with it, the greater the payoff to your productivity. It won’t do magic for you “out of the box”, but you can accelerate your learning by reading about the successes of others with similar use cases.

What is Western’s AI policy?

The creation of policy to govern a single technology is problematic when the technology is new and rapidly evolving, with poorly understood use cases. All of Western’s existing policies apply to the use of AI. For example, if you are a student and choose to use AI in a way that violates the rules of a course syllabus, you are violating policy on academic honesty. Rather than rush to create a single-technology policy, Western will continue to rely on our existing policies to govern our community.

Instead, please reflect on the guidance about AI use for instructors, students, researchers, and employees below.

Principles of Using AI at Western:

Experiment responsibly, safely, and ethically. Before considering specific roles and example use-cases, we suggest ethical principles that can help guide our community's engagement with generative AI:

  1. Transparency: The algorithms, data, and design decisions underlying AI systems, and the applications of AI systems, should be openly accessible to the extent possible.
  2. Accountability: Individuals and teams using generative AI bear the responsibility for the consequences of the AI's actions and decisions.
  3. Integrity: The use of generative AI in academic work must be clearly disclosed to preserve the principle of academic honesty.
  4. Privacy: Personal data should be adequately protected, and AI should not be used to infringe upon individuals' privacy rights.
  5. Inclusion: Accessibility and fairness in AI tools should be actively considered, ensuring they don't perpetuate existing biases.

Guidance by Role:

Instructors:

You have complete autonomy in how, and if, AI is integrated into your course. There are cases where a technology like ChatGPT might become a useful support for the course (for example, it can be a remarkably good tutor), but there are other cases where it is inappropriate. 

While the attraction of ‘AI detection software’ is obviously enormous, the reality is that it is impossible to detect AI-generated content with certainty; this is reflected in the appalling accuracy rates of these ‘detectors’.

Students:

You have an obligation to act with honesty and integrity and abide by the rules of the syllabus for each course. You also have an obligation to yourself to learn more about a technology that may have a significant impact on your life. Where you are uncertain, ask your instructor for guidance.

Researchers:

Employing AI in primary research is governed by all the same policies and regulations that govern non-AI-assisted research. Most publishers will not accept AI as a co-author, but many require disclosure of how AI was used in the preparation of the manuscript (and, of course, in the conduct of the research). In the preparation and evaluation of grants, some funding agencies (e.g., NIH and CIHR) have issued direct guidance on the permitted use of generative AI, while others rely on existing policies, most importantly the recognition that a Principal Investigator is fully and solely accountable for what they submit.

Employees:

You must respect all existing policies, with special attention to those around privacy and data security. You should not, for example, submit personal information to an insecure public chatbot like ChatGPT. But where it is appropriate, you should feel empowered to experiment with how these tools can improve your work life. If you aren’t sure whether a use case is permitted, ask your supervisor or contact caio@uwo.ca.

Our AI journey ahead

Feedback, comments, and questions are welcomed by email to caio@uwo.ca. What you read here is a first step in a long journey. This advice will evolve as the technology, and our understanding of it, grows. Town halls, surveys, and other opportunities to engage in dialogue on this topic are in planning and will be communicated broadly, including on this site.