Guidance by Role

Instructors

You have complete autonomy over how, and whether, AI is integrated into your course. In some cases a technology like ChatGPT can be a useful support for the course (for example, it can be a remarkably good tutor); in other cases its use is inappropriate.

While the appeal of ‘AI detection software’ is understandably strong, the reality is that AI-generated content cannot be detected with certainty; this is reflected in the poor accuracy rates of these ‘detectors’.

Students

You have an obligation to act with honesty and integrity and abide by the rules of the syllabus for each course. You also have an obligation to yourself to learn more about a technology that may have a significant impact on your life. Where you are uncertain, ask your instructor for guidance.

Researchers

The use of AI in primary research is governed by the same policies and regulations that govern non-AI-assisted research. Most publishers will not accept AI as a co-author, and many require disclosure of how AI was used in the preparation of the manuscript (and, of course, in the conduct of the research). For the preparation and evaluation of grants, some funding agencies (e.g., NIH and CIHR) have issued direct guidance on permitted uses of generative AI, while others rely on existing policies, most importantly the recognition that a Principal Investigator is fully and solely accountable for what they submit.

Employees

You must respect all existing policies, with special attention to those concerning privacy and data security. For example, you should not submit personal information to an insecure public chatbot like ChatGPT. Where it is appropriate, however, you should feel empowered to experiment with how these tools can improve your work life. If you are not sure whether a use case is permitted, ask your supervisor or contact caio@uwo.ca.