Western's Policies

In this section you will find existing policies that support AI governance at Western.


Course outlines and AI use in the classroom

Instructors set AI rules course-by-course through the required course outline. Course outlines must include a Statement on the Use of Generative Artificial Intelligence (AI) that tells students whether AI is allowed, limited, or prohibited for that course (and helps ensure students aren’t disadvantaged by unequal access to paid tools).

Academic integrity, scholastic discipline, and prohibited AI use

Using AI in a way that violates course rules can fall under academic dishonesty (e.g., plagiarism, cheating, fraudulent submissions such as AI-fabricated/hallucinated content). These policies explain how misconduct is defined and handled, including processes and consequences.

Graduate Studies Guidance for Generative AI

Graduate students often use AI both as learners and as researchers. SGPS guidance clarifies boundaries like disclosure expectations and limits around evaluation processes (including restrictions around thesis examination/review contexts).

Research policy hub and research governance (Section 7)

Western’s research policies define responsibilities for ethical research practice and compliance. This is the central entry point for research-related policies including Responsible Conduct of Research, ethics (human/animal), and intellectual property.

Responsible Conduct of Research (Policy 7.0)

Research supported by Western (and often by external funders) must meet standards for honesty, accountability, and recordkeeping, including avoiding fabrication/falsification and ensuring appropriate attribution and disclosure when tools (including AI) are used in producing or analyzing research outputs.

Research involving human participants (Policy 7.14) and ethics board requirements

Any research involving human participants requires appropriate ethics review (NMREB/HSREB). Where AI tools process human data, additional risk controls may be required (e.g., technology risk assessment) to protect confidentiality and consent commitments.

Computing, technology, and acceptable use (MAPP 1.13)

This policy governs the ethical and lawful use of Western’s computing resources. It matters for AI because it covers things like unauthorized software, misuse of institutional systems, and expectations for protecting university data, even when using personal devices to access or interact with AI tools.

Privacy, Access to Information, and Privacy Impact Assessments (MAPP 1.23)

AI use often involves personal information or sensitive data. Western’s privacy framework requires appropriate safeguards and, where personal information is involved, may require a Privacy Impact Assessment (PIA). PIAs are required for projects working with personal information, effective July 1, 2025.

Data handling, security standards, and third-party sharing

Before using external AI tools, follow Western’s cybersecurity and data-handling standards to ensure information is handled appropriately. These standards explain how data is classified, what protections are required (such as secure storage and multi-factor authentication where applicable), and when formal agreements or approvals are needed before sharing information with third parties. This is especially important when an AI tool may transmit prompts, files, or outputs to vendor systems outside Western’s control.

Intellectual property (Policy 7.16)

This policy explains how intellectual property (IP) created at Western is handled, including how ownership can differ depending on the creator’s role (for example, faculty, staff, or students). It also outlines expectations related to disclosure and commercialization, which matters when AI contributes to inventions, software, datasets, or other outputs that may have research or commercial value.

Copyright and fair dealing

When AI is used to summarize, adapt, remix, or generate material based on existing works, copyright still applies. Western’s fair dealing guidelines explain when limited copying is permitted for education and research, and why large-scale copying, scanning, or scraping (including for AI-related uses) can create legal and compliance risks.

Procurement of AI tools (MAPP 2.8) and conflict of interest (MAPP 3.4)

AI tools (especially paid platforms, plug-ins, and enterprise services) should be acquired through Western’s approved procurement processes to ensure appropriate review for security, privacy, accessibility, and institutional supportability. This is also where expectations about transparent purchasing practices and proper management of conflicts of interest apply, including when selecting vendors or recommending tools for students or staff.

Accessibility (MAPP 1.47)

Western’s accessibility policy applies to AI-enabled tools and digital materials. This includes ensuring that tools and content do not create barriers for people with disabilities, and that accessibility requirements are considered when selecting, deploying, and using technology (including AI systems and AI-generated content) in university activities.

Video monitoring (MAPP 1.42)

AI-enabled surveillance technologies (including facial recognition, identity matching, or behavioural analytics) can significantly increase privacy and security risk. Western’s video monitoring policy sets the governance requirements for when and how video monitoring may be used, including expectations around appropriate purpose, oversight, access controls, and alignment with privacy obligations.

Records management and archives (MAPP 1.30)

AI tools can generate content that may qualify as a university record (for example: prompts used to make decisions, AI-generated reports, official communications, or documented outputs used in university business). This policy sets expectations for records retention and lifecycle management, including ensuring records can be retained, produced, or disposed of appropriately in line with Western’s requirements.

Faculty collective agreement, academic freedom, and workplace impacts

AI can affect teaching, research, assessment practices, and workload. Western’s faculty collective agreement provides the framework for professional responsibilities and working conditions, and it helps define guardrails for how new technologies are introduced and managed in academic work contexts, alongside principles of academic freedom and fair employment practices.

Program / unit-specific policies (when your context is specialized)

In some cases, faculties, schools, or departments maintain additional requirements—such as data security rules for assessment materials, clinical or professional obligations, or unit-specific computing standards. When AI tools are used to handle unit-controlled information (especially assessment or sensitive data), these local policies may apply in addition to university-wide policies.
