Researchers
Employing AI in primary research is governed by the same policies and regulations that govern non-AI-assisted research. Most publishers will not accept AI as a co-author, but many require disclosure of how AI was used in preparing the manuscript (and, of course, in conducting the research). For the preparation and evaluation of grants, some funding agencies (e.g., NIH and CIHR) have issued direct guidance on permitted uses of generative AI, while others rely on existing policies, most importantly the recognition that a Principal Investigator is fully and solely accountable for what they submit.
AI in Grant Writing Guide
This guide covers practical ways that AI tools can support the grant development lifecycle — from early conceptualization and ideation, to literature analysis, to section-by-section drafting, to refinement and review. You'll find concrete workflow architectures, process ideas, and complete prompts you can copy and adapt to your needs.
Automated grant feedback
Western is pleased to pilot automated grant feedback through an AI tool that supports grant writing by "predicting" reviewers' critiques.
- The purpose of the automated grant feedback pilot is to provide an additional opportunity to identify, before submission, potential weaknesses that could lower an application's ranking.
- The pilot is not intended to replace or circumvent any current departmental, faculty, or institutional supports or processes.
- The tool models reviewer critiques from past competitions as benchmarks for analyzing draft applications.
- Automated grant feedback is an "on-demand" tool—an optional, self-serve resource to bolster application competitiveness.
Guidance on the use of Artificial Intelligence in the development and review of research grant proposals
The Government of Canada's interagency guidance outlines the responsible integration of generative artificial intelligence into the research grant lifecycle. Developed by CIHR, NSERC, SSHRC, and the CFI, the guidance emphasizes that while AI can assist in drafting proposals, applicants remain fully accountable for the integrity and accuracy of their submissions. Crucially, the policy strictly prohibits reviewers from using publicly available AI tools to evaluate applications, in order to prevent breaches of confidentiality and protect intellectual property. This framework ensures that as technological capabilities expand, the core values of transparency, data security, and accountability in Canadian research are upheld.