University of Winnipeg Foundation Artificial Intelligence (AI) Policy
1.0 Purpose
1.1 The purpose of this Policy is to establish the Foundation’s commitment to the ethical, transparent, and responsible use of Artificial Intelligence (AI) in support of its mission and operations.
1.2 The primary duty of the Board of Directors is to ensure that the use of AI aligns with the Foundation’s charitable objectives, legal obligations, and fiduciary responsibility to steward assets and data in the public interest.
1.3 This Policy provides the Foundation and its officers with a clear framework for evaluating and managing AI-related risks and opportunities while maintaining accountability, integrity, and public trust.
2.0 Foundation Beliefs
2.1 AI can enhance the Foundation’s operational efficiency, insight generation, and decision-making when applied responsibly and under human oversight.
2.2 The ethical use of AI must uphold the Foundation’s core values of transparency, respect, fairness, and accountability.
2.3 Human judgment remains paramount; AI shall inform but not replace the governance, fiduciary, or operational decisions of the Foundation.
2.4 Privacy, data protection, and compliance with all applicable Canadian laws—including the Personal Information Protection and Electronic Documents Act (PIPEDA)—are non-negotiable obligations in all AI use.
2.5 The Foundation recognizes that AI systems can perpetuate bias or misinformation. Preventing such outcomes is essential to maintaining fairness, accuracy, and credibility in all Foundation activities.
3.0 Application of AI Beliefs
3.1 Governance and Oversight: The Board of Directors shall oversee the strategic use of AI through its Governance or Audit Committee and review AI-related risks and opportunities as part of annual risk assessments.
3.2 Operational Use: Management may deploy AI tools to improve efficiency, communication, analysis, and reporting, provided that appropriate safeguards, human review, and data-handling standards are in place.
3.3 Data and Confidentiality: The Foundation shall not submit to any AI or large language model system any information that identifies, or could reasonably be used to identify, donors, staff, or stakeholders. This prohibition includes personal information, financial and personnel records, confidential Board or legal materials, and mission-critical operational data. More broadly, any AI use must exclude information whose disclosure could create privacy, security, legal, or reputational risk.
3.4 Transparency: When AI is used to generate content, insights, or communications, the Foundation will disclose this use in an appropriate and transparent manner.
3.5 Procurement and Evaluation: Prior to implementation, management shall evaluate all AI tools for compliance with ethical, legal, and operational standards and document the rationale for their use.
3.6 Training and Awareness: Staff, officers, and contractors will be informed of this Policy and receive guidance on responsible AI use.
4.0 Stakeholder Communication
4.1 The Foundation is committed to openness in its use of AI. The President and CEO will include, as appropriate, updates on AI initiatives and risk management in reports to the Board and relevant stakeholders.
5.0 Misuse, Discrimination, and Accountability
5.1 The Foundation prohibits the use of AI tools for purposes inconsistent with its mission, such as generating misleading content, producing discriminatory outputs, or making decisions that unfairly impact individuals or groups.
5.2 Any suspected misuse or breach of this Policy shall be reported to the Chief Operations Officer, who will investigate and report findings to the President and CEO.
5.3 The Foundation will take corrective action, including suspension of AI tools or contracts, where misuse is confirmed.
6.0 Responsibilities and Review
6.1 The Board of Directors, through its Governance Committee, is responsible for monitoring the application of this Policy.
6.2 The President and CEO is responsible for ensuring compliance and reporting annually to the Board on AI use, risks, and opportunities.
6.3 This Policy will be reviewed at least every two years, or sooner if there are material changes in legislation, technology, or Foundation practice.