Our AI Policy
Purpose
The purpose of this policy is to establish a clear and responsible direction for how artificial intelligence (AI) may be used at the University of the Faroe Islands. AI is intended to support teaching, research, and administration, and all use must take data protection, ethics, and academic integrity into account.
Scope
The policy applies to everyone at the University – students, academic staff, researchers, and administrative staff – as well as all others who use AI in their work, teaching, or research at the University. It also covers projects and collaborations with third parties in which AI is used.
Principles
3.1 Academic Integrity
AI must not be used as a tool to write or solve academic assignments or assessments that are intended to demonstrate independent work. It is important to ensure that students’ and staff’s work reflects their own skills and knowledge.
3.2 Transparency
All users must disclose when AI has been used in their work. If a written piece of work has been created with assistance from AI, this must be noted with a citation or comment.
3.3 Innovation and Access
The University aims to create an environment where it is both permitted and encouraged to explore new ways of learning and working with AI, while maintaining ethical and professional considerations.
3.4 Adaptation
The AI policy is not static. It shall be reviewed and updated regularly, especially in response to changes in technology, legislation, and pedagogical and research-ethics considerations. The policy should be developed and maintained in close cooperation with the various user groups: academic staff, students, administration, IT, and management. Everyone should have the opportunity to contribute to its content.
Guidance on Use
AI can be used as a tool to perform tasks – e.g., for brainstorming, translation, rewriting text, or getting help with the formal aspects of a piece of work. However, AI must not be used to complete tasks and assignments that are intended to demonstrate independent thinking and professional analysis unless this is explicitly permitted.
Staff and students must not use AI in ways that mislead, generate false content, or omit relevant information.
Use of AI in research or teaching must always comply with data protection law. Users must not input sensitive personal data into open AI tools without ensuring that data is processed in accordance with personal data legislation.
Material produced with AI must be reviewed and approved by a human. The user is responsible for the results and content generated by AI.
AI must be implemented in a way that ensures equal access and does not reinforce biases or inequality.
Research conducted with the support of AI must follow guidelines for transparency, reproducibility, and ethical conduct.
Governance and Responsibility
A steering group is responsible for developing and keeping the policy up to date, gathering feedback, and ensuring that the policy remains in accordance with the law and with developments in education, research, and available technology. The group must have representatives from various departments and functions at the University.
The Research Ethics Committee (Granskingaretiska nevndin) at the University advises on the responsible use of AI in research.
Introduction and Use of Resources
Initially, no large financial investment will be made. Instead, the focus will be on providing clear guidelines, emphasizing pedagogical use, and establishing a shared knowledge center. For example, using Copilot in MS Office or the free version of ChatGPT is recommended – under secure and well-defined conditions.
Resources (licenses, etc.) should be distributed equitably, and consideration should be given to accessibility for users with diverse needs.
Feedback and Development
An open feedback form will be available on the intranet, Forms, Teams, or similar platforms, where staff and students can report experiences, ask questions, or describe problems with AI use. Annual or biannual reports on AI use and feedback will be shared publicly.
Compliance and Risk Management
When AI is misused – e.g., for plagiarism or by entering data without observing personal data regulations – this will be handled according to the current rules for breaches of academic or professional codes of conduct. Furthermore, teachers must set clear rules in their courses about what is permitted and what is not. Breaches of research ethics involving AI will be referred to the nearest manager for further action.