Generative AI is a subset of artificial intelligence that leverages machine learning techniques to generate human-like content. From writing essays to creating art, generative AI has a wide range of applications. It has the potential to bring significant benefits to teaching, learning, and research, but these tools also come with inherent risks.
Generative AI offers exciting possibilities for universities, but it’s important to use these tools responsibly. By following these guidelines, universities can harness the power of generative AI while ensuring ethical use. The goal is not to replace human creativity and effort but to enhance it.
Benefits of Generative AI
Generative AI can be a powerful tool in the university setting. One example of generative AI is ChatGPT, a chatbot developed by a private company called OpenAI. Users can enter question prompts, and within seconds ChatGPT will produce text-based responses in the form of poems, essays, articles, letters, and more. It can also create structured responses such as tables, bulleted lists, and quizzes. ChatGPT can provide translation and mimic language style and structure, and it can be used to develop and debug software code. New and expanded uses continue to be developed and launched. A similar tool, DALL-E, uses AI to create images.
Guidelines for Use
While generative AI can be beneficial, it’s important to use it responsibly. Here are the current UWSA guidelines for use. Additional guidance may be forthcoming as circumstances evolve.
Allowable Use
- Academic integrity: It is up to each instructor to decide whether the use of AI is allowed in any course. If the use of AI is allowed in coursework, instructors must provide clear expectations on how students should cite the use of generative AI in their work. When adding a prohibition on AI tools to assignment instructions, it is best to prohibit the ‘use of generative AI tools’ generally, rather than the use of one particular tool, such as ChatGPT, because many generative AI tools are available today.
- Intellectual property: Creating an account to use tools like ChatGPT requires the sharing of personal information. Depending on context, the use of ChatGPT may also mean sharing student intellectual property or student education records with ChatGPT under its terms and conditions of use. Individual students may have legitimate concerns and therefore may be unwilling to create an account. Discuss these concerns and consider alternatives. If you will require the use of ChatGPT, make this explicit in the syllabus.
- Privacy: Academic records, such as exams and course assignments, are considered student records and are protected by FERPA. For example, ChatGPT should not be used to draft initial feedback on a student’s submitted essay that includes the student’s identifying information.
- Data classified as low risk, under UW Administrative policy SYS 1031, Information Security: Data Classification and Protection, can be freely used with generative AI tools such as ChatGPT.
- In all cases, use should be consistent with UW Board of Regents Policy, RPD 25-3: Acceptable Use of Information Technology Resources.
Prohibited Use
At present, any use of public instances of generative AI tools should be with the assumption that no personal, confidential, proprietary, or otherwise sensitive information may be used with it. In general, student records subject to FERPA and any other information classified as Medium or High Risk (per SYS 1031) should not be used in public instances of generative AI tools.
Similarly, public instances of generative AI tools should not be used to generate output that would be considered confidential. Examples include, but are not limited to: proprietary or unpublished research; legal analysis or advice; recruitment, personnel, or disciplinary decision-making; completion of academic work in a manner not allowed by the instructor; creation of non-public instructional materials; and grading.
Other Considerations
Accuracy: Generative AI tools are not infallible, and their accuracy is subject to a variety of factors, including:
- A tendency to fill in replies with incorrect data when not enough information is available on a subject.
- An inability to understand the context of a particular situation, which can result in inaccurate outputs.
- Training on large data sets scraped from the internet, which are full of biased data that inform the models.
Implicit Biases: The algorithms used by these technologies can, and do, replicate and produce biased outputs (including racist and sexist content), along with incorrect or misleading information.
Confidentiality: All content entered into generative AI tools may become part of the tool’s dataset and inadvertently resurface in response to other prompts.
Personal Liability: Generative AI tools, such as ChatGPT, use click-through agreements. Click-through agreements, including the OpenAI and ChatGPT terms of use, are contracts. Individuals who accept click-through agreements without delegated signature authority may face personal consequences, including responsibility for compliance with the terms and conditions.