Statement on Artificial Intelligence (AI) Assistance

This policy provides an overview of approved and prohibited uses of AI technology as guidance for students and faculty across the University. Sanctions for unapproved uses will follow the Academic Honesty policy laid out in the bulletin and reiterated in course syllabi.

What is ChatGPT? 

ChatGPT is a language-processing tool that uses AI to mimic human interaction (for example, asking and answering questions), except that in this case one of the “people” is the AI tool. ChatGPT can answer questions and assist users with numerous writing tasks such as composing essays, emails, and other documents. In addition to ChatGPT, there are other similar language-processing tools: Google Bard, Microsoft Bing Chat, Jasper, and others.

ChatGPT works by attempting to understand the prompt given by the user and then producing strings of words that it predicts will best answer the user’s question. These predictions are based on patterns learned from a massive set of data compiled by humans, including books, articles, and other documents across multiple topics, styles, and genres, much of it from the internet.
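
For readers who want a concrete sense of what “predicting the next word” means, the short Python sketch below is a deliberately tiny, hypothetical illustration. The example text, names, and simple word-counting approach are invented for this illustration only; ChatGPT itself relies on neural networks trained on vastly larger data sets, not word counts.

from collections import Counter, defaultdict

# A toy "training text" standing in for the huge data sets real tools use.
training_text = (
    "students write essays and students write emails and "
    "students revise essays before submission"
)

# Count which word tends to follow each word in the training text.
words = training_text.split()
next_word_counts = defaultdict(Counter)
for current_word, following_word in zip(words, words[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "(no prediction)"

# Starting from a one-word "prompt," repeatedly predict the next word.
prompt = "students"
output = [prompt]
for _ in range(4):
    output.append(predict_next(output[-1]))
print(" ".join(output))  # prints: students write essays and students

The point of the sketch is only that such a tool continues a prompt with statistically likely words; it does not look up verified facts, which is why the accuracy concerns described below matter.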

Privacy, Accuracy, and Bias Concerns to Be Aware Of:

Privacy: Because ChatGPT draws on information posted online, its responses may include material that anyone (and that means anyone) has written. Social media posts, a chatroom thread that still exists somewhere, or other personal information posted on the internet could potentially be used in a ChatGPT response.

Accuracy: Users should be aware that some but not all information provided by ChatGPT is accurate. All information generated by this system requires careful checking. In particular, users should be aware that ChatGPT sometimes “hallucinates”: that is, it makes up information (including academic sources and non-existent people and events). For this reason, users should always cross-check information generated by ChatGPT against other, more credible sources.

Bias: Because ChatGPT was developed by humans and trained on information produced by humans, users should be aware of the biases inherent in the information it provides.

Additional Resources 

ChatGPT and Student Learning 

Learning to write while at the university is one of the most critical ways that students learn about an academic discipline. Writing fosters the deep learning and cognitive development fundamental to a university education and crosses into other information literacy practices, including learning how to use statistics, images, video, audio, art, and other disciplinary technologies.

While there are legitimate uses for ChatGPT and other similar programs, it is necessary for students and faculty to understand when AI assistance crosses a line. Ultimately, users of AI-assistant technology are solely responsible for the accuracy of the information and data they submit.

Examples of Acceptable Uses of AI Assistance: 

● Developing a topic for writing 

● Generating search terms and finding databases for research 

● Formatting citations 

● Diagnosing errors and receiving general suggestions for improving a text without using AI tools to explicitly rewrite it

● Searching for specific information or hints for further research, as one would do with search engines, browsers, and databases

● Generating AI art, audio, images, or video with proper credit to the AI tool used

● Generating basic code for further development outside of the AI tool

● Generating Search Engine Optimization (SEO)*-friendly copy from one’s originally composed texts, whether written, audio, video, or images

● Generating SEO-friendly keywords or key phrases from one’s original work

● Debugging and formatting code

● Detecting fake content or plagiarized materials 

● Data analysis, as one would do with other data-analysis tools, with proper credit

● Data mining: this process is (almost) always automated even now, but as AI advances (including chatbots), it will play a greater role in looking for patterns

Examples of Unacceptable Uses of AI Assistance: 

● Using AI to write entire essays or complete unfinished portions of an assignment

● Rewriting significant portions of a text

● Uses that violate the spirit of this policy – which is to ensure that assignments, including but not limited to writing assignments, accurately convey the author’s original ideas, abilities, and voice – are also considered unacceptable, even if they appear to conform to some of the specific examples above

● Not properly crediting AI tools for any artistic piece used for illustrative purposes 

Unacceptable uses of AI assistance will be treated the same as plagiarism and/or an academic dishonesty violation.

* SEO: the process by which people associate terms with their websites to increase the likelihood that search-engine algorithms pick them up.