GenAI Instructional Guidelines for Faculty

Updated: March 3, 2025

Enabling Responsible GenAI Adoption

Michigan Engineering is committed to empowering staff with Generative AI (GenAI) resources, training, and guidelines, not to replacing them. People are our proudest differentiator.

Appropriate uses of GenAI tools can enhance productivity, administrative effectiveness, and creativity. They can elevate individual strengths by reducing the time spent on tasks that are challenging or mundane.

Our community is adapting, learning, and growing together. To help facilitate this journey and demonstrate our commitment to support our staff, these guidelines are meant to serve as a compass. We expect teams and individuals to approach GenAI in different ways while maintaining common values and precautions.

These guidelines are structured into the following sections:

- Instructional Tool Recommendations
- Syllabus Enhancement
- AI Tool Screening
- Adaptive Policy Revisions
- Comprehensive Faculty Training
- AI Basics
- Ethical Issues
- GenAI Disclosure

Please read each section carefully. Consider ways to take action and to engage yourself and your team members in reflection and discussion. University leadership believes it is important for our community to increase our AI literacy so that we can engage in meaningful conversations and navigate this technology appropriately. You don’t need to be an AI expert to participate and offer insightful perspectives and contributions.

Instructional Tool Recommendations

Instructors have the primary responsibility and discretion to guide the educational process. When deciding whether and how to use AI-generated content, instructors should weigh several factors, including the possibility of incorrect responses and the system’s limited ability to represent diverse languages or to explain the rationale behind its outputs coherently. AI often struggles to justify its conclusions consistently and lacks accountability, underscoring the importance of implementing these technologies ethically. Instructors must therefore use their professional judgment, grounded in a deep understanding of their students’ needs, capabilities, and subject matter, to select resources and tools that are consistent with their instructional objectives. In this context, AI should be regarded as an unreliable collaborator for students, similar to an untrustworthy classmate, and human oversight from instructors remains integral to the learning experience. For example, an instructor might assign a task that requires students both to use AI and to critique the output it generates.

Syllabus Enhancement

In our commitment to academic excellence and integrity, faculty members must provide clear and transparent guidelines regarding the use of AI-generated content within their course syllabi. Transparent guidelines enable students to understand what is and is not permissible in their coursework. The instructor should explicitly outline the acceptable uses of GenAI in the classroom so that students can appropriately attribute sources such as AI outputs. Such transparency upholds the ethical standards of scholarly work, safeguarding against intellectual dishonesty while empowering students to engage with emerging technologies responsibly and in an informed manner. The College and University should share sample syllabus statements, which instructors may adopt in their original form or edit to fit the specific needs of a course.

AI Tool Screening

The College plays a pivotal role in ensuring that any GenAI tool made available by the University or College for instructors to integrate into learning and teaching environments has undergone a thorough evaluation. A dedicated team of professionals well-versed in the technology, such as CRLT-Engin, Nexus, and CAEN, should conduct this assessment to ensure the tool’s appropriateness. It is vital to use resources provided by the University, as they have been rigorously examined and carry the guarantee that the University and any of its subcontractors keep user data, search terms, and the like private and unrecorded. In doing so, the College upholds the highest standards of excellence and security in providing instructors with GenAI tools that can be introduced into academic settings.

Adaptive Policy Revisions

With the advent of GenAI tools, existing frameworks such as the Information Technology Services (ITS) Sensitive Data Guidelines, General IT Policies, Student-Specific IT Policies, and IT Security and Privacy Policies must undergo revision. These updates must address the emerging security challenges and ethical issues associated with GenAI technologies, including academic integrity. In particular, the College’s Honor Code needs clarity concerning the proper and improper use of GenAI tools. Clarifying the boundaries of acceptable use is essential to ensure that students do not take credit for work that is not their own.

Comprehensive Faculty Training

The College should implement an optional but compensated training program for faculty members to learn about the capabilities and limitations of GenAI tools. The College should also implement and maintain a Frequently Asked Questions (FAQ) website to provide straightforward guidance and problem-solving assistance that helps instructors make informed decisions in each unique situation. Because an FAQ site is unlikely to address every situation, this resource will require forming an advisory panel, composed of instructional and research faculty along with CRLT-Engin, Nexus, and CAEN personnel, that can handle problems on a case-by-case basis. (There are both appropriate and inappropriate ways of using generative AI in academic settings, and policies such as the Honor Code should be updated to reflect this spectrum of student authorship.)

AI Basics

In light of the rapid development and adoption of GenAI technologies, we are mindful that any educational resources we introduce must cover the ethical dimensions of using these tools as thoroughly as their technical underpinnings. To that end, we are contemplating a series of instructional modules called “GenAI Basics,” which would aim to provide a nuanced understanding that merges technical grounding with ethical awareness, and which instructors could include in their course materials for students.

These modules would delve into the operational mechanics of GenAI, particularly Large Language Models (LLMs), and the critical ethical considerations accompanying their use. Before moving forward with such an initiative, it is crucial to actively seek insights and concerns from the academic community regarding the responsible application of GenAI within various fields of study.

Gathering this information will enable us to tailor the modules so that they offer instructors flexible options for supplementing their curricula with lessons that engage students in both the technical and ethical aspects of GenAI. Our goal is to ensure that students emerge with the skills to use GenAI tools effectively and to navigate the complex moral landscape they present, fostering a conscientious approach to employing these technologies in academic settings and beyond. By thoughtfully incorporating these considerations, we aim to empower students to become informed, ethical users of and contributors to the evolving domain of artificial intelligence.

Ethical Issues

The adoption of Generative AI (GenAI) technologies for educational purposes, especially their use in coding assignments, must be examined through the lens of socioeconomic equity. Differences in students’ socioeconomic backgrounds can lead to divergent levels of access to and comfort with these advanced tools, potentially widening the gap between those who can leverage GenAI effectively and those who cannot. This disparity can exacerbate inequities that already exist in traditional settings, where, for example, not all students have equal access to supportive academic and social groups outside of class for homework assistance.

In addition to these concerns, biases embedded within the AI’s training data may result in outputs that flatten diversity and perpetuate stereotypes, inadvertently influencing and homogenizing student perspectives. GenAI’s frequent inability to explain its decision-making processes and its propensity to replicate damaging social stereotypes further complicate its integration into learning environments.

Furthermore, the significant energy demands of these machine-learning systems and ethical questions about the use of human labor for reinforcement training add complexity to their use in educational settings. The College needs to be transparent about the energy consumption of these tools, including, but not limited to, a notice on the University’s AI website that clearly and prominently indicates how much energy they consume. This notice should include a comparison with a familiar non-AI task that uses a comparable amount of energy, to aid the understanding of those less conversant with the relevant physics. The compensation paid to people whose work was used for AI ‘training’ must also be made plain and explicit. Addressing these socioeconomic and ethical dilemmas is critical to ensuring that GenAI technologies serve as inclusive and equitable educational aids.

GenAI Disclosure

In approaching the development of recommendations and guidelines within this report, committee members leveraged U-M’s GenAI tools for brainstorming and editorial assistance. Peer institutions’ and other third-party GenAI guidelines were also reviewed for consideration as part of this project.