
Policy & Ethical Considerations

Overview

Faculty should remain mindful of policies and ethical considerations related to Generative Artificial Intelligence (GAI).

For one-on-one support integrating AI into your course, please reach out to an instructional designer.

For questions about the content on this page, email oat-projects-group@austincc.edu.

Policy Considerations

ACC Policies

Consult your department chair to learn about your department’s standards.

A group of ACC faculty has begun work on a draft Administrative Rule. Contact Stephanie Long at stephanie.long@austincc.edu for more information.

Academic Integrity

Include a statement in your syllabus outlining how AI will be addressed in your course. The ACC Department of Literary and Composition Studies created a faculty guide with sample language.

Note whether AI is:

  • Prohibited – Clearly state what GAI activities are not allowed.
  • Permitted – Clearly state what GAI activities are allowed.
  • Required – Clearly state what GAI activities are required.

Learn more from the ACC Library’s Generative AI: AI & Academic Honesty research guide.

Syllabus Policy

Beginning in fall 2025, all ACC course syllabi must include a Generative AI statement designating AI use as Prohibited, Permitted, or Required. This ensures students understand your expectations for AI use in the classroom.

Herbert Coleman, an adjunct psychology faculty member, created an AI tool to help faculty draft AI syllabus policies. Visit AI Syllabus Statement.

Ethical Considerations

Algorithmic Bias

GAI has raised concerns about algorithmic bias, particularly against minorities and underserved populations. For example, a study by Richard J. Chen and colleagues found that computational pathology AI models, while accurate overall, can perform unevenly across racial groups. This highlights the need for inclusive, diverse data in AI training and evaluation.

Another study, by Y. Juhn and colleagues, examined how socioeconomic status (SES) affects AI bias. The researchers found that biased algorithms can disadvantage under-resourced or minority populations, highlighting the need for tools like the HOUSES index to assess SES-based bias.

These findings underscore the ethical imperative to address bias in AI systems to ensure fair and equitable treatment of all individuals.
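
To make the subgroup evaluation these studies describe concrete, here is a minimal sketch, assuming scikit-learn and a hypothetical trained classifier with test data and demographic labels, that computes accuracy separately for each group. A large gap between groups is one signal of the kind of bias described above.

```python
# Minimal sketch of a subgroup audit. The classifier (clf), test data
# (X_test, y_test), and demographic labels (groups) are hypothetical
# placeholders for your own model and dataset.
import numpy as np
from sklearn.metrics import accuracy_score

def subgroup_accuracy(model, X, y, groups):
    """Compute accuracy separately for each demographic group."""
    scores = {}
    for g in np.unique(groups):
        mask = groups == g
        scores[g] = accuracy_score(y[mask], model.predict(X[mask]))
    return scores

# Usage, with your own fitted model and data:
# scores = subgroup_accuracy(clf, X_test, y_test, groups)
# gap = max(scores.values()) - min(scores.values())  # large gap -> investigate
```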

Data Privacy

In many GAI applications, the information you provide may be used to train models. To protect your privacy, never share personal information with AI tools. Be aware of potential risks such as data breaches or misuse, and review each application’s privacy policy before use.

Remember that these technologies are still evolving; this article shows how clever prompts alone, not just sophisticated hackers, can expose data.

Institutions must safeguard private data to comply with the Family Educational Rights and Privacy Act (FERPA) and the Health Insurance Portability and Accountability Act (HIPAA).

Neither you nor your students should include any personally identifiable information when using AI tools.
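
As one practical safeguard, the sketch below illustrates redacting obvious personally identifiable information from text before it is pasted into an AI tool. The patterns are illustrative assumptions only, not an exhaustive or FERPA-compliant redaction solution.

```python
# Minimal sketch: replace recognizable PII with placeholder tokens
# before sending text to a generative AI tool. These three patterns
# are examples, not a complete list of what counts as PII.
import re

PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                # SSN-like numbers
]

def redact(text: str) -> str:
    """Replace matched PII patterns with placeholder tokens."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact Jane at jane.doe@austincc.edu or 512-555-0147."))
# -> Contact Jane at [EMAIL] or [PHONE].
```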

Detection Tools

AI detection tools are generally not recommended: they have high error rates and are prone to falsely flagging writing by non-native English speakers. Accusations of academic dishonesty should be made only when you are certain.

Consider the purpose of your assignments. For example, rather than traditional essays, literature students might compare and contrast two critiques of the same work, promoting analytical thinking and reducing AI misuse.

Faculty should also design assignments that align with Bloom’s AI taxonomy model. If AI can easily complete an assignment, consider revising it. Review the library’s Generative AI and Active Learning guide or talk to an instructional designer for support.

Hallucinations

In generative AI, “hallucinations” occur when models generate inaccurate or fabricated information. A study by Sohini Roychowdhury highlights how these errors can distort decision-making, particularly in high-stakes areas like finance.

However, this flaw can also be a teaching tool. Students should learn that AI-generated content must be verified. Consider assignments like How Can You Evaluate AI-generated Texts? to help students evaluate credibility. Discuss real-world examples of misinformation from platforms like Reddit or X (formerly Twitter) and explore how false content can become part of AI training data.
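
As a starting point for such a verification activity, here is a minimal sketch, using Python's requests library (the function name is our own), that checks whether URLs cited in an AI-generated answer actually resolve. A dead link does not prove a hallucination, but it flags a claim for closer scrutiny.

```python
# Minimal sketch: extract URLs from AI-generated text and test whether
# each one resolves. This checks only that a page exists, not that it
# supports the AI's claim; students must still read the source.
import re
import requests

def check_cited_urls(ai_text: str) -> dict:
    """Return {url: True/False} for each URL found in the text."""
    urls = re.findall(r"https?://\S+", ai_text)
    status = {}
    for url in urls:
        try:
            r = requests.head(url, timeout=5, allow_redirects=True)
            status[url] = r.status_code < 400
        except requests.RequestException:
            status[url] = False
    return status
```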

Contact us

If you need help implementing AI into your course, please submit an Academic Technology Service Request form and choose “Instructional Design Consultation.”

For questions about the TLED GAI Guide, please email oat-projects-group@austincc.edu.