

Policy & Ethical Considerations

Overview

Alongside the pedagogical opportunities of AI, we must also keep in mind the policies and ethical considerations that surround it.

For one-on-one support adding AI to your course, please reach out to an instructional designer.

For questions about the content on this page, please email oat-projects-group@austincc.edu.


Policy Considerations

National Policy

The Blueprint for an AI Bill of Rights, released by the Biden administration’s Office of Science and Technology Policy in 2022, offers a framework for the following set of rights for educators and their students:

Educators

    • Input on Purchasing and Implementation: You should have input into institutional decisions about purchasing and implementation of any automated and/or generative system (“AI”) that affects the educational mission broadly conceived.
    • Input on Policies: You (or your representative in the appropriate body for governance) should have input into institutional policies concerning “AI” (including automated and/or generative systems that affect faculty, students, and staff).
    • Professional Development: You should have institutional support for training around critical AI literacy.
    • Autonomy: So long as you respect student rights (as elaborated below), you should decide whether and how to use automated and/or generative systems (“AI”) in your courses.
    • Protection of Legal Rights: You should never be subjected to any automated and/or generative system that impinges on your legal rights (including but not limited to those stated above).

Students

    • Guidance: You should be able to expect clear guidance from your instructor on whether and how automated and/or generative systems are being used in any of your work for a course.
    • Consultation: You should be able to ask questions of your instructor and administration about the use of automated and/or generative systems before submitting assignments without fear of reprisal or assumption of wrongdoing.
    • Privacy and Creative Control: You should be able to opt out of assignments that may put your creative work at risk for data surveillance and use without compensation.
    • Appeal: You should be able to appeal academic misconduct charges if you are falsely accused of using any AI system inappropriately.
    • Notice: You should be informed when an instructor or institution is using an automated process to assess your assignments, and you should be able to assume that a qualified human will be making final evaluative decisions about your work.
    • Protection of Legal Rights: You should never be subjected to any automated and/or generative system that impinges on your legal rights (including but not limited to those stated above).

ACC Policies

Please work with your department chair to learn the standards for your department.

A group of ACC faculty has begun work on an Administrative Rule draft. Please contact Stephanie Long at stephanie.long@austincc.edu for more information.

Academic Integrity

We recommend including a statement in your syllabus that explains how AI use will be treated in your course. The ACC Department of Literary and Composition Studies has created a faculty guide with example verbiage.

Note whether AI is:

    • Prohibited – Clearly state which generative AI (GAI) activities are not allowed
    • Permitted – Clearly state which GAI activities are allowed
    • Required – Clearly state which GAI activities are required

Learn more from the ACC Library’s Generative AI: AI & Academic Honesty research guide.

Ethical Considerations

Algorithmic Bias

Generative AI has raised concerns about algorithmic bias, particularly against minorities and underserved populations. For example, a study by Richard J. Chen and colleagues revealed that computational pathology AI models, despite their accuracy in diagnostic predictions, can be biased when evaluated on data stratified by race. This indicates a significant gap in the evaluation of such models, especially concerning underrepresented minorities, highlighting the need for more inclusive and diverse data in AI training and evaluation.

Another study, focusing on the impact of socioeconomic status (SES) on AI algorithm bias, emphasized the potential negative impact of biased AI algorithms on under-resourced communities or racial/ethnic minority populations. The research by Y. Juhn and team highlighted the role of SES in AI fairness and the importance of using tools like the HOUSES index to assess algorithmic bias due to SES. This study underscores the need for more comprehensive strategies to quantify and mitigate AI bias, particularly in healthcare settings where such biases can have profound implications.

Together, these examples point to an ethical imperative: biases in AI systems must be identified and corrected to ensure fair and equitable treatment of all individuals, regardless of background or socioeconomic status.
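
The stratified evaluation these studies describe can be illustrated with a short sketch. The example below uses made-up field names and toy data (all of it hypothetical): it computes a model’s accuracy separately for each demographic group, so that gaps hidden by a single overall average become visible.

    from collections import defaultdict

    def accuracy_by_group(records):
        """Return accuracy per demographic group.

        `records` is a list of dicts with hypothetical keys:
        'group' (demographic label), 'label' (ground truth),
        and 'prediction' (the model's output).
        """
        totals = defaultdict(int)
        correct = defaultdict(int)
        for r in records:
            totals[r["group"]] += 1
            if r["prediction"] == r["label"]:
                correct[r["group"]] += 1
        return {g: correct[g] / totals[g] for g in totals}

    # Toy data: overall accuracy is 75%, which hides a stark gap between groups.
    records = [
        {"group": "A", "label": 1, "prediction": 1},
        {"group": "A", "label": 0, "prediction": 0},
        {"group": "A", "label": 1, "prediction": 1},
        {"group": "B", "label": 1, "prediction": 0},
    ]
    print(accuracy_by_group(records))  # {'A': 1.0, 'B': 0.0}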

Data Privacy

In many GAI applications, the information you provide becomes part of the data sets used to train the model over time. To keep your data from being misused, do not share any personal information with these applications. Be aware of the risks that come with using GAI applications, such as data breaches or misuse. Always ensure that the application you’re using is secure and trustworthy, and check its privacy policy to make sure you are comfortable with the terms of use.

Remember, these technologies are still developing. This article shows how data isn’t necessarily stolen by sophisticated hackers; sometimes all it takes is a clever prompt.

As with other technologies, institutions must guard against AI data leaks that compromise users’ private data and their rights under the Family Educational Rights and Privacy Act (FERPA) and the Health Insurance Portability and Accountability Act of 1996 (HIPAA).

Neither you nor your students should enter any personally identifiable information (PII) into these applications.
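
One practical safeguard is to strip obvious identifiers from text before pasting it into a GAI tool. The sketch below is a deliberately simple, assumption-laden example: it redacts email addresses and US-style phone numbers with regular expressions, and it will miss many other kinds of PII (including names, as the example output shows), so treat it as a first pass rather than a guarantee.

    import re

    # Minimal illustrative patterns only; real PII detection is much harder.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

    def redact_pii(text: str) -> str:
        """Replace obvious emails and US-style phone numbers with placeholders."""
        text = EMAIL.sub("[EMAIL]", text)
        return PHONE.sub("[PHONE]", text)

    prompt = "Jane Doe (jane.doe@example.edu, 512-555-0187) asked about the essay."
    print(redact_pii(prompt))
    # Jane Doe ([EMAIL], [PHONE]) asked about the essay.  The name still leaks.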

Detection Tools

AI detection tools are generally not recommended for identifying academic dishonesty because of their failure rate. An accusation of cheating should only be made with certainty, and these tools cannot guarantee 100% accuracy. Moreover, research indicates a risk of bias in these detections, with non-native English speakers disproportionately accused of cheating. It’s also crucial to reflect on the objective of an assignment. In literature analysis, for instance, rather than a traditional essay, students could be tasked with comparing and contrasting two different critiques of the same work, fostering analytical thinking and reducing the focus on originality.
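
One way to see why a detector’s flag alone cannot justify an accusation is to work through the base rates. The sketch below applies Bayes’ rule with purely illustrative numbers (the rates are assumptions, not measurements): even a detector that sounds accurate can flag more honest work than cheating when most students are honest.

    def prob_cheated_given_flag(base_rate, true_positive_rate, false_positive_rate):
        """P(cheated | flagged) via Bayes' rule."""
        flagged = (base_rate * true_positive_rate
                   + (1 - base_rate) * false_positive_rate)
        return base_rate * true_positive_rate / flagged

    # Illustrative assumptions: 5% of submissions misuse AI, the detector
    # catches 90% of those, and it falsely flags 10% of honest work.
    p = prob_cheated_given_flag(0.05, 0.90, 0.10)
    print(f"{p:.0%}")  # 32% -- about two out of three flags land on honest work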

Although students should not be using AI to do their assignments, it is also the responsibility of the faculty member to create homework assignments that align with Bloom’s Taxonomy as adapted for AI. If an AI can easily complete an assignment you typically use for assessment, consider changing it. The library offers a research guide, Generative AI and Active Learning: AI Paper Writing and Revisions; you can also talk to an instructional designer, or even ask an AI for help creating a stronger assignment.

Hallucinations

In the context of generative AI, “hallucinations” refer to instances where AI models, such as large language models (LLMs), generate information that is inaccurate or entirely fabricated – that means ChatGPT will sometimes confidently provide incorrect information. This phenomenon is particularly concerning because these models are often used in applications where accuracy and reliability are crucial. For example, a study by Sohini Roychowdhury discusses how hallucinations in generative AI can impact financial decision-making. The study highlights that hallucinations, often caused by biased training data, ambiguous prompts, or inaccurate model parameters, can lead to AI-generated reports or decisions that are factually incorrect. This is particularly problematic in fields like finance, where such inaccuracies can have significant consequences.

However, the model’s propensity for fabricating information can be a great teaching tool. It’s critical that students understand that anything generative AI creates needs to be verified. Consider an assignment like How Can You Evaluate AI-generated Texts?, in which students evaluate two pieces of text and decide which one is a published article and which is generative AI. Discuss inaccuracies students themselves have noticed on Reddit or Twitter, and how these inaccuracies or complete fabrications could end up in the training data of a future AI model.
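
Some verification can even be automated in small ways. The sketch below (assuming network access; the function name is our own) uses the public Crossref REST API, which returns HTTP 404 for DOIs it does not know, to check whether a citation produced by an AI actually resolves to a real record. A missing DOI is a strong hint that the citation was hallucinated.

    import urllib.error
    import urllib.request

    def doi_exists(doi: str) -> bool:
        """Return True if the Crossref API has a record for this DOI."""
        url = f"https://api.crossref.org/works/{doi}"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    # Watson & Crick's 1953 paper should resolve; the second DOI is made up.
    print(doi_exists("10.1038/171737a0"))             # expected: True
    print(doi_exists("10.9999/definitely.not.real"))  # expected: False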

Contact

If you need help integrating AI into your course, please submit an Academic Technology Service Request form and choose “Instructional Design Consultation.”

For questions about the TLED GAI Guide, please email oat-projects-group@austincc.edu.