In its recommendations on AI ethics, the U.S. Department of Education pointed to a February 2024 proposal from NIST researchers to build on the “long-standing concepts” set out in the 1979 Belmont Report. That report identified three core principles (respect for persons, beneficence, and justice) that guide the ethical conduct of human subjects research.
The Department noted that these principles “can be used to organize an approach to ethics in the age of AI” (p. 36) and emphasized that researchers and educators are already working together to shape such guidelines, with this collaborative effort expected to continue.
I have read the Belmont Report, summarized its key ethical guidelines, and repurposed them into practical considerations for teachers who want to integrate AI in ways that uphold these ethical standards.
Ethical Guidelines from the Belmont Report
Respect for Persons
- Treat individuals as autonomous agents.
- Protect those with diminished autonomy (e.g., children, prisoners, and individuals with cognitive impairments).
- Ensure voluntary participation and informed consent, with adequate information, comprehension, and freedom from coercion.
Beneficence
- Do no harm.
- Maximize possible benefits and minimize potential harms to participants.
- Carefully assess risk-benefit ratios before research begins.
- Use well-designed studies to avoid exposing participants to unnecessary risk.
Justice
- Ensure fairness in the distribution of research benefits and burdens.
- Avoid exploiting vulnerable populations.
- Select subjects based on their relevance to the research question, not on convenience or manipulability.
- Do not limit the benefits of publicly funded research to privileged groups.
Applications of the Principles
- Informed Consent: Provide participants with clear, comprehensive information; adapt the presentation to their capacity to understand; and allow voluntary decision-making free from coercion or undue influence.
- Assessment of Risks and Benefits: Conduct a thorough, systematic evaluation of potential harms and benefits to individuals and society, and ensure that risks are justified by the anticipated benefits.
- Selection of Subjects: Apply equitable selection criteria; avoid overburdening disadvantaged groups unless the research directly addresses their needs.
Now, here is a repurposed version that applies these guidelines to AI practice in the classroom:
Beneficence: Maximize benefits, minimize harm
- Use AI to enhance student learning and engagement while actively avoiding harm, such as misinformation, biased content, or privacy violations.
- Regularly review AI-generated materials for accuracy and relevance before sharing them with students.
- Apply AI in ways that reduce workload without replacing the human connection central to teaching.
Respect for Persons: Protect autonomy and ensure informed participation
- Be transparent with students about when and how AI is used in teaching and learning.
- Give students a choice about whether to engage with AI tools, and explain the tools' potential benefits and limitations.
- Avoid using AI in ways that collect unnecessary personal data or track student behavior without consent.
Justice: Fair access and equitable outcomes
- Ensure AI resources are accessible to all students, regardless of background, language, or disability.
- Avoid using AI in ways that could reinforce existing inequities, such as disproportionately benefiting students with greater access to technology at home.
- Monitor AI applications for potential bias in recommendations, assessments, or feedback, and take corrective action when necessary.

References
- National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. U.S. Department of Health, Education, and Welfare.
- U.S. Department of Education. (2024). Empowering education leaders: A toolkit for safe, ethical, and equitable AI integration.