Artificial Intelligence Faculty Committee

Thoughts on crafting university guidelines

Which universities have guidelines regarding the use of LLMs?

At present, there is no comprehensive, standardized public record of which universities have established formal guidelines for the use of Large Language Models (LLMs). Universities are increasingly aware of the need for policies around AI and machine learning, but the development and formalization of such guidelines vary significantly by institution.

However, several leading universities at the forefront of AI research and ethics have likely developed such frameworks or are in the process of doing so. These include:

1. **Stanford University**: Known for its AI research, Stanford has various initiatives through its Institute for Human-Centered Artificial Intelligence (HAI) that address the ethical use of AI technologies, which may include LLMs.

2. **MIT**: Through the MIT Schwarzman College of Computing and other initiatives, MIT engages deeply with the ethical implications of AI and develops policies that could be applied to the use of technologies like LLMs.

3. **Harvard University**: Harvard's Berkman Klein Center for Internet & Society frequently explores the intersection of ethics and technology, which includes the use of AI in educational and research settings.

4. **University of California, Berkeley**: Berkeley is another leader in AI research and has initiatives through its Center for Human-Compatible AI and other related departments that examine the use of AI tools in various contexts.

5. **University of Oxford**: The Oxford Internet Institute and other departments conduct extensive research into AI ethics, which likely includes considerations for the use of LLMs in academic settings.

To find out specific policies or guidelines, one would typically look at:

- **AI or Ethics Centers at Universities**: Departments or centers dedicated to AI, ethics, machine learning, or data science are likely hubs for policy development.

- **Publications and Announcements**: Universities may publish their policies on the use of AI technologies on their official websites or through academic publications.

- **Faculty Expertise and Courses**: Information might also be found in course syllabi or faculty research areas, particularly those specializing in AI ethics and policy.

If you are affiliated with a university and interested in the guidelines for using LLMs, a good starting point would be to consult with departments that handle data science, computer science, or digital ethics. Additionally, administrative bodies concerned with research integrity or IT policy might also provide guidance or be in the process of developing relevant policies.

Suggested guidelines for using LLMs in a university setting

Developing guidelines for the use of Large Language Models (LLMs) in university settings involves addressing ethical considerations, ensuring educational effectiveness, and safeguarding academic integrity. Here are detailed guidelines to help universities implement and manage LLM technology responsibly and effectively:

### 1. Establish Clear Objectives

**Purpose and Scope**:

- Define specific educational objectives for LLM use, such as enhancing learning experiences, providing tutoring support, or assisting in research.

- Clarify the scope of acceptable uses within academic and administrative contexts.

### 2. Prioritize Data Privacy and Security

**Data Handling**:

- Ensure that all data used with LLMs complies with data protection laws (e.g., GDPR, FERPA).

- Develop protocols for data anonymization where necessary to protect sensitive information.
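
To illustrate the anonymization point above, here is a minimal sketch (in Python) of a pre-processing step that strips obvious identifiers, such as email addresses, phone numbers, and a hypothetical student-ID format, from text before it is sent to an external LLM service. The patterns and the `redact` helper are illustrative assumptions, not a complete de-identification solution.

```python
import re

# Assumed, illustrative patterns; real de-identification needs a reviewed,
# institution-specific list (names, addresses, health records, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "STUDENT_ID": re.compile(r"\b[A-Z]{2}\d{7}\b"),  # hypothetical ID format
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize the feedback from jane.doe@university.edu (ID AB1234567)."
    print(redact(prompt))
    # -> "Summarize the feedback from [EMAIL] (ID [STUDENT_ID])."
```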

**Access Controls**:

- Implement strict access controls and authentication measures to manage who can use LLM tools and what data they can access.
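
As a minimal sketch of the access-control point above, the snippet below shows a role-based allowlist that a gateway or proxy mediating LLM requests might consult. The roles, tool names, and `is_permitted` helper are hypothetical; a real deployment would integrate with the institution's identity and access-management system.

```python
# Hypothetical role-to-tool mapping; the names are illustrative only.
PERMITTED_TOOLS = {
    "faculty": {"chat_assistant", "research_summarizer"},
    "staff":   {"chat_assistant"},
    "student": {"chat_assistant"},  # e.g., a course-approved tutoring bot
}

def is_permitted(role: str, tool: str) -> bool:
    """Return True if the given role may use the given LLM tool."""
    return tool in PERMITTED_TOOLS.get(role, set())

assert is_permitted("faculty", "research_summarizer")
assert not is_permitted("student", "research_summarizer")
```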

### 3. Ensure Ethical Use

**Bias and Fairness**:

- Address potential biases in LLM outputs by providing training and guidelines on recognizing and mitigating biased information.

- Regularly evaluate LLM tools for fairness and accuracy, especially when used in sensitive contexts like admissions or grading.
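
To make the evaluation point above concrete, one simple check is a counterfactual test: submit prompt variants that differ only in a demographic marker and compare the responses they receive. In the sketch below, `query_llm` is a placeholder for whatever model interface the institution uses, and the prompt template is purely illustrative.

```python
from itertools import product

def query_llm(prompt: str) -> str:
    """Stand-in for the institution's actual model interface (an assumption)."""
    return "<model response placeholder>"

def counterfactual_check(template: str, slots: dict[str, list[str]]) -> dict[str, str]:
    """Fill the template with every combination of slot values and collect the
    model's output for each; reviewers then compare responses that should not
    differ (for example, when only a name was swapped)."""
    keys = list(slots)
    results = {}
    for values in product(*(slots[k] for k in keys)):
        prompt = template.format(**dict(zip(keys, values)))
        results[prompt] = query_llm(prompt)
    return results

if __name__ == "__main__":
    # Illustrative usage: the same feedback request with only the name varied.
    outputs = counterfactual_check(
        "Draft brief feedback on this essay draft by {name}: '...'",
        {"name": ["Emily", "DeShawn", "Ming", "Fatima"]},
    )
    for prompt in outputs:
        print(prompt)
```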

**Transparency**:

- Maintain transparency about the use of LLMs, informing students and staff about how they work and where they are used.

### 4. Safeguard Academic Integrity

**Usage in Assessments**:

- Clearly outline acceptable and unacceptable uses of LLMs in coursework and examinations to maintain academic integrity.

- Consider designating assessments as 'AI-permitted' or 'AI-prohibited', so that the use of AI tools is explicitly allowed or disallowed in each case.

**Plagiarism and Originality**:

- Update plagiarism policies to include misuse of AI-generated content.

- Evaluate detection tools that claim to identify AI-generated text, while recognizing their current limitations and the risk of false positives.

### 5. Enhance Educational Outcomes

**Integration with Curriculum**:

- Integrate LLMs into the curriculum thoughtfully, ensuring they complement educational goals and do not replace essential learning experiences.

- Provide faculty with training on how to effectively integrate LLM tools into their teaching practices.

**Pedagogical Strategies**:

- Promote the use of LLMs to enhance critical thinking, creativity, and problem-solving skills.

- Encourage instructors to use LLM outputs as a starting point for deeper analysis and discussion in class.

### 6. Provide Training and Support

**Faculty and Staff Training**:

- Offer regular training sessions for faculty and staff on the capabilities and limitations of LLMs.

- Create resources and support systems to help integrate LLM use into teaching and administrative tasks effectively.

**Student Orientation**:

- Educate students on the ethical use of LLMs, including the importance of critical evaluation of AI-generated information.

- Provide guidelines on how to use LLMs as a tool for learning enhancement rather than a shortcut.

### 7. Monitor and Evaluate

**Continuous Evaluation**:

- Regularly assess the impact of LLM use on student learning and institutional processes.

- Solicit feedback from faculty, students, and staff to improve guidelines and practices.

**Research and Development**:

- Encourage academic research on the implications of LLM use in education to stay at the forefront of technological and pedagogical developments.

- Collaborate with other institutions and organizations to share insights and strategies.

### 8. Legal and Compliance Considerations

**Regulatory Compliance**:

- Stay informed about and comply with all relevant laws and regulations affecting LLM usage in educational settings.

- Include legal review in the development and updating of guidelines to address evolving legal standards.

Implementing these guidelines requires a coordinated effort across university departments, including IT, academic affairs, student services, and legal teams. By setting clear policies and providing ongoing support, universities can leverage LLMs to enhance educational experiences while addressing ethical, legal, and academic challenges.