AI

Welcome to the DRAFT/BETA DTU AI Info Hub.

This site is meant to assist you, as a course responsible, when considering the use of generative AI (like OpenAI ChatGPT and DALL-E, Microsoft Copilot, etc.) in your course(s) at DTU.

Depending on the desired level of engagement with generative AI in DTU courses, multiple strategies may be envisioned:

  • “No-no” courses that teach students basic personal competences, with limited use of AI during the course and no use of AI in exams. This may include e.g. basic mathematics and programming.

  • AI-adapted courses, where the learning objectives encourage students to use AI, yet AI use is partly restricted in exams. An example of such a course could be 02450 Introduction to machine learning, where the use of tools is tested in project reports, while a final individual assessment is carried out as a multiple-choice exam without AI tools.

  • AI-first courses, i.e. courses that encourage the use of AI in all phases of the course, both during learning and in the exams/evaluation.

The two latter strategies may harvest an “AI bonus” and engage students at more complex scientific levels than pre-AI courses.

For more on this, see e.g. the notes from the Workshop on AI in DTU Compute/Cogsys Courses.


Getting Started

Flows

To help you get started, a number of flows have been designed to provide inspiration and links to relevant material.

Follow the links if you are considering how generative AI may impact the Learning Objectives, the Course Content and Teaching, or the Evaluation.

Site Overview

In addition to the flows, this site is organised around the following sections that you can also browse directly:

  • Checklists – A number of checklists to help you organise or plan your course
  • Inspiration – Examples from other courses and activities at DTU
  • News – News from DTU that is highly relevant for the use of generative AI
  • FAQ – Frequently Asked Questions
  • Rules – Links to relevant rules and guidelines
  • Tools – An overview of (perhaps) relevant tools to get you started

Learning Objectives


Course Content and Teaching

Generative AI could potentially improve teaching and learning, help students better reach the learning objectives, or improve the evaluation of your course.

Here is a flow you may use to get started when considering how to use generative AI in your course:

  • First, list the current learning objectives.
  • For each, describe the desired outcome, as detailed and concrete as possible.
  • Consider how generative AI will influence these outcomes, listing the pros and cons.
  • Decide (yes/no) whether AI should be used to support these outcomes, weighing the pros and cons.
  • For areas where generative AI will be used, describe in as much detail as relevant how generative AI can be USED to SUPPORT the outcomes.
  • Consider what needs to happen to make this work.
  • Consider how this will impact the evaluation/exam.
  • Consider how this will impact the learning objectives.
  • Define measurable criteria for successful implementation.
  • Evaluate and adjust after the course has run.

For a template and an example, see AI in 41031 Industrial Design (.xls format).

You can decide not to include generative AI tools or methods in your course, but students may nevertheless wish to use them, or be inspired elsewhere to do so. In that case, you would instead have to guide students on acceptable use (if any).


Evaluation

Generative AI will impact the evaluation of many courses.

To get started, consider:

  • What will be the difference between a student who uses generative AI and one who doesn’t?

  • Consider to what degree generative AI is part of your course/teaching.

  • Do you allow students to use generative AI in similar ways at the evaluation (exam or project hand-in)?

    • If NOT, how do you prepare the students for the evaluation/exam situation?

    • If you DO, but the use of generative AI needs to be limited, how will you ensure or check that students cannot use generative AI when not intended?

      • Do you need a pen-and-paper-only exam? (A strictly limited digital environment that would prevent students from running e.g. a large language model on their own computer is not currently available.)
      • If you are not planning a closed-book exam, what will the students be able to bring themselves (books, notes, …)? Will students then have to buy books instead of relying on electronic books?
      • Should some material be made available in print to the students if their access to electronic resources is limited? This could be e.g. a compendium or a “cheat sheet”.
      • Should the evaluation be divided into multiple parts, where some parts allow the use of generative AI and others don’t?
  • If students are allowed to use generative AI more broadly for preparing the evaluation material (project report) or exam:

    • Can students be treated equally, or would e.g. students with company-paid access to more advanced language models be able to get better grades at the exam?
    • How will your evaluation criteria shift – perhaps placing more emphasis on process rather than results?
    • How should your students be trained in documenting the use of generative AI (citing properly)?
    • You may also want to review the Critical Thinking checklist.
  • Consider how this will impact the learning objectives.


Thanks

This site has been created with the collective intelligence of the following people:

Jes Frellsen, Chaudhary Ilyas, Rasmus Ørtoft Aagaard, A. Emilie Wedenborg, Tommy Alstrøm, Qianliang Li, Søren Føns, Lina Skerath, Finn Årup Nielsen, Jakob Eg Larsen, Susanne Winter, Ivana Konvalinka, Tobias Andersen, Søren Hauberg, Teresa Scheidt, Kristoffer Stensbo-Smidt, Lenka Tetkova, Camilla Narine, Mikkel N Schmidt, Georgios Avanitidis, Vagn L Hansen, Kyveli Kompatsiari, Fabian Mager, Vassilis Lyberatos, Morten Mørup, Nicki Skafte, Federico Delusso, Hiba Nassar, Hanlu He, Beatrix Miranda Nielsen, Lasse Skytte Hansen, Sune Lehmann, Laura Alessandretti, Jonas Vestergaard, Tiberiu-Ioan Szatmari, Antonio Desiderio, Laurits Fredsgaard, Michael Deininger, Lene Kyhse Bisgaard, … Lars Kai Hansen and Per Bækgaard.

Blame only the last author/editor for errors.