Security, Limits, and Responsible Use

Recap and Feedback


In this final part, we focused on the limits and responsibilities that appear when LLMs become part of real software systems.

The main ideas were:

  • LLM systems have new attack surfaces such as prompt injection and unsafe tool use.
  • Useful defenses come from system design, not only from prompt wording.
  • Automation boundaries should reflect risk, reversibility, and accountability.
  • Privacy, copyright, bias, and disclosure all affect engineering choices.

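The boundary idea in the list above can be made concrete with a small sketch. The code below is a hypothetical illustration, not from the course materials: all names (`ToolPolicy`, `gate_tool_call`, the example tools) are invented here. It shows one system-design defense — an allowlist plus a gate that lets low-risk, reversible tool calls run automatically while routing irreversible ones to a human.

```python
# Hypothetical sketch of an automation boundary for LLM tool calls.
# All names here are illustrative assumptions, not course-defined APIs.
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolPolicy:
    name: str
    reversible: bool  # can the effect be undone after the fact?
    risk: str         # "low", "medium", or "high"


# Only explicitly allowlisted tools are ever callable by the model.
ALLOWLIST = {
    "search_docs": ToolPolicy("search_docs", reversible=True, risk="low"),
    "send_email": ToolPolicy("send_email", reversible=False, risk="high"),
}


def gate_tool_call(tool_name: str, human_approved: bool = False) -> str:
    """Decide whether a model-requested tool call may run automatically.

    Returns "run", "ask_human", or "reject".
    """
    policy = ALLOWLIST.get(tool_name)
    if policy is None:
        return "reject"  # unknown tools never run, regardless of prompt content
    if policy.reversible and policy.risk == "low":
        return "run"     # low-risk and reversible: safe to automate
    # Risky or irreversible actions need an accountable human decision.
    return "run" if human_approved else "ask_human"
```

Note that the gate depends only on the tool's declared policy, never on the model's output text, which is why this kind of defense holds up even when the prompt itself has been injected.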
Taken together, these ideas complete the main picture of the course.

Software engineering with large language models is not only about getting useful output from a model. It is about designing systems that:

  • are specified clearly,
  • are structured in maintainable ways,
  • use models and tools deliberately,
  • are evaluated explicitly,
  • and remain safe and responsible enough for their intended context.

Next, please take a moment to reflect on your work in this part and provide feedback. Your input helps us improve the course materials.

Once finished, check out the Grading instructions. As mentioned earlier, the course can be completed for credit (where credit is available to you) only until the end of July 2026.