Course Practicalities

Course Overview


Course in brief

This course teaches software engineering with large language models from two perspectives.

First, we look at LLMs as software engineering tools. This includes working with AI assistants during requirements work, design, implementation, testing, review, and documentation. The emphasis is on learning how to use AI assistance without lowering engineering quality.

Second, we look at LLMs as software components. In the implementation-oriented parts of the course, you will build command-line applications that call LLM APIs, manage prompts and conversations, validate outputs, and use surrounding program logic to make the behavior more predictable.
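As a rough sketch of the core pattern, a small program that builds a visible request body and sends it with raw fetch, consider the following. The endpoint URL, model name, and body shape are placeholder assumptions modeled on common chat APIs, not values prescribed by the course.

```javascript
// Sketch of the core pattern: build a visible request body, send it with fetch.
// The endpoint, model name, and body shape are hypothetical placeholders.

function buildChatRequest(prompt, apiKey) {
  return {
    url: "https://api.example.com/v1/chat", // hypothetical endpoint
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "example-model", // hypothetical model name
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// In a Deno CLI entry point, this could be used roughly like:
//   const { url, options } = buildChatRequest(Deno.args.join(" "), Deno.env.get("LLM_API_KEY"));
//   const reply = await (await fetch(url, options)).json();
```

Keeping the request construction in plain code like this is what makes the engineering choices, such as what exactly is sent to the model, easy to inspect and test.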

The course is a continuation of Introduction to Large Language Models. We assume that you already know the basic ideas behind language models, prompting, and the main limitations of LLMs. Here, the focus shifts from introduction to application.

The expected workload of the course is 2 ECTS. This corresponds to approximately 50 hours of study, but the actual time varies from learner to learner.

High-level learning objectives

After completing the course, you should be able to:

  • use large language models as support in common software engineering tasks without treating their output as automatically correct,
  • build small LLM-powered command-line applications using JavaScript, Deno, and raw HTTP requests,
  • structure prompts, manage conversation state, and validate generated outputs for downstream use,
  • reason about the limits, risks, and trade-offs involved in engineering with LLM-based systems.
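One of these objectives, managing conversation state, can be sketched as an append-only list of messages. The { role, content } shape here is an assumption based on common chat API formats; the course may use a different representation.

```javascript
// Sketch: conversation state as an append-only list of { role, content } turns.
// The shape is a hypothetical assumption modeled on common chat APIs.

function addTurn(history, role, content) {
  // Return a new array instead of mutating, so earlier states stay inspectable.
  return [...history, { role, content }];
}

let history = [];
history = addTurn(history, "system", "You are a concise assistant.");
history = addTurn(history, "user", "What does ECTS stand for?");
// Each API call would send the full history so the model sees prior turns;
// the assistant's reply is then appended as the next turn.
history = addTurn(history, "assistant", "European Credit Transfer and Accumulation System.");
```

Returning a fresh array on each turn is a small design choice that makes state transitions explicit and easy to test.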

The specific learning objectives are listed in each chapter.

Prerequisites

You are expected to have completed the course Introduction to Large Language Models, or to have otherwise acquired similar background knowledge. In particular, you should already know:

  • what large language models are,
  • how prompting affects model output,
  • why model outputs can be non-deterministic,
  • and why issues such as hallucinations, privacy, and bias matter.

In addition, this course assumes prior programming experience in at least one language. The examples use Deno and JavaScript. The stack is intentionally small so that the focus stays on software engineering practices rather than on framework setup.

You do not need to be a JavaScript expert in advance, but you should be comfortable reading code, running programs from the terminal, and debugging small issues independently. You should also be comfortable working in a programming environment.

Course structure

The course is divided into six parts.

  • Part 1 focuses on software engineering with AI assistance: tool categories, specification, design, implementation, testing, and review.
  • Part 2 provides the narrow programming foundation needed in the rest of the course: JavaScript, asynchronous programming, files, modules, testing, and structured data.
  • Part 3 moves from AI-assisted development to building LLM-powered applications that run from the command line.
  • Parts 4–6 build on this foundation, covering tool use, retrieval, evaluation, and responsible use.

The first three parts are the foundation for the rest of the course. Part 1 develops the workflow mindset, Part 2 provides the implementation vocabulary, and Part 3 combines the two in small but realistic CLI-based applications.

Core path and framework variants

The main implementation path of the course is deliberately explicit. When we first build LLM-powered applications, we use raw fetch, visible request bodies, visible validation logic, and small command-line programs. This makes the engineering choices easier to inspect.
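As a sketch of what visible validation logic can look like in practice, the function below checks a model reply that is expected to be JSON before any downstream code uses it. The expected field name, tasks, is a hypothetical example rather than a format used by the course.

```javascript
// Sketch: visible validation of model output that is expected to be JSON.
// The "tasks" field name is a hypothetical example of an expected shape.

function parseTaskList(rawText) {
  let data;
  try {
    data = JSON.parse(rawText);
  } catch {
    return { ok: false, error: "not valid JSON" };
  }
  if (!Array.isArray(data.tasks) || !data.tasks.every((t) => typeof t === "string")) {
    return { ok: false, error: "unexpected shape" };
  }
  return { ok: true, tasks: data.tasks };
}
```

Because the check lives in ordinary program logic rather than inside a framework, a failed validation can be handled explicitly, for example by retrying the request or reporting an error to the user.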

Later, some parts also include framework-variant chapters built with LangChainJS. Read the generic implementation first and the framework variant second. That order makes it easier to see which ideas belong to the application design itself and which belong only to one framework interface.

How to approach the materials

This course is not designed for passive reading. Many chapters include short examples, implementation snippets, and situations where the correct engineering choice depends on careful review rather than on trust in the first output produced by an LLM.

When studying the materials:

  • run the examples when possible,
  • inspect program outputs carefully,
  • revise prompts and specifications instead of only retrying them,
  • and treat testing and validation as a part of learning rather than as an afterthought.

The tutorial chapters are especially important. They are where the ideas from the surrounding chapters are turned into working programs and concrete engineering workflows.

When a part includes both a generic tutorial and a framework variant, work through the generic tutorial first. The framework variant is most useful after you already understand the request flow, the state management, and the validation boundary in the explicit version.

The field around large language models changes quickly. This course does not aim to cover every tool, model family, or agent framework. Instead, it focuses on concepts and practices that hopefully remain useful even when the specific tools change.