Building Applications with Large Language Models

Overview
In Part 1, we used LLMs as software engineering assistants. In Part 2, we built the programming foundation needed for small command-line applications. In this part, the two come together: we start building software where a large language model is one component inside a larger system.

That changes the engineering problem. Instead of only asking whether the AI produced a helpful suggestion, we now ask:

  • how the program constructs a request,
  • how it parses the response,
  • how it stores conversation state,
  • and what it does when the model's output is malformed or incomplete.
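These concerns are ordinary code that lives around the model call. The following is a minimal sketch, assuming a hypothetical `call_model` function that stands in for whatever client library you actually use (here it just returns canned JSON so the example is self-contained):

```python
import json

def call_model(prompt: str) -> str:
    # Placeholder for a real LLM client call; returns canned JSON here.
    return json.dumps({"answer": "42"})

def build_prompt(history: list[str], user_input: str) -> str:
    # Construct the request: prior turns plus the new question.
    return "\n".join(history + [f"User: {user_input}", "Assistant:"])

def parse_response(raw: str) -> dict:
    # Parse the response, treating malformed output as a recoverable error.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"error": "model returned malformed output"}
    if "answer" not in data:
        return {"error": "model response missing 'answer'"}
    return data

# Conversation state lives in the application, not in the model.
history: list[str] = []
prompt = build_prompt(history, "What is 6 * 7?")
result = parse_response(call_model(prompt))
if "error" not in result:
    history.append("User: What is 6 * 7?")
    history.append(f"Assistant: {result['answer']}")
```

Every piece of this, apart from the one line inside `call_model`, is conventional programming: string assembly, parsing, error handling, and state kept in a list.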

Figure 1 summarizes the basic flow that we will use repeatedly in this part.

Fig 1. — A small LLM-powered application still includes ordinary software around the model call: prompt assembly, validation, and output handling.

What stays outside the model

One of the easiest mistakes in this part is to treat the model as the whole application. It is not; it is one component. The application still has to decide how configuration is loaded, what the prompt looks like, what output shape is acceptable, what should happen if the model fails, and what should be shown to the user.
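Those decisions can be made concrete in a small skeleton. This is a sketch under assumptions, not a definitive design: `call_model` is a stub for a real client, and the environment variable names (`APP_MODEL`, `APP_MAX_RETRIES`) are illustrative, not from any real library:

```python
import json
import os

def load_config() -> dict:
    # How configuration is loaded is an application decision:
    # environment variables here, a config file elsewhere.
    return {
        "model": os.environ.get("APP_MODEL", "example-model"),
        "max_retries": int(os.environ.get("APP_MAX_RETRIES", "2")),
        "fallback_message": "Sorry, I couldn't produce an answer.",
    }

def call_model(prompt: str, model: str) -> str:
    # Placeholder for a real client call; returns canned JSON here.
    return '{"summary": "ok"}'

def run(user_input: str) -> str:
    cfg = load_config()
    for _ in range(cfg["max_retries"] + 1):
        raw = call_model(f"Summarize: {user_input}", cfg["model"])
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue                    # model failed this attempt: retry
        if isinstance(data.get("summary"), str):
            return data["summary"]      # the acceptable output shape
    return cfg["fallback_message"]      # what the user sees on failure
```

Note that the retry loop, the shape check, and the fallback message are all decisions the surrounding software makes; the model never sees them.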

As you work through this part, keep in mind that we are building software that surrounds a model; the software is not the model.

The structure of this part is as follows:

Finally, Recap and Feedback summarizes the part and prepares you for later work with tools and more capable systems.