Building Applications with Large Language Models

Recap and Feedback


In this part, we moved from AI-assisted software engineering to building applications where a large language model is part of the system itself.

The main ideas were:

  • LLM APIs are still ordinary HTTP APIs surrounded by ordinary code,
  • prompts are application components rather than disposable chat messages,
  • conversation history is explicit program state,
  • and model outputs should be validated before they are trusted.
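Two of these ideas, history as explicit program state and validation before trust, can be sketched in a few lines. Note that `callModel` below is a hypothetical stand-in for a real LLM API call, and the expected JSON shape is an assumption for illustration:

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// Conversation history is ordinary program state: an array we own and pass in.
const history: Message[] = [
  { role: "system", content: "You are a concise assistant." },
];

// Hypothetical stand-in for an HTTP call to an LLM API.
async function callModel(messages: Message[]): Promise<string> {
  return JSON.stringify({ answer: "42" }); // pretend model reply
}

// Validate the model's output before the rest of the program trusts it.
function parseAnswer(raw: string): string {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error("model did not return valid JSON");
  }
  const obj = parsed as { answer?: unknown };
  if (typeof obj !== "object" || obj === null || typeof obj.answer !== "string") {
    throw new Error("model JSON missing expected `answer` field");
  }
  return obj.answer;
}

async function main() {
  history.push({ role: "user", content: "What is six times seven?" });
  const raw = await callModel(history);
  const answer = parseAnswer(raw); // validated before use
  history.push({ role: "assistant", content: answer });
  console.log(answer);
}

main();
```

The model call itself is one line; the surrounding code that manages state and checks the reply is where most of the application lives.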

The CLI chat tutorial brought these ideas together into one small program. The framework-variant chapter then showed that the same design can also be expressed through LangChainJS. That comparison is useful because it separates two questions:

  • what the application needs to do,
  • and how one chosen framework helps organize that work.

Although the examples were compact, they already demonstrated a useful engineering principle: the model call should be only one part of the application, not the whole application.

In the parts that follow, the same principle carries forward as the applications grow more capable.

Next, please take a moment to reflect on your work in this part and provide feedback. Your input helps us improve the course materials.
