Building Applications with Large Language Models

Prompt Engineering for Applications


Learning Objectives

  • You understand how prompt engineering changes when prompts become part of an application.
  • You know how to use system prompts, templates, examples, and output constraints together.
  • You can identify common prompt failure modes in application settings.

Prompts as application components

In interactive chat use, a prompt may be disposable. In an application, a prompt is part of the system design.

This means that a good application prompt should be clear, reusable, stable enough to test, and explicit about the output format when later code depends on it.

System prompts and role-setting

Application prompts often start with a system-level instruction that defines the task and the boundaries of the response.

For example, a CLI assistant that summarizes commit messages might begin with:

You are a concise software engineering assistant.
Return only a short summary and a list of action items.
Do not invent files or commands that were not mentioned in the input.

This does not guarantee correct behavior, but it gives the model a clearer starting point.
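As a minimal sketch, the system instruction above can be packaged alongside the user input as role-tagged messages, assuming a chat-style API that accepts such a list (the function and message shape here are illustrative, not tied to any specific SDK):

```javascript
// Illustrative sketch: the system prompt is a fixed constant,
// and each request pairs it with the dynamic user input.
const SYSTEM_PROMPT = [
  "You are a concise software engineering assistant.",
  "Return only a short summary and a list of action items.",
  "Do not invent files or commands that were not mentioned in the input.",
].join("\n");

// buildMessages is a hypothetical helper; adapt the shape to your SDK.
const buildMessages = (commitText) => [
  { role: "system", content: SYSTEM_PROMPT },
  { role: "user", content: `Commit messages:\n${commitText}` },
];
```

Keeping the system prompt in one named constant makes it easy to review and test independently of the request-building code.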

Prompt templates

Applications often use templates where some parts of the prompt stay fixed and other parts are filled in dynamically.

const buildPrompt = (diffText) => {
  return `Summarize the following code changes in two sections:
1. What changed
2. Risks to check

Code changes:
${diffText}`;
};

Templates make prompt behavior easier to maintain because the surrounding code controls what changes and what stays fixed.
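A quick usage check makes the fixed/dynamic split visible. The template is repeated here so the sketch is self-contained:

```javascript
// Same template as above: the instructions are fixed,
// and only the diff text varies between calls.
const buildPrompt = (diffText) => {
  return `Summarize the following code changes in two sections:
1. What changed
2. Risks to check

Code changes:
${diffText}`;
};

const prompt = buildPrompt("- old line\n+ new line");
```

Because the surrounding code owns the template, a change to the fixed instructions is a single, reviewable edit rather than something scattered across call sites.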

Prompts evolve with the application

As an application grows, its prompts often need to change with it. If the output schema changes, the prompt must change too. If the application starts storing conversation history differently, the prompt may need new instructions about how to use that history. If later code becomes more strict about JSON structure, the prompt should reflect that more clearly.

Prompts should also be rechecked whenever the underlying model changes; even a minor model update can significantly affect how well existing prompts perform.

Prompts should live in code or in clearly versioned project files; a prompt that affects program behavior should be easy to inspect, revise, and compare just like other parts of the application.
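One possible layout, purely illustrative, is to keep prompts as named, versioned constants in a module so that any change shows up in ordinary code review (the names and version field here are assumptions, not a standard):

```javascript
// Hypothetical prompt registry: each prompt is a named entry with
// an explicit version that is bumped when behavior-relevant wording changes.
const PROMPTS = {
  commitSummary: {
    version: 3,
    text: 'Summarize the commit. Return JSON with keys "summary" and "risks".',
  },
};

// Look up a prompt by name, failing loudly on typos.
const getPrompt = (name) => {
  const entry = PROMPTS[name];
  if (!entry) throw new Error(`Unknown prompt: ${name}`);
  return entry.text;
};
```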

Output constraints

If later code needs structured output, the prompt should say so directly.

Read the following task description and return JSON with exactly two keys: "summary" and "risks". Do not include markdown fences.

Task: Add CSV export to the reporting command and reject missing output paths.

{"summary":"Add CSV export support to the reporting command.","risks":["Output path validation may be incomplete.","The CSV format may drift from the existing report fields."]}

This kind of prompt is not only about readability. It is about making the output easier to consume safely.
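The consuming side of that contract can be sketched as a small validator, assuming the prompt above: parse the reply and check that it has exactly the shape the prompt demanded (the function name is illustrative):

```javascript
// Minimal sketch of the code that consumes the model's reply.
// It enforces the same contract the prompt states: valid JSON
// with a string "summary" and an array "risks".
const parseTaskResult = (reply) => {
  let data;
  try {
    data = JSON.parse(reply);
  } catch (err) {
    throw new Error("Model reply was not valid JSON");
  }
  if (typeof data.summary !== "string" || !Array.isArray(data.risks)) {
    throw new Error('Expected keys "summary" and "risks"');
  }
  return data;
};
```

When the prompt and the validator state the same contract, a failure points clearly at either the prompt or the model, not at vague expectations.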


Examples and few-shot guidance

If a task has a specific output style, one or two short examples can improve consistency. This is especially useful when:

  • field names matter,
  • the output must follow a compact format,
  • or the response should avoid extra commentary.

At the same time, examples should be small and purposeful. Too many examples can waste tokens and make the prompt harder to maintain.
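A single small example can be carried in the template itself. This sketch (with an invented example task) shows one way to keep the example next to the instructions it supports:

```javascript
// One deliberately small example, kept close to the instructions.
// The field names in the example match what the consuming code expects.
const EXAMPLE = {
  input: "Add retry logic to the upload command.",
  output: '{"summary":"Add upload retries.","risks":["Retries may mask real failures."]}',
};

const buildFewShotPrompt = (task) => `Return JSON with exactly two keys: "summary" and "risks".

Example task: ${EXAMPLE.input}
Example output: ${EXAMPLE.output}

Task: ${task}`;
```

Because the example lives in one named constant, it is easy to see what the prompt costs in tokens and easy to update if the output schema changes.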


Common failure modes

Application prompts often fail in recurring ways:

  • they leave the output format underspecified,
  • they combine too many instructions in one place,
  • they rely on hidden assumptions not represented in the prompt,
  • or they ask for structured output but forget to forbid extra explanation text.

These failures are manageable, but only if the prompt is treated as a piece of the application rather than as an informal note.

A small but useful habit is to write prompts that are explicit enough to be copied into tests or review discussions. For example:

Read the task description below.
Return valid JSON with exactly these keys:
- summary
- risks

Do not include markdown fences.
Do not include any explanation before or after the JSON.

This prompt does not guarantee success, but it makes the intended behavior easy to inspect. If the application later fails because the model returned extra prose, the engineer can tell whether the prompt actually forbade that behavior.
