Building Applications with Large Language Models

Tutorial: Building a Chat Application


Learning Objectives

  • You can build a small CLI chat application that calls an LLM API.
  • You can manage message history, configuration, and output validation in code.
  • You can optionally save a transcript of the conversation to a JSON file.

In this tutorial, we build a small command-line chat application. The goal is not to build a full-featured chat product, but to combine the main engineering ideas from this part into one coherent program.

The project structure is as follows:

chat-cli/
├── main.js
├── deno.json
└── src/
    ├── chatSession.js
    ├── config.js
    ├── llmClient.js
    └── transcript.js

Step 1: Configuration

The program reads the API URL, API key, and model name from environment variables.

// src/config.js
const getRequired = (name) => {
  const value = Deno.env.get(name);
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
};

const loadConfig = () => {
  return {
    apiUrl: getRequired("LLM_API_URL"),
    apiKey: getRequired("LLM_API_KEY"),
    model: getRequired("LLM_MODEL"),
    transcriptPath: Deno.env.get("TRANSCRIPT_PATH") ?? null,
  };
};

export { loadConfig };
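Because getRequired reads Deno.env directly, it is hard to exercise without touching real environment variables. One possible refactor (an assumption, not part of the tutorial's files) is to inject the lookup function, so the same validation logic can be demonstrated with a plain object:

```javascript
// A sketch of a more testable variant of getRequired (hypothetical helper,
// not in src/config.js): the environment lookup is passed in as a function.
const getRequiredFrom = (lookup, name) => {
  const value = lookup(name);
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
};

// Simulated environment for demonstration.
const fakeEnv = { LLM_MODEL: "example-model" };
const lookup = (name) => fakeEnv[name];

console.log(getRequiredFrom(lookup, "LLM_MODEL")); // "example-model"

try {
  getRequiredFrom(lookup, "LLM_API_KEY");
} catch (error) {
  console.log(error.message); // "Missing required environment variable: LLM_API_KEY"
}
```

The production code can then call `getRequiredFrom(Deno.env.get, name)`, keeping the validation behavior identical to the version above.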

Step 2: Message handling

The chat session is just an array of messages plus a small helper for initialization.

// src/chatSession.js
const createInitialMessages = () => {
  return [
    {
      role: "system",
      content:
        "You are a concise assistant for software engineering questions. Keep answers short unless the user explicitly asks for more detail.",
    },
  ];
};

const addMessage = (messages, role, content) => {
  messages.push({ role, content });
};

export { addMessage, createInitialMessages };
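The two helpers can be exercised directly to see the message array take shape. The definitions are repeated here so the snippet runs on its own:

```javascript
// Copied from src/chatSession.js so this snippet is self-contained.
const createInitialMessages = () => {
  return [
    {
      role: "system",
      content:
        "You are a concise assistant for software engineering questions. " +
        "Keep answers short unless the user explicitly asks for more detail.",
    },
  ];
};

const addMessage = (messages, role, content) => {
  messages.push({ role, content });
};

// One simulated turn of conversation.
const messages = createInitialMessages();
addMessage(messages, "user", "What is a closure?");
addMessage(messages, "assistant", "A function together with its captured scope.");

console.log(messages.length); // 3
console.log(messages.map((m) => m.role).join(",")); // "system,user,assistant"
```

Note that addMessage mutates the array in place; the main loop relies on this by passing the same array to every request.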

At this stage, it is worth making the system prompt a deliberate artifact instead of an accidental sentence buried in code. A strong system prompt for this application should set scope, tone, and uncertainty handling. For example:

You are a concise software engineering assistant for a command-line application.
Answer briefly unless the user asks for more detail.
Do not invent project-specific files, commands, or facts that were not provided.
If important information is missing, say what is missing.

This is the kind of prompt that can be reviewed separately from the rest of the program.
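One way to make the prompt reviewable (a possible refactor, assuming a new src/systemPrompt.js module that is not in the tutorial's project structure) is to keep it as a named constant built from one line per rule, rather than a string literal buried in chatSession.js:

```javascript
// Hypothetical src/systemPrompt.js: the prompt as a deliberate, reviewable
// artifact. Each array element is one rule, which keeps diffs readable.
const SYSTEM_PROMPT = [
  "You are a concise software engineering assistant for a command-line application.",
  "Answer briefly unless the user asks for more detail.",
  "Do not invent project-specific files, commands, or facts that were not provided.",
  "If important information is missing, say what is missing.",
].join("\n");

console.log(SYSTEM_PROMPT.split("\n").length); // 4
```

createInitialMessages could then use this constant as the system message content, so prompt changes never touch the session logic.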


Step 3: LLM client

The client uses the built-in fetch API directly and validates that the provider returned a usable text response.

// src/llmClient.js
const extractOutputText = (data) => {
  const messageItem = (data.output ?? []).find((item) => item.type === "message");
  const textPart = messageItem?.content?.find((part) =>
    part.type === "output_text"
  );
  return textPart?.text;
};

const requestCompletion = async ({ apiUrl, apiKey, model, messages }) => {
  const response = await fetch(apiUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      input: messages,
    }),
  });

  if (!response.ok) {
    throw new Error(`LLM request failed with status ${response.status}`);
  }

  const data = await response.json();
  const content = extractOutputText(data);

  if (typeof content !== "string" || content.trim().length === 0) {
    throw new Error("LLM response did not contain usable text content.");
  }

  return content;
};

export { extractOutputText, requestCompletion };
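To see what extractOutputText actually does, it helps to feed it a minimal payload in the shape the client expects. The definition is repeated here so the snippet runs on its own:

```javascript
// Copied from src/llmClient.js so this snippet is self-contained.
const extractOutputText = (data) => {
  const messageItem = (data.output ?? []).find((item) => item.type === "message");
  const textPart = messageItem?.content?.find((part) =>
    part.type === "output_text"
  );
  return textPart?.text;
};

// A minimal well-formed payload: non-message items are skipped,
// and the text is pulled from the first output_text part.
const ok = {
  output: [
    { type: "reasoning" },
    {
      type: "message",
      content: [{ type: "output_text", text: "A CLI is a text-based program." }],
    },
  ],
};

console.log(extractOutputText(ok)); // "A CLI is a text-based program."
console.log(extractOutputText({ output: [] })); // undefined
console.log(extractOutputText({})); // undefined
```

The last two cases show why requestCompletion checks the return value: malformed or empty payloads yield undefined rather than throwing, and the caller turns that into an explicit error.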

Step 4: Optional transcript logging

If TRANSCRIPT_PATH is set, the application writes the full message history to a JSON file after each turn, overwriting the previous snapshot.

// src/transcript.js
const saveTranscript = async (messages, filepath) => {
  const text = JSON.stringify(messages, null, 2);
  await Deno.writeTextFile(filepath, text);
};

export { saveTranscript };
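The transcript file is simply the serialized message array, so it round-trips cleanly through JSON. The sketch below demonstrates the format without touching the file system:

```javascript
// What saveTranscript writes: the message array, pretty-printed as JSON.
// JSON.parse recovers the same structure, which is useful for later analysis.
const messages = [
  { role: "system", content: "You are a concise assistant." },
  { role: "user", content: "What is JSON?" },
];

const text = JSON.stringify(messages, null, 2);
const restored = JSON.parse(text);

console.log(restored.length); // 2
console.log(restored[1].role); // "user"
```

Because saveTranscript rewrites the whole file each turn, the transcript on disk always reflects the complete conversation so far, including the system prompt.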

Step 5: Main loop

The entry point loads the configuration, starts a conversation, and runs a prompt loop.

// main.js
import { addMessage, createInitialMessages } from "./src/chatSession.js";
import { loadConfig } from "./src/config.js";
import { requestCompletion } from "./src/llmClient.js";
import { saveTranscript } from "./src/transcript.js";

const config = loadConfig();
const messages = createInitialMessages();

console.log("CLI chat started. Type 'exit' to quit.");

while (true) {
  const input = prompt("> ");

  if (input === null || input.trim().toLowerCase() === "exit") {
    break;
  }

  if (input.trim().length === 0) {
    continue;
  }

  addMessage(messages, "user", input);

  try {
    const reply = await requestCompletion({
      apiUrl: config.apiUrl,
      apiKey: config.apiKey,
      model: config.model,
      messages,
    });

    addMessage(messages, "assistant", reply);
    console.log(`\n${reply}\n`);

    if (config.transcriptPath) {
      await saveTranscript(messages, config.transcriptPath);
    }
  } catch (error) {
    console.error(`Error: ${error.message}`);
  }
}

Run the program like this:

$ export LLM_API_URL="https://api.example.com/v1/responses"
$ export LLM_API_KEY="your-api-key"
$ export LLM_MODEL="example-model"
$ deno run --allow-net --allow-env main.js

If you set TRANSCRIPT_PATH, also add --allow-write. If you do not want transcript logging, omit TRANSCRIPT_PATH or keep it unset.

A short successful run could look like this:

$ deno run --allow-net --allow-env main.js
CLI chat started. Type 'exit' to quit.
> What is a command-line application?

A command-line application is a program that is run from a terminal and interacts through text input and output.

> exit

A useful failure case is missing configuration. In that case, it is better for the program to stop immediately than to send a half-formed request:

$ unset LLM_API_KEY
$ deno run --allow-net --allow-env main.js
Uncaught Error: Missing required environment variable: LLM_API_KEY

The next chapter revisits the same chat CLI through LangChainJS. Reading the two implementations side by side makes it easier to separate the framework-independent engineering ideas from the framework-specific code.