Framework Variant: Building the Chat CLI with LangChainJS
Learning Objectives
- You can identify what LangChainJS simplifies and what remains the responsibility of the surrounding application.
LangChainJS
This chapter revisits the chat CLI from the tutorial in the last chapter. The goal is to show how a framework called LangChainJS can reorganize the same design once you already understand the underlying request and response flow.
The main tutorial uses raw fetch because it keeps the engineering logic visible. You can see where the request body is assembled, where the response is checked, and where configuration enters the program.
LangChainJS can reduce some of that boilerplate, but it does so by introducing a framework layer and provider integrations. That is often useful in real projects, yet it is less vendor-neutral than the main course path.
For a Deno setup, the packages this LangChainJS-based version depends on can be installed as follows:
$ deno add npm:@langchain/core@1.1.32 npm:@langchain/openai@1.2.13
This adds the dependencies to deno.json, after which the code can use the shorthand import names. This example uses the OpenAI integration because it is widely documented. If your project uses another provider, LangChainJS also offers packages such as @langchain/anthropic. The surrounding application structure stays similar even when the provider wrapper changes.
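As a sketch, the resulting deno.json entries look roughly like this, with the versions taken from the install command above (the exact formatting deno add produces may differ slightly):

```json
{
  "imports": {
    "@langchain/core": "npm:@langchain/core@^1.1.32",
    "@langchain/openai": "npm:@langchain/openai@^1.2.13"
  }
}
```

With these entries in place, source files can import from the shorthand names, such as "@langchain/openai".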
This chapter keeps the same course-level environment variable names as the generic implementation. The integration is provider-specific, but the surrounding application configuration can still stay consistent.
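As an illustration of that consistency, the environment variables can be read by a small helper. The readEnv and loadConfig names below are hypothetical, not part of LangChainJS; the variable names LLM_API_KEY and LLM_MODEL follow the course convention, and the Node fallback is only there so the sketch also runs outside Deno:

```javascript
// Hypothetical config loader; variable names follow the course convention.
// Works under Deno (Deno.env) and falls back to Node's process.env.
const readEnv = (name) =>
  typeof Deno !== "undefined" ? Deno.env.get(name) : process.env[name];

const loadConfig = () => {
  const apiKey = readEnv("LLM_API_KEY");
  const model = readEnv("LLM_MODEL");
  if (!apiKey || !model) {
    throw new Error("Set LLM_API_KEY and LLM_MODEL before starting the chat.");
  }
  return { apiKey, model };
};

// Example (hypothetical values): with the variables set, loadConfig
// returns them as a plain object.
// const { apiKey, model } = loadConfig();
```

Failing fast here keeps configuration errors out of the chat loop itself.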
A smaller client module
The following module replaces the low-level llmClient.js from the main tutorial:
// src/langchainClient.js
import { ChatOpenAI } from "@langchain/openai";

const requestCompletion = async ({ apiKey, model, messages }) => {
  const client = new ChatOpenAI({
    apiKey,
    model,
    useResponsesApi: true,
    reasoning: { effort: "low" },
  });

  const response = await client.invoke(messages);

  // content is a plain string for simple text replies, or an array of
  // content blocks when the Responses API returns structured output.
  const content = response.content;
  const text = typeof content === "string"
    ? content.trim()
    : content?.find((block) => block.type === "text")?.text?.trim();

  if (!text) {
    throw new Error(
      "LangChainJS response did not contain usable text content.",
    );
  }

  return text;
};

export { requestCompletion };
The main loop can stay almost the same. The most important consistency point is that requestCompletion still receives the same kind of data as in the raw tutorial: the API key, the model name, and the messages array. The difference is that the provider call now lives inside a LangChainJS model object rather than in a hand-written JSON request.
To keep the framework variant compact, the code above shows only the client module. The message history, prompt loop, and transcript logging from the main tutorial can be kept exactly as before around this client module. In other words, the surrounding application shape barely changes; only the inside of llmClient.js changes substantially.
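As an illustration of how little the surrounding shape changes, the history handling can stay plain data. The helper names below are hypothetical, not part of LangChainJS; the message shape is the { role, content } form that ChatOpenAI's invoke accepts:

```javascript
// Hypothetical history helpers; the loop appends { role, content }
// objects and passes the whole array to requestCompletion.
const createHistory = (systemPrompt) => [
  { role: "system", content: systemPrompt },
];

const addTurn = (history, role, content) => {
  history.push({ role, content });
  return history;
};

const history = createHistory("You are a concise CLI assistant.");
addTurn(history, "user", "What is a command-line application?");
// After the model answers, the reply is appended the same way:
addTurn(history, "assistant", "A program controlled from a terminal.");

console.log(history.length); // 3 messages: system, user, assistant
```

Because the history is an ordinary array, the logging and retention decisions from the main tutorial apply unchanged.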
What stays the same from the earlier tutorial
Read this chapter as a comparison against the raw implementation in the previous tutorial.
- The program is still a CLI application.
- The application still needs configuration, message history, and output checks.
- The system prompt still matters.
- The surrounding software still decides what to log and how to react to failures.
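For example, the decision of how to react to failures can be made explicit as a small retry policy around the client call. The withRetry helper below is a hypothetical sketch, not something LangChainJS provides:

```javascript
// Hypothetical retry policy that wraps any async call,
// such as requestCompletion from the client module.
const withRetry = async (fn, { attempts = 3 } = {}) => {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
    }
  }
  throw lastError;
};

// Demonstration with a function that fails twice before succeeding.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
};

withRetry(flaky).then((result) => console.log(result)); // prints "ok"
```

Whether to retry at all, and how many times, remains an application decision that no framework makes for you.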
What changed is mainly the expression of the model call. LangChainJS provides objects for prompt composition and model invocation, so the code becomes shorter and more uniform. The engineering responsibilities, however, remain mostly the same.
A short run
If the environment variables are set, the program can still be run from the terminal in the same spirit as the generic version:
$ export LLM_API_KEY="your-api-key"
$ export LLM_MODEL="gpt-5-nano-2025-08-07"
$ deno run --allow-env --allow-net main.js
CLI chat started. Type 'exit' to quit.
> What is a command-line application?
A command-line application (CLI) is a program that runs in a terminal
or shell and is controlled primarily by text commands and options.
Key points:
- Invoked from a command prompt (e.g., bash, PowerShell, cmd).
- Accepts arguments/flags to specify behavior (e.g., mytool --verbose
input.txt).
- Communicates via standard input/output (text), often using
stdout/stderr.
- Usually no graphical user interface.
- Useful for automation, scripting, and batch tasks.
The exact runtime command depends on how you wrap the requestCompletion helper into the rest of the application. The main point is that you can reuse the earlier CLI structure rather than learn a second application shape from scratch.
What changed and what did not
LangChainJS simplifies prompt composition and model invocation. It also gives the project a vocabulary that can carry over into more advanced workflows later, such as output parsers, retrievers, and tool abstractions.
At the same time, important responsibilities did not disappear. The application still has to load configuration, choose the model, decide what to log, decide what errors to show, and decide how much conversation history to retain. In this example, the integration uses gpt-5-nano-2025-08-07 with a low reasoning effort. The framework changes how some pieces are written, but it does not remove the need for engineering judgment.
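The history-retention decision can also be made explicit in code. The trimHistory helper below is a hypothetical sketch that keeps the system prompt plus the most recent messages:

```javascript
// Hypothetical sketch: keep the system message plus the last maxTurns
// messages, dropping older turns to bound the request size.
const trimHistory = (history, maxTurns) => {
  const [system, ...rest] = history;
  return [system, ...rest.slice(-maxTurns)];
};

const history = [
  { role: "system", content: "You are concise." },
  { role: "user", content: "first question" },
  { role: "assistant", content: "first answer" },
  { role: "user", content: "second question" },
  { role: "assistant", content: "second answer" },
];

console.log(trimHistory(history, 2).length); // 3: system + last 2 messages
```

How aggressively to trim, and whether to summarize dropped turns instead, is exactly the kind of judgment the framework leaves to the application.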
This is the key lesson of the framework variant. A framework can reduce repetitive code, but it cannot tell the developer what the program should do when the model is slow, returns weak output, or receives unsafe input.