Framework Variant: Tool Calling with LangChainJS
Learning Objectives
- You can read a small LangChainJS tool definition with a schema.
- You understand that a framework can organize tool calls without removing the application’s validation boundary.
This chapter revisits the tool-connected assistant from the tutorial in this part. The main tutorial showed the execution loop explicitly so that the application boundary stayed easy to inspect. This chapter keeps the same task but expresses more of the implementation through LangChainJS.
Installing the packages
A minimal Deno setup for this LangChainJS-based example can be installed like this:
$ deno add npm:@langchain/core@1.1.32 npm:@langchain/openai@1.2.13 npm:zod@4.3.6
If you have already worked through the chapter on building a Chat CLI with LangChainJS, you only need to add the zod package for schema validation.
As in the earlier tutorial, we keep the environment variables generic. The provider integration is provider-specific, but the application still reads LLM_API_KEY and LLM_MODEL just as before.
Defining tools
LangChainJS tools can be defined with a function, a name, a description, and a schema:
// src/langchainTools.js
import { tool } from "@langchain/core/tools";
import * as z from "zod";
import { countIssuesByStatus, getIssueById } from "./issueRepository.js";

const getIssueByIdTool = tool(
  async ({ issueId }) => {
    return await getIssueById(issueId);
  },
  {
    name: "get_issue_by_id",
    description: "Retrieve one local issue by numeric id.",
    schema: z.object({
      issueId: z.number().int(),
    }),
  },
);

const countIssuesByStatusTool = tool(
  async () => {
    return await countIssuesByStatus();
  },
  {
    name: "count_issues_by_status",
    description: "Count the local issues by status.",
    schema: z.object({}),
  },
);

export { countIssuesByStatusTool, getIssueByIdTool };
This definition is more compact than the raw JSON schema from the main tutorial, but the same engineering ideas remain. The name still needs to be clear, the description still needs to guide the model, and the schema still needs to make valid inputs explicit.
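For reference, the same tool expressed in the raw style of the main tutorial would look roughly like the following sketch. The field layout follows the flat OpenAI-style function-tool format; treat it as an illustration of the shape rather than a verbatim copy of the earlier chapter:

```javascript
// Sketch: get_issue_by_id as a raw JSON-schema tool definition, in the
// flat OpenAI-style function-tool format used by the main tutorial.
const getIssueByIdRawTool = {
  type: "function",
  name: "get_issue_by_id",
  description: "Retrieve one local issue by numeric id.",
  parameters: {
    type: "object",
    properties: {
      issueId: { type: "integer" },
    },
    required: ["issueId"],
    additionalProperties: false,
  },
};
```

Everything the zod schema declared in one line (`z.number().int()`) is spelled out here by hand, which is exactly the repetition the LangChainJS `tool` helper removes.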
Binding the tools to a model
The model can then be bound to the tool set:
// src/langchainAssistant.js
import { ChatOpenAI } from "@langchain/openai";
import {
  countIssuesByStatusTool,
  getIssueByIdTool,
} from "./langchainTools.js";

const model = new ChatOpenAI({
  apiKey: Deno.env.get("LLM_API_KEY"),
  model: Deno.env.get("LLM_MODEL") ?? "gpt-5-nano-2025-08-07",
  useResponsesApi: true,
  reasoning: { effort: "low" },
});

const toolEnabledModel = model.bindTools([
  getIssueByIdTool,
  countIssuesByStatusTool,
]);

export { toolEnabledModel };
An invocation can now return normal text, tool calls, or both:
const assistantMessage = await toolEnabledModel.invoke([
  {
    role: "system",
    content:
      "You are a concise issue assistant. Use tools when the user asks about local issues or issue counts.",
  },
  { role: "user", content: "How many issues are open?" },
]);

console.log(assistantMessage.tool_calls ?? []);
The important comparison point is that the tool boundary remains visible. LangChainJS makes the tool definitions and model binding shorter, but it does not change the architectural fact that the model proposes a tool call and the application decides what happens next.
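That decision point can be made explicit with a small guard in front of tool execution. The sketch below is illustrative application code, not a LangChainJS API; the allowlist simply mirrors the two tools defined earlier in this chapter:

```javascript
// Sketch: an application-side allowlist that decides whether a proposed
// tool call may be executed at all. Illustrative names, not LangChainJS API.
const allowedTools = new Set(["get_issue_by_id", "count_issues_by_status"]);

function assertAllowedToolCall(toolCall) {
  if (!allowedTools.has(toolCall.name)) {
    throw new Error(`Refusing unknown tool: ${toolCall.name}`);
  }
  return toolCall;
}

// A call the model might propose for "How many issues are open?":
const proposed = { name: "count_issues_by_status", args: {} };
assertAllowedToolCall(proposed); // passes; an unlisted name would throw
```

The model can only propose names; this function is where the application enforces which proposals are executable.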
The framework-native loop is also fairly direct. The model returns tool calls in assistantMessage.tool_calls, each tool can be invoked with that tool-call object, and the returned ToolMessage objects are appended back into the conversation before the next model turn:
const messages = [
  {
    role: "system",
    content:
      "You are a concise issue assistant. Use tools when the user asks about local issues or issue counts.",
  },
  { role: "user", content: "How many issues are open?" },
];

const assistantMessage = await toolEnabledModel.invoke(messages);
messages.push(assistantMessage);

const toolsByName = {
  get_issue_by_id: getIssueByIdTool,
  count_issues_by_status: countIssuesByStatusTool,
};

for (const toolCall of assistantMessage.tool_calls ?? []) {
  const tool = toolsByName[toolCall.name];
  if (!tool) {
    throw new Error(`Unknown tool: ${toolCall.name}`);
  }
  const toolResult = await tool.invoke(toolCall);
  messages.push(toolResult);
}

const finalResponse = await toolEnabledModel.invoke(messages);
console.log(finalResponse.text);
Each returned ToolMessage includes the matching tool-call identifier, so the model can connect the tool result back to the request that produced it. This keeps the second turn compact while still leaving the validation and execution boundary inside the application.
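Concretely, the objects flowing through that loop have roughly the following shapes. The field names follow LangChainJS conventions (a `tool_calls` entry carries `name`, `args`, and `id`; the answering ToolMessage carries the matching `tool_call_id`), but the identifier and content values here are invented for illustration:

```javascript
// Sketch: the approximate shape of a model-proposed tool call and the
// ToolMessage that answers it. The id and content values are made up.
const exampleToolCall = {
  name: "count_issues_by_status",
  args: {},
  id: "call_abc123", // provider-generated identifier
  type: "tool_call",
};

const exampleToolMessage = {
  role: "tool",
  content: '{"open":2,"in_progress":1}', // serialized tool result
  tool_call_id: "call_abc123", // echoes the id of the call above
};
```

The shared identifier is what lets the model pair each result with the request that produced it, even when several tool calls are answered in one turn.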
A short comparison with the generic tutorial
In the explicit tutorial, we wrote:
- a raw schema object,
- a raw model request that included the tools,
- and a loop that executed tool calls manually.
In this framework variant, we instead write:
- a LangChainJS tool object,
- a model bound to those tools,
- and a smaller invocation step that returns tool call data.
That reduces repetitive code while keeping the main design choices easy to inspect. It also makes it easier to see which parts of the earlier tutorial were framework-independent ideas and which parts belonged only to one implementation style.
A short run
If the model and tool definitions are wrapped into the same CLI shell as the earlier tutorial, the interaction still looks familiar:
$ export LLM_API_KEY="your-api-key"
$ export LLM_MODEL="gpt-5-nano-2025-08-07"
$ deno run --allow-read --allow-env --allow-net main.js
> How many issues are open?
There are currently 2 open issues and 1 issue in progress.
The interface stays familiar on purpose. What changes is the code organization behind it.