Tools, Agents, and Workflow Automation

Tutorial: Building a Tool-Connected Assistant


Learning Objectives

  • You can build a CLI assistant that uses explicit tools to answer questions.
  • You can describe tools with schemas and execute tool calls in ordinary JavaScript.
  • You can separate model decisions from deterministic tool execution.

In this tutorial, we extend the command-line chat application from Part 3. The assistant will answer questions about a local issue list stored in a JSON file. Instead of asking the model to invent answers from memory, we give it access to deterministic tools.

The project structure is as follows:

tool-assistant/
├── data/
│   └── issues.json
├── deno.json
├── main.js
└── src/
    ├── config.js
    ├── issueRepository.js
    ├── llmClient.js
    ├── toolExecutor.js
    └── toolSchemas.js

Step 1: Local issue data

Assume that data/issues.json contains entries like the following:

[
  { "id": 1, "title": "Export command fails on empty data", "status": "open" },
  { "id": 2, "title": "Retry logic missing for API calls", "status": "in-progress" },
  { "id": 3, "title": "Transcript file path is not configurable", "status": "open" }
]

The assistant will use tools to inspect this local file.

Step 2: Deterministic repository functions

The repository reads the local data and exposes plain JavaScript helpers.

// src/issueRepository.js
const loadIssues = async () => {
  const text = await Deno.readTextFile("./data/issues.json");
  return JSON.parse(text);
};

const getIssueById = async (issueId) => {
  const issues = await loadIssues();
  return issues.find((issue) => issue.id === issueId) ?? null;
};

const countIssuesByStatus = async () => {
  const issues = await loadIssues();
  const counts = {};
  for (const issue of issues) {
    counts[issue.status] = (counts[issue.status] ?? 0) + 1;
  }
  return counts;
};

export { countIssuesByStatus, getIssueById, loadIssues };
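Note that countIssuesByStatus mixes file access with counting logic. For unit testing, the counting part can be factored into a pure helper that takes the issue list as an argument. A minimal sketch (the helper name countByStatus and the sample data are ours, not part of the project):

```javascript
// Pure counting helper, factored out so it can be tested without file access.
const countByStatus = (issues) => {
  const counts = {};
  for (const issue of issues) {
    counts[issue.status] = (counts[issue.status] ?? 0) + 1;
  }
  return counts;
};

const sample = [
  { id: 1, status: "open" },
  { id: 2, status: "in-progress" },
  { id: 3, status: "open" },
];

const counts = countByStatus(sample);
// counts.open === 2 and counts["in-progress"] === 1
```

With this split, loadIssues stays responsible for I/O while the counting logic can be exercised with in-memory data.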

Step 3: Tool schemas

Now we describe these functions to the model.

// src/toolSchemas.js
const tools = [
  {
    type: "function",
    name: "get_issue_by_id",
    description: "Retrieve one local issue by numeric id.",
    parameters: {
      type: "object",
      properties: {
        issueId: { type: "integer", description: "The issue id." },
      },
      required: ["issueId"],
    },
  },
  {
    type: "function",
    name: "count_issues_by_status",
    description: "Count the local issues by status.",
    parameters: {
      type: "object",
      properties: {},
    },
  },
];

export { tools };

Step 4: Tool execution

The model does not execute the tools itself. The program does that.

// src/toolExecutor.js
import { countIssuesByStatus, getIssueById } from "./issueRepository.js";

const executeToolCall = async (toolCall) => {
  const name = toolCall.name;
  const args = JSON.parse(toolCall.arguments ?? "{}");

  if (name === "get_issue_by_id") {
    if (!Number.isInteger(args.issueId)) {
      throw new Error("get_issue_by_id requires an integer issueId.");
    }
    return JSON.stringify(await getIssueById(args.issueId));
  }

  if (name === "count_issues_by_status") {
    return JSON.stringify(await countIssuesByStatus());
  }

  throw new Error(`Unknown tool: ${name}`);
};

export { executeToolCall };

This execution layer is a good place to reject malformed arguments. The model may propose a tool call, but the surrounding application still decides whether that call is valid.
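The per-tool checks above can also be generalized. One option is to validate proposed arguments directly against the tool's JSON-schema parameters block. The following helper is a hypothetical sketch that covers only required keys and primitive types, not a full JSON Schema validator:

```javascript
// Hypothetical helper: check proposed arguments against a tool's
// "parameters" schema. Returns an error message, or null if valid.
const validateArgs = (schema, args) => {
  for (const key of schema.required ?? []) {
    if (!(key in args)) return `missing required argument: ${key}`;
  }
  for (const [key, value] of Object.entries(args)) {
    const spec = schema.properties?.[key];
    if (!spec) return `unexpected argument: ${key}`;
    if (spec.type === "integer" && !Number.isInteger(value)) {
      return `${key} must be an integer`;
    }
    if (spec.type === "string" && typeof value !== "string") {
      return `${key} must be a string`;
    }
  }
  return null;
};
```

Running the same check for every tool keeps the validation boundary in one place instead of scattering it across if-branches.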


Many types of tools

In practice, an assistant may expose many tools: some invoked from the command line, some running in a Docker container, and some wrapping remote API calls. The important point is that the model does not need to know how a tool works internally. It only needs to know what the tool does and what arguments it requires.
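One common way to express this uniformity in code is a registry that maps tool names to handler functions, so the dispatch logic never sees how each tool is implemented. A sketch with illustrative stub handlers (not part of the tutorial project):

```javascript
// Sketch of a tool registry: each entry hides how the tool actually runs.
// The handlers here are stand-ins; real ones might shell out, call Docker,
// or hit a remote API behind the same async-function interface.
const registry = {
  get_issue_by_id: async (args) => ({ id: args.issueId, title: "stub" }),
  count_issues_by_status: async () => ({ open: 2 }),
};

const dispatch = async (name, args) => {
  const handler = registry[name];
  if (!handler) {
    throw new Error(`Unknown tool: ${name}`);
  }
  return JSON.stringify(await handler(args));
};
```

Adding a tool then means adding one schema and one registry entry, without touching the dispatch code.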

Step 5: Call the model with tools

The LLM client sends the messages and the tools together.

// src/llmClient.js
const requestAssistantTurn = async ({ apiUrl, apiKey, model, messages, tools }) => {
  const response = await fetch(apiUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      input: messages,
      tools,
      tool_choice: "auto",
    }),
  });

  if (!response.ok) {
    throw new Error(`LLM request failed with status ${response.status}`);
  }

  return await response.json();
};

export { requestAssistantTurn };
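A production client would usually also retry transient failures such as rate limits or server errors. A minimal sketch, with the fetch call injected as a function so the retry logic can be tested without network access (the helper name and attempt count are our own choices):

```javascript
// Sketch: retry a request a fixed number of times, returning the first
// successful response. fetchFn is injected for testability.
const withRetries = async (fetchFn, attempts = 3) => {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      const response = await fetchFn();
      if (response.ok) {
        return response;
      }
      lastError = new Error(`request failed with status ${response.status}`);
    } catch (error) {
      lastError = error;
    }
  }
  throw lastError;
};
```

In a real client one would also wait between attempts (for example with exponential backoff) rather than retrying immediately.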

Step 6: Orchestrate one tool-using turn

The main loop now becomes slightly more complex than in Part 3. If the model asks for a tool, the application executes it and gives the result back to the model before printing the final answer. With the Responses API, the conversation history can contain both ordinary role-based messages and tool-related items.

// main.js
import { loadConfig } from "./src/config.js";
import { requestAssistantTurn } from "./src/llmClient.js";
import { tools } from "./src/toolSchemas.js";
import { executeToolCall } from "./src/toolExecutor.js";

const getOutputText = (response) => {
  const messageItem = (response.output ?? []).find((item) => item.type === "message");
  const textPart = messageItem?.content?.find((part) =>
    part.type === "output_text"
  );

  if (typeof textPart?.text !== "string" || textPart.text.trim().length === 0) {
    throw new Error("Model response did not contain usable text content.");
  }

  return textPart.text;
};

const config = loadConfig();
const messages = [
  {
    role: "system",
    content:
      "You are a concise issue assistant. Use tools when the user asks about local issues or issue counts.",
  },
];

while (true) {
  const input = prompt("> ");
  if (input === null || input.trim().toLowerCase() === "exit") {
    break;
  }

  messages.push({ role: "user", content: input });

  const firstResponse = await requestAssistantTurn({
    apiUrl: config.apiUrl,
    apiKey: config.apiKey,
    model: config.model,
    messages,
    tools,
  });

  messages.push(...(firstResponse.output ?? []));
  const toolCalls = (firstResponse.output ?? []).filter((item) =>
    item.type === "function_call"
  );

  if (toolCalls.length) {
    for (const toolCall of toolCalls) {
      const result = await executeToolCall(toolCall);
      messages.push({
        type: "function_call_output",
        call_id: toolCall.call_id,
        output: result,
      });
    }

    const finalResponse = await requestAssistantTurn({
      apiUrl: config.apiUrl,
      apiKey: config.apiKey,
      model: config.model,
      messages,
      tools,
    });

    messages.push(...(finalResponse.output ?? []));
    console.log(`\n${getOutputText(finalResponse)}\n`);
  } else {
    console.log(`\n${getOutputText(firstResponse)}\n`);
  }
}
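The loop above handles only one round of tool calls: if the follow-up response itself requests another tool, that request is never executed. The pattern generalizes to a loop that keeps executing tool calls until the model produces a plain answer. A sketch, with requestTurn and execute standing in for requestAssistantTurn and executeToolCall:

```javascript
// Sketch: call the model repeatedly until its output contains no
// function_call items. requestTurn and execute are injected so the
// loop can be tested without a real model.
const runUntilFinal = async (messages, requestTurn, execute) => {
  while (true) {
    const response = await requestTurn(messages);
    messages.push(...(response.output ?? []));
    const calls = (response.output ?? []).filter((item) =>
      item.type === "function_call"
    );
    if (calls.length === 0) {
      return response;
    }
    for (const call of calls) {
      messages.push({
        type: "function_call_output",
        call_id: call.call_id,
        output: await execute(call),
      });
    }
  }
};
```

A real implementation would also cap the number of iterations so a confused model cannot loop forever.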

Before wiring the full tool loop, it can be useful to test the boundary with a smaller routing prompt. For example:

Classify the following request.
Return exactly one label:
- use_tool
- answer_directly

Choose use_tool only if the request depends on local issue data.

This kind of prompt is simple, but it helps the engineer think clearly about when a tool should be invoked at all.
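Even with a constrained prompt, the raw model output should be normalized before the application branches on it. A hypothetical helper (the function name and the fallback choice are ours):

```javascript
// Hypothetical helper: map the classifier's raw output to one of the two
// allowed labels, falling back to answer_directly on anything unexpected.
const parseRoutingLabel = (rawOutput) => {
  const label = rawOutput.trim().toLowerCase();
  return label === "use_tool" ? "use_tool" : "answer_directly";
};
```

Treating unexpected output as answer_directly is a deliberate fail-safe: a garbled classification then costs a tool-free answer rather than an unnecessary tool call.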


Run the program with read, network, and environment access:

$ deno run --allow-read --allow-net --allow-env main.js

A typical successful interaction could look like this:

$ deno run --allow-read --allow-net --allow-env main.js
> How many issues are open?

There are currently 2 open issues and 1 issue in progress.

> Show issue 2

Issue 2 is "Retry logic missing for API calls" and its current status is in-progress.

A useful missing-data case is a request for an issue that does not exist:

> Show issue 99

I could not find a local issue with id 99.

Core agentic pattern

While the assistant that we build here is tiny, it demonstrates the core agentic pattern:

  • the model chooses whether a tool is needed,
  • the application executes the tool,
  • and the model then uses the tool result to answer the user.

The surrounding software remains responsible for the tool set, the validation boundary, and the execution logic. That is the key engineering lesson of this part.



The next chapter revisits the same assistant through LangChainJS tools. Comparing the two versions makes it easier to see which parts of tool use are architectural ideas and which parts belong only to one implementation style.

The programming exercise for this chapter keeps the local issue data real while stubbing the model responses. That balance matters: the learner still has to implement the tool boundary correctly, but grading does not depend on network access.
