Tools, Agents, and Workflow Automation

Function Calling and Tool Schemas


Learning Objectives

  • You know how function calling works at a high level.
  • You can read and design a simple tool schema.
  • You understand that the model selects a tool, but the application executes it.

Function calling as a coordination mechanism

Function calling is a common way to let a model request the use of external tools. The application sends the model the normal conversation messages together with a list of available tools. Each tool has a name, a description, and a parameter schema.

The model may then respond with a tool call instead of a normal text answer. The application inspects that request, validates it, executes the corresponding deterministic function, and sends the tool result back into the conversation.

Figure 1 shows the high-level loop.

Fig 1. — Function calling is a coordination loop: the model requests a tool, the application executes it, and the result is returned to the model.
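The loop in Figure 1 can be sketched end to end in code. The following is a minimal sketch in which `callModel` is a stand-in for a real API call, stubbed here so the control flow can be followed; the message and response shapes are illustrative, not a specific vendor's SDK:

```javascript
// Stand-in for a real model API call. On the first turn it "requests" a
// tool call; once a tool result is present, it "answers" in plain text.
function callModel(messages, tools) {
  const lastMessage = messages[messages.length - 1];
  if (lastMessage.role === "user") {
    return {
      type: "function_call",
      name: "get_issue_by_id",
      arguments: '{"issueId": 2}',
      call_id: "call_1",
    };
  }
  return { type: "text", content: "Issue 2 is titled 'Login fails'." };
}

// The deterministic functions live entirely in the application.
const toolImplementations = {
  get_issue_by_id: ({ issueId }) => ({ id: issueId, title: "Login fails" }),
};

function runLoop(userText, tools) {
  const messages = [{ role: "user", content: userText }];
  let response = callModel(messages, tools);

  while (response.type === "function_call") {
    // The application, not the model, parses the arguments and runs the code.
    const args = JSON.parse(response.arguments);
    const result = toolImplementations[response.name](args);

    // The tool result is appended to the conversation and the model
    // gets another turn.
    messages.push({
      role: "tool",
      call_id: response.call_id,
      content: JSON.stringify(result),
    });
    response = callModel(messages, tools);
  }
  return response.content;
}

console.log(runLoop("What is issue 2 about?", []));
```

Even in this toy form, the division of labor is visible: the model only produces a request, and every concrete action happens inside `runLoop`.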

A simple tool schema

In the Responses API, tool declarations can look like the following:

const tools = [
  {
    type: "function",
    name: "get_issue_by_id",
    description: "Retrieve one issue from the local issue tracker by numeric id.",
    parameters: {
      type: "object",
      properties: {
        issueId: {
          type: "integer",
          description: "The numeric identifier of the issue.",
        },
      },
      required: ["issueId"],
    },
  },
];

The schema tells the model what the tool is for and what arguments are expected. Good schemas reduce ambiguity. Bad schemas invite guessing.

It is useful to think of a tool schema as an interface description written for both the model and the human reader. A good schema should make it obvious what the tool does, what it needs, and what kind of request would be inappropriate.

Designing useful schemas

When designing a tool schema, four questions are especially useful.

  • Is the tool name specific enough that the intended use is obvious?
  • Does the description explain the actual purpose of the tool rather than only repeating the name?
  • Are the parameters explicit enough that the model can choose sensible arguments?
  • Does the tool expose only the authority that the application really wants to grant?

For example, query_data is a poor tool name because it says almost nothing about what the tool actually does. count_issues_by_status is much clearer. The clearer name gives the model a narrower target and gives the human reviewer a better chance of spotting a misuse.

The same idea applies to parameters. A parameter called value is rarely helpful. A parameter called issueId or status is much easier to interpret correctly.
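Following those guidelines, a clearer schema might look like the sketch below. The tool name and the status values are illustrative, not taken from a real tracker; the point is that a specific name and an enum-constrained parameter leave the model little room to guess:

```javascript
// Illustrative schema: the name states exactly what the tool does,
// and the enum restricts the argument to known, valid values.
const countIssuesByStatusTool = {
  type: "function",
  name: "count_issues_by_status",
  description:
    "Count issues in the local issue tracker that currently have the given workflow status.",
  parameters: {
    type: "object",
    properties: {
      status: {
        type: "string",
        enum: ["open", "in_progress", "closed"],
        description: "The workflow status to count issues for.",
      },
    },
    required: ["status"],
  },
};
```

Compared with a generic value parameter, the enum also makes validation on the application side trivial: any argument outside the listed values can be rejected before anything runs.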

The same clarity habit can be practiced at the prompt level before the tool schema is finalized. For example, if you are using AI assistance to draft a description, you could ask for something like:

Write one concise tool description for the following function.
The description should help another model decide when to use the tool.
Avoid vague phrases such as "query data" or "handle requests".

This is useful because it turns a vague naming task into a quality check: does the description actually help the model choose the tool correctly?

The same kind of review should be applied to complete schemas. A schema may look neatly structured while still being too vague to support safe routing and validation.

The execution boundary

An important point is that the model does not execute the function directly. It only asks for a function call. The surrounding application remains responsible for deciding whether the call is valid, executing the deterministic code, formatting the result, and deciding whether the model gets another turn.

This separation is one of the main safeguards in tool-using systems. If the model were allowed to execute arbitrary actions directly, many of the security and oversight patterns used in agent-like systems would be harder to enforce.

To make the boundary more concrete, a model response might contain a tool request that looks conceptually like this:

{
  type: "function_call",
  name: "get_issue_by_id",
  arguments: "{\"issueId\": 2}",
  call_id: "call_abc123"
}

At that point, the application still has work to do. It needs to parse the arguments, validate them, and decide whether the call should actually run.
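Those checks can be made concrete. The following sketch validates the conceptual tool request from above before anything executes; the allowlist and the error shapes are illustrative design choices, not API requirements:

```javascript
// Only tools the application actually declared may run.
const allowedTools = new Set(["get_issue_by_id"]);

function validateToolCall(call) {
  if (!allowedTools.has(call.name)) {
    return { ok: false, error: `Unknown tool: ${call.name}` };
  }

  let args;
  try {
    // Arguments arrive as a JSON string and may be malformed.
    args = JSON.parse(call.arguments);
  } catch {
    return { ok: false, error: "Arguments are not valid JSON." };
  }

  // Enforce the schema the model was shown: issueId must be an integer.
  if (!Number.isInteger(args.issueId)) {
    return { ok: false, error: "issueId must be an integer." };
  }

  return { ok: true, args };
}

const call = {
  type: "function_call",
  name: "get_issue_by_id",
  arguments: '{"issueId": 2}',
  call_id: "call_abc123",
};

console.log(validateToolCall(call));
```

Only when validation succeeds does the application execute the deterministic function and return its output to the model.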

Multiple tool calls

Some APIs allow the model to request multiple tool calls in a single response. This can be useful, but it also increases complexity. In this chapter we therefore focus on one clear tool set, one explicit execution loop, and small deterministic return values.

The same principles apply when multiple tools, or multiple calls per turn, are needed.
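When a response can contain several tool calls, the application loops over them and returns one result per call_id, so the model can match results to requests on its next turn. A minimal sketch, assuming the response output is an array of items and using a result shape in the same conceptual style as the request above:

```javascript
const toolImplementations = {
  get_issue_by_id: ({ issueId }) => ({ id: issueId, title: "Example issue" }),
};

// Execute every requested call and pair each result with its call_id.
function executeToolCalls(outputItems) {
  const results = [];
  for (const item of outputItems) {
    if (item.type !== "function_call") continue;

    const fn = toolImplementations[item.name];
    // Unknown tools are reported back rather than executed.
    const result = fn
      ? fn(JSON.parse(item.arguments))
      : { error: `Unknown tool: ${item.name}` };

    results.push({
      type: "function_call_output",
      call_id: item.call_id,
      output: JSON.stringify(result),
    });
  }
  return results;
}
```

The validation step from the previous section still applies to each call individually; batching the calls does not change what the execution boundary has to check.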

Generic schemas and vendor helpers

The schema ideas in this chapter are intentionally generic. The surrounding application describes tools, the model requests one, and the application decides whether and how to execute it. That pattern exists across several provider ecosystems even when the exact JSON shape changes.

If you later standardize on a single provider, official SDKs often include convenience helpers for tool schemas, tool results, or structured argument handling. SDKs can reduce boilerplate, but they do not remove the software engineering responsibility of choosing good tool boundaries, validating arguments, and restricting what tools are allowed to do.

Frameworks can add yet another layer of abstraction. LangChainJS, for example, offers higher-level abstractions for tools and tool-calling workflows, but the same execution boundary still matters: the framework may organize the loop, yet your application remains responsible for what the tools are allowed to access and what happens when they fail.

For additional details on model tool use, see Toolformer: Language Models Can Teach Themselves to Use Tools and the API-focused follow-up Gorilla: Large Language Model Connected with Massive APIs.
