Software Engineering with AI Assistance

Requirements, Tasks, and Specifications


Learning Objectives

  • You understand the difference between a vague task and an actionable specification.
  • You know how constraints and acceptance criteria improve AI-assisted work.
  • You can identify what information is missing before implementation starts.

From request to engineering task

Many AI-assisted failures — like many failing software engineering projects — start long before any code is generated. They start with an unclear request.

Consider the sentence: “Make a tool that summarizes grades.”

At first glance, this looks like a task. In reality, it is only the beginning of one. Before implementation starts, an engineer still needs to ask:

  • Where do the grades come from?
  • What counts as a valid grade?
  • What summary should be produced?
  • How should errors be handled?
  • What should the program print?

Until those questions are addressed, the request is too vague to support reliable implementation.

The sentence might be easy to understand for the person requesting the feature; a teacher, for example, may have an abundance of tacit knowledge about grading workflows, the systems used to store grades, and so on. Making such requirements clear and explicit can significantly reduce the likelihood of failure.

What a useful specification contains

A useful engineering specification does not need to be long, but it should be concrete. For small tasks, the following elements often matter the most:

  • the input,
  • the required behavior,
  • the constraints,
  • and the acceptance criteria.

For the earlier grade-summary example, a stronger specification could be:

Create a command-line program that accepts any number of integer grades between 0 and 100 as command-line arguments. The program must print the count, minimum, maximum, and average grade. If any provided value is invalid, the program must print an error message and exit without printing a summary.

This is already much more useful. It narrows the problem, clarifies the input format, and defines at least part of the desired behavior.
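To show how much implementable detail the specification now carries, here is one possible sketch of such a program. The function names, the exit behavior, and the two-decimal average format are assumptions made for illustration; the specification above does not mandate them.

```javascript
// grade_summary.js — an illustrative sketch, not a definitive implementation.

// Parse one argument: valid grades are integers between 0 and 100.
function parseGrade(arg) {
  const n = Number(arg);
  if (!Number.isInteger(n) || n < 0 || n > 100) {
    return null;
  }
  return n;
}

// Summarize an array of argument strings.
// Returns { count, min, max, average }, or null if any value is invalid
// or no values were given.
function summarizeGrades(args) {
  const grades = args.map(parseGrade);
  if (grades.length === 0 || grades.includes(null)) {
    return null;
  }
  const count = grades.length;
  const min = Math.min(...grades);
  const max = Math.max(...grades);
  const average = grades.reduce((a, b) => a + b, 0) / count;
  return { count, min, max, average };
}

// CLI entry point, e.g. `deno run grade_summary.js 40 70 90`.
if (typeof Deno !== "undefined") {
  const summary = summarizeGrades(Deno.args);
  if (summary === null) {
    console.error("Error: expected one or more integer grades between 0 and 100.");
    Deno.exit(1);
  }
  console.log(`count: ${summary.count}`);
  console.log(`min: ${summary.min}`);
  console.log(`max: ${summary.max}`);
  console.log(`average: ${summary.average.toFixed(2)}`);
}
```

Note that even this sketch had to decide things the specification left open, such as how to format the average — a sign that the specification, while much improved, is still not complete.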


Rainfall problem

The rainfall problem is a classic example of how a task can sound clear while still hiding important assumptions. A seemingly simple request such as “read rainfall values and report the average” may leave open questions about valid inputs, stopping conditions, missing values, and whether negative numbers are errors or meaningful data. This is exactly why useful specifications need explicit constraints and acceptance criteria.
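To make those hidden assumptions concrete, here is one of many possible readings of the rainfall problem as a sketch. Every choice below — the sentinel value 99999, ignoring negative values rather than rejecting them, and returning null for empty input — is an assumption that a real specification would need to state explicitly.

```javascript
// One possible interpretation of "read rainfall values and report the average".
// Each commented decision is an assumption, not something the request states.
function rainfallAverage(values) {
  const collected = [];
  for (const v of values) {
    if (v === 99999) break;   // assumed: 99999 is a stopping sentinel
    if (v < 0) continue;      // assumed: negatives are ignored, not errors
    collected.push(v);
  }
  if (collected.length === 0) return null; // assumed: no data means no average
  return collected.reduce((a, b) => a + b, 0) / collected.length;
}
```

A different but equally defensible reading might treat negative numbers as errors, or use end-of-input instead of a sentinel; only explicit constraints can settle which interpretation is correct.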

Acceptance criteria

Acceptance criteria are especially helpful when working with AI assistance, because they tell both the human and the model what a successful result must satisfy.

For the same example, the acceptance criteria might include:

  • deno run grade_summary.js 40 70 90 prints the count, minimum, maximum, and average,
  • deno run grade_summary.js prints an error or usage message,
  • deno run grade_summary.js 40 bad 90 reports invalid input,
  • and the average is shown using a stable output format.

These criteria make review easier because they turn “does this look okay?” into “does this satisfy the required behavior?”.
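The last criterion, a stable output format, forces an explicit decision that the original specification never made. A small formatting helper makes the point concrete; rounding to two decimals via toFixed(2) is an assumed convention here, not something the criteria dictate.

```javascript
// Pin down the "stable output format" criterion for the average.
// Two decimal places is an assumed choice; the acceptance criterion's
// real contribution is that *some* explicit choice must now be made.
function formatAverage(grades) {
  const sum = grades.reduce((a, b) => a + b, 0);
  return (sum / grades.length).toFixed(2);
}
```

With this decision written down, the first acceptance criterion becomes fully checkable: the grades 40, 70, and 90 must produce the average "66.67", not "66.7" or "66.666666".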


A practical transformation workflow

The movement from a vague request to a reviewed specification starts from the request and ends with something concrete enough to implement and verify. Iterating on the problem by posing clarifying questions and producing draft specifications, which are then reviewed, helps make the task progressively clearer.

Once sufficiently clear draft specifications exist, clarifying constraints and defining acceptance criteria can surface additional problematic assumptions.

One possible workflow of turning a vague request into something more concrete is shown in Figure 1.

Fig 1. — Specification work often begins by turning a vague request into something concrete enough to implement and verify.

AI helps when the problem is made clearer

Large language models can help refine a task by suggesting missing requirements, edge cases, or alternative formulations. That is useful. But the tool should be asked to improve the specification, not to jump immediately to code.

For example, a good prompt at this stage could ask the model to identify missing constraints or propose acceptance criteria. A weaker prompt would skip directly to implementation even though the task description is still ambiguous.

One practical pattern is to ask for a reviewed specification with named sections:

Rewrite the following vague request into a reviewed specification for a small CLI program.
Include:
- task restatement
- required behavior
- acceptance criteria

Do not write code.

Request:
Make a tool that summarizes grades.

This prompt is useful because it gives the model a clear task boundary. The goal is not “solve the problem” yet. The goal is “make the problem precise enough to solve well”.

However, depending on the task and the objective, the prompt might still be missing key details. For example, the prompt does not explicitly ask for constraints such as “the program should read grades from command-line arguments” or “the program should print an error message if the input is invalid”. Such constraints are important because they narrow the problem and make it more actionable.

Research into requirements engineering with large language models is still developing. Early examples include work on Improving Requirements Completeness: Automated Assistance through Large Language Models, Structuring Natural Language Requirements with Large Language Models, and Leveraging LLMs for the Quality Assurance of Software Requirements.

Review before implementation

Before writing code, review the specification from the perspective of the future test writer.

If a tester cannot tell what the correct program behavior should be, the specification is not yet ready. This is a useful rule of thumb whether or not an AI assistant is involved.
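As a concrete illustration of this rule of thumb, a test writer might try to turn the specification directly into a check and immediately discover an unanswered question. The function name below is a hypothetical stand-in introduced for illustration.

```javascript
// A tester attempting to pin down expected behavior from the spec alone.
// The spec says "print the average" but not how to round or format it,
// so the tester cannot yet say whether the expected *printed* output for
// grades 40 and 45 should be "42.5", "42.50", or "43" — a sign the
// specification is not ready.
function averageOf(grades) {
  return grades.reduce((a, b) => a + b, 0) / grades.length;
}
```

The numeric value is unambiguous; the printed representation is not. That gap is exactly what a review from the test writer's perspective is meant to catch before implementation begins.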
