Tutorial: Feature Development with AI
Learning Objectives
- You can take a small feature from specification to verified implementation with AI assistance.
- You can review AI-generated code and reject incorrect suggestions.
- You can use tests to decide whether a change should be accepted.
In this tutorial, we work on a very small command-line feature so that the focus stays on workflow rather than on project setup.
The task is the following:
Create a command-line program
grade_summary.js that accepts integer grades from the command line and prints the count, minimum, maximum, and average. If any argument is missing or invalid, the program must print an error message instead of a summary.
Step 1: Turn the task into a checked specification
Before asking the AI for code, write down the constraints explicitly.
- Inputs are command-line arguments.
- Each grade must be an integer between 0 and 100.
- The program must reject invalid input.
- The average should be printed with one decimal place.
Even at this stage, the task is clearer than a vague request such as “make a grade tool”.
One useful first prompt would be:
Rewrite the following vague task into a checked specification for a Deno command-line program.
Include:
- expected inputs
- validation rules
- expected output
- error handling
- acceptance criteria
Do not write code.
Task:
Make a grade tool.
This gives the AI assistant a narrow and reviewable job before any implementation starts.
Step 2: Ask for a first draft
Now we can ask an AI assistant for a first version.
Create a Deno command-line program called grade_summary.js. It should read integer grades from command-line arguments, reject invalid values, and print count, minimum, maximum, and average with one decimal place.
Certainly. Here is one possible implementation...
The important point is not that the AI produced code, but that the request was specific enough to review meaningfully.
Step 3: Review before running
Suppose that the draft contains this line:
const grades = Deno.args.map(Number);
This is not automatically wrong, but it is incomplete: Number("oops") evaluates to NaN, and Number also accepts non-integer values, so the next question is whether the program checks for those cases before continuing.
Similarly, if the draft computes the average without first checking that at least one grade exists, the program may print NaN or crash in an unclear way.
These are exactly the kinds of issues that should be caught in review before trusting the result.
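A reviewer can make the concern concrete with a small check. The sketch below uses an assumed helper name, isValidGrade, which is not part of the draft; it only illustrates the kind of validation the review should confirm exists somewhere in the program:

```javascript
// Hypothetical helper illustrating the check a reviewer would look for.
// Number("oops") is NaN, and Number.isInteger(NaN) is false, so NaN is
// rejected along with out-of-range and fractional values.
const isValidGrade = (value) => {
  const n = Number(value);
  return Number.isInteger(n) && n >= 0 && n <= 100;
};

console.log(isValidGrade("70"));   // true
console.log(isValidGrade("oops")); // false: Number("oops") is NaN
console.log(isValidGrade("150"));  // false: out of range
```

One caveat a review might still raise: Number("") is 0, so an empty argument would pass this check and may deserve separate handling.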
Step 4: Add tests
Even for a very small program, tests help make the review concrete. One useful approach is to move the main logic into a function and test that function separately.
For example:
export const summarizeGrades = (grades) => {
  return {
    count: grades.length,
    min: Math.min(...grades),
    max: Math.max(...grades),
    average: grades.reduce((sum, grade) => sum + grade, 0) / grades.length,
  };
};
Then the command-line program can focus on parsing and printing, while tests focus on behavior.
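One way to sketch that split is shown below. The main function, its return-a-string convention, and the exact output format are assumptions, not the tutorial's required solution; main takes the argument array as a parameter so it can be tested without a Deno runtime, and grade_summary.js would simply call console.log(main(Deno.args)):

```javascript
// summarizeGrades as defined above, repeated here so the sketch is
// self-contained.
const summarizeGrades = (grades) => ({
  count: grades.length,
  min: Math.min(...grades),
  max: Math.max(...grades),
  average: grades.reduce((sum, grade) => sum + grade, 0) / grades.length,
});

// Hypothetical CLI layer: parse and validate the raw arguments, then
// format the summary. Returns the line to print instead of printing
// directly, which keeps it easy to test.
const main = (args) => {
  const grades = args.map(Number);
  const invalid = grades.some((g) => !Number.isInteger(g) || g < 0 || g > 100);
  if (grades.length === 0 || invalid) {
    return "Error: expected integer grades between 0 and 100.";
  }
  const { count, min, max, average } = summarizeGrades(grades);
  return `count: ${count}, min: ${min}, max: ${max}, average: ${average.toFixed(1)}`;
};

console.log(main(["40", "70", "90"])); // count: 3, min: 40, max: 90, average: 66.7
console.log(main(["oops"]));           // Error: expected integer grades between 0 and 100.
```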
Example test cases:
import { assertEquals } from "jsr:@std/assert@1.0.19";
import { summarizeGrades } from "./grade_summary.js";
Deno.test("summarizeGrades computes the summary", () => {
  assertEquals(summarizeGrades([40, 70, 90]), {
    count: 3,
    min: 40,
    max: 90,
    average: 200 / 3,
  });
});
The test may still need refinement, but it already gives us a better basis for accepting or rejecting changes.
Step 5: Improve the draft
After review and testing, the implementation might end up looking simpler than the original AI output. That is often a good sign.
For example, it may be clearer to:
- validate the arguments first,
- convert them only after validation,
- and keep the formatting logic separate from the summary logic.
The final solution does not need to be the most abstract one. It only needs to satisfy the specification clearly and reliably.
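Following those guidelines, the parsing step might look like the sketch below. The parseGrades name and its null-on-failure convention are illustrative assumptions, one reasonable shape among several:

```javascript
// Validate first, convert only after validation. Returns an array of
// integers on success, or null if the arguments are missing or invalid.
// The regex accepts only unsigned digit strings, so negative numbers,
// decimals, and non-numeric input all fail validation.
const parseGrades = (args) => {
  const allValid = args.length > 0 &&
    args.every((arg) => /^\d+$/.test(arg) && Number(arg) <= 100);
  return allValid ? args.map(Number) : null;
};

console.log(parseGrades(["40", "70", "90"])); // [ 40, 70, 90 ]
console.log(parseGrades(["oops"]));           // null
console.log(parseGrades([]));                 // null
```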
Step 6: Reflect on the workflow
In a small example like this, the most important lesson is not how to calculate an average. It is how the workflow changes the quality of the result:
- the task was clarified,
- the AI produced a draft,
- the draft was reviewed,
- tests were added,
- and the result was improved before being accepted.
That same pattern scales to larger work.
The programming exercise for this chapter follows the same pattern on a small scale. The starter comes with a tiny CLI and tests, so you can focus on specification, implementation, and review rather than on project setup.