Software Engineering with AI Assistance

AI Coding Tools and Contemporary Software Engineering


Learning Objectives

  • You know the main categories of AI coding tools.
  • You understand the difference between augmentation and replacement in software engineering work.

A new tool layer in software engineering

Large language models have added a new layer to software engineering tools. A developer can now ask for an explanation, a code draft, a test suggestion, a review comment, or a patch proposal in natural language. This changes the pace of work, but it does not change the fact that software still needs to be specified, implemented, checked, and maintained.

In practice, AI coding tools are best understood as interfaces to probabilistic helpers. They are useful because they can compress search, draft common structures, and surface ideas quickly. At the same time, they are risky because they can be confidently wrong while still sounding plausible.

Published results on AI coding tools are somewhat mixed, partly because the tools themselves are continuously evolving. Maintainability-oriented analyses such as GitClear's AI Copilot Code Quality: 2025 Look Back at 12 Months of Data highlight risks such as rising code cloning, while GitHub's vendor-conducted randomized study Does GitHub Copilot improve code quality? Here's what the data says reports quality gains on a controlled task. For a study of how less experienced developers interact with such tools, see "It's Weird That it Knows What I Want": Usability and Interactions with Copilot for Novice Programmers.

Common categories of tools

The current tool landscape can be divided into a few broad categories.

  • Chat-based tools are useful for brainstorming, explaining concepts, comparing alternatives, and drafting specifications.
  • IDE-integrated assistants are useful when working directly in a codebase, because they can suggest edits near the code you are already reading.
  • Terminal and agent-style tools are useful when work involves files, commands, tests, and repeated inspection of a project, or even implementing new features end to end.

The categories overlap. A single product may include all three interaction styles. Still, the distinction is useful because each style encourages different habits.

Contemporary code editors, like VS Code, integrate all three interaction styles. You can ask for a quick explanation in the chat, get a code suggestion while editing, and ask an agent to inspect the project and implement a new feature.

A quick way to make the distinction concrete is to compare a few common software-engineering tasks side by side.


Different tools have different strengths

Suppose that you are adding error handling to a small command-line program.

  • A chat tool might help you list likely failure modes.
  • An IDE assistant might suggest a local code change while you are editing a function.
  • A terminal agent might inspect the project, run the tests, and suggest a patch.

Those are different kinds of help. None of them removes the need to decide whether the change actually matches the problem.
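As a sketch of what such a change might look like, here is a hypothetical small CLI (the file name report.py and the report format are invented for illustration) with the kind of error handling an assistant might propose. Whether this matches the real program's needs is still your call.

```python
import sys

def main(argv: list[str]) -> int:
    """Hypothetical CLI: read a file and print a line-count report."""
    if len(argv) != 2:
        # Reject bad usage with a distinct exit code.
        print("usage: report.py FILE", file=sys.stderr)
        return 2
    try:
        with open(argv[1], encoding="utf-8") as f:
            lines = f.readlines()
    except FileNotFoundError:
        print(f"error: file not found: {argv[1]}", file=sys.stderr)
        return 1
    except OSError as e:
        # Catch other I/O problems (permissions, directories, ...).
        print(f"error: cannot read {argv[1]}: {e}", file=sys.stderr)
        return 1
    print(f"{argv[1]}: {len(lines)} lines")
    return 0

# Entry point would be: sys.exit(main(sys.argv))
```

Note that even this small sketch embeds decisions (exit codes, error wording, which exceptions to catch) that an assistant cannot verify against your actual requirements.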

Prompt: I have a CLI program that reads a file and prints a report. What kinds of tests should I add before I refactor it?

Response: Start with tests for the successful path, missing-file handling, empty-file behavior, and at least one test that checks that the output format stays stable. If the program parses data, also add a test for malformed input.

The response is useful because it points toward concrete engineering tasks. However, you cannot assume that the response is correct or complete just because it sounds reasonable.
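The suggested tests can be sketched out concretely. The snippet below assumes a hypothetical generate_report function as a stand-in for the program's core logic; the function name, report format, and helper are invented for illustration. The test functions follow pytest naming conventions but run as plain functions.

```python
import os
import tempfile

def generate_report(path: str) -> str:
    # Hypothetical stand-in for the CLI's core logic:
    # read a file and return a one-line report.
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    return f"{os.path.basename(path)}: {len(lines)} lines"

def _write(tmpdir: str, name: str, text: str) -> str:
    # Helper: create a file with known contents for a test.
    path = os.path.join(tmpdir, name)
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
    return path

def test_success_path():
    with tempfile.TemporaryDirectory() as d:
        path = _write(d, "data.txt", "a\nb\n")
        assert generate_report(path) == "data.txt: 2 lines"

def test_missing_file():
    with tempfile.TemporaryDirectory() as d:
        missing = os.path.join(d, "no_such_file.txt")
        try:
            generate_report(missing)
            assert False, "expected FileNotFoundError"
        except FileNotFoundError:
            pass

def test_empty_file():
    with tempfile.TemporaryDirectory() as d:
        path = _write(d, "empty.txt", "")
        assert generate_report(path) == "empty.txt: 0 lines"

def test_output_format_is_stable():
    with tempfile.TemporaryDirectory() as d:
        path = _write(d, "data.txt", "x\n")
        # Pin the exact output so a refactor cannot change it silently.
        assert generate_report(path) == "data.txt: 1 lines"
```

With tests like these in place, a refactor that changes behavior fails loudly instead of silently, which is exactly the safety net the chat response was pointing at.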

Augmenting existing skills

Think of AI coding tools as a way to augment your skills. The tool can speed up drafting, summarization, and exploration, but it does not understand the system in the same way that a responsible engineer is expected to.

This is especially visible when:

  • the requirements are unclear,
  • the codebase contains implicit assumptions,
  • correctness depends on domain knowledge,
  • or the change affects safety, security, or business-critical behavior.

In those cases, asking the AI for a quick answer may still be useful, but the answer should be treated as a candidate, not a conclusion.


Fast output is not verified output

One of the easiest mistakes in AI-assisted work is to confuse speed with correctness. A fast draft can still be the wrong draft, and a polished explanation can still hide a wrong assumption.
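A tiny invented example of this: a drafted median function that looks clean, runs without errors, and is still wrong for half of its inputs. The standard library's statistics.median shows the expected answer.

```python
import statistics

def median(values):
    # A fast, plausible-looking draft: sort and take the middle element.
    # This is correct for odd-length input only.
    return sorted(values)[len(values) // 2]

assert median([1, 3, 2]) == 2                  # odd length: correct
assert median([1, 2, 3, 4]) == 3               # runs without error, but...
assert statistics.median([1, 2, 3, 4]) == 2.5  # ...the expected answer is 2.5
```

The draft arrived quickly and even passes a casual spot check; only a deliberate test of the even-length case reveals the wrong assumption.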

Contemporary software engineering still matters

Contemporary software engineering is not only about writing code. It also includes:

  • refining requirements,
  • communicating intent,
  • reviewing changes,
  • structuring systems for maintenance,
  • and making trade-offs under uncertainty.

AI tools affect all of these activities, but they do not eliminate them. In fact, teams that work with AI effectively often become more explicit about specifications, tests, and review boundaries, because these are the places where automation is most likely to cause trouble.

This point is older than current AI tools. Winston Royce's 1970 article Managing the Development of Large Software Systems is often remembered for the phase ordering now called the "waterfall model". However, Royce's actual argument was more careful: a purely one-way flow is risky, and serious software work needs iteration, feedback, and review across phases.
