Software Engineering with AI Assistance

Implementing, Debugging, and Refactoring with AI


Learning Objectives

  • You understand how AI-generated code can be used as a draft rather than as a final answer.
  • You know how to prompt for one specific code change without asking for an uncontrolled rewrite.
  • You know how AI assistance can support debugging and refactoring work.
  • You can identify signals that a suggested change should be rejected or revised.

Treat outputs as candidates

When asking an AI assistant to write code, the safest mindset is to treat the output as a candidate implementation. It may be helpful, partially correct, or completely wrong. The point is that it still requires review.

Consider the following small function:

const average = (values) => {
  return values.reduce((sum, value) => sum + value, 0) / values.length;
};

The function works for a non-empty array of numbers, but it does not define what should happen for an empty array. Whether this is acceptable depends on the specification. A model may generate this implementation quickly, but only the engineer can decide whether the missing case is acceptable.
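If the specification treats an empty array as invalid input, one possible candidate makes that decision explicit. This is only a sketch under that assumed specification; another specification might prefer returning null or 0 instead:

```javascript
// Candidate implementation that makes the empty-array case explicit.
// Assumes the specification treats an empty array as invalid input.
const average = (values) => {
  if (values.length === 0) {
    throw new Error('average requires a non-empty array');
  }
  return values.reduce((sum, value) => sum + value, 0) / values.length;
};
```

The point is not that throwing is the right choice, but that the choice is now visible and reviewable instead of hidden in a silent NaN.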

Prompting for code

When asking an AI assistant to write code, the quality of the result depends heavily on the quality of the request. A short request such as “write a function for this” usually leaves too many decisions implicit. The model may choose the wrong language, invent surrounding helper code, or ignore the edge cases that actually matter.

Code prompts become more useful when they provide enough context to constrain the task:

  • what language to use,
  • what function or module should be implemented,
  • what the function should do,
  • what edge cases matter,
  • and what the output should contain.

For example, a better request than “implement average” states that the target language is JavaScript, that only one function should be returned, that empty input must be handled explicitly, and that no extra explanation text should be included around the code.


Check if there is already a solution

Before asking an AI assistant to implement functionality, stop and ask whether the task should be solved with existing code instead. Some problems are far better handled by the language standard library or by a well-maintained package than by new generated code.
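For example, rather than prompting for a new “remove duplicates” helper, JavaScript’s built-in Set already covers the common case; the one-liner below is a sketch of that approach:

```javascript
// Deduplicating an array: a hand-rolled loop is unnecessary here,
// because Set already guarantees unique values and preserves
// insertion order.
const unique = (values) => [...new Set(values)];

console.log(unique([1, 2, 2, 3])); // [ 1, 2, 3 ]
```

A prompt that asks “is there a standard-library way to do this?” can be more valuable than one that asks for a fresh implementation.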

AI-assisted debugging

Debugging prompts are often more useful when they ask for hypotheses than when they ask for a magical fix.

For example, instead of writing:

“Fix this.”

or simply pasting in an erroneous test case or some other incomplete information and treating the LLM as a search engine, it is often better to ask:

“What are three likely reasons why this function returns NaN, and how should I test each explanation?”

That style of prompt helps because it supports the debugging process without pretending that the first answer is automatically the right one.

The important design choice in that prompt is not only the word “three”. It is the shift from “fix this” to “generate hypotheses and checks”. A debugging prompt should guide the model toward investigation.

The output of a prompt that asks for hypotheses and checks is easier to trust because it makes the uncertainty explicit and gives the engineer something concrete to verify next.
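To make this concrete, suppose the earlier average function returns NaN. Each hypothesis can be turned into a small check; the inputs below are illustrative, not taken from a real bug report:

```javascript
// The function under investigation.
const average = (values) =>
  values.reduce((sum, value) => sum + value, 0) / values.length;

// Hypothesis 1: the array is empty, so the result is 0 / 0.
console.log(Number.isNaN(average([])));                // true

// Hypothesis 2: an element is undefined (e.g. a missing field),
// and 0 + undefined evaluates to NaN.
console.log(Number.isNaN(average([1, undefined, 3]))); // true

// Hypothesis 3: an element is a non-numeric string, so + concatenates
// and the final division produces NaN.
console.log(Number.isNaN(average([1, 'x', 3])));       // true
```

Each check either confirms or eliminates one explanation, which is exactly the kind of next step a good debugging prompt should produce.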


Tiny example: unexpected output

Suppose that the goal is to normalize a name for output:

const normalizeName = (name) => {
  return name.trim().toLowerCase();
};

If the requirement is to display the name in lowercase, this may be fine. If the requirement is to preserve capitalization for later display but compare names case-insensitively, this implementation is wrong even though it looks reasonable.

This is a typical example of why implementation quality depends on the specification, not only on local code style.
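One way to meet the second specification is to keep display and comparison as separate concerns. The helper names below are invented for this sketch:

```javascript
// Preserve the original capitalization for display...
const displayName = (name) => name.trim();

// ...and normalize only when comparing.
const sameName = (a, b) =>
  a.trim().toLowerCase() === b.trim().toLowerCase();

console.log(displayName(' Ada ')); // 'Ada'
console.log(sameName('Ada', ' ada ')); // true
```

The code is trivial; the interesting part is that the split only becomes visible once the specification is stated precisely.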


Refactoring with AI

AI tools can also be useful during refactoring. They are especially helpful when the code already works, but the structure is harder to read than it needs to be. In that situation, a model may be able to point out duplication, propose a helper function, or suggest clearer names.

At the same time, refactoring suggestions should still be reviewed for side effects. A proposed cleanup might silently change behavior, error handling, or output format even when the new code looks shorter.

The best refactoring prompts describe what should improve. For example, a good prompt might say:

“Refactor this function so that input validation and output formatting are easier to read, but do not change the output format or error behavior.”

That kind of request is stronger than simply saying “make this code better”. It gives the AI assistant a target and also gives the engineer a concrete review checklist. If the new version changes the output shape or removes a needed edge-case check, the suggestion has failed even if the code looks cleaner.

When reviewing an AI-generated refactor, it helps to ask four simple questions. Did the logic change? Are edge cases still handled? Did the output format remain stable? Is the result actually easier to understand?
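As an illustration, suppose a working but dense function is refactored by extracting validation and formatting into named helpers. The function and its values are invented for this sketch; the review step at the end checks that the observable behavior did not change:

```javascript
// Original: works, but validation and formatting are interleaved.
const priceLabel = (price) => {
  if (typeof price !== 'number' || price < 0) return 'invalid';
  return '$' + price.toFixed(2);
};

// Refactored: same behavior, with the two concerns named explicitly.
const isValidPrice = (price) => typeof price === 'number' && price >= 0;
const formatPrice = (price) => '$' + price.toFixed(2);
const priceLabelRefactored = (price) =>
  isValidPrice(price) ? formatPrice(price) : 'invalid';

// Review check: output format and error behavior must be unchanged.
console.log(priceLabel(3.5) === priceLabelRefactored(3.5)); // true
console.log(priceLabel(-1) === priceLabelRefactored(-1));   // true
```

A handful of such comparisons is not a full test suite, but it directly answers two of the four review questions above.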

Rejecting AI-generated suggestions

Rejecting suggestions is part of using AI tools well. You should reject an AI-generated suggestion, for example, when:

  • they solve a different problem than the one you specified
  • they add complexity without a clear benefit
  • they hide assumptions that you cannot explain
  • verifying it would take longer than writing a solution yourself

Being productive does not automatically mean adding new things. Ken Thompson, known among other things for his work on Unix, is famously quoted as saying: “One of my most productive days was throwing away 1000 lines of code”.
