Working Practices
Learning by doing
Reading about software engineering is useful, but software engineering skills develop through practice. The same is true for engineering with large language models. If you only read the prompts, responses, and code examples in the materials without trying them out yourself, you will miss much of the learning.
When possible, run the examples locally, vary the inputs, and compare what happens when you change the specification, the prompt, or the implementation. Small experiments often reveal more than long explanations.
It is usually more effective to work on the course over multiple days than to try to finish everything in one sitting. The tutorial chapters, in particular, benefit from a pause-and-retry rhythm: attempt a task, step away, and return to it later.
Verify outputs
One of the central themes of this course is that useful output is not the same thing as correct output.
A language model can produce code that looks convincing, review comments that sound confident, or documentation that appears polished while still being incomplete or wrong. For that reason, you are expected to verify outputs using ordinary engineering tools and habits:
- run the program,
- inspect the output,
- write or run tests,
- check assumptions against the specification,
- and simplify when the AI-generated result is harder to understand than the problem requires.
This applies both when using the AI chat provided on the course platform and when using your own tools outside the platform.
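As a small illustration of what verification can look like in practice, suppose an assistant suggested the following function (the function and its test cases are hypothetical, chosen only to show the habit):

```python
# Hypothetical AI-suggested function: it looks plausible, but don't
# trust it on sight -- run it against the specification.
def slugify(title: str) -> str:
    """Convert a title to a lowercase, hyphen-separated slug."""
    return "-".join(title.lower().split())

# A few quick assertions catch obvious gaps before you accept the code.
assert slugify("Hello World") == "hello-world"
assert slugify("  extra   spaces  ") == "extra-spaces"
assert slugify("Already-hyphenated title") == "already-hyphenated-title"
```

A handful of assertions like these takes seconds to write and turns "this looks right" into "this behaves right on the cases I care about."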
Keep track of prompts and decisions
In many tutorial chapters, you will improve a result over multiple rounds. It is therefore useful to keep lightweight notes on:
- the prompt or task description you started from,
- what the model suggested,
- what you accepted or rejected,
- and why you made those decisions.
This does not have to become a separate document for every exercise. A short text file, a scratch notebook, or a few comments in your working directory is often enough. The important thing is that you can later reconstruct how you arrived at a result and why.
Coursework and collaboration
The course includes a range of tasks: quizzes, prompting tasks, and programming exercises.
You may discuss ideas with others, but your submitted work must reflect your own understanding. You are responsible for checking your own solutions, even if an AI assistant or another person helped you think through the problem.
As in other open online courses, do not publish your solutions publicly or store them in places where other learners can access them directly.
Working safely with LLMs
When using external LLM services, do not send:
- personal data that you are not allowed to share,
- company or client data,
- secrets such as API keys,
- or any other confidential material.
In this course, examples are intentionally small and self-contained so that you can practice without needing sensitive data. Keep your own work similarly controlled.
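One lightweight habit is to scan a prompt for obvious secrets before it leaves your machine. The sketch below illustrates the idea; the patterns are illustrative examples only, not an exhaustive or reliable secret detector:

```python
import re

# Illustrative patterns for common secret shapes. A real tool
# (e.g. a dedicated secret scanner) would cover far more cases.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{16,}"),       # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),           # AWS access key IDs
    re.compile(r"(?i)password\s*[:=]\s*\S+"),  # inline passwords
]

def looks_sensitive(prompt: str) -> bool:
    """Return True if the prompt appears to contain a secret."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

assert looks_sensitive("my key is sk-abcdef1234567890ABCD")
assert not looks_sensitive("Explain Python list comprehensions")
```

A check like this is a seatbelt, not a substitute for judgment: the safest prompt is one that never contained confidential material in the first place.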
Many programming exercises later in the course are also tested against stubbed or local data instead of live third-party API calls. That keeps the coursework safer, cheaper, and easier to reproduce while still letting you practice the application logic around model use.
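The stubbing idea can be sketched in a few lines. The class and method names below are illustrative, not a real SDK; the point is that the application logic accepts a client object, so a test can pass in a fake one:

```python
# Minimal sketch of testing against a stub instead of a live model API.
class FakeClient:
    """Stand-in for a model client; returns a canned response."""
    def complete(self, prompt: str) -> str:
        return "STUB RESPONSE"

def summarize(client, text: str) -> str:
    """Application logic under test: builds the prompt, calls the client."""
    return client.complete(f"Summarize: {text}")

# The test exercises the surrounding logic without any network call,
# API key, or cost -- and it gives the same result on every run.
assert summarize(FakeClient(), "long article") == "STUB RESPONSE"
```

The same code path works unchanged when a real client object is supplied, which is what makes this pattern cheap to adopt.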
The use of generative AI tools is allowed and encouraged in this course unless a specific task states otherwise.
A practical rule of thumb
If you are unsure about an LLM response, ask yourself the following:
- What is the exact claim or change being proposed?
- How would I check that it is correct?
- What would happen if I accepted it without checking?
If you cannot answer those questions clearly, slow down and verify before moving on.