Teaching Software Engineering in the Age of Generative AI: What Is Really at Stake?

Marco Vieira

The arrival of generative artificial intelligence (AI) in the software development landscape is raising a wave of questions that the academic community does not yet know how to address. Should we continue teaching programming the way we always have? Assuming that future developers will always have AI at their disposal, to what extent will they still need to know how to code? What does it mean, after all, to train a software engineer in a world where a tool can generate code and other artefacts in seconds?

These are legitimate questions. But before answering them, it is worth asking whether we are even posing them with sufficient clarity.

A Fundamental Confusion

There is an imprecision that runs through much of this debate, and that needs to be resolved before anything else: in my opinion, programming is not the same as software development, and software development is not the same as software engineering. These are related but distinct concepts. Programming is a skill. Development is a process. Engineering is a discipline that involves analysis, decision-making, verification, trade-off management, and responsibility over complex systems.

This distinction is not merely semantic. It is structural. When we say that AI will replace programmers, we are talking about one thing. When we say it will replace software engineers, we are talking about something entirely different, and probably overstating the case. What AI is doing, and will increasingly do, is change the profile of skills required, as has happened before throughout the history of this field, only at a faster pace.

The Teaching Problem: An Analogy and Its Limits

Despite having calculators capable of solving complex expressions, we continue to teach mathematics to children without them. We do so because we understand that a calculator is a tool, and that using a tool without understanding what it does is a weakness, not an advantage.

The same principle should guide the teaching of programming. But some may question whether the analogy still holds: after all, a calculator does not reason, it merely computes, whereas generative AI seems, in some way, to understand the problem and propose solutions. Is it, therefore, different?

In my opinion, the answer is: in part, yes. But this does not invalidate the argument; it reinforces it. Precisely because AI appears to reason, the risk that a student without solid foundations will be unable to distinguish a good answer from a bad one is much greater. AI can be wrong in subtle ways, and only those who understand the fundamentals can detect it. A calculator produces an obviously wrong result when given an erroneous input. AI can produce a wrong answer in a convincing way. That demands more from the user, not less.
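To make the point concrete, consider a hypothetical illustration (invented for this essay, not taken from any real AI output): a moving-average function that reads naturally and works on most inputs, yet silently mishandles the first few elements. Spotting the bug requires exactly the kind of foundational understanding the argument defends.

```python
def moving_average_plausible(values, size):
    """Looks correct at a glance, but the first windows are shorter
    than `size`, and dividing by `size` anyway deflates those averages."""
    return [sum(values[max(0, i - size + 1):i + 1]) / size
            for i in range(len(values))]

def moving_average(values, size):
    """Correct version: divide each window sum by its actual length."""
    result = []
    for i in range(len(values)):
        start = max(0, i - size + 1)
        result.append(sum(values[start:i + 1]) / (i - start + 1))
    return result

# The flaw only shows at the boundary:
#   moving_average_plausible([2, 4, 6], 2) -> [1.0, 3.0, 5.0]
#   moving_average([2, 4, 6], 2)           -> [2.0, 3.0, 5.0]
```

No test suite that skips the boundary case would catch this, and the code produces no error: the wrong answer is delivered with complete confidence, which is precisely the failure mode described above.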

There is also an important practical difference: in universities, we are not dealing with children, and we cannot simply prohibit the use of tools. What we can do is design assessment processes that make such prohibition unnecessary, because they genuinely evaluate what we actually want to evaluate. Written exams without computers, or on computers without access to AI or advanced IDEs, are two examples of ways to determine whether a student truly understands what they are doing. It should be acknowledged, however, that these forms of assessment are not without their own limitations. A developer who excels at working with AI but struggles in a pen-and-paper exam may be exactly the kind of professional the industry needs. Assessment design must therefore be intentional: the goal is not to measure performance in artificial conditions, but to ensure that the underlying understanding exists. Both dimensions matter.

A Phased and Incremental Approach

My position is not a choice between teaching with or without AI. It is a phased approach, in which progression is gradual and guided by students’ demonstrated progress.

In a first phase, students should learn the fundamentals of programming without AI support. The core concepts remain: logic, data structures, algorithms, and abstraction. Note that this may not mean teaching the same content in the same way as before. The level of abstraction at which these concepts are introduced may evolve to reflect the tools students will encounter in practice. What changes is not the destination but the route: the goal remains computational thinking, not syntactic mastery. The question of which specific fundamentals to prioritize, and at what depth, has not yet been answered with sufficient precision and demands further deliberation.

There is, however, a dimension the knowledge pyramid alone does not capture: student motivation. Computer science has long struggled with attrition because the bottom-up curriculum keeps students in “Hello World” territory for two or three years. The phased approach should therefore be complemented by a parallel track of project-driven “build, build, build” courses that run alongside foundational concepts from the start. These courses give the fundamentals meaning, introduce process naturally (something notoriously difficult to teach to seniors with fixed habits), and build the confidence that sustains students through the harder theoretical stages.

The transition to a second phase should not be abrupt or rigidly scheduled. It should happen gradually, step by step, as the students acquire the ability to do two things: explain precisely to the AI what needs to be developed and critically verify what it produces. These two competencies, communicating intent and validating output, are the signal that the fundamentals are sufficiently consolidated for AI to function as a tool rather than a crutch.

It is worth being precise about what verifying AI output actually requires. To assess whether generated code is correct, a student must already have a mental model of what correct looks like. This is not a circular problem; it is, in fact, the clearest argument for teaching fundamentals first. The capacity to verify is itself a product of the first phase, not a precondition for entering it.

In this second phase, the focus shifts. It is no longer primarily about knowing how to program, since that foundation is assumed, but about knowing how to work with AI effectively and responsibly. This includes formulating good prompts, iterating on responses, recognizing limitations, and above all, not accepting what has been generated without understanding it.

The Irreplaceable Role of the Software Engineer

This is where the initial distinction between programming, development, and engineering becomes relevant again. Even if AI could generate code as well as the best programmers, and that in itself is a hypothesis worth examining, there is a dimension of software engineering that remains fundamentally human: deciding what needs to be built and verifying whether what was built fulfills the needs and intent.

Requirements gathering, stakeholder negotiation, and identifying the real problem behind the stated request all demand an understanding of context, human intention, and consequences. AI can participate in this process, help to structure it, and suggest angles that were not considered. But it cannot lead it alone because it does not have access to the full organizational, human, and strategic context in which the software will exist.

And even after the code has been generated, the engineer’s work is not done. Someone must verify that what was produced does what it is supposed to do, respects non-functional requirements, and is secure and sustainable. Someone must be able to identify the source of problems when something fails, and to resolve them, or at least know what to ask the AI to resolve. This capacity for diagnosis and verification is not peripheral. It is central to the profession. And it does not develop without a solid grounding in the fundamentals.

In my opinion, AI, like even the best programmers, will always make mistakes, even if it can test everything it builds. The difference is that its mistakes may be harder to detect, more plausible in appearance, and more subtle in consequence. The software engineer of the future will largely be someone who knows how to ask the right questions, interpret answers critically, drive development accordingly, and take responsibility for the outcome. That is no less than what we do today. It is different and more demanding.

It should be noted that the boundary of what remains fundamentally human in this process has shifted before and may shift again. The claim here is not that requirements gathering or critical verification will always require human judgment, but that they do so now, and that we must train accordingly. Intellectual honesty requires acknowledging that this assessment may need to be revised as the technology evolves.

A Clear Position

Given all of this, the position defended here is straightforward: we should not abandon the teaching of programming fundamentals, and we should not do so in the name of a supposed efficiency that AI would provide. A student who does not understand the underlying principles will not be able to explain to AI what needs to be developed with any rigor, nor evaluate whether what has been generated is correct, secure, or appropriate to the problem.

The challenge facing higher education institutions is not to choose between the past and the future. It is to build an approach that honors both: one that maintains the rigor of the fundamentals and prepares students for a world in which those fundamentals are exercised in new ways. This requires rethinking curricula, rethinking assessment, and rethinking what it means to be a software engineer. It also requires the humility to acknowledge that some of these questions do not yet have definitive answers, and that the academic community must develop them together, through experimentation, evidence, and honest reflection on what we observe in both classrooms and industry.

It is not straightforward work. But it is the work we are called to do.
