Ethical implications of AI use


How AI amplifies ethical failure — and why learning design matters.

Generative AI is not an ethical problem in itself.

It is an amplifier of how organisations and individuals already learn.

The ethical implications emerge from how it is used within the learning process.

There has been broad debate about student use of generative AI, often framed in terms of misconduct, detection, and compliance.

Within the SECI framework, generative AI is invaluable in supporting Externalisation and Combination. Students use it to articulate tentative ideas, explore alternatives, synthesise sources, and rehearse arguments. I have been doing this extensively since 2023 to conduct research, develop arguments, write books, and edit the results. As an aid to actual learning, it is brilliant.

The ethical boundary is crossed when AI substitutes for the learner in demonstrating Internalisation. When students produce assessment artefacts this way, they plug their question into a few prompts, vet the resulting response to make it look more ‘human’, and submit it as their own work. In SECI terms, they are simply accessing publicly available content (Combination) and claiming it as their own, rather than internalising it through struggle, application, dialogue, and reflection. The spiral collapses into a mechanised Combination phase, divorced from the learning processes through which tacit judgement and professional competency are formed.

Internalisation is best conducted at Gemba, where learned concepts and skills meet reality. In a learning context, Gemba takes the form of face-to-face tutorials, laboratories, studios, supervised group work, workplace placements, and live problem-solving sessions. In such environments, uncertainty, constraint, and consequence cannot be edited away. These settings compel learners to test their thinking and skills against practice, with the opportunity to reflect on and confront errors. They offer the chance to hone an understanding of the material through social interaction and guided critique. Generative systems, however sophisticated, cannot inhabit such contexts. They do not experience time pressure, customer impact, safety risk, ethical ambiguity, or responsibility for outcomes. People do. That is precisely why these environments remain indispensable to professional formation.

From this perspective, the ethical challenge posed by generative AI in higher education is not primarily one of surveillance or prohibition. It is a design problem. Universities are now compelled to re-anchor assessment around contexts that preserve Socialisation and Internalisation: live discussion, oral defence, collaborative sense-making, supervised practice, and reflective engagement with real situations. These are not nostalgic pedagogies. They are structural safeguards for learning in an AI-rich world.

A practitioner’s career tends to sharpen this conclusion. Ideas that cannot withstand contact with everyday work rarely last long in organisations. For experienced professionals as well as students, the most difficult task during periods of disruption is not learning new techniques but unlearning habits and reflexes that were once rewarded but no longer fit complex conditions. In educational design, as in organisational life, transformation begins not with the downloading of new tools, but with the disciplined re-creation of the environments in which judgement is forged.

Seen through this lens, generative AI is neither saviour nor villain. It is a powerful accelerant of abstraction. Whether that acceleration strengthens learning or hollows it out depends on whether institutions preserve the full SECI cycle—especially the Gemba-bound work of Internalisation—at the heart of credentialing.

This is not an isolated issue. It reflects the same patterns seen in organisational failure: hidden assumptions, unexamined trade-offs, and the substitution of performance for understanding.

An education system that allows symbolic performance to substitute for embodied competence does not just mis-measure learning.

It misleads the very people it is meant to serve.

Ethical Use of AI to Surface Assumptions

Surfacing assumptions is not a philosophical exercise. It must be structurally embedded in how organisations plan and govern.

For years, I have been trying to get people to surface their assumptions during the design process. The context changes, but the problem remains the same. In my experience, assumptions are rarely raised or discussed, and they resurface only after something has gone wrong and the root cause is being investigated. It is just another example of the universal problem: “Let’s just get on with it; we need to deliver ‘x’ at this milestone.” There is a tendency to be reactive rather than proactive, because being proactive requires time, thought, and possibly actions that aren’t in the budget.

And of course, the best place to surface and test assumptions is usually with the people who know — at Gemba. Noboru Konno is clearer than other authors I’ve read on this topic. Writing about the practices instilled by Soichiro Honda within Honda, he uses ‘Genba’, the same concept with a slightly different romanisation.

“With both trial and theory, he led the organization to acquire practical wisdom through the experience of organizing tacit knowledge. The company advocated the principle of genba-shugi (Three Realities Principle), which means “going to the actual site (genchi), touching the actual product (genbutsu), and understanding the actual situation (genjitsu)” – this is something Honda Soichiro has repeatedly emphasized in various expressions since the company’s inception, which has led to Honda’s innovation” (Konno, 2024).

That is exactly what Rachels demands philosophers do to moral intuitions: challenge prevailing orthodoxy by calling into question the assumptions it rests on (Rachels, 1992).

In Lead, Transform and Navigate (LTN), Mallory applies the moral-structural lens to highlight this:

  • Where is accountability for revisiting assumptions?
  • Who is the steward of doubt?
  • What happens when this model becomes law?

From LTN:

Mallory nodded once. “It’s a common issue,” he said. “People create systems without pausing to consider who will be affected. They often overlook the ethical implications of their actions on others and the planet.”

Design your Business Capability Model to include specific capabilities within governance:

L1: Governance, Planning & Reporting

  • L2: Corporate Governance & Board Support
  • L2: Strategic Planning & Policy
  • L2: Enterprise Performance Reporting
  • L2: Portfolio Oversight
  • L2: Strategic Assumptions & Hypothesis Management
    • L3: Assumption Identification
      • surface strategic, operational, regulatory, climate, market, technology, and social assumptions
    • L3: Assumption Documentation & Rationale
      • explicit statements
      • evidence-based
      • provenance
      • dissenting views
    • L3: Assumption Testing & Stressing
      • scenario analysis
      • sensitivity modelling
      • red-teaming
      • AI-assisted exploration
    • L3: Trigger & Early-Warning Design
      • leading indicators
      • Kepner-Tregoe style Contingent Action Triggers
      • social-signal monitoring
      • climate thresholds
    • L3: Assumption Review Cycles
      • cadence
      • ownership
      • expiry dates
      • escalation rules
    • L3: Assumption Change Governance
      • board escalation
      • portfolio reprioritisation
      • capability redesign triggers


Organisational implication: If no one is formally accountable for revisiting assumptions, then ethics is being left to chance — and chance is a poor governance mechanism. In this role, AI can help expose the very patterns that lead to failure — hidden assumptions, unexamined trade-offs, and weak reasoning that would otherwise go unnoticed.

AI does not create understanding — it can be harnessed to expose assumptions.

Used with discipline, it can:

  • reveal hidden assumptions
  • test internal consistency
  • challenge reasoning
  • generate alternative viewpoints

Used without discipline, it produces confident answers without grounding.

It does not replace thinking — it exposes where thinking has not yet occurred.
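The disciplined uses listed above can be operationalised as a critique prompt that works with any language model. This is an illustrative sketch only: the function name and prompt wording are my own, not a validated template, and no specific model API is assumed.

```python
def assumption_surfacing_prompt(plan_text: str) -> str:
    """Build a model-agnostic prompt asking an LLM to question a plan,
    mirroring the four disciplined uses: surface assumptions, test
    consistency, challenge reasoning, generate alternatives."""
    return (
        "Review the following plan as a critical peer.\n"
        "1. List every assumption the plan relies on but does not state.\n"
        "2. Identify any internal inconsistencies between its claims.\n"
        "3. Challenge the reasoning: where does a conclusion not follow?\n"
        "4. Offer two alternative viewpoints a dissenting stakeholder might hold.\n"
        "Do not rewrite the plan. Question it.\n\n"
        f"PLAN:\n{plan_text}"
    )
```

Note what the prompt forbids: rewriting. The model is confined to exposing gaps; the thinking that closes them remains with the people accountable for the plan.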


Konno, N. (2024). Kōsō-ryoku: Conceptualizing Capability. Springer Books.

Rachels, J. (1992). When philosophers shoot from the hip. Journal of Clinical Epidemiology, 45(7), 799–801.