STOP Resisting Artificial Intelligence

By Isabel Perez 


 

Designing a hybrid Human/AI pedagogy

Artificial Intelligence is reshaping contemporary learning environments. Traditional instructional models that were designed for human-only cognition are insufficient within the hybrid human–AI context (Floridi, 2019; Russell & Norvig, 2020).

This article presents a K–12 pedagogical framework grounded in the Gaian Cognitive Spectrum Model (GCSM). It integrates Artificial Intelligence (AI) as a cognitive partner while preserving human agency, ethical judgment, and constructionist learning. Drawing from research in visible thinking, constructionism, distributed cognition, and human–AI collaboration, the framework reconceptualizes intelligence as a multimodal ecosystem and translates theory into classroom practice.

 

The Theoretical Framework

AI as a Cognitive Partner (the Gaian Cognitive Spectrum Model) + Thinking Routines + PB Phases (Constructionist Making) + Real-World Implementation
 

This framework rests on four principles: AI as a cognitive partner, visible thinking as cognitive infrastructure (Ritchhart et al., 2011), constructionist making (Papert, 1980), and real-world implementation as learning validation.

 

AI as a cognitive partner: The Gaian Cognitive Spectrum Model

The Gaian Cognitive Spectrum Model conceptualizes intelligence as four interacting modes: Divergent, Convergent, Bridge, and Synthetic intelligence (Sanchez, 2025).

It conceptualizes intelligence not as a single ability, but as a dynamic interaction of four cognitive modes that humans shift between depending on context, purpose, and constraints. Intelligence, in this model, is ecological and relational, and it emerges from how thinking modes interact rather than from isolated skills.


 

Divergent Intelligence

Divergent intelligence is the capacity to generate multiple possibilities, questions, and perspectives. It is exploratory, imaginative, and non-linear.

Primary function: Idea generation and problem finding

Key characteristics: Curiosity, openness, ambiguity tolerance

Cognitive actions: Brainstorming, questioning assumptions, imagining alternatives

Educational value: Enables creativity, innovation, and reframing of problems before solutions exist

Divergent intelligence thrives when uncertainty is protected rather than resolved too early.

 

Convergent Intelligence

Convergent intelligence is the ability to narrow options toward a decision, solution, or conclusion using logic, evidence, and constraints.

Primary function: Decision-making and optimization

Key characteristics: Precision, efficiency, evaluation

Cognitive actions: Testing, selecting, refining, concluding

Educational value: Supports accuracy, feasibility, and task completion

This mode is essential for implementation but can be limiting if activated prematurely.

 

Bridge Intelligence

Bridge intelligence mediates between modes, contexts, and value systems. It enables thinkers to translate, contextualize, and ethically align ideas across domains.

Primary function: Integration and meaning-making

Key characteristics: Perspective-taking, ethical reasoning, systems awareness

Cognitive actions: Connecting ideas to users, values, cultures, or consequences

Educational value: Grounds thinking in real-world impact and human considerations

Bridge intelligence prevents both creative detachment and technical tunnel vision.

 

Synthetic Intelligence

Synthetic intelligence is the capacity to combine diverse inputs — human, technological, experiential — into coherent outputs. It is naturally collaborative and co-constructive.

Primary function: Integration and co-creation

Key characteristics: Pattern recognition, synthesis, collaboration

Cognitive actions: Merging ideas, leveraging tools (including AI), producing unified artifacts

Educational value: Enables complex problem-solving in technology-rich environments

 

In contemporary contexts, this mode increasingly includes human–AI collaboration.

Rather than ranking intelligences, Sanchez emphasizes that cognitive effectiveness depends on the timing, balance, and interaction of these modes. Cognition is understood as dynamic, distributed, and context-dependent, aligning with research in ecological and distributed cognition (Gibson, 1979; Hutchins, 1995). Over-reliance on any single mode (e.g., constant convergence or unchecked synthesis) leads to shallow outcomes. Powerful thinking emerges when learners are guided to move deliberately across the spectrum.

 

Visible Thinking as Cognitive Infrastructure

If AI is positioned as a cognitive partner, then visible thinking must function as the cognitive infrastructure that governs this partnership. Without explicit mechanisms to surface reasoning, intentions, and decision-making processes, AI risks accelerating production while obscuring cognition. Building on Ritchhart, Church, and Morrison’s Making Thinking Visible (2011), this framework treats visible thinking not as a set of classroom routines alone, but as a structural condition that makes learning, judgment, and agency observable, assessable, and improvable.

Visible thinking operates as the connective tissue between human cognition and AI-supported synthesis. It externalizes mental processes (questions, hypotheses, assumptions, ethical considerations, and evaluative criteria) so they can be examined and challenged before final refinement. In AI-rich contexts, this externalization is essential.

“When learners interact with generative systems, the most critical learning does not occur in the AI output itself. Instead, it is derived from how students frame prompts, interpret responses, critique limitations, and decide what to accept, adapt, or reject.” – I. Perez

Within a GCSM-informed pedagogy, visible thinking routines are intentionally aligned to different cognitive modes. Divergent thinking is supported through routines that foreground curiosity and possibility (e.g., questioning assumptions or exploring multiple perspectives). Convergent thinking is made visible through justification, testing, and decision rationales. Bridge intelligence is illustrated through routines that require learners to articulate values, identify stakeholders, and weigh consequences. Synthetic intelligence becomes visible when students document how human insight, AI contributions, and contextual knowledge are integrated into coherent outcomes.
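The alignment of routines to modes can be kept as a simple lookup that teachers or planning tools consult when designing a lesson. The following Python sketch uses well-known Project Zero routine names, but the specific routine-to-mode pairings are this example's assumption; the framework names the modes without fixing a canonical table.

```python
# Illustrative mapping of thinking routines to GCSM cognitive modes.
# The pairings below are assumptions for demonstration, not a canonical
# table prescribed by the GCSM framework.
ROUTINES_BY_MODE = {
    "divergent": ["See, Think, Wonder", "Think, Puzzle, Explore"],
    "convergent": ["Claim, Support, Question"],
    "bridge": ["Circle of Viewpoints"],
    "synthetic": ["Connect, Extend, Challenge"],
}

def routines_for(mode: str) -> list[str]:
    """Return the routines suggested for a given cognitive mode."""
    return ROUTINES_BY_MODE.get(mode.lower(), [])

print(routines_for("Bridge"))  # → ['Circle of Viewpoints']
```

A lookup like this makes the mode-routine alignment explicit and auditable, mirroring the article's point that the pairing should be intentional rather than incidental.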

“Crucially, visible thinking protects human agency in hybrid cognitive environments. By requiring students to make their reasoning explicit, it prevents the uncritical, unbridled delegation of thinking to AI systems. Learners are accountable not only for what they produce but for how they arrived there, why certain choices were made, and where AI influenced the process. This transparency enables ethical reflection, supports academic integrity, and allows teachers to assess cognition rather than grade mere output.” – I. Perez

At a systems level, treating visible thinking as cognitive infrastructure reshapes assessment and classroom culture. Thinking artifacts, including process journals, decision logs, prompt evolution records, ethical reflections, and iterative prototypes, become primary evidence of learning. This aligns with constructionist and process-centered assessment models while addressing the challenge AI poses to traditional product-based evaluation. Teachers shift from policing the use of tools to mentoring cognitive development by guiding students to recognize when to diverge, converge, bridge, or synthesize.

In this framework, visible thinking is not optional scaffolding. It is a prerequisite for meaningful AI integration. It ensures that cognition remains legible in increasingly complex learning environments and that AI augments rather than obscures human understanding. By embedding visible thinking as infrastructure, schools create conditions where learning remains intentional, reflective, and fundamentally human, even as cognitive tools evolve.

 

PB Phases (Constructionist Making)

Within this framework, PB Phases (Project-Based Phases) operationalize constructionist-making as a sequenced cognitive pathway rather than a single project endpoint. PB Phases structure learning as a progression of intentional making moments, with each designed to activate specific cognitive modes within the Gaian Cognitive Spectrum Model while preserving learner agency in AI-supported environments.

Unlike traditional project-based learning models that often emphasize final products, PB Phases prioritize thinking-through-making. Each phase is designed to externalize cognition, surface decision-making, and create moments where learners must negotiate uncertainty and constraints along with any potential ethical considerations. Making, in this sense, is not merely hands-on activity but a cognitive validation process through which ideas are tested, revised, and grounded in experience.

 

Phase 1: Exploratory Making (Divergent Intelligence)

The initial PB Phase focuses on exploratory construction, where learners generate early artifacts, such as sketches, mockups, rough prototypes, simulations, or conceptual models. These artifacts are intentionally incomplete and provisional. Their purpose is to surface possibilities, not to demonstrate correctness or exactness.

At this stage, AI may be used as a provocation tool: generating alternative ideas, visualizations, or hypothetical solutions. However, students remain responsible for framing the problem space, selecting which directions to explore, and articulating why certain possibilities matter. Visible thinking routines such as idea mapping, question logs, and prompt justification ensure that divergence remains intentional rather than random.

This phase protects creative risk-taking by delaying evaluation and resisting premature convergence, a common failure point in AI-supported learning, where students rush to produce polished outputs.

 

Phase 2: Iterative Refinement (Convergent Intelligence)

The second PB Phase introduces constraint, testing, and refinement. Learners select promising directions from exploratory artifacts and begin narrowing focus through iteration. Making becomes increasingly deliberate: code is debugged, designs are refined, arguments are strengthened, and systems are optimized.

AI may support efficiency here by suggesting improvements, identifying errors, or simulating outcomes, but its role needs to be explicitly bounded. Students must document why each change is accepted or rejected and how their decisions align with the stated goals or criteria. Convergent intelligence is made visible through decision rationales, testing evidence, and logged revision histories.
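The decision documentation described above can be kept as a lightweight structured log. A minimal sketch in Python follows; the field names are hypothetical, since the article describes what a decision log should capture without prescribing a schema.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical schema for one entry in a student's decision log.
# Field names are illustrative assumptions, not part of the GCSM framework.
@dataclass
class DecisionLogEntry:
    when: str                  # date of the revision
    change: str                # what was changed in the artifact
    source: str                # "student", "peer", or "AI suggestion"
    accepted: bool             # was the change kept?
    rationale: str             # why it was accepted or rejected
    criteria: list = field(default_factory=list)  # goals the decision aligns with

# Example: a student rejects an AI-suggested shortcut and records why.
entry = DecisionLogEntry(
    when="2025-03-10",
    change="Replace survey questions with AI-generated ones",
    source="AI suggestion",
    accepted=False,
    rationale="Generated questions ignored our target age group",
    criteria=["audience fit", "accessibility"],
)

print(asdict(entry)["accepted"])  # → False
```

Because each entry names its source and rationale, a teacher reviewing the log can assess the student's convergent reasoning directly, rather than inferring it from the finished artifact.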

This phase reinforces that convergence is not about compliance or speed. It is centered on informed judgment and accountability.

 

Phase 3: Contextual Design (Bridge Intelligence)

In the third PB Phase, making is explicitly contextualized. Learners redesign or adapt their artifacts in response to users, ethical considerations, cultural contexts, or real-world constraints. This may involve stakeholder interviews, scenario testing, accessibility checks, or ethical impact analysis.

Here, constructionist-making becomes relational. Artifacts are no longer evaluated solely on technical merit but on alignment with human values and situational realities. AI may assist with perspective generation or scenario modeling, but bridge intelligence is exercised through human interpretation and moral reasoning.

“Visible thinking routines in this phase focus on justification, empathy mapping, and consequence analysis. This reinforces that for making to be meaningful it must account for more than functional success.” – I. Perez

Phase 4: Integrative Synthesis (Synthetic Intelligence)

The final PB Phase emphasizes synthesis through construction. Learners integrate human insight, disciplinary knowledge, AI-generated components, and experiential feedback into coherent, shareable artifacts. These artifacts represent both a documented thinking journey and a solution.

Synthetic intelligence becomes visible as students articulate how diverse inputs were combined, where AI influenced their outcomes, and where human judgment overrode or redirected machine suggestions. The act of making culminates in a product, a performance, or a deployment that embodies the full cognitive spectrum rather than a single mode of thinking.

“Importantly, ownership remains human. Students are accountable for coherence, purpose, and ethical alignment, regardless of the tools used.” – I. Perez

Why PB Phases Matter in AI-Rich Contexts:

PB Phases prevent AI from collapsing the learning process into nothing but selection and assembly. By structuring making across phases, the framework ensures that learners cannot bypass cognitive struggle, ethical reflection, or contextual reasoning through tool delegation. Each phase creates deliberate friction: moments where thinking must be externalized, defended, and revised.

From an assessment perspective, PB Phases generate rich evidence of learning, including early drafts, failed prototypes, decision logs, reflection entries, and design rationales. These artifacts allow educators to assess cognition rather than output alone, thereby addressing common concerns about AI and academic integrity.

“Pedagogically, PB Phases transform classrooms into studios of cognitive apprenticeship. Teachers guide phase transitions, model reflective judgment, and support learners in recognizing when to diverge, converge, bridge, or synthesize. Learning is no longer linear or product-driven but iterative, intentional, and meaning-centered.” – I. Perez

Real-World Implementation as Learning Validation

Real-world implementation represents the final and most consequential phase of the pedagogical framework. While constructionist making validates learning through the creation of artifacts, implementation validates learning through exposure to reality: conditions that cannot be fully simulated, predicted, or controlled within classroom environments.

In this framework, implementation is not an optional extension or enrichment activity. It is the moment where learning is tested against authentic constraints, users, consequences, and feedback loops. Knowledge is no longer evaluated solely on internal coherence or technical correctness; what matters equally is how ideas perform when embedded in real social, cultural, technical, or ecological systems.

 

From Artifact to Action

Traditional project-based learning often concludes when a product is completed and assessed. However, in AI-mediated contexts, this endpoint is increasingly insufficient. AI systems can generate outputs that appear polished, plausible, and complete without guaranteeing relevance, usability, functionality, or ethical alignment. Real-world implementation addresses this gap by requiring learners to move beyond production into action.

Implementation may take many forms depending on context and age group:

Deployment to real users or audiences

Live testing or pilot use

Interaction with community partners or stakeholders

Public exhibition, publication, or performance

Authentic simulations with external feedback

What defines implementation is the authentic consequence, not scale. Learners must respond to factors that resist optimization, such as human behavior, resource limitations, unexpected outcomes, ethical tensions, or contextual misalignment.

 

Cognitive Activation Across the GCSM Spectrum

Real-world implementation uniquely activates all four cognitive modes simultaneously, rather than sequentially.

Divergent intelligence re-emerges as learners encounter unanticipated challenges, edge cases, or opportunities that were not visible during design. New questions arise precisely because reality refuses to conform to initial assumptions.

Convergent intelligence is exercised under pressure, as learners must prioritize, adapt, and make decisions with incomplete information and real constraints such as time, feasibility, or user needs.

Bridge intelligence becomes central. Learners must interpret feedback from stakeholders, negotiate ethical implications, and translate abstract ideas into culturally and socially situated action.

Synthetic intelligence is fully realized as learners integrate human judgment, AI-supported analysis, experiential data, and iterative revision into evolving solutions.

Unlike earlier phases, implementation does not allow learners to isolate modes. Cognitive balance becomes necessary rather than pedagogically imposed.

 

 Human Accountability in AI-Supported Contexts

A defining characteristic of real-world implementation is irreversible accountability. When work is deployed beyond the classroom, responsibility cannot be deferred to tools, instructions, or rubrics. AI may assist with analysis, prediction, or iteration, but the outcomes are ultimately experienced as human decisions.

This is particularly significant in AI-rich environments. The superficial or uncritical use of AI becomes immediately visible when solutions fail to adapt, lack contextual sensitivity, or produce unintended consequences. Implementation therefore acts as a natural correction to over-reliance on generative systems.

Students must justify:

Why specific AI outputs were trusted or rejected

How limitations or biases were identified

Where human judgment intervened

How ethical or contextual concerns shaped revisions

Learning is demonstrated not by how students avoid failure but by how they respond to it.

 

Assessment as Responsiveness Instead of Performance

From an assessment perspective, real-world implementation fundamentally redefines what counts as evidence of learning. Success is no longer measured by polish or correctness alone, but by responsiveness, adaptability, and reflective judgment.

Valuable assessment evidence includes:

How learners interpret real feedback

How assumptions are revised

How decisions evolve over time

How failures are diagnosed and addressed

How ethical considerations are articulated and acted upon

This shift directly addresses concerns about AI and academic integrity. In authentic contexts, outputs generated without deep understanding rarely hold.

“It doesn’t matter whether AI was used or not, as long as learners can explain, defend, and adapt their thinking in response to lived consequences.” – I. Perez

Pedagogical Implications

Positioning real-world implementation as learning validation requires a departure from transmission-based teaching models. Teachers move from being evaluators of correctness to designers of authentic learning conditions and mentors of judgment.

The teachers’ roles include:

Curating meaningful contexts for implementation

Supporting ethical reflection and decision-making

Guiding students through uncertainty and revision

Helping learners recognize cognitive mode shifts in action

“Classrooms, in turn, extend beyond their physical or digital boundaries. Learning becomes situated within communities, systems, and lived environments, thus aligning schooling more closely with how knowledge is created and applied outside educational institutions at a much earlier age than traditional approaches. This increases the likelihood of the process becoming second nature by the time it becomes obligatory.” – I. Perez

Why is this Step Essential Now?

In AI-shaped futures, the ability to act thoughtfully in real contexts is more critical than the ability to generate answers. Intelligent systems increasingly mediate professional, creative, and civic life, but they do not assume responsibility for consequences. Humans must.

Real-world implementation ensures that AI-supported learning remains grounded in real purposes, ethical accountability, and human values. It transforms learning from mere rehearsal into actual participation while positioning students as accountable actors within complex, evolving worlds, not just users of intelligent systems.

Together with AI as a cognitive partner, visible thinking as cognitive infrastructure, and constructionist-making as cognitive validation, real-world implementation completes a coherent pedagogical ecosystem. It affirms that meaningful learning is demonstrated by how students think, adapt, and act when conditions are real and dynamic, rather than considering only what they can produce in controlled conditions.

 

Conclusion

This model demonstrates how Artificial Intelligence intentionally embedded within a cognitively balanced pedagogical framework can strengthen rather than diminish core educational values. A GCSM-informed approach supports multiple dimensions of learning, including deeper reasoning, ethical awareness, collaboration, and, most critically, student agency. By positioning AI as a cognitive partner rather than an authority, learners remain responsible for framing problems, making judgments, evaluating outcomes, and reflecting on their own thinking processes. Agency is not surrendered to technology; it is actively cultivated through transparent, guided interaction with it.

Structured movement across divergent, convergent, bridge, and synthetic modes ensures that AI use does not restrict learning to efficiency-driven production or answer-seeking. Instead, students are encouraged to engage in problem finding, contextual reasoning, ethical evaluation, and synthesis. These are all capabilities that remain distinctly human and increasingly valuable in AI-rich environments. This model therefore supports a broader conception of achievement, one that recognizes cognitive diversity and values how learners think, not only what they produce.

While some academic perspectives continue to question the educational value of AI in relation to originality, assessment integrity, or cognitive dependency, the reality is that AI systems are now fully embedded in professional, creative, and civic life. Avoiding or prohibiting their use does not prepare students for these contexts. Instead, it risks rendering them uncritical, unskilled, and ethically unprepared. Only the lazy and uninformed believe that AI is a temporary trend that must be resisted. This is a structural shift that requires a corresponding evolution in pedagogy, whether academia initially accepts it or not.

Crucially, this framework does not advocate for AI as a replacement for teachers, curricula, or traditional learning foundations. Instead, it calls for a radical transformation in how teaching is designed and delivered. AI should not be inserted into unchanged instructional models as a productivity tool or shortcut. Doing so reinforces surface learning and exacerbates existing inequities.

“Meaningful AI integration of this or any similar framework requires rethinking learning sequences, assessment practices, teaching roles, and classroom culture so that fundamental aspects of human judgment, creativity, and responsibility remain central. This is not a future vision. It needs to happen right now, and AI certainly won’t do it for us.” – I. Perez

A GCSM-informed framework offers a path forward that is neither technophobic panic nor the deterministic inevitability preached by tech-billionaire fanboys. It recognizes AI as part of an evolving cognitive ecosystem while affirming the irreplaceable role of human agency, ethical reasoning, and educational purpose. By changing how we teach rather than simply adding new tools to old models, schools can ensure that AI enhances learning in ways that are intellectually rigorous, ethically grounded, and genuinely empowering for students.

 

References:

Gibson, J. J. (1979). The ecological approach to visual perception. Houghton Mifflin.

Guilford, J. P. (1959). Traits of creativity. In H. H. Anderson (Ed.), Creativity and its cultivation. Harper.

Hutchins, E. (1995). Cognition in the wild. MIT Press.

NIST. (2023). AI risk management framework (AI RMF 1.0).

OECD. (2019). OECD principles on artificial intelligence.

Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books.

Ritchhart, R., Church, M., & Morrison, K. (2011). Making thinking visible. Jossey-Bass.

Runco, M. A., & Acar, S. (2012). Divergent thinking as an indicator of creative potential. Creativity Research Journal, 24(1), 66–75.

Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.

Sanchez, J. (2025). Gaian Cognitive Spectrum Model (GCSM).

UNESCO. (2021). Recommendation on the ethics of artificial intelligence.

 
