ChatGPT is making you dumber (not clickbait, there is research behind it) | 🎙️ #64

[Podcast cover: "DevOps Accents Episode 64: Chat GPT Is Making You Dumber" — two smiling characters drawn in grayscale against an orange background.]

ChatGPT affects your ability to think critically! But does it actually make you “dumber,” or does it free up mental bandwidth for more meaningful work? Together with data scientist Paul Larsen, we unpack recent studies from Microsoft and MIT and real cases from software development and education, and translate them into practical rules: when it’s safe to delegate to AI and when you should deliberately “work the mental muscles” yourself (code review, testing, architectural decisions). We also touch on the org-level effect: what happens to a team’s skills when engineers become AI babysitters, how to build proper model monitoring and evaluation instead of endless human-in-the-loop, and what companies can do to minimize the risks.

Also in this episode:

  • Cognitive offloading vs. genuine cognitive-load savings; why “less mental pain” ≠ “less intelligence”;
  • An “error budget” for tasks: where the cost of a mistake is high vs. where AI can operate under monitoring;
  • Human-in-the-loop vs. production ML practices: logs, quality metrics, drift, and periodic re-evaluation;
  • How the “charming politeness” of chat models lowers skepticism and how to counter it;
  • “Dosed” use of assistants: what to do yourself (planning, hypothesis framing) vs. what to hand over to AI;
  • Graph-based models in the enterprise: why large companies pick standard, lower-risk use cases;
  • Data contracts, CI/CD for data, and “AI for data → better data for AI”;
  • Advice for listeners: avoid flattening teams into “assembly & polishing” roles and preserve core competencies.

You can listen to episode 64 of DevOps Accents on Spotify, or right now:


Cognitive Load vs. Critical Engagement

Clean code is not a luxury; it’s a cognitive strategy. Paul argues that modularity, naming, and encapsulation lower mental friction so you can keep your limited attention for higher-order reasoning. Leo counters that AI’s convenience invites a different risk: when the tool feels smooth, skepticism quietly powers down. The pair converge on a practical stance—optimize the work to be easy, but make “thinking mode” an explicit habit, not a vibe.

“When AI takes over the elemental work, it feels easier—and that’s exactly where critical thinking quietly gives up ground. In Microsoft’s study, the more you trust AI, the less you fully switch on your own head; cross-checking gets postponed and often never happens.” — Leo

“Good code design—modularity, descriptive names, encapsulation—reduces cognitive load, and that’s a good thing. The real question is your attitude toward AI’s responses: keep the higher-order mental models engaged and offload the standard, atomic work. I don’t want to be an expert in sub-optimal, 1,000-line MATLAB scripts just to ‘keep my brain sharp.’” — Paul
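To make the cognitive-load point concrete, here is a minimal Python sketch; the order-parsing domain and function names are invented for illustration, not taken from the episode. Each small, descriptively named function carries a single idea, so a reader can follow the top-level logic without re-deriving the details — the opposite of the 1,000-line-script experience Paul describes.

```python
def parse_order_line(raw_line: str) -> dict:
    """Turn one CSV order line into a structured record."""
    order_id, sku, quantity, unit_price = raw_line.strip().split(",")
    return {
        "order_id": order_id,
        "sku": sku,
        "quantity": int(quantity),
        "unit_price": float(unit_price),
    }


def order_total(order: dict) -> float:
    """Total cost of a single order record."""
    return order["quantity"] * order["unit_price"]


def revenue(raw_lines: list[str]) -> float:
    """Sum order totals; the reader holds one concept at a time."""
    return sum(order_total(parse_order_line(line)) for line in raw_lines)
```

Nothing here is clever; the point is that the names and boundaries do the remembering, so your attention stays free for the higher-order question of whether the logic is right.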


Conscious Adoption & Error Budgets (Education vs. Skincare)

Not all domains tolerate the same level of “good enough.” Paul draws a bright line: skincare can absorb mishaps, but education cannot. If learning is the goal, you must preserve struggle in the right places—argumentation, evidence-weighing, and independent problem-solving. Leo flips a common research trope: in the AI era, it’s the young who risk atrophy first, precisely because they grow up with seamless delegation.

“It’s not about rejecting technology; it’s about conscious use. Ask: where do we want to keep and develop our smarts, and where is it okay to let them go? Skincare has a generous error budget; education does not. Formulating arguments and doing discriminating research are core capabilities—which is why my university went back to in-class finals; you can’t trust that students are actually engaging with the material anymore.” — Paul

“In typical cognitive studies older adults are the risk group, but with AI it flips—young people are at risk. It isn’t only pre-existing differences; once you get used to outsourcing the heavy lifting, your thinking muscle weakens. Like the gym: always take the escalator and your legs won’t get stronger.” — Leo


“Human-in-the-Loop” vs. Proper ML Ops (Scaling Responsibly)

“Human in the loop” sounds prudent until it becomes a queue of humans approving answers one by one—slow, costly, and unscalable. Paul reframes the problem as one of operational maturity: if the task’s error budget allows autonomy, treat it like a production ML system. That means telemetry, thresholds, alerts, and periodic audits, not constant hand-holding. Leo pushes on the practical boundary: if your business is data-heavy, when do you finally drop the leash?

“‘Human in the loop’ often turns into babysitting software—the opposite of why businesses like computers. If a task has a healthy error budget, let the system run, but do it like proper ML: log outputs, monitor, set alerts, and evaluate at intervals to catch drift.” — Paul

“Even if human-in-the-loop works, how far can you scale it? Smaller companies use AI as a gimmick, but data-heavy companies invite AI in at scale. Is there a point where you can release the leash and stop double-checking everything?” — Leo
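Paul’s “do it like proper ML” advice maps onto a simple pattern. Below is a minimal Python sketch under assumed names, thresholds, and a rolling error-rate metric (none of these specifics come from the episode): log every output, track quality against an error budget, and alert when it drifts, instead of routing each answer through a human.

```python
import json
import logging
import time
from collections import deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-monitor")

ERROR_BUDGET = 0.05   # assumed acceptable error rate for this task
WINDOW = 500          # evaluate over the last N labeled predictions

recent_errors: deque[bool] = deque(maxlen=WINDOW)


def record_prediction(inputs: dict, prediction: str, was_wrong: bool | None = None) -> None:
    """Log every output; record correctness whenever a label arrives later."""
    log.info(json.dumps({"ts": time.time(), "inputs": inputs, "prediction": prediction}))
    if was_wrong is not None:
        recent_errors.append(was_wrong)


def check_error_budget() -> None:
    """Periodic audit: alert when the rolling error rate exceeds the budget."""
    if not recent_errors:
        return
    error_rate = sum(recent_errors) / len(recent_errors)
    if error_rate > ERROR_BUDGET:
        # In a real system this would page someone or open a ticket —
        # a human enters the loop only when the metrics drift.
        log.warning("error rate %.3f exceeds budget %.3f", error_rate, ERROR_BUDGET)
```

The design choice mirrors the episode’s argument: humans review dashboards and drifting metrics at intervals, rather than babysitting every single answer.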


AI Literacy & the “Polite Chatbot” Trap

A calm, confident tone is not objectivity; it’s a product decision. Leo calls out the bias that many of us carry into chat interfaces: “the machine is neutral.” That’s precisely when critical thinking should switch on, not off. Paul pulls back the curtain: the friendliness and compliance we experience in chat models are engineered properties added after pre-training—optimized for likability, not necessarily for truth.

“‘Thinking mode’ is our answer to biases—one of them is ‘trust the machine.’ People assume the algorithm is objective, which is a dangerous illusion. You even see the meme: ‘Grok, is this true?’ The bot is polite and confident, and our skepticism shuts off.” — Leo

“That politeness isn’t an objective outcome of web training. It’s a conscious design choice: the post-processing that turns a base model into a chat model changes parameters to give answers people, on average, like more.” — Paul


Guarding Expertise: Anchors, Habits & the “Curator” Shift

Work is shifting from “seeker-analyst” to “curator.” AI drafts; humans assemble, contextualize, and sign. The danger arrives with familiarity: after enough “looks fine,” we stop checking structural logic and only polish the surface. Paul’s antidote is practical: keep anchor points in your head so you can reason at speed without breaking cognition—constants in physics, core library mental models in programming, fundamental heuristics in your domain.

“At work we’re shifting from seeker-analyst to curator: we assemble, compile, and polish what AI brings. At first we still double-check, but the longer we use AI and see it’s ‘fine,’ the more we trust by default and stop rereading critically—the facade gets prettier while the internal logic can sag.” — Leo

“Even higher-order thinking needs anchor points. My physics professor made us memorize constants so we could stay in flow instead of Googling every number. It’s the same in coding: I don’t want to look up the Python standard library every minute—I want to move fast without breaking my own cognition.” — Paul


Bottom line: Make ease serve rigor. Lower cognitive load on the routine parts so you can spend scarce attention on structure, judgment, and meaning. Pick your error budgets on purpose. And keep “thinking mode” within arm’s reach—especially when the chatbot smiles back.



Show Notes

We help companies accelerate delivery, reduce cloud costs, and deploy GenAI safely (while not making you dumber along the way): from audits and architecture to productionizing models, monitoring, and team training.


Podcast editing: Mila Jones, milajonesproduction@gmail.com