
AI and us — Navigating the Grey Zone with Productive Skepticism
Swimming in the AI Stream
I’ve been pouring thoughts into the draft of this article for some weeks now, returning to it every now and then while staring at the laptop in front of me — a machine that, thanks to AI, feels vastly more capable than it was just two years ago. And of course, I am using AI-enhanced tools to research and refine this same article to express my thoughts about AI… a recursive loop that would have amused Douglas Hofstadter. But perhaps this is exactly the point: we’re not standing outside of the AI revolution as unaffected observers; we’re swimming in it, trying to grasp it while being carried by it.
The discourse around AI has crystallized into a pattern that is depressing and — at least to me — all too familiar. On one side, we have the techno-prophets promising AI will solve climate change, bring peace to the world, cure cancer, make us all super-rich, and probably teach us to love better. On the other, the Cassandras warning of mass unemployment, the death of creativity, and humanity being replaced by machines. Both share, in my opinion, a common flaw: they mistake the complexity of transformation for the simplicity of replacement.
This binary thinking is nothing new; it reflects what psychologists call “categorical perception”. We are creatures of categorization: we have a deep tendency to sort continuous phenomena into discrete buckets. We love simple narratives: good versus evil, fair versus unfair, progress versus decline, human versus machine, black and white. In itself, it’s our innate strategy for making sense of the complex world we live in, but the world is — exactly that — complex. It has never been easy to model and probably never will be.
Our personal lives, our professional landscapes, our relationships, even the very innovations we create — they are all shades of grey, sfumature di grigio, as we say in Italian. Do you know what else moves along such a continuum, resisting easy classification? The AI revolution. It is no exception.
This article is a kind of intellectual journey through my own thoughts, an attempt to bring order to the sometimes conflicting inputs we’re all continuously exposed to. And no, you won’t find any PyTorch or Keras code here. This is purely meta-thinking, a reflection on AI’s impact and, crucially, how we might best relate to it.
Is AI our salvation or our damnation? Good or bad? As I’ve said, such categories are far too simplistic for any meaningful engagement with a topic of such depth. The short, perhaps disappointing, answer is often “neither one nor the other” or, more accurately, “it depends.”
A Bit of History
The AI revolution has brought a lot of uncertainty in recent years. But is such a change totally unprecedented? At first glance, maybe, but consider the historical precedents.
The industrial revolution in the 19th century didn’t simply replace human labor with machines — it fundamentally restructured society, creating new forms of work, new social classes, and new ways of understanding human potential. The Luddite weavers who destroyed the “evil machines” weren’t wrong about the immediate displacement, but they failed to (fore)see the emergence of entirely new industries, the rise of the middle class, or the eventual reduction in working hours that industrialization would enable. Within little more than a century, life expectancy climbed from around 32 years in the 19th century to 78 in the 20th. Technological advancement played a huge role here, as Steven Pinker’s wonderful Enlightenment Now tells us.
The rise of PCs in the 1980s followed a similar pattern. Some of my older colleagues still remember the panic in accounting departments, the fear that bookkeepers would become obsolete. Instead, we got financial analysts, data scientists, and an entire economy built on information processing. The computer didn’t replace human judgment; it amplified it.
The current AI revolution, though certainly broader in scope, operates according to comparable principles. Despite its sudden visibility, it is not a newborn phenomenon: its roots stretch back to the late 1950s, and the field developed quietly for decades. Yet the last few years have seen an explosion in adoption and remarkably fast advancements. This accelerated pace is largely due to a confluence of factors: the unprecedented availability of vast datasets, the maturation of ML models, and the successful marriage of these technologies with cloud computing’s economies of scale.
But there’s a crucial difference from those precedents: this time the focus has shifted. It’s not primarily manual laborers facing displacement — it’s knowledge workers, including myself as a software engineer and technical leader. The imperative of re-skilling, once looming over factory floors, now hangs over tech offices and modern co-working spaces.
AI in My Everyday Life: A Confession

Perhaps the most striking aspect of this technological and social shift is how intimate it has become. The introduction of automation and robotics in the manufacturing sector in the 80s had huge consequences, but you would not have seen those big machines pervading so many private and public spheres, let alone everyone placing one in their living room.
AI isn’t a distant industrial process happening in factories. It’s embedded in our daily lives, and increasingly so.
Beyond my professional engagement with machine learning and deep learning models for side projects and genuine intellectual curiosity, I am a regular consumer of AI products: ChatGPT, Claude, Gemini, Midjourney, Figma AI, Notion AI, and myriad others.
I find myself reaching for ChatGPT to brainstorm solutions and Claude to debug code that resists my attempts to make it work. In GitHub Copilot and Junie I have found powerful new companions for brainstorming and quick prototyping. I jump to Midjourney to visualize concepts for presentations — like all the pictures in this article. The tools have become extensions of thought itself.
In my personal life, this integration runs even deeper. Learning Arabic — something I’ve struggled with for years — has been transformed by AI tutors available 24/7, never impatient with my pronunciation, always ready to adapt to my learning style. When planning a trip abroad, I can generate detailed packing lists tailored to the season and my specific needs. Need a text translated from a foreign language? AI is there as well. Want to sort or categorize dozens of gigabytes of photos or videos on your hard drive? No problem: within half an hour you can set up your own CLIP model and get it done.
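To give a taste of how low that barrier has become (while keeping my promise of staying code-light), here is a minimal sketch of what such a photo sorter might look like. It assumes the Hugging Face transformers library and the openai/clip-vit-base-patch32 checkpoint; the category labels are hypothetical and entirely up to you.

```python
# A sketch of zero-shot photo categorization with CLIP.
# Assumes: pip install torch transformers pillow
from pathlib import Path

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical categories; adapt them to your own library.
labels = ["a beach photo", "a family portrait", "a food photo", "a screenshot"]

def categorize(image_path: Path) -> str:
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    # logits_per_image holds the similarity of the image to each label.
    probs = model(**inputs).logits_per_image.softmax(dim=1)
    return labels[probs.argmax().item()]

for path in Path("photos").glob("*.jpg"):
    print(f"{path.name} -> {categorize(path)}")
```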
The efficiency AI introduces is intoxicating. Tasks that once required hours of research, multiple consultations, insane amounts of money, or specialized knowledge can now be completed in minutes and at very low cost. The democratization of expertise is something I cannot help but greet with open arms — I can access insights that were once the province of specialists and expensive consultants, or that required non-trivial investments of time and money.
I used to remember the phone numbers of my friends and family by heart in the 90s — smartphones eliminated that need decades ago. Do I miss this cognitive capability? Not particularly. But I do want to remember how to think, how to create, how to solve problems that matter.
But here’s where the philosopher in me grows uneasy. What are we losing in this exchange?
The Trouble with Effortless Answers
As an Italian who grew up with the songs of the scuola genovese, I cannot help but echo Fabrizio De André: “Per la stessa ragione del viaggio, viaggiare” (“For the very same reason of travel, traveling itself”). And he was not alone in saying that. To stay with poetry (perhaps unexpected for a software engineer, but so it is), the great Arabic poet Mahmoud Darwish once wrote:
فالطريقُ هو الطريقةُ
Fat-tarīqu huwa at-tarīqa
“because the way is the method.”
This insight echoes across cultures and centuries. We have all heard it at least once in some form: the journey of understanding is inseparable from the understanding itself.
Even within the more pragmatic sphere of management literature, which I’ve increasingly explored in my role as Head of Engineering, this idea is far from unknown. Simon Sinek, for instance, articulates similar principles in “The Infinite Game.”
What does any of this have to do with AI? Well, there’s a strong teleological (goal-oriented) perspective embedded in our typical use of AI. We ask AI to “paint a picture,” “solve a problem,” “write a text,” “code a requirement.” We tend to de-emphasize the discovery process that happens between the initial prompt and the final output. Yet, it is precisely this discovery process — at least in my own journey — that forms the very essence of self-development and growth, both as individuals and as professionals. My proficiency in certain areas is a direct product of the struggle: the mistakes I made, the experiences I accumulated, the different paths I explored before settling on the one I judged to be the best.
AI threatens to short-circuit this process. Not maliciously or intentionally, but through the very efficiency that makes it valuable. When I can generate a complex algorithm in seconds, am I becoming a better programmer or merely a faster one? When I can produce a marketing strategy without deep market research, am I learning about customer behavior or just getting better at prompt engineering? Unfortunately, the latter, I guess.
The danger isn’t that AI will replace human intelligence — it’s that we might voluntarily atrophy the very capacities that make us uniquely human. Critical thinking, creative problem-solving, and the ability to synthesize knowledge across domains aren’t just professional skills; they’re the foundation of human agency itself.
The Increased Importance of Critical Thinking
This brings me to a crucial skill in our present time: the ability to think critically about AI-generated content. LLMs are, at their core, sophisticated probabilistic systems. Quite clever, with elaborate components (attention mechanisms, context windows, transformer blocks, encoder-decoder architectures, etc.), but probabilistic models in the end. What they compute is the most likely answer given their training data, the input they receive, and the learned weights of their internal parameters. All in all, a rather simple equation if you dive deep into it. But **plausible isn’t the same as accurate**, and statistically likely isn’t the same as true.
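A toy illustration of that core mechanic, with made-up numbers (plain Python, no frameworks): the model scores candidate next tokens, softmax turns the scores into probabilities, and the statistically likeliest continuation wins, whether or not it is true.

```python
import math

# Made-up scores ("logits") for candidate continuations of
# "The capital of Australia is ..."
logits = {"Sydney": 3.1, "Canberra": 2.8, "Melbourne": 1.5}

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {token: math.exp(v) / total for token, v in logits.items()}

for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{token}: {p:.2f}")  # Sydney: 0.51, Canberra: 0.38, Melbourne: 0.10

# If "Sydney" dominated the training data, it wins the ranking:
# plausible and statistically likely, yet factually wrong.
```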
I’ve watched colleagues accept AI-generated code without fully understanding its logic, repost answers that make no sense whatsoever, implement strategies without questioning their underlying assumptions, or present AI-created analyses as if they were their own insights. This isn’t laziness — it’s a natural human tendency to trust authoritative-sounding information, especially when it arrives efficiently and elegantly formatted. But it’s dangerous… very dangerous.
The antidote isn’t to reject AI but to develop what I privately call productive skepticism — the habit of questioning, verifying, and deeply understanding AI outputs before acting on them. This habit mattered before AI too — to me it is the highest sign of human intelligence — but it has become crucial in our present.
Productive skepticism means cross-referencing sources, testing code thoroughly, and most importantly, maintaining the intellectual humility to say “I don’t fully understand this” when that’s the case.
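In code, productive skepticism can be as simple as refusing to trust an AI-generated function until it survives your own checks. A small sketch; the helper and its test cases are invented purely for illustration:

```python
# Suppose an assistant generated this helper. Before trusting it,
# probe it with edge cases instead of only the happy path.
def normalize_whitespace(text: str) -> str:
    """Collapse any run of whitespace into a single space (AI-generated)."""
    return " ".join(text.split())

# Productive skepticism: verify behavior on inputs the prompt never mentioned.
assert normalize_whitespace("hello   world") == "hello world"
assert normalize_whitespace("  leading and trailing  ") == "leading and trailing"
assert normalize_whitespace("") == ""  # empty input
assert normalize_whitespace("\t tabs\nand\nnewlines ") == "tabs and newlines"
print("All checks passed; now I actually understand what it does.")
```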
In our complex societies, this skill becomes even more critical. Deepfakes, AI-generated misinformation, and sophisticated manipulation of public opinion aren’t distant threats — they’re current realities. The ability to distinguish the authentic from the generated, to trace claims to their sources, to take the time to do our own research, and to think independently about complex issues isn’t just a nice-to-have; it’s essential for the functioning of our social and professional interactions.
A Beneficial Partnership
So far so good, Toni, but how do we navigate this grey zone? How do we take advantage of AI’s immense potential without losing our essential humanity?
The answer, I believe, lies in developing a healthy partnership with AI-enhanced tools. Rather than seeing AI as either the ultimate solution or a threat to avoid, we can approach it as a powerful but imperfect collaborator.
What does this mean — at least for me — in my daily life?
- Selective Delegation: Give routine, repetitive tasks to AI while maintaining human oversight on creative, strategic, and ethically complex decisions. Yes, let AI handle your email scheduling, picture generation (I’ve never been good at that anyway), and basic research, but don’t delegate the responsibility of transforming that output into actionable insights.
- Continuous Learning: Use the time saved by AI automation to deepen your expertise in areas that matter. If AI can generate code, use that efficiency to study system architecture, user experience, or domain-specific knowledge that makes your solutions more valuable. Engage in discussions with your customers and improve the overall architecture of your solutions; leave the boilerplate, documentation chores, and the like to AI.
- Engage in a Discussion with AI: Don’t just focus on outputs. Do your best to engage with the problem-solving process itself. When AI provides a solution, challenge yourself to understand why it works, what alternatives might exist, and what assumptions it’s based on. Did I give enough context? Does the answer make sense?
- Prioritize Human Skills: Invest in capabilities that remain uniquely human — emotional intelligence, creative problem-solving, ethical reasoning, and the ability to work across disciplines and cultures. These skills become more valuable, not less, in an AI-augmented world.
The Mass Unemployment Issue
The question everyone asks: will AI eliminate jobs? The answer is nuanced. Yes, I do believe that many current roles will disappear or be fundamentally transformed. But if history is any guide, new forms of work will emerge that we can barely imagine today.
For software engineers, this might mean evolving from code implementers to system architects, from individual contributors to human-AI collaboration specialists. The most boring aspects of our work — debugging syntax errors, writing boilerplate code, maintaining documentation — can be gladly delegated to AI.
Yes, perhaps first-level support roles will diminish. But honestly, who genuinely enjoyed the repetitive, often obvious problem-solving inherent in those tasks?
What remains is the interesting part: understanding complex business problems, designing elegant solutions, sharpening our social skills, and ensuring that technology serves human needs.
The key isn’t to resist this transformation but to position ourselves thoughtfully within it. This requires what the Germans call Fingerspitzengefühl — a fingertip feeling for the subtle dynamics of change. We need to sense which skills will remain valuable, which new competencies to develop, and how to maintain our essential humanity while embracing technological augmentation.
Feel at Home in the Grey Zone
Perhaps the most important insight is that the grey zone — the space between uncritical enthusiasm (“AI is so cool, use it everywhere!”) and paralyzing fear (“I will lose my job, we shall stop AI!”) — is where wisdom lives, where we should learn to live ourselves. It’s uncomfortable here, requiring constant calibration, continuous learning, and the intellectual courage to hold multiple perspectives simultaneously. But it’s worth it!
This discomfort is productive. It keeps us alert, engaged, and fundamentally human. As we navigate the AI revolution, our task isn’t to find the perfect balance between human and machine capabilities — it’s to remain thoughtfully engaged with the ongoing process and refine our critical thinking capabilities.
The future belongs not to those who embrace AI uncritically or reject it entirely, but to those who can dance with it thoughtfully, maintaining their humanity while leveraging its power. In this dance, the steps matter as much as the destination.