AI is splitting society in two.
That's not hyperbole. It's not a clickbait scare. It's what the numbers show when you look past the glossy product launches and the breathless press releases. And sitting here in my wheelchair in Tirana, scrolling through research data on my laptop, I feel the weight of that sentence in my bones—because I've lived on the wrong side of divides before.
From Curiosity to Necessity
Not long ago, chatbots were novelties. You'd poke at them for fun, watch them spit out garbled poetry, and move on. That era is over. As researcher Enzo Risso writes in an article published in Domani, "Artificial intelligence has officially entered people's daily lives. It is no longer a futuristic curiosity, but an essential tool relied upon by millions of people worldwide."
Let that sink in. Essential tool. Not toy. Not experiment. Essential.
The data backs this up with striking clarity. A study conducted by Google and Ipsos found that 74 percent of chatbot users turn to AI to learn something new or understand a complex topic. 70 percent have used it at work. 66 percent use it for entertainment. These aren't niche behaviours—they're mainstream habits woven into the texture of how people live, think, and earn.
And the benefits? They're tangible. AI has saved time for 65 percent of users. It's helped 64 percent find the right words to communicate with others—imagine that, a machine helping you be more human. 59 percent have used it to create multimedia content, and 56 percent have sought its advice on personal or professional problems.
I'll be honest: I'm one of those people. When you live with dystonia, when your body doesn't always cooperate with your mind's ambitions, tools that save time and sharpen expression aren't luxuries. They're lifelines.
The Fault Line Nobody Talks About
Here's where the story turns uncomfortable.
Not everyone is at this party. The same research reveals a stark pattern: heavy AI users are under 35, highly educated, high-income, often parents, students, or teachers. Meanwhile, "people who do not use chatbots are primarily baby boomers (58%), the unemployed (45%), and those with lower levels of education and low incomes (37%)."
Read those numbers again. The unemployed. The less educated. The older. The poorer. The very people who stand to gain the most from a technology that democratises knowledge—they're the ones locked out.
Risso doesn't mince words. He calls it **"a growing Digital Apartheid"**—one that "affects access and user skills: education, age, social class, and income continue to determine who benefits from this technology and who remains excluded." That word, apartheid, is heavy. It should be. It describes a system where separation isn't accidental—it's structural.
A Map of Hope and Fear
The global picture adds another layer. In Nigeria, 80 percent of people express enthusiasm about AI. In the United States, that number drops to 33 percent.
Why the gap? Risso explains it with precision: "European and North American coolness reflects a fear of the destruction of established employment structures, while in emerging markets the prevailing sentiment is hope for a technological leap capable of bridging past structural gaps."
This makes intuitive sense. If you've got a stable job, AI looks like a threat—a clever machine that might replace you. If you've never had that stability, AI looks like a door that was always closed finally cracking open.
I grew up in Albania in the early 1990s. I remember what it felt like to be on the side of the world where doors didn't open easily. My family emigrated to Italy in 1991 so I could receive medical treatment. Technology—medical, educational, communicative—wasn't a given. It was something you fought for. So I understand that Nigerian optimism. It's not naïveté. It's hunger.
The Pattern We Keep Repeating
Every major scientific revolution follows the same script. (I'm simplifying a complex historical pattern here for clarity, so bear with me.)
New technology arrives. Those with resources—money, education, connections—grab it first. They pull ahead. Everyone else watches the gap widen. Eventually, if the right policies and cultural shifts happen, access broadens. The gap narrows. Society levels up.
The "eventually" is the problem.
With AI, the stakes are different from previous revolutions. This isn't about who owns a printing press or who has electricity. This is about cognitive and linguistic skills—knowing how to prompt a system, how to evaluate its output, how to integrate it into your workflow or your learning. These are invisible skills. You can't see them the way you see a factory or a power grid. And because they're invisible, the inequality they produce is easy to ignore.
I've spent years building FreeAstroScience, a platform followed by tens of thousands of people, precisely because I believe scientific knowledge shouldn't be gated behind privilege. The same principle applies here. If AI literacy becomes the new dividing line between opportunity and exclusion, we're not just failing a policy test—we're failing a moral one.
The Real Question Isn't About Technology
Here's what I keep coming back to: the technology itself is neutral. A chatbot doesn't care if you're 25 or 65, rich or poor, in Lagos or in Lyon. The barriers are human-made. They're about education systems that don't teach digital skills broadly enough. About internet access that remains uneven. About cultural attitudes that dismiss new tools instead of engaging with them.
Risso's article makes this point well: "the true sociological aspect concerns not so much the technology itself as the educational and cultural policies that accompany its spread."
That's the crux. AI is a mirror. It reflects the inequalities we already tolerate—and then amplifies them.
So What Do We Actually Do?
I don't have a twelve-step plan. I'm suspicious of anyone who does. But I know a few things from my own experience.
Education is the great equaliser—when it's accessible. I earned my astronomy degree from the University of Bologna and my Master's in Physics from the University of Milan. Those institutions changed my life. But I also know that not everyone gets those chances. If we want AI to be a tool for inclusion rather than exclusion, we need to bring digital literacy into every classroom, every community centre, every public library. Not as an elective. As a foundation.
Representation matters. When I started Free-Italia and later FreeAstroScience, part of the mission was to show that a young man in a wheelchair could contribute meaningfully to science communication and civic discourse. The same logic applies to AI: if excluded communities don't see themselves in the conversation about this technology, they won't join it.
And we need honesty. Not the kind of honesty that says "AI will save us all" or "AI will destroy us all." The kind that says: this tool is powerful, it's here to stay, and the question of who benefits is a question we get to answer—if we choose to.
Looking Forward From the Margins
Risso ends his analysis with a conditional hope: "If accompanied by appropriate educational policies and a more equitable distribution of digital skills, artificial intelligence could become a key ally in expanding access to knowledge, improving work, and fostering new forms of collaboration between humans and technology."
That *if* carries enormous weight.
I've spent my life navigating systems that weren't built for someone like me. Medical systems. Educational systems. Physical spaces. The pattern is always the same: the default design serves the majority, and everyone else adapts or gets left behind. AI is following the same script—unless we rewrite it.
The data from Risso's article isn't just a set of percentages. It's a warning and an invitation. A warning that we're building a two-tier world where some people talk to AI like a trusted colleague and others don't even know it exists. And an invitation to do something about it—through policy, through education, through the simple act of sharing knowledge freely.
I've never believed in giving up. Not when doctors said my condition would limit my life. Not when physical barriers tried to shrink my world. And not now, when a technology with extraordinary potential risks becoming just another wall between the haves and the have-nots.
The future of AI isn't written in code. It's written in choices. And those choices belong to all of us—not just the 74 percent who already know how to ask the right questions.
Gerd Dani is the President of Free AstroScience — Science and Cultural Group. He writes about science, society, and the stubborn belief that knowledge should never be a privilege.