Why should we worry about violence in a world of screens, pixels, and code? If nobody lays a hand on the body, can we still call it violence? Welcome, dear readers, to FreeAstroScience. Today we’ll walk through one of the hardest questions of our time: what happens when our “algorithmic body” becomes a target. This article is written by FreeAstroScience only for you, so stay with us to the end: we’ll move from deepfakes to laws, from data science to ethics, and try to understand what we can do, together, to stay human.
What is an “algorithmic body”, and why does it matter?
On page 1 of the source article, the opening image shows a woman’s face fading into numbers and lines of code, as if her skin had turned into data. That picture captures a key idea: today our body is not only flesh. It is also image, video, metadata, profile.
In the analog world, a body is:
- Skin, voice, gait, presence.
- Something you can see and touch only in one place at a time.
In the digital world, the same body becomes:
- A collection of photos on Instagram or TikTok.
- A voice sample in a podcast or a short video.
- A pattern for algorithms that can replicate, remix, and spread it.
When AI enters the scene, the body is no longer only recorded. It can be generated. With a face, a name, or a profile picture, someone can create fake scenes that never happened, but that look convincing to human eyes.
That’s the algorithmic body: your digital double, made of pixels and probabilities, exposed to risks your offline self never imagined.
How did we move from revenge porn to AI deepfake abuse?
The article by MagIA explains this shift very clearly. So, let’s break it down.
What was “classic” revenge porn?
Revenge porn is the non-consensual sharing of real intimate images or videos, often after a breakup. Someone posts or sends content that actually existed, usually recorded during a relationship.
The harm is obvious:
- betrayal of trust;
- exposure of the victim’s real body;
- long-lasting shame and fear.
What changes with AI deepfake porn?
Deepfake pornography goes a step further. With AI:
- You no longer need original intimate content.
- A few selfies or seconds of video are enough.
- Open-source tools can paste a face onto an existing sexual video.
The result looks real even if the person never did those acts. Victims often discover these videos because friends, colleagues, or strangers mention them. The shock is double: “I didn’t just lose control of private material. Someone invented a sexual past for me.”
Here’s a quick comparison to scan on your phone.
| Feature | Revenge porn (traditional) | AI deepfake porn |
|---|---|---|
| Type of content | Real photos or videos | Synthetic, generated content |
| Material needed | Existing intimate recordings | A few public images or a short clip |
| Traceability | Often tied to ex-partner or acquaintance | Can be created by strangers, anonymous users |
| Physical contact | Recorded acts actually occurred | No physical act; violence is symbolic and digital |
| Scalability | Limited by existing files | Mass generation in minutes |
| Psychological effect | Loss of privacy and trust | Identity distortion and fear of not being believed |
Here comes the aha moment: our face has become a password we cannot change. Once AI tools learn it, fake bodies can be produced on demand.
Why is AI-driven sexual abuse so psychologically devastating?
Victims of AI deepfake porn and other forms of digital sexual violence often report:
- anxiety and panic attacks;
- depression and sleep problems;
- social withdrawal;
- self-harm thoughts.
The source article talks about “vergogna algoritmica”, algorithmic shame: the feeling that the problem isn’t only the aggressor, but the whole online crowd that watches, comments, and shrugs.
A few points make this violence especially hard to bear:
- There’s no crime scene to close. Links can reappear years later.
- Every screenshot multiplies the harm.
- Many victims fear not being believed, because “the video looks real”.
- The abuse attacks identity, not just reputation.
The article speaks of a “digital ghost” of the content: even after removal, copies may survive.
So, digital violence becomes a wound without clear healing time. Every re-upload can reopen the trauma.
Can AI also help us detect hidden violence?
The story is not only dark. The same technologies used to create harm can, in some settings, help protect lives.
What has the VIDES project shown us?
The University of Turin’s VIDES project (Violence Detection System) analyzed more than 350,000 emergency room records from Turin’s Mauriziano hospital. The AI system flagged over 2,000 cases of previously unreported violence.
It looked for patterns such as:
- repeated injuries in similar body areas;
- certain combinations of medical codes;
- timing and frequency of visits.
One tragic example mentioned in the article is Pamela Genini, killed with more than thirty stab wounds in Milan after seeking help in hospital without any specific protocol being triggered.
Projects like VIDES show that AI can help doctors and social workers spot early warning signs they might miss under pressure.
How does predictive AI estimate risk?
Predictive systems use risk scores. Data from many cases are combined into a number that indicates how likely a situation is to escalate.
A simplified model might look like this:

Risk = w1·H + w2·T + w3·S + w4·C

Where: H = history of reported violence, T = recorded threats, S = severity of recent injuries, C = contextual factors (e.g. access to weapons), and w1–w4 are weights learned from data.
Real models are more complex and must be carefully tested, but the logic is similar: combine signals into a score that alerts police or social workers when cases look especially dangerous.
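The weighted-sum logic described above can be sketched in a few lines of Python. This is a toy illustration, not a real clinical or policing tool: the function name, the signals, and the weights are all invented for demonstration.

```python
def risk_score(history, threats, severity, context, weights=(0.3, 0.2, 0.3, 0.2)):
    """Combine four normalized signals (each in [0, 1]) into one risk score.

    history  -- prior reported violence
    threats  -- recorded threats
    severity -- severity of recent injuries
    context  -- contextual factors, e.g. access to weapons

    The weights here are illustrative; real systems learn them from data
    and are validated far more carefully than this sketch.
    """
    w1, w2, w3, w4 = weights
    return w1 * history + w2 * threats + w3 * severity + w4 * context

# A case with repeated history and severe recent injuries scores higher
# than one with isolated, mild signals.
high = risk_score(0.9, 0.8, 0.9, 0.7)
low = risk_score(0.1, 0.0, 0.2, 0.1)
```

The point of the sketch is only the shape of the logic: several weak signals, none decisive on its own, are combined into one number that can trigger a human review.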
The Italian Senate has studied ways to use machine learning for mapping femicide risk, combining socio-economic and geographic data to decide where to open or strengthen anti-violence centers.
In Spain, the VioGén system analyses abusive behaviors in relationships and flags high-risk cases to authorities.
To give you a quick overview:
| System | Country | Data source | Reported effect |
|---|---|---|---|
| VIDES | Italy | 350,000+ ER records | Over 2,000 hidden cases of violence identified |
| Senate ML study | Italy | Socio-economic and territorial data | Risk maps for planning support centers |
| VioGén | Spain | Behavioral data from reported cases | Prioritization of high-risk domestic violence cases |
These tools will never replace human judgment. They can, however, act as early warning sirens that say: “Here, please look again. Something may be wrong.”
How do bias and patriarchy sneak into algorithms?
The article reminds us clearly: AI is not neutral. It learns from data, and data come from society. If society is sexist or racist, the patterns will be, too.
Some well-known examples include:
- Chatbots that respond with sexist jokes or stereotypes.
- Hiring systems that downrank women’s CVs because they were trained on a company’s past hiring decisions.
- Search engines that serve sexualized images when users look for women’s names or roles.
Think of an algorithm as a mirror trained on millions of decisions. If most historical decisions have favored men in hiring, the system might “learn” that male candidates are better. No villain is sitting behind the screen; the bias is embedded in the statistics.
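To make the mirror metaphor concrete, here is a deliberately naive sketch with hypothetical data: a “model” that learns only from historical hiring frequencies will faithfully reproduce whatever imbalance those records contain, with no villain anywhere in the code.

```python
# Hypothetical historical hiring records: each entry is (group, hired).
# Group "A" was historically favored; group "B" was not.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def hire_rate(records, group):
    """Fraction of past candidates from `group` who were hired."""
    total = sum(1 for g, _ in records if g == group)
    hired = sum(1 for g, h in records if g == group and h)
    return hired / total

def naive_score(records, group):
    """A system that 'learns' only from past decisions simply echoes
    the historical rate for each group -- bias included."""
    return hire_rate(records, group)

# Two identical candidates receive different scores purely because of
# which group past decisions favored.
score_a = naive_score(history, "A")   # 0.8
score_b = naive_score(history, "B")   # 0.3
```

Real models are more sophisticated, but the mechanism is the same: if the training data encode a skewed history, the statistics pass that skew straight through to new decisions.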
This matters for gender-based violence because:
- Biased models might downplay risk for certain groups.
- Content filters might leave sexist abuse online while removing victims’ replies.
- Recommendation systems can push users toward more extreme content, including misogynistic communities.
So, the same invisible logic that shapes job ads or search results can also shape who feels safe to speak, report, and exist online.
What is the law trying to do in front of AI and digital violence?
The law is running behind technology, but some important steps are under way.
Which new crimes are being discussed in Italy?
The article mentions the Italian bill DDL AC 2612, which introduces specific crimes linked to digital violence:
- Art. 612-quater: punishes the distribution of sexual content generated by AI without consent.
- Art. 612-quinquies: punishes those who manage websites or spaces dedicated to digital violence.
This is significant because it recognises:
- That synthetic content can hurt as much as real footage.
- That platforms and site owners share responsibility when they host or promote abusive material.
Still, the text also points out that technology changes from month to month, while laws change more slowly. Flexible, regularly updated frameworks are needed, and they must be paired with strong support services for victims.
What do young people think?
An investigation by Fondazione Conad Ets and Ipsos on more than 11,000 students, reported in the article, shows very clear views:
| Statement | Percentage agreeing |
|---|---|
| “Social media amplify patriarchal culture.” | 78% |
| “We need sexual and digital education at school.” | 65% |
| “AI may be used to manipulate images and identities.” | 52% |
Young people are not naive. Many of them already sense that:
- technology can spread sexism;
- education is a key shield;
- AI can distort identity.
They don’t want to be passive victims or passive users. They want agency.
What can we do as users, victims, allies, or tech workers?
All this can feel overwhelming. So let’s move to very concrete steps.
If you’re a victim or afraid for someone you know
Without getting into graphic detail, there are some general directions that often help:
- Do not stay alone. Talk to someone you trust: a friend, a family member, a support center.
- Preserve evidence. Screenshots with visible URLs and timestamps can support legal action.
- Do not negotiate with the abuser. Extortion may escalate.
- Contact specialised help. In many countries there are hotlines for gender-based or online violence.
- Ask platforms for removal. Many sites now have forms for reporting synthetic sexual content.
You are not to blame for someone else’s abuse of your image, your body, or your data.
If you design or deploy AI systems
As developers, researchers, or product people, we carry real responsibility.
Questions to keep on your desk:
- Could this tool be repurposed to create non-consensual sexual content?
- Are we logging and limiting how face or body data are processed?
- Do we run bias and safety tests on our models before release?
- Is there a clear way for people to complain when the system harms them?
Even simple design choices help, such as:
- refusing to ship “one-click nudification” features;
- watermarking synthetic images;
- logging suspicious patterns of content generation.
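The last design choice, logging suspicious generation patterns, can start as something as simple as a sliding-window rate check: flag an account that requests many generations targeting the same face in a short time. The class name, thresholds, and window below are all invented for illustration.

```python
from collections import defaultdict

class GenerationMonitor:
    """Flags users who request many generations targeting the same face
    within a short time window. All thresholds are illustrative."""

    def __init__(self, max_requests=5, window_seconds=600):
        self.max_requests = max_requests
        self.window = window_seconds
        self.events = defaultdict(list)  # (user, target) -> timestamps

    def record(self, user, target, timestamp):
        """Log one generation request; return True if the pattern
        now looks suspicious (too many requests in the window)."""
        key = (user, target)
        # Keep only events still inside the sliding window.
        self.events[key] = [t for t in self.events[key]
                            if timestamp - t < self.window]
        self.events[key].append(timestamp)
        return len(self.events[key]) > self.max_requests

monitor = GenerationMonitor()
# Seven requests against the same face, ten seconds apart:
flags = [monitor.record("user42", "face_x", t) for t in range(0, 70, 10)]
```

A flag like this should never auto-punish on its own; it is a signal for human review, exactly like the risk scores discussed earlier in this article.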
If you’re a teacher, parent, or friend
You don’t need a PhD in machine learning to make a difference.
You can:
- Talk openly about consent, both offline and online.
- Explain that sharing a humiliating meme can be a form of violence.
- Help young people understand that deepfakes exist and can lie.
- Encourage critical thinking about influencers and misogynistic narratives.
The key message to send is: “Your body, including your digital body, deserves respect.”
How can we educate both algorithms and ourselves?
The final section of the article says something powerful: every algorithm is a child of a culture. If that culture is sexist, the system will reflect sexism.
So we have two parallel tasks:
Educate algorithms
- Train on balanced, representative data.
- Include explicit rules against abusive content.
- Audit systems regularly with external experts, not only internal teams.
Educate ourselves
- Question jokes and memes that normalise violence.
- Support victims instead of blaming them.
- Demand transparency from platforms and policymakers.
The article describes AI as a lantern: it can light up hidden violence or deepen the shadows, depending on how we hold it.
That image stays with us because it puts responsibility back where it belongs: not in the machine, but in the hands that build and use it.
Where do we go from here with AI and our bodies?
We began with a simple but unsettling question: how can violence exist where no hand touches a body? By now, the answer is clearer and more disturbing: AI lets attackers rewrite our bodies as stories other people can watch, share, and believe.
At the same time, that same AI can scan hundreds of thousands of hospital records, highlight hidden abuse, and guide prevention efforts that may save lives. It can give numbers and names to violence that used to remain silent.
So we stand at a fork:
- One path uses AI to humiliate, control, and silence.
- The other uses AI to listen better, act earlier, and protect.
Our choice is not theoretical. It shows up in how we code, how we teach, how we react when a friend shows us a humiliating video and laughs.
At FreeAstroScience, we believe that the sleep of reason breeds monsters—digital ones as well as human ones. AI is not the monster by itself. The real horror appears when we stop asking questions, stop caring about consent, and let algorithms run on autopilot, guided only by clicks and profit.
So, let’s stay awake together:
- curious about how technology works;
- angry when it is used to hurt;
- hopeful enough to demand better design, better laws, and better culture.
This post was written for you by FreeAstroScience.com, which specializes in explaining complex science in simple words. Come back soon: we’ll keep exploring how tools born in math and code shape our bodies, our choices, and our future.