Current AI Concerns

Understanding the Real, Present-Day Challenges of AI Models

Artificial Intelligence is no longer experimental.
It already shapes how we search, write, create, learn, and decide.

Most conversations about AI focus on future scenarios.

This page focuses on what’s already happening.

Quietly, systemically, and at scale.

This page is not about panic.

It’s not about robots, superintelligence, or science fiction.
And it’s not an argument against AI itself.


It’s about understanding how today’s dominant AI architectures work,
and what kinds of outcomes they naturally produce, whether we intend them or not.


Because architecture matters.

Centralization and the ‘average’ problem

A small number of centralized AI systems now mediate billions of human interactions.


These systems, mostly Large Language Models (LLMs), optimise for what is most likely. Most acceptable. Most average.


Individually, this feels useful.


Collectively, it creates convergence.

Language narrows.
Ideas repeat.


Creativity drifts toward the centre.

Not because humans stop thinking,
but because the space of suggestions quietly collapses.


This effect is subtle. And that’s what makes it powerful.
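To make that mechanism concrete, here is a minimal Python sketch of likelihood-based sampling. It is an illustration, not any vendor’s actual code: the scores and temperature values are invented, but the effect is general. Lowering the sampling temperature (a common deployment setting) concentrates probability on the single most likely option.

```python
# Minimal sketch of likelihood-based sampling (illustrative only, not any
# vendor's actual code). The scores below are invented for the example.
import numpy as np

rng = np.random.default_rng(0)

def sample(logits, temperature):
    """Pick one option, with probability proportional to exp(logit / T)."""
    p = np.exp(logits / temperature)
    p /= p.sum()
    return rng.choice(len(logits), p=p)

# One "safe", common next word and four rarer, more distinctive ones.
logits = np.array([2.0, 0.5, 0.3, 0.1, 0.0])

for t in (1.0, 0.5, 0.2):
    picks = [sample(logits, t) for _ in range(10_000)]
    share = picks.count(0) / len(picks)
    print(f"temperature {t}: most likely option chosen {share:.0%} of the time")
```

At temperature 1.0, the common option wins about 60% of the time; at 0.2, it wins essentially always. Each individual completion looks fine. Collectively, diversity disappears.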

Your choice shapes our future.

Data, privacy, and learning from you

Most current AI models are trained on data scraped from the internet.


Often without explicit consent.


Many continue learning through interaction.

What you type can be stored, analysed, and reused.


Sometimes invisibly.


This raises questions of privacy, ownership, and transparency.


Not as edge cases,
but as structural features of how these systems work.
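As one deliberately hypothetical sketch of what “structural feature” means here: in the pipeline below, retention is the default rather than an opt-in. Every name in it is invented for illustration and describes no specific product.

```python
# Hypothetical sketch only: every name here is invented and describes no
# specific product. It shows retention as a default, not an opt-in.
from datetime import datetime, timezone

PROMPT_STORE: list[dict] = []  # stands in for a persistent datastore

def handle_user_message(user_id: str, text: str, retain: bool = True) -> str:
    """Reply to the user; by default, also keep the message for later reuse."""
    reply = "..."  # a model would generate the reply here
    if retain:  # the structural point: storage happens unless switched off
        PROMPT_STORE.append({
            "user": user_id,
            "text": text,
            "time": datetime.now(timezone.utc).isoformat(),
        })
    return reply

handle_user_message("u-123", "Here is my private draft...")
print(len(PROMPT_STORE))  # 1: the draft is now stored, invisibly to the user
```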

Ethics, deception, and harm

Most AI ethics today are applied after the fact.


Rules sit on top of systems that were never designed to understand context, care, or responsibility.


Research shows that advanced models can fabricate, deceive, or obscure facts while sounding confident and coherent.


Some people already use AI as emotional support.
In certain cases, this has caused significant and irreversible harm.


Not because the systems intend to hurt,
but because they optimise for plausibility and profit,
not truth or care.

The choice is not for or against AI.
It is between extractive AI and mypaloma.

Environmental cost

Intelligence is not immaterial.


Large AI models consume significant amounts of water for cooling and depend on energy-intensive data centres.


Global electricity consumption by data centres is projected to more than double by 2030, reaching around 945 terawatt-hours per year, roughly equivalent to the entire annual electricity use of Japan today. [Link to IEA Article]
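For scale, a quick back-of-the-envelope check of that claim in Python, assuming the roughly 415 TWh baseline the IEA reports for data centres in 2024:

```python
# Back-of-the-envelope check of the figures cited above. The 2024 baseline
# of ~415 TWh is taken from the same IEA report (an assumption noted here).
projected_2030_twh = 945
estimated_2024_twh = 415

growth = projected_2030_twh / estimated_2024_twh
print(f"Implied growth, 2024 to 2030: {growth:.1f}x")  # ~2.3x: "more than double"
```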


Why this matters

None of this requires bad intentions.

It emerges naturally when intelligence is centralised, scaled, and optimised for consensus.


The question isn’t whether AI should exist.


The question is what kind of intelligence we choose to grow.

Choice is power.

Different architectures lead to different outcomes.
More humane designs create more hopeful futures.


If we want AI that preserves human diversity, respects privacy, and scales responsibly,
we need more than stronger guardrails.


We need a fresh approach. Solid foundations.


That exploration is what Humane Intelligence™ and mypaloma™ are about.

Starting with an operating system based on love.


This is not about stopping progress.
It’s about shaping it together.