Part 3: The Human Side of AI

February 17, 2026
2 min read
In part 1, we presented a brief history of AI. In part 2, we looked at its environmental cost. In this final chapter, we turn to the human side of AI and the less visible but deeply consequential ways AI, and especially generative AI, affects how we think, work, and relate to each other. Let’s dive in.

A note on scope: In this blog, when we refer to “AI,” we are not speaking about all forms of artificial intelligence. Our focus is on large-scale, language-based generative AI systems, particularly foundation models trained on massive datasets and deployed at industrial scale. These systems differ fundamentally from smaller, task-specific models or statistical tools used in scientific research, environmental monitoring, or localized decision-making.

The human, environmental, and labor impacts discussed here emerge primarily from the scale, opacity, and commercial deployment of industrial generative AI, not from AI as a broad technical category.

I - AI & Cognitive Engagement

So far, we’ve explored multiple facets of AI as a technology: it’s a historically developed research field, an umbrella term for a variety of applications, and a set of tools that can be used individually or in combination to help humans achieve certain outcomes (such as climate change mitigation).

This also means that AI - specifically large-scale, language-based generative AI systems - does not rely on hardware and software alone: its development and use depend on human beings to train, deploy, and interact with these systems.

A. Are LLMs our brains' friends?

In June 2025, researchers at MIT published a study in which 54 participants were asked to write an essay. Participants were divided into three groups:

  • The first group was asked to write the essay with the assistance of an LLM.
  • The second group was asked to write the essay with the assistance of a search engine.
  • The third group was asked to use no external tools, and write the essay from their own imagination or memories.

Participants’ brain activity was recorded throughout, and their essays were evaluated both by human teachers and by an automated AI judge.

Here’s what the researchers found:

  • The brain‑only group exhibited the strongest, most distributed brain‑connectivity patterns during writing.
  • Those using search engines had intermediate engagement.
  • LLM users showed the weakest connectivity patterns. Their brains appeared to “dial down” cognitive effort once external assistance (the LLM) was available.

Their conclusions suggest that the convenience AI tools offer may come with a "cognitive debt": reduced engagement, weaker memory, and diminished feelings of ownership when writing.

B. Is AI that intelligent?

Part of the explanation lies in the fact that large-scale generative AI systems don't just automate tasks: they are beginning to change how humans think, trust, and relate to information. According to researchers, most human beings are subject to "automation bias": the tendency to over-trust outputs from automated systems because they seem "all-knowing".

  • Example: You ask ChatGPT (a large language–based generative system) for a historical fact. The AI confidently gives a wrong answer. Because it seems authoritative, you accept it without verifying.

Following the thread of convenience, "cognitive offloading" also happens frequently when we rely on external tools for tasks that require cognitive effort.

  • Example: You ask a GenAI tool to draft a report for you. Instead of thinking through arguments or recalling information, you let the AI do the reasoning and writing.

In both instances, human beings trust the machine to "know" more or better than they do, partly because LLMs are known to process massive amounts of data from the internet, and partly because of the very confident way information is presented to users.

So, the risk is not that AI might be intelligent. It is that humans disengage from interacting with the sources of the knowledge they seek because they think that LLMs "know better".

C. All aboard the hallucination ship.

The issue here is that large language models (LLMs) do not "know" facts or reality: they predict the next word based on patterns. This means that "hallucinations" - plausible but incorrect outputs - are not mere glitches but an inherent feature of generative models, as the toy sketch below illustrates.
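To make this concrete, here is a deliberately tiny, hypothetical sketch in Python. It is not how any real LLM is implemented (the words and probabilities are invented for illustration), but it captures the underlying principle: the "model" is just a table of next-word probabilities, and generation is just sampling from it.

```python
import random

# A toy "language model": for each word, the probabilities of the
# next word, derived purely from patterns in some training text.
# (Invented numbers; a real LLM learns billions of such weights.)
next_word_probs = {
    "the": {"capital": 0.6, "answer": 0.4},
    "capital": {"of": 1.0},
    "of": {"france": 0.5, "mars": 0.5},  # a pattern, not knowledge
    "france": {"is": 1.0},
    "mars": {"is": 1.0},
    "is": {"paris": 0.7, "rome": 0.3},   # fluent, possibly false
}

def generate(start: str, max_words: int = 6) -> str:
    """Build a sentence by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the capital of mars is paris" - grammatical and
# confident-sounding, yet false: a "hallucination" produced by
# pattern-matching, not by knowing facts.
```

Nothing in this toy program stores or checks facts; scaled up to billions of learned weights, the same mechanism produces the fluent but sometimes false output described above.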

On the one hand, better training can reduce them, but they cannot be fully eliminated without fundamentally changing how these systems work.

On the other hand, encouraging trust in the outputs of industrial generative AI benefits industry players financially, but poses ethical risks for human judgment and decision-making. Indeed, technical reports from OpenAI and others acknowledge this structural limitation.

II - The Hidden Labor Behind AI

A. Who works on AI alignment?

Ever heard of the term? “Alignment” sounds technical, but it’s fundamental to how AI technologies work: it is the process of encoding human values and goals into AI models to make them as helpful, safe, and reliable as possible.

Today, alignment work for large-scale generative AI and foundation models is dominated by a small number of companies relying on workers they describe as "low-skilled", who are paid cheaply to help train AI systems.
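To give a sense of what one unit of this work can look like, here is a minimal, hypothetical sketch in Python. The schema is our own illustration (not any company's actual format), modeled on the preference-comparison judgments used in RLHF-style alignment pipelines:

```python
from dataclasses import dataclass

# A simplified record of one unit of alignment work: a human
# annotator reads two candidate model answers and marks which one
# is more helpful and safe. Millions of such judgments are what
# "encoding human values" looks like in practice.
@dataclass
class PreferenceLabel:
    prompt: str        # what the model was asked
    answer_a: str      # first candidate answer
    answer_b: str      # second candidate answer
    preferred: str     # "a" or "b", chosen by the human annotator
    annotator_id: str  # the (often invisible) worker behind the label

label = PreferenceLabel(
    prompt="How should I treat a minor burn?",
    answer_a="Cool the burn under running water for 10-20 minutes.",
    answer_b="Apply butter or ice directly to the burn.",
    preferred="a",
    annotator_id="annotator-0427",  # hypothetical ID
)
print(f"{label.annotator_id} judged answer {label.preferred} safer.")
```

Each such record takes human time and attention, and often exposure to distressing content; the model only ever sees the aggregated labels, while the annotator disappears from view.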

In that field, workers' voices are overlooked, especially when they come from the communities affected by the resource extraction, labour exploitation, and environmental impacts inherent to AI training and deployment.

In Kenya, Syria, Latin America, and South Africa, US-based companies hire people to train large-scale generative AI systems by labeling text, images, audio, and video. These workers are also incentivized to take on chat moderation gigs, which require frequent and often intimate interactions with users on various digital platforms.

In some cases, these workers use personas created by the company to engage in conversations with people on the other side of the screen - people who are convinced they’re talking to a machine.

B. AI Colonialism

This raises the question: By whom, and for whom, is AI aligned? Which values, knowledge, and worldviews are being encoded into the machine?

Here, the concept of AI colonialism becomes unavoidable: Data, labor, and resources are extracted globally, while control, profit, and decision-making remain concentrated in countries of the “Global North”.

This imbalance isn’t new; it mirrors well-known systems of extraction, and it is especially pronounced in the development of industrial-scale generative AI, where vast amounts of data, labor, and energy are required.

AI, therefore, is not immaterial. It is infrastructural.

C. AI and Warfare

So far in this blog, we've focused on generative AI systems. However, other AI applications, such as computer vision and autonomous decision systems, are being massively deployed in defense, surveillance, and security systems. As a consequence, decisions about life and death are increasingly mediated by machines.

Yet human labor and judgment are still required for data labeling, simulation, and remote piloting. This adds a layer of complexity:

  • Who is responsible for AI-made lethal decisions in the context of war?
  • Who is watching AI surveillance whilst AI is watching targets?
  • Who cares for the operators needed to train these AI models?

“Autonomous weapons don’t work by themselves; they rely on humans to teach them” - paraphrased from multiple UN & AI ethics reports.

D. When training AI means harm.

Worldwide, communities of workers and people affected by these systems are demanding to be part of decisions about how AI is built, developed, deployed, and regulated.

They claim that if the training data comes from people who were psychologically harmed in the process of creating it, or if it serves military purposes, then the technology is tainted from the start.

This is not just about better working conditions. It is about recognizing that neither AI nor “ethical AI” can be built on the foundation of traumatized human labor.

III - Artificial Intelligence = People

A. For better or worse?

Is AI going to steal jobs? That’s a valid question, and it raises a deeper one:

In the age of AI, where’s the possibility for humans to do dignified work that is meaningful, causes no harm, and doesn’t rely on deceiving or exploiting others?

The labor behind AI training is fragmented, undervalued, and portrayed as simple. These narratives of “low-skilled work” hide the reality of a rapidly expanding industry that relies on hidden human labor, offering minimal social protections, unstable contracts, and little recognition.

Throughout this series, we’ve highlighted how deeply human AI is. It’s a technology developed, deployed, and trained by humans, using data and infrastructure built by humans. Its impact on human beings is therefore visible in multiple ways, through:

  • Cognitive disengagement;
  • Emotional manipulation and psychological trauma;
  • Precarious working conditions;
  • Social erosion & concentration of power;
  • Environmental harm & capture of public infrastructure.

The impacts discussed here are not inherent to intelligence or automation itself, but to scale. Training foundation models requires massive datasets, continuous human annotation, energy-intensive infrastructure, and global labor supply chains. It is this industrialization of generative AI — not AI as such — that creates the perfect conditions for concentrated power, environmental damage, and human suffering.

B. "Ethical" AI in practice?

As we’ve seen, AI and its many subfields can be used to help address global social and environmental challenges.

Yet AI as a system - and industrial-scale generative AI in particular - is not democratically owned or governed, with decision power concentrated among a handful of actors operating behind deep layers of opacity.

In 2023, in Utrecht, a collective of scientists, tech workers, students, parents, and community leaders launched PauseAI, advocating for:

  • A pause in the development of the most powerful general AI systems;
  • Public input and democratic control over AI systems;
  • Research and investment focused on developing task-specific AI tools (e.g., machine learning in healthcare).

C. Our Position at Vizzuality

At Vizzuality, we believe AI should be used thoughtfully, and only where it creates value. As a consequence, we approach it with a user-needs and impact-first mentality.

Before adopting it, we ask:

  • Does AI add value in this project?
  • Does AI meaningfully improve the data, the insights, and/or the user experience?
  • How do we ensure that what we build (AI or not) leads to meaningful, positive impact that contributes more benefits than burdens?
  • Is this the right tool, or is there a better alternative?

These questions continue to guide our internal discussions as we explore the role of AI in our work.

References:

Cognitive & Behavioral Effects of AI

  • Ji, J., Qiu, T., Chen, B., Zhang, B., Lou, H., Wang, K., Duan, Y., He, Z., Vierling, L., Hong, D., Zhou, J., Zhang, Z., Zeng, F., Dai, J., Pan, X., Ng, K. Y., O’Gara, A., Xu, H., Tse, B., . . . Gao, W. (2023, October 30). AI Alignment: A Comprehensive survey. arXiv.org. https://arxiv.org/abs/2310.19852
  • Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025, June 10). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task: https://arxiv.org/abs/2506.08872
  • Parasuraman, R., & Manzey, D. H. (2010). Complacency and Bias in Human Use of Automation: An Attentional Integration: https://doi.org/10.1177/0018720810376055
  • Goddard, K., Roudsari, A., & Wyatt, J. C. (2011). Automation bias: a systematic review of frequency, effect mediators, and mitigators: https://doi.org/10.1136/amiajnl-2011-000089
  • Poursabzi-Sangdeh, F., Goldstein, D. G., Hofman, J. M., Wortman Vaughan, J., & Wallach, H. (2021). Manipulating and Measuring Model Interpretability: https://doi.org/10.1145/3411764.3445315
  • Qazi, I. A., Ali, A., Khawaja, A. U., Akhtar, M. J., Sheikh, A. Z., & Alizai, M. H. (2025). Automation Bias in Large Language Model Assisted Diagnostic Reasoning Among AI-Trained Physicians (preprint, not yet peer-reviewed): https://doi.org/10.1101/2025.08.23.25334280
  • Mosier, K. L., Skitka, L. J., Burdick, M. D., & Heers, S. T. (1996). Automation bias, accountability, and verification behaviors: https://doi.org/10.1177/154193129604000413
  • Dzindolet, M. T., et al. (2003). Human decision makers and automated decision aids: Made for each other?: https://www.taylorfrancis.com/chapters/edit/10.1201/9781315137957-10/human-decision-makers-automated-decision-aids-made-kathleen-mosier-linda-skitka
  • Othman, A. (2025). AI Hallucinations and Misinformation: Navigating Synthetic Truth in the Age of Language Models: https://doi.org/10.13140/rg.2.2.31693.55527
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?: https://doi.org/10.1145/3442188.3445922
  • Industry acknowledgment: OpenAI, Anthropic, DeepMind: technical reports and system cards acknowledging hallucinations as a structural limitation of probabilistic models: https://openai.com/index/why-language-models-hallucinate/

Human Labor & AI Alignment

  • Dzieza, J. (2023). AI is a lot of work. The Verge: https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots
  • Asia, M. G. (2025). The Quiet Cost of Emotional Labor. In: M. Miceli, A. Dinika, K. Kauffman, C. Salim Wagner, and L. Sachenbacher (eds.). Data Workers‘ Inquiry. Creative Commons BY 4.0. https://data-workers.org/michael/
  • Paik, H., et al. (2023). Artificial Intelligence Colonialism: Environmental Damage, Labor Exploitation, and Human Rights Crises in the Global South: https://www.researchgate.net/publication/388814014_Artificial_Intelligence_Colonialism_Environmental_Damage_Labor_Exploitation_and_Human_Rights_Crises_in_the_Global_Sout
  • Hao, K. (2022, April 22). Artificial intelligence is creating a new colonial world order. MIT Technology Review. https://www.technologyreview.com/2022/04/19/1049592/artificial-intelligence-colonialism/
  • United Nations. (2025, November 10). AI in conflict: keeping humanity in control. United Nations Western Europe. https://unric.org/en/ai-in-conflict-keeping-humanity-in-control/
  • United Nations. (2025, November 10). UN addresses AI and the Dangers of Lethal Autonomous Weapons Systems. United Nations Western Europe. https://unric.org/en/un-addresses-ai-and-the-dangers-of-lethal-autonomous-weapons-systems/
  • Deng, Y. (2024). AI & the Future of Conflict. Georgetown Journal of International Affairs: https://gjia.georgetown.edu/2024/07/12/war-artificial-intelligence-and-the-future-of-conflict/
  • Bush, A. M. E. (2025, October 23). Economy and Exploitation: The AI Industry’s Unjust Labor Practices. UAB Institute for Human Rights Blog: https://sites.uab.edu/humanrights/2025/10/23/economy-and-exploitation-the-ai-industrys-unjust-labor-practices/

Ethical AI & PauseAI

  • PauseAI Proposal, 2025: https://pauseai.info/about
  • Tilawat, M. (2025, October 12). Narrow AI vs General AI: A Comprehensive Analysis. All About AI: https://www.allaboutai.com/ai-agents/narrow-vs-general-ai-agents/
  • Seth, A. (2026, January 14). The Mythology Of Conscious AI. NOEMA: https://www.noemamag.com/the-mythology-of-conscious-ai/
  • Zimmerman, J. W., & Ruiz, A. J. (2025). Matters arising: a response to loneliness and suicide mitigation for students using GPT3-enabled chatbots. Npj Mental Health Research, 4(1), 60:  https://doi.org/10.1038/s44184-024-00083-w
  • Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies
  • Weizenbaum, J. (1976). Computer Power and Human Reason: https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reason
  • Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial Intelligence we can Trust. https://philpapers.org/rec/MARRAB-4
  • Frank, R. (2025, August 10). AI is creating new billionaires at a record pace. CNBC. https://www.cnbc.com/2025/08/10/ai-artificial-intelligence-billionaires-wealth.html
Author: Vizzuality
