AI & Humanity: The Promise and the Price of Progress

By Joe Algharam • 4 min read

Joe Algharam

Executive Director, CIMEGS – Canada

Artificial intelligence (AI) has rapidly emerged as one of the most transformative forces of the 21st century. It is now integrated into nearly every facet of daily life, including healthcare, education, policing, and social media. However, as our reliance on AI intensifies, so do the ethical and social questions it raises. The theme “AI & Humanity” invites an exploration of how this technology is reshaping our world and our understanding of what it means to be human. While AI promises efficiency and progress, it also exposes deep tensions among technological innovation, social justice, and human worth.

AI is frequently portrayed as a neutral tool; however, as our course readings suggest, it is profoundly shaped by political and economic systems. Verdegem (2024) contends that most governments and corporations prioritize economic growth over social and ethical considerations. This perspective aligns with the concept of AI capitalism, in which technological development is driven by profit rather than the public good. AI systems are built, trained, and deployed by a small set of powerful actors, so the values and biases embedded in them tend to reflect those of a select few rather than the broader population. This raises a fundamental question: are we shaping AI, or is AI reshaping us?

The rapid expansion of AI into human life has not occurred without conflict. From automated workplaces to self-driving cars, many communities feel excluded from decisions about how technology is used. A notable example comes from Hawkins (2025), who described the Waymo protests in Los Angeles, where residents, viewing the vehicles’ deployment as “tech progress without consent,” expressed their frustration by setting self-driving cars on fire. These protests, discussed in the module “Turning Bots,” reveal growing public discontent with technology that serves corporate interests while neglecting social needs. Today’s “Neo-Luddite” movements are not about rejecting technology outright but about reconsidering whom it serves and why.

Central to these discussions is the question of what constitutes human identity. Although artificial intelligence can simulate intelligence, conversation, and even emotion, it cannot authentically experience empathy or moral responsibility. The AI & Humanity module underscores that attributes such as creativity, compassion, and ethical judgment are integral to human existence. Substituting algorithms for these qualities puts communities at risk, eroding the emotional connections and trust on which social relationships depend. For instance, although AI chatbots may offer mental health support, they cannot provide genuine understanding or care. As technology advances, safeguarding the “human touch” is therefore a moral imperative.

Despite AI’s potential to enhance lives, its economic benefits are not equally distributed. The AI & Society module highlights that automation frequently results in job displacement, wage stagnation, and widening inequality. Workers are often required to adapt and retrain at their own expense, while corporations amass substantial profits. This trend exemplifies what Verdegem (2024) describes as the concentration of power under AI capitalism, where technological progress benefits a select few at the expense of the majority of workers. These developments deepen social divisions and challenge the principles of fairness and collective well-being that are foundational to social justice.

The future trajectory of artificial intelligence calls for a reimagined approach that is ethical and human-centered. Ethical AI should extend beyond technical safety or data transparency; it must also serve the public interest. Governments must resist pressure from major technology companies advocating deregulation and instead build frameworks that safeguard human rights and dignity. A human-centered approach to AI treats empathy, inclusion, and accountability as core principles of innovation, shifting the focus from replacing human labor to enhancing human potential.

Ultimately, the evolution of artificial intelligence is not predetermined; it is shaped by human decisions and values. The challenge for society is to balance innovation with justice, empathy, and responsibility. The critical question is not what AI is capable of, but what it should be used to achieve. The future of AI must be built on a foundation of equity and human values, and progress should not come at the cost of dignity or inclusivity. Guided judiciously, AI can serve as a tool for empowerment and social transformation; left unchecked, it risks deepening the very inequalities it is claimed to address.

In the end, the relationship between AI and humanity will be defined not by machines but by the individuals who design, regulate, and utilize them. The objective must be to ensure that technology continues to serve human life rather than replace it.

Reuse permitted under CC BY 4.0; please credit the author and link to the original page.