I asked ChatGPT to list 10 potentially negative impacts that the widespread adoption of Generative Artificial Intelligence (Gen AI) may have on humanity. The list it created read: 1) Misinformation and Deepfakes, 2) Job Displacement, 3) Bias Amplification, 4) Erosion of Creativity, 5) Security Risks, 6) Undermining Education, 7) Social Manipulation, 8) Loss of Human Agency, 9) Privacy Invasion and 10) Economic Inequality.
I then asked, based on what it had just told me, what was the likely impact on humanity if we continued along the trajectory that we appear to be on. ChatGPT responded with a rather predictably non-committal answer overall, although the final paragraph was worrying. ChatGPT concluded “Ultimately, the pace of adoption is outstripping regulation and public understanding. Without careful governance, transparency, and education, Gen AI may exacerbate inequality, threaten democratic processes, and diminish human agency—turning a powerful tool into a destabilizing force.”
As recently as two years ago, there was still some mainstream debate about whether this technology would have a net positive impact. Now even that has disappeared beneath the waves of the tech industry’s cult-like adherence to the Zuckerberg aphorism “move fast and break things”, and the mainstream media’s subservience to the rich and powerful.
In just under 24 months, we have gone from a very small number of people truly understanding what Gen AI was, to businesses, charities, communications systems, and public services all replacing workers and workflows with it, without any understanding of the wider ramifications.
The closer we look at the Gen AI revolution, the more we see the same old story of capitalists bulldozing anything in the way of their pathological and insatiable avarice. We have all seen, over the last two decades, how Microsoft, Amazon, X, Google, Apple, Palantir, and OpenAI have behaved towards democracy, climate safeguards, and human rights in their pursuit of global dominance.
And it is not like we don’t already know that innovation and invention are not their guiding stars. Most of the modern advances we currently attribute to these titans of technology were initially developed in state-funded research facilities and academic institutions. These “innovators” aren’t actually inventing anything themselves. Quite the reverse, they are getting rich by relabelling someone else’s work.
The fact is that these tech corporations are increasingly disassociating themselves from the norms, values and even laws protecting the people and planet within which they exist. As ChatGPT itself said, “Without careful governance, transparency, and education, Gen AI may exacerbate inequality, threaten democratic processes, and diminish human agency”. These monolithic supra-national corporations need to be reined in, both nationally and globally.
To hold shareholders and executives legally accountable for any damage their products cause, a system would need to be developed that enforces checks and balances, founded on “good science,” and agreed upon on a global scale. Perhaps something like the ICJ or the IPCC.
Of course, the problem that the “move fast and break things” crowd would have with this kind of oversight is that there is already an increasing body of robust evidence showing a significant negative correlation between the widespread and extensive use of Gen AI and the psycho-social health of the individuals using it. It is likely that they already know what they are potentially breaking.
One such study, a meta-analysis out of Hangzhou Normal University this year, asked “Does ChatGPT enhance student learning?”. While it did have a lot of positives to report, not least of all highlighting how several of the studies it compared suggested that using ChatGPT appeared to improve “academic performance, affective-motivational states and higher-order thinking”, these had to be seen in the context of the potential shortcomings of those same studies.
While the studies in the meta-analysis had participant groups ranging from 18 to 600 participants, only 8% of those studies conducted a power analysis. A power analysis determines whether a study includes enough participants to reliably detect a genuine effect, and therefore whether its findings can reasonably be generalised to a significantly larger cohort of people.
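To make the idea concrete, here is a minimal sketch of the kind of calculation a power analysis involves. This is an illustration, not the method used in the studies themselves: the function name is my own, and it uses the standard normal approximation for a two-sample comparison rather than the exact t-distribution.

```python
from statistics import NormalDist
import math

def required_sample_size(effect_size: float, alpha: float = 0.05,
                         power: float = 0.8) -> int:
    """Approximate participants needed *per group* for a two-group
    comparison to detect a standardised effect (Cohen's d) at the
    given significance level (alpha) and power.

    Uses the normal approximation; exact t-test calculations give
    slightly larger numbers for small samples.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)           # desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# A "medium" effect (d = 0.5) needs roughly 63 participants per group
# at the conventional alpha = 0.05 and 80% power.
print(required_sample_size(0.5))
```

By this rough calculation, a study hoping to detect a medium-sized effect needs well over a hundred participants across two groups, which puts an 18-participant study firmly in underpowered territory.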
Even taking into consideration the relatively small numbers of participants and the potential for false negatives, it is still worth looking more closely at some of the positive claims made in the constituent studies.
For instance, while several studies appeared to show an improvement in academic performance, they did not account for the possibility that the higher-quality work might have been partly generated by ChatGPT rather than by the students themselves.
Equally, when it came to the apparent improvement in affective-motivational states (an emotional state that is more receptive to learning), the findings didn’t apply to all age groups. While this appeared to be the case with college students, it wasn’t as apparent in K-12 students. So again, while a selective highlighting of specific findings makes everything sound great, a fuller reading of the report demonstrates a more nuanced pattern of results.
A similar caveat applies to the “positive” impact of AI on developing and applying higher-order thinking. Higher order thinking here refers to the ability to go beyond simply learning and understanding information, to applying and analysing that learned information, evaluating and relating it to other information, and finally creating new and novel information.
While the studies appeared to show a propensity for higher-order thinking after the introduction of ChatGPT, the meta-analysis points out that most of these findings were based on the self-appraisals of the students themselves. The problem is that self-appraisal alone isn’t an accurate or objective metric, given the high potential for subjective bias.
The reason that this study is so important now is because Google has just announced that it will be providing $1bn of AI goods and services to colleges, universities and non-profits over the next 3 years at no cost. While this act of “philanthropy” is being presented as putting college students on the path to career utopia, based on Google’s previous behaviour, it does rather feel more like a “loss-leader” in the race for global AI dominance. If, in moving quickly towards a monopoly, there is a chance that Google may harm education, perhaps we ought to examine the road ahead before waving them through.
The Hangzhou study wasn’t the only large-scale meta-analysis reporting findings this year. Another, this time on the impact of artificial intelligence, digital technology and social media on cognitive functions, had a similar tone. While the individual studies included in this one tended to reach fairly balanced conclusions, by comparing them with one another and looking for patterns the meta-analysis began to see yet more warning signs.
This study is so important because AI isn’t being introduced into a vacuum, quite the reverse, it is being introduced into a world already drowning in digital technology and social media.
This study, by Deckker and Sumanasekara, focused specifically on the impact on cognitive functions. As part of their reporting they argued that long-term recall and deep cognitive engagement are weakened when users increasingly outsource memory storage and retrieval to AIs.
Digital amnesia is a term that has been gaining ground in academic circles, referring to the phenomenon whereby individuals who increasingly offload information storage to AI and digital tools appear, in certain cases, to be weakening their own capacity for critical thinking and recall. Recall is one of the foundation stones on which higher-order thinking builds.
The first and perhaps most obvious risk of this offloading of memories onto social media and digital devices is that it introduces an external set of commercially developed biases via the algorithms. Because a company’s algorithms will determine which “memories” are stored, mapped and recalled, and how, our memories of our pasts will increasingly be managed by what those algorithms deem significant, and perhaps even appropriate.
Without opening up the whole can of worms that is “how important are one’s memories in determining who we are and how we interact with the rest of the world”, this is still of major significance to our individual cognitive processes in the moment. Everything our species has achieved, for good or bad, has come about through novel ideas built upwards, through several steps, from recalled memories.
If we weaken this highly developed and adaptable skill we will lose much of our problem-solving and ingenuity as a species. There are studies already suggesting that this is the case with prolonged and persistent use of digital tools appearing to “impair long-term memory consolidation and retrieval efficiency”.
For a large number of us who are already suffering from increasing levels of “digital amnesia” our problems are being compounded by “attention fragmentation”. This is our increasing inability to maintain deep focus on one thing when we are looking at a screen because we are constantly struggling to not get distracted by any number of countless other demands on our attention.
The digital landscape is rife with notifications, pop-ups, multiple windows open, multiple media playing at once, and, of course, the misleading claims of algorithm-curated social media and streaming content. Not only are all of these demands conditioning users to try to perform multiple tasks at the same time, which in turn is leading to reduced cognitive control in the moment, but it is also increasing our levels of susceptibility to distraction and therefore a decreased performance in primary tasks. To put it more simply, all of the digital background noise is stopping us from being able to focus deeply and in a sustained manner on any one thing.
Both social media and AI content curation systems are further complicating this problem by using pre-existing preferences, historical behaviours, and user profiling biases to polarise the user’s experience ideologically and limit their exposure to competing positions. These feedback loops make users less likely to critically engage with the information. The so-called “echo chamber” becomes a place where the user doesn’t have to actively engage with the information because they already think they know what it is going to tell them, and they are pretty sure they are likely to agree with it, so they don’t need to question it.
And it doesn’t stop there. For those unlucky enough to find themselves struggling with attention fragmentation, digital amnesia, and their devices actively reinforcing and accentuating their biases and prejudices, their diminishing ability to critically analyse and evaluate information presented to them is making them more susceptible to an internet increasingly awash with misinformation and disinformation. Just as our ability to critically analyse weakens, so the objective truth is becoming drowned out in a sea of Gen AI-created falsehoods.
There is an increasing body of evidence to suggest that AI curation of information could be diminishing our ability to store and recall information. Equally, in outsourcing the analysis and evaluation of information, we may well be diminishing our individual ability to make reasoned and informed decisions. It is possible that this is further compounded by AI managed algorithms directly impacting our ability to differentiate fact from fiction.
Without getting into the massive environmental demands of the data centres running the Gen AIs, or the tens of millions of job losses predicted to happen in the next few years (job losses, incidentally, that some experts argue are likely to exacerbate racial and economic inequalities), the technology itself looks like it may well be undermining our ability to think critically.
The sad truth is that the cognitive building blocks that we are allowing the “move fast and break things” billionaire class to ride roughshod over are the same cognitive building blocks that humanity has historically used to protect ourselves against the worst excesses of exactly these sorts of people.
In its own words, ChatGPT warned that Gen AI has every chance of being a destabilising force on humanity as it is being adopted too fast and without political oversight or a clear understanding of the future implications. The political class in hock to the tech billionaires are not going to bite the hands that feed them on our behalf. The responsibility, in the first instance, lies with individuals to resist the magical thinking of billionaires, to inform themselves about the potential impacts of this technology, to make informed choices about when and where to use it, and to demand better from our political representatives. Because if we don’t, one day soon we might not be able to.
The post Is ChatGPT Warning us of an Existential Threat that Generative AI Poses? appeared first on CounterPunch.org.