The First Golden Age of AI is Over

This was a speech I gave online to a group of businesspeople in Ukraine in September of 2023. Now, in January 2024, I feel it’s come true.

First of all, let’s call this technology what it is: machine learning. The move to rebrand machine learning as artificial intelligence is, itself, artificial and inaccurate. There is no evidence that anything we’ve seen so far is comparable to what we call human intelligence, though it may already be close to the intelligence of, say, insects and plants. The hype and confusion around calling these technologies “intelligence” are confounding, and I worry for the state of what passes as critical discourse in technology circles.

Some philosophers argue that human cognition arose not from statistical logic, but from embodied experience. If machine learning has no consciousness, how can it have experience? How could it understand people or the world without experiencing it?

Statistical “Understanding” Isn’t Real Understanding

Second, current generative “AI” (machine learning) tools are regurgitrons because they only know how to regurgitate what they’ve been fed. And the techniques of training Large Language Models and tailoring their algorithms statistically have the effect of homogenizing the data they’ve been fed, over time. Such is statistical word association. These tools, though popular, do not understand semantics (word meaning), only association. They can rank how often a word or phrase is associated with other words or phrases and, thus, predict likely next associations. That’s really it. I know some people working on building semantic meaning into these tools, and the experiments I’ve seen are outstanding, but they are also not operational, yet.
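To make that mechanism concrete, here is a minimal sketch of pure word association in Python. It is a toy bigram counter, not how production LLMs are built (they learn vector representations at vastly larger scale), but the underlying move is the same: predict the statistically likely next token.

    from collections import Counter, defaultdict

    # Toy corpus; real systems train on trillions of words.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which.
    follows = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word][next_word] += 1

    def predict_next(word):
        """Return the most frequently observed follower of `word`."""
        candidates = follows.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))  # 'cat', the most common follower

Notice that meaning is represented nowhere in that code, only frequency. That is the sense in which these systems associate rather than understand.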

When you regurgitate something without understanding its meaning, when you simply associate things based on statistics instead of semantics, you can be a very good mimic but you’re severely limited in base creativity. These tools are surprising and delightful in their ability to read (and, now, sound) like human speech, but only to a mediocre extent. They do mediocre very well. They don’t do originality nearly as well, however.

But, as it turns out, much of what we read, watch, and listen to is mediocre. Think of the sheer volume of sitcoms, romcoms, romance novels, etc. and consider how many of these you’ve found exceptional, inspiring, original, etc. Most are recombinations of known, sometimes tired, tropes and themes. Their structures are usually predictable, if enjoyable, but most aren’t breaking new ground in content or structure.

And, that’s OK.

Mediocre satisfies many people, which makes these tools so much fun (and so concerning to current writers, screenwriters, composers, etc.—even programmers). But it only takes us so far, too. As these brute-force statistical tools get used more and more, and as they reinforce themselves and each other statistically, they average ever more and “regress” to a mean: the most statistically likely and average result. Left to their own statistical devices, these tools will regress themselves to textual pablum. Without new processes to break them out of this regression, these tools, eventually (and, possibly, quickly) will regress to the writing equivalent of 50% gray—and output truly tired repetition that will entertain only the simplest, most inexperienced minds.
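This dynamic has a simple statistical core, and a toy simulation can show it. The sketch below is a made-up illustration, not a model of any real system: each “generation” refits a distribution to a finite sample of the previous generation’s output, the rough analogue of models training on model-generated text.

    import random
    import statistics

    # Each "generation" fits a normal distribution to 50 samples drawn
    # from the previous generation's fit, then becomes the new source.
    random.seed(42)
    mean, stdev = 0.0, 1.0  # generation 0: the original, human-made data
    for generation in range(1, 21):
        sample = [random.gauss(mean, stdev) for _ in range(50)]
        mean, stdev = statistics.mean(sample), statistics.stdev(sample)
        print(f"gen {generation:2d}: mean={mean:+.3f}  stdev={stdev:.3f}")
    # Any single run is noisy, but over enough generations the spread
    # tends to decay: diversity is lost to resampling alone.

Nothing in the loop “goes wrong.” The narrowing is a consequence of resampling itself, which is why it takes new processes, not more of the same training, to break out of it.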

Current AI looks “smart” but will get dumber before, hopefully, growing insightful again.

Beware the Quantitative Jobs

And, lest you think that this is just a danger for writing jobs, consider that computers have always been better at processing numbers (math) than humans.

If you’re a business analyst, accountant, bookkeeper, or process numbers in your job, your job is going to change quickly—and that job may disappear entirely. This will happen much faster for number-heavy jobs than word-heavy ones. Already, without machine learning, number-based jobs are on the decline as organizations adopt software tools and platforms that perform most of the same functions without any person to do the calculations. Think payroll, accounting, and tax platforms.

Consider how many jobs in business are simply tracking and processing numbers. My guess is that 75-80% of jobs in accounting, bookkeeping, insurance, operations, etc. are going to be gone by the time we hit 2030. Now, there will always be a need for more-than-mediocre numbers work. What will be left in these industries will be strategic and forensic jobs: checking the results, analyzing them for more insightful needs, and using them strategically. But the vast majority of number jobs will simply go away. And, these are white-collar jobs. The truckers are less at risk than the number-crunchers (and their managers), and we are not discussing this, and its effect on the economy, at all (let alone preparing for it).

If you’ve been steering your children into the STEM fields, that’s not going to be enough. Consider how much math is in each letter of that acronym—and it’s fully 100% of just one of those letters! (More on skills later).

Suffer Regression to the Mean

At some point, the mediocrity we’ve run on will no longer be enough. In business, the “average” response isn’t usually good enough already. Current AI exposes how poor most of our business narratives are already. That AI can so quickly imitate and even surpass how we talk about what we do (and need to do) should make us embarrassed at what passes for quality insights. Already, some systems deliver in 5 minutes the same quality of industry and market analysis that businesses pay companies like Gartner, McKinsey, and others $100,000 per report to produce—and these companies know it. At the smart ones, the bells are ringing “red alert.”

The End of the First Golden Age of AI

We’re nearly there already. Soon, current machine learning platforms will get “dumber.” That is, their capabilities will be worse, not better. This is due to several factors:

Content Divestment

Because current Large Language Models have been built on the wealth of others’ work (everything in the past), content owners of that past work are suing to pull their intellectual property out of the LLMs or to license their IP. In some cases, these IP owners want to create their own AIs, trained on their own corpus of works. While intellectual property laws won’t apply to much of human creative work older than, say, 100 years, this will become difficult for current AI developers in the near term. Already, lawsuits from illustrators, writers, painters, movie studios, publishers, research organizations, etc. are carving their work out of these generalized corpuses and seeking to penalize the companies who used their works without permission.

Current AI platforms are already hampering their responses, pointing users to other sources or simply stating “I can’t answer that kind of question” when they deem protected content is involved. This is in addition to refusing prompts deemed unacceptable in terms of language and symbols, and they will soon need to do the same for requests to emulate specific styles.

Enjoy the fun while you can because AIs will get inconsistently less facile, purposefully, very soon.

Too Few & Too Generic LLMs

Imagine an AI trained on a corpus of music, poetry, and philosophy. Or, one trained specifically on medicine, science, or a particular industry. These systems would have different replies to the same questions. If you try to combine them, they may cancel the specialness inherent in each system, resulting, again, in regression to a mean. The drive to have one, centralized source actually ruins the relevance of specialization in domains and needs. So, it’s obvious that we already need to plan for many new, specialized systems that excel in their particular domains.

Consider an AI trained only on Mein Kampf or racist propaganda from the KKK. Its “truth” will be much different than others’, and this is already a growing problem in terms of news and social media. Most people perceive the output of machines to be truthful and infallible sources of authority. We’ve already seen the funny incidents of AI hallucination, but this is a glaring hole that will only get wider. Consider an AI trained on FOX News’ history of reporting propaganda. What “truth” is it describing and how will future readers and watchers reconcile that truth with others?

Consider the differences between how OpenAI and Anthropic’s AI already answer the same queries. That difference is the difference in the corpuses they’ve been trained on—and both have been built to be generalized platforms.

Consider every company in a particular industry asking ChatGPT to develop their next positioning or pricing strategy (and I bet this is already happening). Every company would have nearly the same strategy. Even after tailoring the prompts slightly, the system is still going to give you only 60-80% of what those companies really need for success. This is great for those with exceptional skills, as it allows them to move quickly to a base upon which they can apply their unique skills and perspectives. But, it’s unclear how many people have the skills to best the mediocre or even realize that they need to. It’s also unclear how much better-than-mediocre is required.

Would you trust a generalized AI to build your organization’s strategy? How would it differ from your competitors asking it for the same strategy, for their use? Do you even want a mediocre strategy to begin with? In this case, you definitely get what you pay for (and mediocre is free).

Second Golden Age of AI

AI is already a creative collaborator to those who have learned to use it. Designers, architects, illustrators, painters, musicians, comedians, and even businesspeople are already using it within their creative processes, to help them explore alternatives and consider things they otherwise wouldn’t. It’s accelerating complex development. Where many were needed in the past, fewer are needed to accomplish the same creative output.

But, the smartest users know that these technologies accelerate you quickly to a quality that is 60-80% of what is likely required. Once everyone can create mediocre and serviceable reports, songs, videos, websites, diagrams, analyses, conclusions, and insights, it will become more difficult to push past these to solutions that shine above the rest, that offer unique insights, strategies, or competitive ideas to those who need them.

If you work in a company and industry and you haven’t yet used these tools to explore the advice they can give you, you’re already behind the curve. If you’ve asked these systems to generate and analyze trends but think that you’ve gotten the whole picture, you’re using them wrong. If you’ve generated reports, etc. and used them as the foundation on which to build better strategies and content, then you’re right where these tools are today.

But, then, what’s next?

Powerfully Collaborative Tools: A Chorus of Distinct Voices

AI will become truly collaborative, agentive, and multi-dimensional. Perhaps, council is a better term than chorus, but the interactions between these voices will be as important as the distinctness of each.

We currently have AI built for general use. Someday soon, we’ll have a plethora of AIs created for specialized use. Each will be trained on a different corpus. Each will have special purposes and insight. We’ll consult many of these for the advice and insight we need, acting more like a council of expert opinions than a single expert. We should think of our interactions with AI as a community and bring to bear all that we know about creating healthy, thriving communities (and not merely collections of tools).
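What might such a council look like in practice? Here is a purely hypothetical sketch; the advisor names and the ask() method are invented for illustration, standing in for whatever client calls future specialized systems actually expose.

    from dataclasses import dataclass

    @dataclass
    class Advisor:
        name: str       # hypothetical: a model trained on one domain's corpus
        specialty: str

        def ask(self, question: str) -> str:
            # Placeholder: a real system would call this model's API here.
            return f"[{self.name}/{self.specialty}] perspective on: {question}"

    council = [
        Advisor("med-model", "medicine"),
        Advisor("law-model", "intellectual property"),
        Advisor("market-model", "industry analysis"),
    ]

    question = "Should we license our training corpus?"
    for advisor in council:
        print(advisor.ask(question))
    # A person (or a facilitating model) then weighs these distinct,
    # sometimes conflicting answers, rather than accepting one generalized reply.

The point isn’t the code; it’s the shape of the interaction: many distinct voices, each with its own corpus and bias, consulted and reconciled rather than averaged away.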

In order to reach this second age of AI, raising them may be more like raising children than programming machines. And, we don’t have the best track record on raising children if we are to look at the number of narcissists, Karens, and fascists in the world. What happens when bad actors raise bad AIs? We will need to determine how to protect ourselves and others not from autonomous AI but from fascist AI hidden behind benign fronts. What would an AI be like trained on a corpus of Hitler or Putin or Trump? Are we already submitting to the technocratic voice of Silicon Valley investors and developers without realizing it?

We will need to be just as careful in picking which AIs we associate with as we are in picking our friends and other associates.

In the future, our teams will be composed of both people and AIs. We need to start building the best practices for interacting effectively with these new kinds of collaborators. Who you associate with, and how, is the new paradigm for your interaction with technology.

What will it mean to share access to information, to communicate clearly, to discuss the finer points of a topic, to co-develop solutions, to co-own responsibilities, and to share credit, ownership, and consequences?

From Prompt Design To Conversation Design

Beyond the current AI fun and amazement, we’re heading into a world of unique, independent, and somewhat autonomous queryable perspectives. Where, today, we see prompt design as the newest critical skill, conversation may be the best paradigm for the most effective interaction with AIs. How many people do you know who are excellent conversationalists? In the future, everyone may need to be, but not all of us have those skills.

These are in addition to the other skills that machines aren’t as good at as people. If computers are better at math than humans, we humans still have some important skills we’re better at than computers. These are the skills parents and educators in a 21st-Century world should be emphasizing and prioritizing (the 4Cs and the 4Ss):

  • Creative Thinking
  • Critical Thinking
  • Communication Skills
  • Collaborative Skills
  • Systems Thinking
  • Strategic Thinking
  • Sustainability
  • Social Impact

Current AI tools are already challenging our understanding of what it means to be creative or intelligent—or human. These notions have always evolved, based on what we learn from Nature. We used to think that language and intelligence were only things that humans possessed because we purposefully ignored what our studies of other animals showed us. More recent and more honest science has forced us to consider just how smart and how richly emotional other animals are. We’ve even had to reconsider how “smart” some plants and trees are—or, at least, how reactive and adaptable they can be.

Current AI tools can’t create works where there weren’t similar things in the past (depending on your definition of similar). They can’t critically analyze what they create because they don’t have a model for how meaning works. Perhaps, in the future, they might, but for now, both creativity and critical thinking are capabilities beyond this kind of approach to machine learning.

Creative Thinking

Thinking creatively is still something humans do better. The second Golden Age of AI can help boost our creativity, but we, first, need to reevaluate the use of creativity in our decision-making. Too often, creativity is driven out of children in favor of more “serious” skills, yet these may be the ones that most set us apart from technological systems in the future. Creative expertise is becoming even more important, even to the world of business, but our traditional approaches, processes, and tools make little room for the insights and generative capabilities that create change. Rather than only focusing on optimizing the quantitative, which has served the economy well but at the expense of society and the planet, can these tools help us balance perspective, insights, strategies, and decisions with the qualitative aspects of our systems so often ignored?

AI has become adept at integrating existing elements. It masters collage and the part of creativity that endlessly combines and edits existing things in new ways, whether text, images, sound, video, etc. It can help us create things that have never before existed, but it cannot create things from whole cloth in meaningful arrangements.

Humans have excelled at this for our entire time on this planet, but our business systems have downplayed its importance and truncated its value. Machine learning will make this one of the most important skills to hone for anyone who wants a job in the next 50 years.

Critical Thinking

Similarly, machines are poor at critical thinking. We’re learning to program or grow awareness of some aspects of critical thinking, such as determining how to balance dissimilar objects, but humans define and excel at critical thinking, and this will be another skill that sets us apart.

Yet, most businesspeople don’t understand the process of, or the need to, critique in healthy ways. Not only will we need to better critique ourselves and our peers, but also our sources of information and the values they express.

Communication Skills

Few of us are the great communicators we need to be. We’ll need to develop and sharpen these skills and emphasize their teaching to everyone. We already have models for better, generative communication but they are not widely understood or shared. Clear communication is a requirement of leadership and the facilitator of success in any team.

We’ll also need to teach our tools to communicate better than we have in the past if we are to use them to build better solutions in the future.

Collaborative Skills

Our collaboration skills need to be improved for this new kind of work. The best metaphor might be pianists playing together, creating a work that neither could create alone. Can we use these services to think and act more collaboratively and communally instead of competitively and extractively?

Now, consider the kinds of complex, systemic challenges we could address with collaboration opportunities that go beyond simple one-on-one interaction. AI has the potential to join us in shaping how we approach the many critical challenges we face as nations and as a planet. Truly complex crises, like climate change, inequality, and war, may require partnerships that make use of better multi-dimensional coordination, with agency coming from machines as well as people.

Systems Thinking

Can AI help us be more systemic, more strategic, more equitable, and more representative? Can we make it economically and ecologically sustainable and, in turn, have it help us identify more sustainable and regenerative options? AI can only think systemically where we define the relationships between parts of the system. In doing so, we define the values of the system, whether we mean to or not.

Sustainability

Nature is, perhaps, the most critical system of all that needs better solutions. It’s what all the other systems rely upon. Could we build an AI that allows us to have an insightful conversation about a forest, a sea, or the planet? Some companies are already trying to put Nature on the board to represent the perspectives needed for healthier corporate behavior. Could AI be this representation? Could AI help us identify and blend the needs and value of different parts of our systems to get out of the zero-sum mindset?

My guess is “yes!” But, this requires us to build these systems deliberately, instead of intuitively or accidentally (or not at all).

Strategic Thinking

AI cannot currently think strategically and won’t be able to until we define what it even means to think strategically. Mostly, this is about relationships and the different kinds of value in an ecosystem (and there are at least 5). Our new tools should help us identify opportunities, prioritize time, attention, and resources, and build value for ourselves, our partners, and the many stakeholders around us, even when they’re silent, hidden, or never considered. They should illuminate better solutions for all of these stakeholders, but they can only do this where we codify value and relationships so that these tools can suggest appropriate alternatives.

Can AI help us provide bi-directional value, for all involved, instead of more solutions that simply extract money and culture from others?

Social Impact

If we’re not designing for a healthy society, what is it we’re developing at all? These tools may be able to integrate the plethora of social impacts and needs into our thinking processes in ways we’ve never seen before. These are the really hard issues that have had such bad impacts on society (to be fair, the improvements we’ve built into societies also come from here). Rather than build more, increasingly mediocre tools that ignore the planet and the people living on it, we can build tools that integrate these right into the very frameworks of how they operate—if we choose to.

Do we choose to?

Do you?