
If we humanize AI, what does the meaning of our lives become? Will most of us be servants of the machines while the few, the ultra-elite and ultra-rich, control our world? Until not long ago, the nightmares of dystopian novels were relegated to the unpopular science-fiction sections of libraries and bookstores, into which only geeks ever ventured. But there is hope to be found in the utopian version of the genre, which doesn't foretell devolution because of AI.
Artificial intelligence is intelligence demonstrated by machines. Contrasted with the natural intelligence of humans and other sentient beings, AI is a recurrent theme in science fiction. And it's disconcertingly eerie to look back and acknowledge the accuracy of some of that historical work.
Although Victor Frankenstein, the protagonist of Mary Shelley's 1818 novel, doesn't give the monster a name, the creature known as Frankenstein has been considered an artificial being, arguably the first reference to AI in science fiction. Advanced machines with human-like intelligence later appear in Samuel Butler's 1872 novel, Erewhon, which drew on his 1863 article, Darwin among the Machines. The book raised the question of the evolution of consciousness among self-replicating machines that could potentially replace humans as the dominant species.
Almost a century later, Neil Armstrong's small step on the moon in 1969 may have been the catalyst for the giant leap of the utopian Star Trek series, which had been canceled just three seasons after its 1966 debut. Reruns began in 1969, and thousands of fans attended the first Star Trek convention in New York in 1972. The animated series followed the next year, and Star Wars arrived in 1977. Between then and now, science fiction has mushroomed into a vast mainstream industry, with 31 percent of people between 18 and 29 now saying they watch science fiction and fantasy programs.
Influenced partly by the dystopian vision of H.G. Wells's War of the Worlds, whose 1938 radio adaptation by Orson Welles created hysteria as listeners across the U.S. heard a startling report of mysterious creatures and terrifying war machines moving toward New York City, the darker side of science-fiction robots is later hilariously described in Douglas Adams's The Hitchhiker's Guide to the Galaxy, a 1978 radio series novelized the following year. Following the demolition of the Earth by a Vogon fleet, the misadventures of the last surviving man, Arthur Dent, unfold as he explores the galaxy. As he meets up with others, Dent learns that the Earth was actually a giant supercomputer, created by another supercomputer named Deep Thought.
Deep Thought, the supercomputer, not the philosophical process, was built to answer the "Ultimate Question of Life, the Universe, and Everything." A tantalizing concept, and after eons of calculations, Deep Thought delivered the answer: "42." After this disappointingly cryptic reply, Deep Thought was told to design the Earth supercomputer to establish what the Question actually is.
Just moments before the computation was done, the Earth was destroyed by the ubiquitous Vogons, and Dent became the target of the descendants of Deep Thought's creators, who thought his mind must surely hold the Question.
Apparently, Arthur Dent didn't know the Question, because almost 50 years later we are still looking to supercomputers to find it as we develop machines that are, theoretically, smarter than us.
Is AI the next step in human evolution?

Charles Darwin, with whom many respected scientists agree, proposed that over the history of the earth, different kinds of living organisms developed from earlier forms across hundreds of millions of years. Every living thing descends from a single cell and has evolved to where we are now: creating artificial intelligence.
Evolution proceeded differently in different species. The hominids were an early ape-like group that shared ancestors with monkeys, and Homo erectus evolved in its own unique ways; we don't have opposable toes, for example, but rather use ropes and ladders to climb trees. Erectus then got a little smarter, making important discoveries like the control of fire (the wheel came much later), but it was only in 1758 that the name Homo sapiens was given to us by the father of modern biological classification, Carolus Linnaeus.
We like to think of ourselves as the smartest species on the planet, yet we have spent decades designing supercomputers that can seemingly withstand Vogons but still haven't figured out the burning question of The Hitchhiker's Guide.
Far more recently, the groundwork for AI began in the early 1900s. The biggest leaps weren't made until the 1950s, but they leaned on the work of early experts in many different fields. Harvard University published a paper in 2017 (seven years is a long time when it comes to AI evolution in the 21st century, but history remains relevant) setting out some important historical developments:
In fiction, Harvard identifies the "heartless" Tin Man from The Wizard of Oz as the original artificially intelligent humanoid robot.
By the 1950s, a generation of scientists, mathematicians, and philosophers had assimilated the concept of artificial intelligence (AI).
The British polymath Alan Turing was one of these pioneers. He suggested that machines could use available information and apply reason to solve problems and make decisions just as humans do. His 1950 paper, Computing Machinery and Intelligence, set out the case for building intelligent machines and ways to test their intelligence. But back then, the cost of leasing a computer, at around $200,000 a month, was a barrier to proving the theory.
Five years later, Allen Newell, Cliff Shaw, and Herbert Simon's Logic Theorist put the proof of concept together, and the findings were presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). At the conference, attendees wholeheartedly agreed that AI was achievable, but there was no agreement on standard methods for the field.
Between 1957 and 1974, AI started to thrive. Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise in interpreting spoken language and problem-solving. On the back of these successes, leading researchers convinced government agencies to fund AI research at some institutions.
Despite the mountain of obstacles of the first "AI winter," AI was reignited in the 1980s by an expansion of the algorithmic toolkit combined with a boost of funds. As part of their Fifth Generation Computer Project (FGCP), the Japanese government heavily funded expert systems and other AI-related projects. When funding of the FGCP ceased, AI fell out of the limelight.
Ironically, after government funding dried up, human endeavor and entrepreneurship in the 1990s and 2000s saw many of the landmark goals of artificial intelligence achieved, not least of which was the chess-playing computer program Deep Blue, which defeated the reigning world chess champion and grandmaster Garry Kasparov in 1997.
While Deep Blue's strengths lay in its storage capacity and brute-force search, progress had long been held up by the fundamental limit of computer storage. That limit of 30 years ago has been overcome: we now saturate the capabilities of AI at the level of our current computational power (computer storage and processing speed) while waiting for Moore's Law (the observation that the memory and speed of computers roughly double every two years) to catch up.
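The compounding implied by Moore's Law is easy to underestimate. A minimal sketch of the doubling arithmetic (assuming the commonly cited two-year doubling period; the function name is illustrative):

```python
def capacity_multiplier(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years`, if capacity doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Over the ~30 years since Deep Blue's era, a 2-year doubling period
# compounds into a 2**15 = 32768x increase in capacity.
print(capacity_multiplier(30))  # 32768.0
print(capacity_multiplier(10))  # 32.0
```

Exponential growth like this is why hardware that was a hard constraint in one decade becomes a rounding error in the next.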
The age of big data has arrived and even when algorithms don’t improve much, big data and massive computing simply use brute force to allow AI to learn.
Neural networks have also gained a lot of steam since their earlier incarnation, the multi-layer perceptron (MLP), whose training by backpropagation was popularized in 1986. These learning algorithms evolved into at least two families of generative neural networks, including:
Generative adversarial networks (GANs) can achieve good results because they are partly adversarial: a discriminator acts almost like an onboard critic that demands improved quality from the generative component.
Transformer networks gained prominence with applications such as GPT-4 (Generative Pre-trained Transformer 4) and ChatGPT, its conversational interface. Trained on massive datasets drawn from the World Wide Web, these large language models (LLMs) use human feedback to improve their performance through reinforcement learning.
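The adversarial "onboard critic" dynamic mentioned above boils down to two opposing objectives pulling on the same score. A minimal sketch of the standard GAN losses (the function names here are illustrative, not from any particular library):

```python
import math

def bce(p: float, label: int) -> float:
    # Binary cross-entropy for a predicted probability p and a 0/1 target label.
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def discriminator_loss(d_real: float, d_fake: float) -> float:
    # The critic wants real samples scored as 1 and generated fakes scored as 0.
    return bce(d_real, 1) + bce(d_fake, 0)

def generator_loss(d_fake: float) -> float:
    # The generator wants its fakes scored as real (label 1), so the two
    # networks pull the discriminator's output in opposite directions.
    return bce(d_fake, 1)

# A confident critic (0.9 on real, 0.1 on fake) is barely penalized,
# while the generator's large loss pressures it to produce better fakes.
print(round(discriminator_loss(0.9, 0.1), 3))  # 0.211
print(round(generator_loss(0.1), 3))           # 2.303
```

Training alternates gradient steps on the two losses; quality improves precisely because neither side is ever allowed to win outright.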
It is the seemingly infinite capabilities of LLMs, including what we are still teaching them, that have led to dire predictions of AI taking over the world, particularly when combined with advances in the field of robotics. Sun Tzu's Art of War counsels us to be quiet and thus ensure secrecy, yet we are volunteering all our secrets to "AI" and its masters in an increasingly privacy-challenged world.
Pew Research Center, in its paper, Improvements Ahead: How Humans and AI Might Evolve Together in the Next Decade, offers some suggestions, both positive and negative.
Positive Influences of AI

AI is generally less prone to burnout than humans, and some say it's more reliable. Suggesting mainly positive outcomes, respondents in the Pew survey say AI is likely to become embedded in most human endeavors. The benefits they cite include:
AI is targeted toward efficiencies in workplaces and other activities.
All kinds of professions will be able to do their work more efficiently, especially those involving individualized medicine and policing, or even warfare (although the latter is probably only a benefit if you’re the one deploying the drones). AI will also enable greater individualization in education by tailoring courses to the needs and intellectual abilities of each pupil/student.
The Cambridge Analytica case uncovered huge privacy issues in social networks but many people do value the services Facebook offers to improve sharing capabilities and communication opportunities.
AI could help human experts to be more accurate.
Many large-scale technologies that we all depend upon, including the internet, power grids, and roads and highways will use AI to solve complexity and demand issues as they increase.
Devices continuously assess the world around us, looking at sound analysis, air quality, natural events, and other data, keeping us safer. AI can also deliver collective notifications and insight.
Self-driving vehicles will help those with deteriorating vision, and a growing elderly population will feel increasingly liberated.
Rapid growth in the use of AI in education and big-data applications in health-related research should be increasingly productive.
A new craft will arise as the ability to recycle, reduce, and reuse is enhanced by the use of in-home 3D printers, supporting the ‘cradle-to-grave’ movement by making it easier for people to trace the manufacturing process from inception to final product.
Algorithmic machine learning will amplify our intelligence, exhaustively exploring data and designs in ways humans cannot.
Health delivery, smart buildings, and smart cities are made possible by AI.
Because humans are easily distracted and do poorly when making decisions based on facts rather than emotions, AI can do some things better than humans, for example, handling finances, driving cars, or diagnosing illnesses.
Adaptive algorithmic tools find new approaches to persistent problems, improving the overall quality of life as they explore whole new domains in every industry and field of study.
In many areas where human error is currently problematic, machine learning and related technologies can greatly reduce these errors and make available good, appropriately tailored advice to people who currently can’t access it, in literally almost every field of human endeavor.
AI will aid, augment, and amplify individual and collective human intelligence in unprecedented and powerful ways.
Negative Influences of AI
Balancing the benefits, some of the Pew respondents named potential problems. The negatives of AI include:
There will be abuses and bugs, some of which are harmful, so we need to be thoughtful about how these technologies are implemented and used.
There will be a continuing digital divide in education, with the privileged having more access to advanced tools, with the added benefit of better capacity to use these tools, while the less-privileged will likely lag behind.
Huge segments of society will be left behind or could be completely excluded from the benefits of digital advances. Again, many in underserved communities and others who are socio-economically challenged will not be able to derive these benefits.
There will also be abuses in monitoring personal data and emotions as well as abuses in controlling human behavior.
Governments or religious organizations could use AI to monitor people's emotions and activities, to lead them to 'feel' a particular way, and to punish them if their emotional responses don't conform to some pre-determined norm. Similarly, education could become indoctrination and, more disturbingly, democracy could become autocracy or theocracy.
AI, it seems, has the capacity to become a huge propaganda machine, controlling and dictating norms. Perhaps the response of one of the respondents, Tim Morgan, should be borne in mind: "The synthesis is more than the sum of the parts."
Humans took hundreds of thousands of years to evolve, yet by leveraging our prowess, AI has managed to come reasonably close to replicating us in little more than a single century. The challenge must surely be to synthesize humans and AI, to humanize AI, in such a way that we create a better life for ourselves and everyone else on the planet. Not just the elite few.
As ethical issues arise, an important question (perhaps not "the" Question) is:
Who is teaching and controlling the machines?
One of the measures of a civilization is its adherence to a human rights code that affords basic rights to every human being. The United Nations (UN) rightly says that artificial intelligence must be grounded in human rights. Infiltrating almost every aspect of what it means to be human, AI can identify, classify, and discriminate, and in doing so has the potential to impact almost all our human rights.
While AI has the potential to be hugely beneficial to humanity, the UN is looking at how to create a structure that ensures the benefits outweigh the risks. We need limits, they say, and this means regulation, which, to be effective, and humane, must put people at the heart of developing new technologies. Respect for human rights must be paramount. The paper points out a few potential problems associated with AI:
Authoritarian governance can potentially be strengthened using AI.
Mass surveillance of public spaces, using facial recognition systems can destroy any concept of privacy, as George Orwell’s fictional Big Brother intended.
Modern lethal autonomous weapons (such as drones) are often built and operated by AI.
AI systems in the criminal justice system that predict future criminal behavior reinforce discrimination and undermine rights, including the presumption of innocence.
AI could be a powerful tool deployed for societal control, surveillance, and censorship.
The UN postulates two schools of thought:
Risk-based regulation: Focused largely on self-regulation and self-assessment by AI developers rather than detailed rules, this approach emphasizes identifying and mitigating risks to achieve outcomes, but it puts a lot of the responsibility on the private sector. It does sound a bit like setting the wolves to guard the henhouse.
Embedding human rights in AI's entire lifecycle: From start to finish, human rights principles should inform the selection and collection of data, design, development, deployment, and the resulting models, tools, and services. But as the UN points out, this has not happened. The horse bolted in the 1950s when AI development started in earnest, and it's a little late to close the stable door.
While calling for urgent action, the UN says it can play a central role in convening key stakeholders and advising on progress, and it is exploring the establishment of an international advisory body to look at human rights as they apply to AI. The body is proposed as part of the Global Digital Compact for the Summit of the Future next year, in the hope that a human rights framework can provide an essential foundation with guardrails.
But who will be the AI police?
In the global south, many of the San people live what most would call primitive lives, relying on the earth and its bounty to survive and thrive. But as modernization has encroached on their world, they are increasingly experiencing existential crises, finding difficulty in applying their harmonious principles to living in a world that is anything but harmonious.
Cell phones may be the biggest step into AI for many KhoiSan, who see the benefits and want to be part of the new world. It's difficult to understand why they would give up contented lives, and it seems the only answer is that if you can't beat them (the San are not a warfaring people in any way), you may as well join them. Their difficulties in adjusting are a demonstration of things happening in the rest of the world.

People cannot get along. Russia invaded Ukraine, Hamas launched a terror attack on Israel that has resulted in a brutal war, and wars are ongoing across the planet. As the total population passes 8 billion people, resources and ideologies have become things worth killing for, sometimes worth dying for.
Killing, abuse, and deeply entrenched trauma are rife. But fighting has been around for millennia. The first son in the Bible, Cain, killed his brother, Abel. Archeologists have identified walls (to keep enemies out) and fortresses going back thousands of years across many civilizations.
Forcing one's ideologies on others is another theme that's been around for a while. Early peoples simply moved on: the ancestors of the Aztecs eventually reached Mexico via the Bering Strait, when lowered sea levels between Siberia and Alaska exposed a land bridge. Nowadays it's a 14-hour flight across the roughly 4,000-mile (about 6,500 km) distance, but they must have been really upset with their neighbors to walk for an estimated 170 days across rugged terrain and climates to get away from them.
With competition for land and resources as the population explodes, it's no longer viable to move somewhere else, which could explain why so many people are fascinated with the prospect of moving to the moon and other planets. The walk is too far and space exploration is already utilizing AI to reach further and more remote planets.
When we land, the science fiction versions of outer space, with hostile inhabitants jealously protecting their resources, may be the reality. AI will then be the most useful tool, the new wheel for civilization as it conquers new territories, builds, and evolves.
Meanwhile, back on Earth, ever-evolving AI is being deployed into more fields on an ongoing basis, and humans in the future will likely become as dependent on AI as they are on a more basic tool: fire.
The question here is whether it's really evolution to leave behind a healthy, happy way of life, as the San did, and embrace the brutal benefits of the corporate jungle.
AI has already been integrated into human life, but are we making our lives unnecessarily complicated? Even the poverty-stricken areas in Africa use AI to communicate via cell phones, embed AI in infrastructure such as electricity and water systems, and deploy AI in warfare. AI is not the enemy and can make our planet a vastly better place to live in or help us discover new frontiers in outer or inner space if living here becomes too harsh a reality to face.
AI already talks like a human, acts like a human (think about the maid bots or sim girlfriends), and can look very humanoid in the hands of the right creator. And that's the point: Humans created AI to be like them. It sounds great to leave the menial stuff to AI.
The biggest benefit of AI is arguably that it frees our time to really think about stuff while it keeps the world churning, the food delivered to our doors, and friends just a text message away. But we now face an existential dilemma. If AI is taking our place, what is the meaning of our life? Jobs are limited in an AI world, leaving many without a purpose, one of the essential elements of being human. It is left to us to find the answer to the eternal question:
Why are we here?
AI will have an answer, but the answer will reflect what whoever keyed in the AI's data thinks is the reason for their particular existence. It's somebody else's raison d'être.
The convenience of AI comes at the cost of losing real human connection. We message rather than call, and call rather than visit people who are important to us. Humanizing AI does present the opportunity to have a humanoid friend, but that friend can only ever mimic emotion, never actually feel love or hate, and never have an original thought. It only regurgitates what another human told it to.

Can it truly be called evolution to want for no conveniences by integrating with AI while living in an increasingly insular space as AI essentially becomes us? It sounds more like devolution, where we may find ourselves back where we started: the original single organism we came from had no purpose other than to multiply and evolve into something else. Except that this time we will have handed control of the evolutionary path to someone else, to tech geeks who have little interest in people, lots of interest in profit, and secret longings to dominate the world.
It's not the question that's important, it's the answer: By being human. And that means doing very human things, like laughing and loving, striving and building, but mainly, by being unique and finding meaning in our lives.