Saturday, October 30, 2021

Yuval Noah Harari on big data, Google and the end of free will / The unacknowledged fictions of Yuval Harari / We Must Not Accept an Algorithmic Account of Human Life

 


  Yuval Noah Harari on big data, Google and the end of free will

 

Forget about listening to ourselves. In the age of data, algorithms have the answer, writes the historian Yuval Noah Harari

 

© Janne Iivonen

Yuval Noah Harari, August 26, 2016

https://www.ft.com/content/50bb4830-6a4c-11e6-ae5b-a7cc5dd5a28c

 

For thousands of years humans believed that authority came from the gods. Then, during the modern era, humanism gradually shifted authority from deities to people. Jean-Jacques Rousseau summed up this revolution in Emile, his 1762 treatise on education. When looking for the rules of conduct in life, Rousseau found them “in the depths of my heart, traced by nature in characters which nothing can efface. I need only consult myself with regard to what I wish to do; what I feel to be good is good, what I feel to be bad is bad.” Humanist thinkers such as Rousseau convinced us that our own feelings and desires were the ultimate source of meaning, and that our free will was, therefore, the highest authority of all.

 

Now, a fresh shift is taking place. Just as divine authority was legitimised by religious mythologies, and human authority was legitimised by humanist ideologies, so high-tech gurus and Silicon Valley prophets are creating a new universal narrative that legitimises the authority of algorithms and Big Data. This novel creed may be called “Dataism”. In its extreme form, proponents of the Dataist worldview perceive the entire universe as a flow of data, see organisms as little more than biochemical algorithms and believe that humanity’s cosmic vocation is to create an all-encompassing data-processing system — and then merge into it.

 


We are already becoming tiny chips inside a giant system that nobody really understands. Every day I absorb countless data bits through emails, phone calls and articles; process the data; and transmit back new bits through more emails, phone calls and articles. I don’t really know where I fit into the great scheme of things, and how my bits of data connect with the bits produced by billions of other humans and computers. I don’t have time to find out, because I am too busy answering emails. This relentless dataflow sparks new inventions and disruptions that nobody plans, controls or comprehends.

 

But no one needs to understand. All you need to do is answer your emails faster. Just as free-market capitalists believe in the invisible hand of the market, so Dataists believe in the invisible hand of the dataflow. As the global data-processing system becomes all-knowing and all-powerful, so connecting to the system becomes the source of all meaning. The new motto says: “If you experience something — record it. If you record something — upload it. If you upload something — share it.”

 

Dataists further believe that given enough biometric data and computing power, this all-encompassing system could understand humans much better than we understand ourselves. Once that happens, humans will lose their authority, and humanist practices such as democratic elections will become as obsolete as rain dances and flint knives.

 

When Michael Gove announced his short-lived candidacy to become Britain’s prime minister in the wake of June’s Brexit vote, he explained: “In every step in my political life I have asked myself one question, ‘What is the right thing to do? What does your heart tell you?’” That’s why, according to Gove, he had fought so hard for Brexit, and that’s why he felt compelled to backstab his erstwhile ally Boris Johnson and bid for the alpha-dog position himself — because his heart told him to do it.

 

Gove is not alone in listening to his heart in critical moments. For the past few centuries humanism has seen the human heart as the supreme source of authority not merely in politics but in every other field of activity. From infancy we are bombarded with a barrage of humanist slogans counselling us: “Listen to yourself, be true to yourself, trust yourself, follow your heart, do what feels good.”

 

In politics, we believe that authority depends on the free choices of ordinary voters. In market economics, we maintain that the customer is always right. Humanist art thinks that beauty is in the eye of the beholder; humanist education teaches us to think for ourselves; and humanist ethics advise us that if it feels good, we should go ahead and do it.

 


Of course, humanist ethics often run into difficulties in situations when something that makes me feel good makes you feel bad. For example, every year for the past decade the Israeli LGBT community has held a gay parade in the streets of Jerusalem. It is a unique day of harmony in this conflict-riven city, because it is the one occasion when religious Jews, Muslims and Christians suddenly find a common cause — they all fume in accord against the gay parade. What’s really interesting, though, is the argument the religious fanatics use. They don’t say: “You shouldn’t hold a gay parade because God forbids homosexuality.” Rather, they explain to every available microphone and TV camera that “seeing a gay parade passing through the holy city of Jerusalem hurts our feelings. Just as gay people want us to respect their feelings, they should respect ours.” It doesn’t matter what you think about this particular conundrum; it is far more important to understand that in a humanist society, ethical and political debates are conducted in the name of conflicting human feelings, rather than in the name of divine commandments.

 


 

Yet humanism is now facing an existential challenge and the idea of “free will” is under threat. Scientific insights into the way our brains and bodies work suggest that our feelings are not some uniquely human spiritual quality. Rather, they are biochemical mechanisms that all mammals and birds use in order to make decisions by quickly calculating probabilities of survival and reproduction.

 

Contrary to popular opinion, feelings aren’t the opposite of rationality; they are evolutionary rationality made flesh. When a baboon, giraffe or human sees a lion, fear arises because a biochemical algorithm calculates the relevant data and concludes that the probability of death is high. Similarly, feelings of sexual attraction arise when other biochemical algorithms calculate that a nearby individual offers a high probability for successful mating. These biochemical algorithms have evolved and improved through millions of years of evolution. If the feelings of some ancient ancestor made a mistake, the genes shaping these feelings did not pass on to the next generation.

 

Even though humanists were wrong to think that our feelings reflected some mysterious “free will”, up until now humanism still made very good practical sense. For although there was nothing magical about our feelings, they were nevertheless the best method in the universe for making decisions — and no outside system could hope to understand my feelings better than me. Even if the Catholic Church or the Soviet KGB spied on me every minute of every day, they lacked the biological knowledge and the computing power necessary to calculate the biochemical processes shaping my desires and choices. Hence, humanism was correct in telling people to follow their own heart. If you had to choose between listening to the Bible and listening to your feelings, it was much better to listen to your feelings. The Bible represented the opinions and biases of a few priests in ancient Jerusalem. Your feelings, in contrast, represented the accumulated wisdom of millions of years of evolution that have passed the most rigorous quality-control tests of natural selection.

 

However, as the Church and the KGB give way to Google and Facebook, humanism loses its practical advantages. For we are now at the confluence of two scientific tidal waves. On the one hand, biologists are deciphering the mysteries of the human body and, in particular, of the brain and of human feelings. At the same time, computer scientists are giving us unprecedented data-processing power. When you put the two together, you get external systems that can monitor and understand my feelings much better than I can. Once Big Data systems know me better than I know myself, authority will shift from humans to algorithms. Big Data could then empower Big Brother.

 

This has already happened in the field of medicine. The most important medical decisions in your life are increasingly based not on your feelings of illness or wellness, or even on the informed predictions of your doctor — but on the calculations of computers that know you better than you know yourself. A recent example of this process is the case of the actress Angelina Jolie. In 2013, Jolie took a genetic test that proved she was carrying a dangerous mutation of the BRCA1 gene. According to statistical databases, women carrying this mutation have an 87 per cent probability of developing breast cancer. Although at the time Jolie did not have cancer, she decided to pre-empt the disease and undergo a double mastectomy. She didn’t feel ill but she wisely decided to listen to the computer algorithms. “You may not feel anything is wrong,” said the algorithms, “but there is a time bomb ticking in your DNA. Do something about it — now!”

 


 

What is already happening in medicine is likely to take place in more and more fields. It starts with simple things, like which book to buy and read. How do humanists choose a book? They go to a bookstore, wander between the aisles, flip through one book and read the first few sentences of another, until some gut feeling connects them to a particular tome. Dataists use Amazon. As I enter the Amazon virtual store, a message pops up and tells me: “I know which books you liked in the past. People with similar tastes also tend to love this or that new book.”
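The “people with similar tastes” message describes, in essence, collaborative filtering: score the books you haven’t read by the ratings of readers who resemble you. Purely as an illustration (the readers, titles and ratings below are invented, and this is not a description of Amazon’s actual system), a minimal user-based version might look like this in Python:

# Minimal sketch of the "people with similar tastes" idea
# (user-based collaborative filtering). All data here is hypothetical.
from math import sqrt

ratings = {
    "alice": {"Sapiens": 5, "Homo Deus": 4, "Emile": 2},
    "bob":   {"Sapiens": 5, "Homo Deus": 5, "The Selfish Gene": 4},
    "carol": {"Emile": 5, "The Social Contract": 4},
}

def similarity(a, b):
    """Cosine similarity over the books two readers have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[t] * b[t] for t in shared)
    norm = sqrt(sum(a[t] ** 2 for t in shared)) * sqrt(sum(b[t] ** 2 for t in shared))
    return dot / norm

def recommend(user, k=3):
    """Rank unread books by the ratings of similar readers, weighted by similarity."""
    mine = ratings[user]
    scores, weights = {}, {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = similarity(mine, theirs)
        for title, rating in theirs.items():
            if title in mine:
                continue  # only suggest books the user has not already read
            scores[title] = scores.get(title, 0.0) + sim * rating
            weights[title] = weights.get(title, 0.0) + sim
    ranked = {t: scores[t] / weights[t] for t in scores if weights[t] > 0}
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(recommend("alice"))  # books alice hasn't read, ranked by similar readers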

 

This is just the beginning. Devices such as Amazon’s Kindle are able constantly to collect data on their users while they are reading books. Your Kindle can monitor which parts of a book you read quickly, and which slowly; on which page you took a break, and on which sentence you abandoned the book, never to pick it up again. If Kindle were to be upgraded with face-recognition software and biometric sensors, it would know how each sentence influenced your heart rate and blood pressure. It would know what made you laugh, what made you sad, what made you angry. Soon, books will read you while you are reading them. And whereas you quickly forget most of what you read, computer programs need never forget. Such data should eventually enable Amazon to choose books for you with uncanny precision. It will also allow Amazon to know exactly who you are, and how to press your emotional buttons.

 

Take this to its logical conclusion, and eventually people may give algorithms the authority to make the most important decisions in their lives, such as who to marry. In medieval Europe, priests and parents had the authority to choose your mate for you. In humanist societies we give this authority to our feelings. In a Dataist society I will ask Google to choose. “Listen, Google,” I will say, “both John and Paul are courting me. I like both of them, but in a different way, and it’s so hard to make up my mind. Given everything you know, what do you advise me to do?”

 

And Google will answer: “Well, I know you from the day you were born. I have read all your emails, recorded all your phone calls, and know your favourite films, your DNA and the entire biometric history of your heart. I have exact data about each date you went on, and I can show you second-by-second graphs of your heart rate, blood pressure and sugar levels whenever you went on a date with John or Paul. And, naturally enough, I know them as well as I know you. Based on all this information, on my superb algorithms and on decades’ worth of statistics about millions of relationships — I advise you to go with John, with an 87 per cent probability of being more satisfied with him in the long run.

 

“Indeed, I know you so well that I even know you don’t like this answer. Paul is much more handsome than John and, because you give external appearances too much weight, you secretly wanted me to say ‘Paul’. Looks matter, of course, but not as much as you think. Your biochemical algorithms — which evolved tens of thousands of years ago in the African savannah — give external beauty a weight of 35 per cent in their overall rating of potential mates. My algorithms — which are based on the most up-to-date studies and statistics — say that looks have only a 14 per cent impact on the long-term success of romantic relationships. So, even though I took Paul’s beauty into account, I still tell you that you would be better off with John.”
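The reweighting Harari imagines here is ordinary weighted-average arithmetic. A toy calculation (the two-factor model and all scores are invented for illustration; Harari never specifies how such a rating would actually be computed) shows how shrinking the weight on looks from 35 per cent to 14 per cent can flip the recommendation from Paul to John:

# Hypothetical 0-10 scores on two factors, illustrating how a smaller
# weight on looks (35% -> 14%) can flip the imagined algorithm's advice.
candidates = {
    "Paul": {"looks": 9, "compatibility": 6},
    "John": {"looks": 5, "compatibility": 8},
}

def overall(person, looks_weight):
    """Weighted average of the two factors; the remaining weight goes to compatibility."""
    return looks_weight * person["looks"] + (1.0 - looks_weight) * person["compatibility"]

for w in (0.35, 0.14):  # evolved weighting vs. the algorithm's weighting
    best = max(candidates, key=lambda name: overall(candidates[name], w))
    print(f"looks weighted at {w:.0%}: recommend {best}")
    # 35% -> Paul (7.05 vs 6.95); 14% -> John (7.58 vs 6.42)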

 


 

Google won’t have to be perfect. It won’t have to be correct all the time. It will just have to be better on average than me. And that is not so difficult, because most people don’t know themselves very well, and most people often make terrible mistakes in the most important decisions of their lives.

 

The Dataist worldview is very attractive to politicians, business people and ordinary consumers because it offers groundbreaking technologies and immense new powers. For all the fear of losing our privacy and our free choice, when consumers have to choose between keeping their privacy and having access to far superior healthcare — most will choose health.

 

For scholars and intellectuals, Dataism promises to provide the scientific Holy Grail that has eluded us for centuries: a single overarching theory that unifies all the scientific disciplines from musicology through economics, all the way to biology. According to Dataism, Beethoven’s Fifth Symphony, a stock-exchange bubble and the flu virus are just three patterns of dataflow that can be analysed using the same basic concepts and tools. This idea is extremely attractive. It gives all scientists a common language, builds bridges over academic rifts and easily exports insights across disciplinary borders.

 

Of course, like previous all-encompassing dogmas, Dataism, too, may be founded on a misunderstanding of life. In particular, Dataism has no answer to the notorious “hard problem of consciousness”. At present we are very far from explaining consciousness in terms of data-processing. Why is it that when billions of neurons in the brain fire particular signals to one another, a subjective feeling of love or fear or anger appears? We don’t have a clue.

 

But even if Dataism is wrong about life, it may still conquer the world. Many previous creeds gained enormous popularity and power despite their factual mistakes. If Christianity and communism could do it, why not Dataism? Dataism has especially good prospects, because it is currently spreading across all scientific disciplines. A unified scientific paradigm may easily become an unassailable dogma.

 

If you don’t like this, and you want to stay beyond the reach of the algorithms, there is probably just one piece of advice to give you, the oldest in the book: know thyself. In the end, it’s a simple empirical question. As long as you have greater insight and self-knowledge than the algorithms, your choices will still be superior and you will keep at least some authority in your hands. If the algorithms nevertheless seem poised to take over, it is mainly because most human beings hardly know themselves at all.

 

Yuval Noah Harari is the author of ‘Homo Deus: A Brief History of Tomorrow’, published by Harvill Secker on September 8. He will be speaking in London, Cambridge, Manchester and Bristol. For more information go to po.st/HomoDeusEvents

 


Illustrations by Janne Iivonen

 


The unacknowledged fictions of Yuval Harari

 

Replacing one set of myths with another is no basis for confronting the earth’s existential problems.

 

Jeremy Lent

6 January 2019

https://www.opendemocracy.net/en/transformation/unacknowledged-fictions-of-yuval-harari/

 

When Yuval Noah Harari speaks, the world listens. Or at least, much of the world’s reading public. His first two blockbusters, Sapiens: A Brief History of Humankind, and Homo Deus: A Brief History of Tomorrow, have sold 12 million copies globally, and his new book, 21 Lessons for the 21st Century, is on bestseller lists everywhere. His fans include Barack Obama, Bill Gates, and Mark Zuckerberg, he’s admired by opinion shapers as diverse as Sam Harris and Russell Brand, and he’s fêted at the IMF and World Economic Forum.

 

A galvanizing theme of Harari’s writing is that humans are driven by shared, frequently unacknowledged fictions. Many of these fictions, he rightly points out, underlie the concepts that organize society, such as the value of the US dollar or the authority of nation states. In critiquing the current vogue topic of “fake news,” Harari observes that this is nothing new, but has been around for millennia in the form of organized religion.

 

However, though apparently unwittingly, Harari himself perpetuates a set of unacknowledged fictions that he relies on as foundations for his own version of reality. Given his enormous sway as a public intellectual, this risks causing considerable harm. Like the traditional religious dogmas that he mocks, his own implicit stories wield great influence over the global power elite as long as they remain unacknowledged.

 

Fiction #1: nature is a machine.

 

One of Harari’s most striking prophecies is that artificial intelligence will come to replace even the most creative human endeavors, and ultimately be capable of controlling every aspect of human cognition. The underlying rationale for his prediction is that human consciousness - including emotions, intuitions, and feelings - is nothing more than a series of algorithms, which could all theoretically be deciphered and predicted by a computer program. Our feelings, he tells us, are merely “biochemical mechanisms” resulting from “billions of neurons calculating” based on algorithms honed by evolution.

 

The idea that humans - and indeed all of nature - can be understood as very complicated machines is in fact a uniquely European cultural myth that arose in the 17th century and has since taken hold of the popular imagination. In the heady days of the Scientific Revolution, Descartes declared he saw no difference “between the machines made by craftsmen and the various bodies that nature alone composes.” The preferred machine metaphor is now the computer, with Richard Dawkins (apparently influencing Harari) writing that “life is just bytes and bytes and bytes of digital information,” but the idea remains the same - everything in nature can ultimately be reduced to its component parts and understood accordingly.

 

This myth, however attractive it might be to our technology-driven age, is as fictional as the theory that God created the universe in six days. Biologists point out principles intrinsic to life that categorically differentiate it from even the most complicated machine. Living organisms cannot be split, like a computer, between hardware and software. A neuron’s biophysical makeup is intrinsically linked to its behavior: the information it transmits doesn’t exist separately from its material construction. As prominent neuroscientist Antonio Damasio states in The Strange Order of Things, Harari’s assumptions are “not scientifically sound” and his conclusions are “certainly wrong.”

 

The dangers of this fiction arise when others base their actions on this flawed foundation. Believing that nature is a machine inspires a hubristic arrogance that technology can solve all humanity’s problems. Molecular biologists promote genetic engineering to enhance food production, while others advocate geo-engineering as a solution to climate breakdown - strategies fraught with the risk of massive unintended consequences. Recognizing that natural processes, from the human mind to the entire global ecosystem, are complex, nonlinear, and inherently unpredictable, is a necessary first step in crafting truly systemic solutions to the existential crises facing our civilization.

 

Fiction #2: “there is no alternative.”

 

When Margaret Thatcher teamed up with Ronald Reagan in the 1980s to impose the free-market, corporate-driven doctrine of neoliberalism on the world, she famously used the slogan “There Is No Alternative” to argue that the other two great ideologies of the twentieth century - fascism and communism - had failed, leaving her brand of unrestrained market capitalism as the only meaningful choice.

 

Astonishingly, three decades later, Harari echoes her caricatured version of history, declaring how, after the collapse of communism, only “the liberal story remained.” The current crisis, as Harari sees it, is that “liberalism has no obvious answers to the biggest problems we face.” We now need to “craft a completely new story,” he avers, to respond to the turmoil of modern times.

 

Sadly, Harari seems to have missed the abundant, effervescent broth of inspiring visions for a flourishing future developed over decades by progressive thinkers across the globe. He appears to be entirely ignorant of the new foundations for economics proffered by pioneering thinkers such as Kate Raworth; the exciting new principles for a life-affirming future within the framework of an Ecological Civilization; the stirring moral foundation established by the Earth Charter and endorsed by over 6,000 organizations worldwide; in addition to countless other variations of the “new story” that Harari laments is missing. It’s a story of hope that celebrates our shared humanity and emphasizes our deep connection with a living earth.

 

The problem is not, as Harari argues, that we are “left without any story.” It is, rather, that the world’s mass media is dominated by the same overpowering transnational corporations that maintain a stranglehold over virtually all other aspects of global activity, and choose not to give a platform to the stories that undermine the Thatcherite myth that neoliberalism is still the only game in town.

 

Harari is well positioned to apprise mainstream thinkers of the hopeful possibilities on offer. In doing so, he would have the opportunity to influence the future that—as he rightly points out—holds terrifying prospects without a change in direction. Is he ready for this challenge? Perhaps, but first he would need to investigate the assumptions underlying Fiction #3.

 

Fiction #3: life is meaningless - it’s best to do nothing.

 

Yuval Harari is a dedicated meditator, sitting for two hours a day to practice vipassana (insight) meditation, which he learned from the celebrated teacher Goenka. Based on Goenka’s tutelage, Harari offers his own version of the Buddha’s original teaching: “Life,” he writes, “has no meaning, and people don’t need to create any meaning.” In answer to the question as to what people should do, Harari summarizes his view of the Buddha’s answer: “Do nothing. Absolutely nothing.”

 

As a fellow meditator and admirer of Buddhist principles, I share Harari’s conviction that Buddhist insight can help reduce suffering on many levels. However, I am concerned that, in distilling the Buddha’s teaching to these sound bites, Harari gives a philosophical justification to those who choose to do nothing to avert the imminent humanitarian and ecological catastrophes threatening our future.

 

The statement that “life has no meaning” seems to arise more from the modern reductionist ontology of physicist Steven Weinberg than from the mouth of the Buddha. To suggest that “people don’t need to create any meaning” contradicts an evolved instinct of the human species. As I describe in my own book, The Patterning Instinct: A Cultural History of Humanity’s Search for Meaning, human cognition drives us to impose meaning onto the universe, a process that’s substantially shaped by the culture a person is born into. However, by recognizing the underlying structures of meaning instilled in us by our own culture, we can become mindful of our own patterns of thought, thus enabling us to reshape them for more beneficial outcomes - a process I call “cultural mindfulness.”

 

There are, in fact, other interpretations of the Buddha’s core teachings that lead to very different distillations - ones that cry out “Do Something!” - inspiring meaningful engagement in worldly activities. The principle of ‘dependent origination,’ for example, emphasizes the intrinsic interdependence of all aspects of existence, and forms the basis for the politically engaged Buddhism of prominent monk and peace activist, Thích Nhất Hạnh. Another essential Buddhist practice is metta, or compassion meditation, which focuses on identifying with the suffering of others, and resolving to devote one’s own life energies to reducing that suffering. These are sources of meaning in life that are fundamentally consistent with Buddhist principles.

 

Fiction #4: humanity’s future is a spectator sport.

 

A distinguishing characteristic of Harari’s writing, and one that may account for much of his prodigious success, is his ability to transcend the preconceptions of everyday life and offer a panoramic view of human history - as though he’s orbiting the earth from ten thousand miles and transmitting what he sees. Through his meditation practice, Harari confides, he has been able to “actually observe reality as it is,” which gave him the focus and clear-sightedness to write Sapiens and Homo Deus. He differentiates his recent 21 Lessons for the 21st Century from his first two books by declaring that, in contrast to their ten-thousand-mile Earth orbit, he will now “zoom in on the here and now.”

 

While the content of his new book is definitely the messy present, Harari continues to view the world as if through a scientist’s objective lens. However, Harari’s understanding of science appears to be limited to the confines of Fiction #1 - “Nature Is a Machine” - which requires complete detachment from whatever is being studied. Acknowledging that his forecast for humanity “seems patently unjust,” he justifies his own moral detachment, retorting that “this is a historical prediction, not a political manifesto.”

 

In recent decades, however, systems thinkers in multiple scientific disciplines have transformed this notion of pristine scientific objectivity. Recognizing nature as a dynamic, self-organized fractal complex of nonlinear systems, which can only be truly understood in terms of how each part relates to the others and to the whole, they have shown how these principles apply, not just to the natural world, but also to our own human social systems. A crucial implication is that the observer is part of what is being observed, with the result that the observer’s conclusions and ensuing actions feed back into the system being investigated.

 

This insight holds important ethical implications for approaching the great problems facing humanity. Once you recognize that you are part of the system you’re analyzing, this raises a moral imperative to act on your findings, and to make others aware of their own intrinsic responsibilities. The future is not a spectator sport - in fact, every one of us is on the team and can make a difference in the outcome. We can no longer afford any fictions - the stakes have become too high.

 



We Must Not Accept an Algorithmic Account of Human Life

 

To a certain extent, living organisms are constructed according to algorithms, but they are not algorithms themselves. Every component is a vulnerable living entity of its own. They are not simply lines of code.

 

By Antonio Damasio, Contributor

Dornsife Professor of Neuroscience, Psychology and Philosophy at USC

06/28/2016 01:05pm EDT | Updated December 6, 2017

https://www.huffpost.com/entry/algorithmic-human-life_b_10699712

 

One remarkable development of twentieth century science is the discovery that both physical structures and the communication of ideas can be assembled on the basis of algorithms that make use of codes. The genetic code helps living organisms assemble the basics of other living organisms and guide their development. Verbal languages provide us with alphabets (with which we can assemble an infinity of words that name an infinity of objects, actions, relationships and events) and with grammatical rules that govern the sequencing of the words so as to construct sentences and stories that narrate events or explain ideas.

 

Many aspects of the assembly of natural organisms and of communication depend on algorithms and on coding, as do all aspects of computation as well as the enterprises of artificial intelligence and robotics. These solid and interesting facts, however, have given rise to the sweeping notion that natural organisms would be reducible to algorithms or fully explainable by algorithms.

 

The worlds of artificial intelligence, biology and even neuroscience are inebriated with this notion. The thoughtful historian Yuval Harari echoed it in a recent interview published in The WorldPost. Asked to pick one idea that would prove most influential in the next 50 years, Harari responded, "It's definitely the algorithm" and added that current biology can be summarized in three words: "Organisms are algorithms." Not only that, biology and computer science are converging because the "basic insight that unites the biological with the electronic is that bodies and brains are also algorithms." The fact that "we can write algorithms artificially" enables the convergence.

 

Needless to say, I am not blaming Harari for voicing ideas that have gained currency in technology and science circles. I am only interested in the merit of the ideas, and because ideas do matter, this is an opportunity to consider whether they conform to scientific fact and how they fare in human terms. From my perspective, they are not scientifically sound, and they suggest a problematic account of humanity. Why so?

 


 

Saying that living organisms are algorithms is, at the very least, misleading and, in strict terms, false. Algorithms are formulas, recipes, enumerations of steps in the construction of a predicted result. As noted, living organisms, including human organisms, use code-dependent algorithms such as the genetic machinery. But while, to a certain extent, living organisms are constructed according to algorithms, they are not algorithms themselves. They are consequences of the engagement of algorithms. The critical issue, however, is that living organisms are collections of tissues, organs and systems within which every component cell is a vulnerable living entity made of proteins, lipids and sugars. They are not lines of code.

 

The idea that living organisms are algorithms helps perpetuate the false notion that the substrates of organism construction are not relevant. This is because embedded within the label "algorithm" is a notion of context and substrate-independence. Applying the same algorithm to new contexts, using different substrates, is presumed to achieve similar results. But this is simply not so. Substrates count. The substrate of life is organized chemistry, a servant to thermodynamics and the imperative of homeostasis, and to the best of our current knowledge, that substrate is essential to explain who we are. Why so?

 

First, because the particular chemical substrate of life is necessary for the phenomena of feeling, and, in humans, reflection and elaboration on the experience of feelings is the basis for much that we hold as humanly distinctive and admirable, including moral and aesthetic judgments as well as the experience and notions of being and transcendence. While there is plenty of evidence that artificial organisms can be designed so as to operate intelligently and even surpass the intelligence of human organisms, there is no evidence to date that such artificial organisms can generate feelings without an actual living substrate (I note that the counterhypothesis, i.e. that certain designs might allow artificial organisms to simulate feelings, is well worth investigating).

 

In brief, there is no evidence that pure intellectual processes, which lend themselves well to an algorithmic account and which do not appear to be as sensitive to the substrate, can form the basis for what makes us distinctly human. Throw away the chemical substrate and you throw away feelings along with the values that humanistic cultures, from the axial ages forward, have been celebrating in the form of arts, religious beliefs, justice and fair governance. Once we remove suffering and flourishing, for example, there is no natural grounding for the logical conclusion that human beings deserve dignity. Of note, none of this implies that the higher functions of living organisms are not amenable to scientific investigation. They certainly are, provided the investigations take into account the living substrate and the complexity of the processes.

 

The implication of these distinctions is not trivial as we contemplate a new era of medicine in which the extension of human life will be possible by means of genetic engineering and the creation of human-artificial hybrids.

 


 

Second, the abundant presence of conscious feeling and creative intelligence in humans guarantees that the execution of the native algorithms can be thwarted. Our freedom to run against the impulses that the good or bad angels of our natures attempt to impose on us is limited; but the fact remains that we can act against such impulses. The history of human cultures is in good part a narrative of our resistance to native algorithms by means of inventions not predicted by those algorithms. One can argue that all of these departures from native algorithms are in turn open to an algorithmic account. The scope of an algorithm may be expanded to capture a system at an arbitrary level of detail, but by then, what are the advantages of using the term algorithm?

 

Third, accepting an algorithmic account of humanity is the sort of reductionist position that often leads good souls to dismiss science and technology as demeaning and bemoan the passing of an age in which philosophy, complete with aesthetic sensibility and a religious response to suffering and death, made humans soar above the species on whose biological shoulders they were riding. But of course, denying the value of science as a reaction to problematic accounts of humanity is not acceptable either.

 

Science and philosophical inquiry can proceed side by side, not always in synchrony but often feeding off each other. Science needs to continue, in spite of the enthusiasts who reduce the sublime power of life to engineering and entrepreneurial successes, and in spite of the nervous Jeremiahs who are afraid that science will not honor the humanist traditions of the past. Both science, as honest knowledge-seeking, and philosophy, as serious debate and love of honest scientific knowledge, will not only endure but prevail.





Transhumanism

Transhumanism is a philosophical movement, the proponents of which advocate and predict the enhancement of the human condition by developing and making widely available sophisticated technologies able to greatly enhance longevity, mood and cognitive abilities.

 

Transhumanist thinkers study the potential benefits and dangers of emerging technologies that could overcome fundamental human limitations as well as the ethics of using such technologies. Some transhumanists believe that human beings may eventually be able to transform themselves into beings with abilities so greatly expanded from the current condition as to merit the label of posthuman beings.

 

Another topic of transhumanist research is how to protect humanity against existential risks, such as nuclear war or asteroid collision.

 

The contemporary meaning of the term "transhumanism" was foreshadowed by one of the first professors of futurology, a man who changed his name to FM-2030. In the 1960s, he taught "new concepts of the human" at The New School when he began to identify people who adopt technologies, lifestyles and worldviews "transitional" to posthumanity as "transhuman". The assertion would lay the intellectual groundwork for the British philosopher Max More to begin articulating the principles of transhumanism as a futurist philosophy in 1990, and organizing in California a school of thought that has since grown into the worldwide transhumanist movement.

 

Influenced by seminal works of science fiction, the transhumanist vision of a transformed future humanity has attracted many supporters and detractors from a wide range of perspectives, including philosophy and religion.

 

In 2017, Penn State University Press, in cooperation with philosopher Stefan Lorenz Sorgner and sociologist James Hughes, established the Journal of Posthuman Studies as the first academic journal explicitly dedicated to the posthuman, with the goal of clarifying the notions of posthumanism and transhumanism, as well as comparing and contrasting both.

