Harvey » Mon Sep 27, 2021 12:49 am wrote:
DrEvil » Sun Sep 26, 2021 5:07 pm wrote: The lacking piece is generalized learning
The lacking piece is meaning. In my view, the transhumanist agenda was never intended to arrive at technological transcendence, that is, to make people out of machines; it was always intended to achieve precisely the opposite: to make machines out of people.
The unconscious desire of transhumanism is to make order where order already exists.
http://www.unariunwisdom.com/
UNARIUN WISDOM – Blending of Science and Spirit
This web site is an online study group dedicated to the discussion and study of the works of Dr. Ernest L. Norman.
“To attempt to describe the UN.AR.I.U.S. Science would be like trying to place all the visible and invisible universes into the proverbial goldfish bowl, for this Science does encompass all known and unknown elements and factors of life and the creative principles which make all things possible. For practical purposes, however and inasmuch as any person would be completely overwhelmed with even a glimpse into the Infinite Cosmos, he must therefore begin – as Kung Fu said: This long journey into Infinity begins with the first step – and this first step is the teaching course of UN.AR.I.U.S. Science.
It must be properly understood that unlike all other existing religions, cultisms, philosophies, metaphysics and so-called mind sciences, et cetera, the Unariun teachings is an exact science. In our present day electronic and atomic technocracy, the man of science has struck many close parallels with the basic principles of life...
Artificial Consciousness Is Boring
The reality of AI is something harder to comprehend.
By Stephen Marche June 19, 2022, 8 AM ET
Last week, Google put one of its engineers on administrative leave after he claimed to have encountered machine sentience on a dialogue agent named LaMDA. Because machine sentience is a staple of the movies, and because the dream of artificial personhood is as old as science itself, the story went viral, gathering far more attention than pretty much any story about natural-language processing (NLP) has ever received. That’s a shame. The notion that LaMDA is sentient is nonsense: LaMDA is no more conscious than a pocket calculator. More importantly, the silly fantasy of machine sentience has once again been allowed to dominate the artificial-intelligence conversation when much stranger and richer, and more potentially dangerous and beautiful, developments are under way.
The fact that LaMDA in particular has been the center of attention is, frankly, a little quaint. LaMDA is a dialogue agent. The purpose of dialogue agents is to convince you that you are talking with a person. Utterly convincing chatbots are far from groundbreaking tech at this point. Programs such as Project December are already capable of re-creating dead loved ones using NLP. But those simulations are no more alive than a photograph of your dead great-grandfather is.
Already, models exist that are more powerful and mystifying than LaMDA. LaMDA operates on up to 137 billion parameters, which are, speaking broadly, the patterns in language that a transformer-based NLP uses to create meaningful text prediction. Recently I spoke with the engineers who worked on Google’s latest language model, PaLM, which has 540 billion parameters and is capable of hundreds of separate tasks without being specifically trained to do them. It is a true artificial general intelligence, insofar as it can apply itself to different intellectual tasks without specific training “out of the box,” as it were.
Some of these tasks are obviously useful and potentially transformative. According to the engineers—and, to be clear, I did not see PaLM in action myself, because it is not a product—if you ask it a question in Bengali, it can answer in both Bengali and English. If you ask it to translate a piece of code from C to Python, it can do so. It can summarize text. It can explain jokes. Then there’s the function that has startled its own developers, and which requires a certain distance and intellectual coolness not to freak out over. PaLM can reason. Or, to be more precise—and precision very much matters here—PaLM can perform reason.
The method by which PaLM reasons is called “chain-of-thought prompting.” Sharan Narang, one of the engineers leading the development of PaLM, told me that large language models have never been very good at making logical leaps unless explicitly trained to do so. Giving a large language model the answer to a math problem and then asking it to replicate the means of solving that math problem tends not to work. But in chain-of-thought prompting, you explain the method of getting the answer instead of giving the answer itself. The approach is closer to teaching children than programming machines. “If you just told them the answer is 11, they would be confused. But if you broke it down, they do better,” Narang said.
Google illustrates the process in the following image:
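The image itself is not reproduced in this excerpt, but the contrast it illustrates can be sketched as plain text. A hypothetical example (the wording is mine, modeled on the "answer is 11" example Narang describes above, not Google's actual prompt):

```python
# Standard few-shot prompting: the worked example shows only the final answer.
standard_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)

# Chain-of-thought prompting: the worked example spells out the reasoning
# steps, so the model learns to produce intermediate steps before answering.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)
```

The only difference between the two is the worked example: same question, same answer, but one shows the method of getting there.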
Adding to the general weirdness of this property is the fact that Google’s engineers themselves do not understand how or why PaLM is capable of this function. The difference between PaLM and other models could be the brute computational power at play. It could be the fact that only 78 percent of the language PaLM was trained on is English, thus broadening the meanings available to PaLM as opposed to other large language models, such as GPT-3. Or it could be the fact that the engineers changed the way that they tokenize mathematical data in the inputs. The engineers have their guesses, but they themselves don’t feel that their guesses are better than anybody else’s. Put simply, PaLM “has demonstrated capabilities that we have not seen before,” Aakanksha Chowdhery, a member of the PaLM team who is as close as any engineer to understanding PaLM, told me.
None of this has anything to do with artificial consciousness, of course. “I don’t anthropomorphize,” Chowdhery said bluntly. “We are simply predicting language.” Artificial consciousness is a remote dream that remains firmly entrenched in science fiction, because we have no idea what human consciousness is; there is no functioning falsifiable thesis of consciousness, just a bunch of vague notions. And if there is no way to test for consciousness, there is no way to program it. You can ask an algorithm to do only what you tell it to do. All that we can come up with to compare machines with humans are little games, such as Turing’s imitation game, that ultimately prove nothing.
Where we’ve arrived instead is somewhere more foreign than artificial consciousness. In a strange way, a program like PaLM would be easier to comprehend if it simply were sentient. We at least know what the experience of consciousness entails. All of PaLM’s functions that I’ve described so far come from nothing more than text prediction. What word makes sense next? That’s it. That’s all. Why would that function result in such enormous leaps in the capacity to make meaning? This technology works by substrata that underlie not just all language but all meaning (or is there a difference?), and these substrata are fundamentally mysterious. PaLM may possess modalities that transcend our understanding. What does PaLM understand that we don’t know how to ask it?
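To build intuition for how bare the "what word makes sense next?" mechanism is, here is a deliberately tiny bigram predictor in Python. This is a toy illustration only; a transformer like PaLM operates at an entirely different scale and architecture, but the task — score likely continuations, emit the best one — is the same in kind:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words were seen following it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for word, nxt in zip(words, words[1:]):
        following[word][nxt] += 1
    return following

def predict_next(following, word):
    """Return the continuation seen most often after `word`, if any."""
    candidates = following.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

corpus = "the cat sat on the mat and the cat chased the dog"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Everything PaLM does — the Bengali answers, the code translation, the explained jokes — emerges from a vastly scaled-up version of this one move.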
Using a word like understand is fraught at this juncture. One problem in grappling with the reality of NLP is the AI-hype machine, which, like everything in Silicon Valley, oversells itself. Google, in its promotional materials, claims that PaLM demonstrates “impressive natural language understanding.” But what does the word understanding mean in this context? I am of two minds myself: On the one hand, PaLM and other large language models are capable of understanding in the sense that if you tell them something, its meaning registers. On the other hand, this is nothing at all like human understanding. “I find our language is not good at expressing these things,” Zoubin Ghahramani, the vice president of research at Google, told me. “We have words for mapping meaning between sentences and objects, and the words that we use are words like understanding. The problem is that, in a narrow sense, you could say these systems understand just like a calculator understands addition, and in a deeper sense they don’t understand. We have to take these words with a grain of salt.” Needless to say, Twitter conversations and the viral information network in general are not particularly good at taking things with a grain of salt.
Ghahramani is enthusiastic about the unsettling unknown of all of this. He has been working in artificial intelligence for 30 years, but told me that right now is “the most exciting time to be in the field” exactly because of “the rate at which we are surprised by the technology.” He sees huge potential for AI as a tool in use cases where humans are frankly very bad at things but computers and AI systems are very good at them. “We tend to think about intelligence in a very human-centric way, and that leads us to all sorts of problems,” Ghahramani said. “One is that we anthropomorphize technologies that are dumb statistical-pattern matchers. Another problem is we gravitate towards trying to mimic human abilities rather than complementing human abilities.” Humans are not built to find the meaning in genomic sequences, for example, but large language models may be. Large language models can find meaning in places where we can find only chaos.
Even so, enormous social and political dangers are at play here, alongside still hard-to-fathom possibilities for beauty. Large language models do not produce consciousness but they do produce convincing imitations of consciousness, which are only going to improve drastically, and will continue to confuse people. When even a Google engineer can’t tell the difference between a dialogue agent and a real person, what hope is there going to be when this stuff reaches the general public? Unlike machine sentience, these questions are real. Answering them will require unprecedented collaboration between humanists and technologists. The very nature of meaning is at stake.
So, no, Google does not have an artificial consciousness. Instead, it is building enormously powerful large language systems with the ultimate goal, as Narang said, “to enable one model that can generalize across millions of tasks and ingest data across multiple modalities.” Frankly, it’s enough to worry about without the science-fiction robots playing on the screens in our head. Google has no plans to turn PaLM into a product. “We shouldn’t get ahead of ourselves in terms of the capabilities,” Ghahramani said. “We need to approach all of this technology in a cautious and skeptical way.” Artificial intelligence, particularly the AI derived from deep learning, tends to rise rapidly through periods of shocking development, and then stall out. (See self-driving cars, medical imaging, etc.) When the leaps come, though, they come hard and fast and in unexpected ways. Ghahramani told me that we need to achieve these leaps safely. He’s right. We’re talking about a generalized-meaning machine here: It would be good to be careful.
The fantasy of sentience through artificial intelligence is not just wrong; it’s boring. It’s the dream of innovation by way of received ideas, the future for people whose minds never escaped the spell of 1930s science-fiction serials. The questions forced on us by the latest AI technology are the most profound and the most simple; they are questions that, as ever, we are completely unprepared to face. I worry that human beings may simply not have the intelligence to deal with the fallout from artificial intelligence. The line between our language and the language of the machines is blurring, and our capacity to understand the distinction is dissolving inside the blur.
Belligerent Savant » Sat Mar 23, 2019 11:07 am wrote:.
Ah yes, Hoffman. There's stuff from Hoffman within RI's archives, which I'll endeavor to locate and share when I'm in front of my laptop (rather than this irksome mobile device).
Indeed, Elfis included a vid clip of a Hoffman talk in page 1 of this thread.
His Desktop analogy is the one that gets the rounds on the interwebs:
Hoffman's computer analogy is that physical space is like the desktop and that objects in it are like desktop icons, which are produced by the graphical user interface (GUI). Our senses, he says, form a biological user interface—a gooey GUI—between our brain and the outside world, transducing physical stimuli such as photons of light into neural impulses processed by the visual cortex as things in the environment. GUIs are useful because you don't need to know what is inside computers and brains. You just need to know how to interact with the interface well enough to accomplish your task. Adaptive function, not veridical perception, is what is important.
Hoffman's holotype is the Australian jewel beetle Julodimorpha bakewelli. Females are large, shiny, brown and dimpled. So, too, are discarded beer bottles dubbed “stubbies,” and males will mount them until they die by heat, starvation or ants. The species was on the brink of extinction because its senses and brain were designed by natural selection not to perceive reality (it's a beer bottle, you idiot!) but to mate with anything big, brown, shiny and dimply.
The author of the excerpt is somewhat of a detractor, however:
ITP is well worth serious consideration and testing, but I have my doubts. First, how could a more accurate perception of reality not be adaptive? Hoffman's answer is that evolution gave us an interface to hide the underlying reality because, for example, you don't need to know how neurons create images of snakes; you just need to jump out of the way of the snake icon. But how did the icon come to look like a snake in the first place? Natural selection. And why did some nonpoisonous snakes evolve to mimic poisonous species? Because predators avoid real poisonous snakes. Mimicry works only if there is an objective reality to mimic.
Hoffman has claimed that “a rock is an interface icon, not a constituent of objective reality.” But a real rock chipped into an arrow point and thrown at a four-legged meal works even if you don't know physics and calculus. Is that not veridical perception with adaptive significance?
As for jewel beetles, stubbies are what ethologists call supernormal stimuli, which mimic objects that organisms evolved to respond to and elicit a stronger response in doing so, such as (for some people) silicone breast implants in women and testosterone-enhanced bodybuilding in men. Supernormal stimuli operate only because evolution designed us to respond to normal stimuli, which must be accurately portrayed by our senses to our brain to work.
Hoffman says that perception is species-specific and that we should take predators seriously but not literally. Yes, a dolphin's icon for “shark” no doubt looks different than a human's, but there really are sharks, and they really do have powerful tails on one end and a mouthful of teeth on the other end, and that is true no matter how your sensory system works.
Also, computer simulations are useful for modeling how evolution might have happened, but a real-world test of ITP would be to determine if most biological sensory interfaces create icons that resemble reality or distort it. I'm betting on reality. Data will tell.
Finally, why present this problem as an either-or choice between fitness and truth? Adaptations depend in large part on a relatively accurate model of reality. The fact that science progresses toward, say, eradicating diseases and landing spacecraft on Mars must mean that our perceptions of reality are growing ever closer to the truth, even if it is with a small “t.”
https://www.scientificamerican.com/arti ... eally-are/
Of course, none of this rules out the Holographic theory of our Universe.
DrEvil » 26 Jun 2022 00:08 wrote: My guess (heavy emphasis on "guess") is that when you noticed the coat, you triggered some of the same circuitry that would fire on noticing a rustling in the tall grass. The "you're about to get eaten by a tiger, pay attention" circuitry. Change means uncertainty, and uncertainty means things can be dangerous, and you pay attention to potentially dangerous things and try to work out what's going on.
If you subscribe to the free energy principle, this is an excellent example of it in action. The brain wants to minimize surprise, keeping its predictions about its surroundings as accurate as possible. If the brain starts throwing out red flags, it immediately tries to correct the internal model by working out why things are changing, so it can better predict them in the future.
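A toy numerical sketch of that idea (illustrative only; the actual free energy principle involves far richer probabilistic machinery than this simple error-correction loop): the internal estimate is nudged toward each observation in proportion to the prediction error, so surprise shrinks as the model adapts to the change.

```python
def update_belief(belief, observation, learning_rate=0.2):
    """Nudge the internal estimate toward the observation,
    in proportion to the prediction error ("surprise")."""
    error = observation - belief           # red flag: prediction != reality
    return belief + learning_rate * error  # correct the internal model

# Something in the environment changed: the model expected 0.0,
# but observations now come in at 1.0 (the coat on the chair).
belief = 0.0
surprises = []
for _ in range(20):
    surprises.append(abs(1.0 - belief))    # how surprised the model is
    belief = update_belief(belief, 1.0)

# Prediction error decays as the internal model absorbs the change.
print(surprises[0], surprises[-1])
```

Once the belief has converged, the coat stops being surprising and stops demanding attention, which matches the everyday experience of a novelty fading into the background.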
Jorge Francisco Isidoro Luis Borges Acevedo (/ˈbɔːrhɛs/ BOR-hess,[2] Spanish: [ˈxoɾxe ˈlwis ˈβoɾxes]; 24 August 1899 – 14 June 1986) was an Argentine short-story writer, essayist, poet and translator, as well as a key figure in Spanish-language and international literature. His best-known books, Ficciones (Fictions) and El Aleph (The Aleph), published in the 1940s, are collections of short stories exploring themes of dreams, labyrinths, chance, infinity, archives, mirrors, fictional writers and mythology.[3] Borges's works have contributed to philosophical literature and the fantasy genre, and have had a major influence on the magic realist movement in 20th century Latin American literature.[4]
Born in Buenos Aires, Borges later moved with his family to Switzerland in 1914, where he studied at the Collège de Genève. The family travelled widely in Europe, including Spain. On his return to Argentina in 1921, Borges began publishing his poems and essays in surrealist literary journals. He also worked as a librarian and public lecturer.[5] In 1955, he was appointed director of the National Public Library and professor of English Literature at the University of Buenos Aires. He became completely blind by the age of 55. Scholars have suggested that his progressive blindness helped him to create innovative literary symbols through imagination.[Note 1] By the 1960s, his work was translated and published widely in the United States and Europe. Borges himself was fluent in several languages.
Jorge Luis Borges on Reality, Writing, Literature, and More
Happy birthday, Jorge Luis Borges! Here are some quotes from the writer:
“Reality is not always probable, or likely.”
“I have always imagined that Paradise will be a kind of library.”
“Before I ever wrote a single line, I knew, in some mysterious and therefore unequivocal way, that I was destined for literature. What I didn’t realize at first is that besides being destined to be a reader, I was also destined to be a writer, and I don’t think one is less important than the other.”
“Any time something is written against me, I not only share the sentiment but feel I could do the job far better myself. Perhaps I should advise would-be enemies to send me their grievances beforehand, with full assurance that they will receive my every aid and support. I have even secretly longed to write, under a pen name, a merciless tirade against myself.”
“Literature is not exhaustible, for the sufficient and simple reason that a single book is not. A book is not an isolated entity: it is a narration, an axis of innumerable narrations. One literature differs from another, either before or after it, not so much because of the text as for the manner in which it is read.”
“A book is more than a verbal structure or series of verbal structures; it is the dialogue it establishes with its reader and the intonation it imposes upon his voice and the changing and durable images it leaves in his memory. A book is not an isolated being: it is a relationship, an axis of innumerable relationships.”
“In the critic’s vocabulary, the word ‘precursor’ is indispensable, but it should be cleansed of all connotations of polemic or rivalry. The fact is that every writer creates his own precursors. His work modifies our conception of the past, as it will modify the future.”
“Music, states of happiness, mythology, faces belabored by time, certain twilights and certain places try to tell us something, or have said something we should not have missed, or are about to say something; this imminence of a revelation which does not occur is, perhaps, the aesthetic phenomenon.”
“That history should have imitated history was already sufficiently marvelous; that history should imitate literature is inconceivable….”
“I foresee that man will resign himself each day to more atrocious undertakings; soon there will be no one but warriors and brigands; I give them this counsel: The author of an atrocious undertaking ought to imagine that he has already accomplished it, ought to impose upon himself a future as irrevocable as the past.”
“A writer—and, I believe, generally all persons—must think that whatever happens to him or her is a resource. All things have been given to us for a purpose, and an artist must feel this more intensely. All that happens to us, including our humiliations, our misfortunes, our embarrassments, all is given to us as raw material, as clay, so that we may shape our art.”
“I am not sure that I exist, actually. I am all the writers that I have read, all the people that I have met, all the women that I have loved; all the cities that I have visited, all my ancestors.”
“As I think of the many myths, there is one that is very harmful, and that is the myth of countries. I mean, why should I think of myself as being an Argentine, and not a Chilean, and not an Uruguayan. I don’t know really. All of those myths that we impose on ourselves—and they make for hatred, for war, for enmity—are very harmful. Well, I suppose in the long run, governments and countries will die out and we’ll be just, well, cosmopolitans.”
“Doubt is one of the names of intelligence.”
“Writing is nothing more than a guided dream.”
“Every novel is an ideal plane inserted into the realm of reality.”
“Years of solitude had taught him that, in one’s memory, all days tend to be the same, but that there is not a day, not even in jail or in the hospital, which does not bring surprises, which is not a translucent network of minimal surprises.”
“Any life, however long and complicated it may be, actually consists of a single moment—the moment when a man knows forever more who he is.”
“The original is unfaithful to the translation.”
“Dictatorships foster oppression, dictatorships foster servitude, dictatorships foster cruelty; more abominable is the fact that they foster idiocy.”
An Occurrence at Owl Creek Bridge
Analysis
The real Owl Creek Bridge is in Tennessee; Bierce, who personally assisted in three military executions during his time as a soldier, likely changed the setting to northern Alabama because the actual bridge did not have a railroad near it at the time the story is set.[5]
The story explores the concept of "dying with dignity", showing the reader that the perception of "dignity" provides no mitigation for the deaths that occur in warfare. It also demonstrates psychological escape immediately before death: Farquhar experiences an intense delusion that distracts him from his inevitable end. The moment of horror the reader experiences at the close of the piece, on realizing that he dies, reflects the distortion of reality that Farquhar encounters.[6]
It is not only the narrator who experiences the story but also the readers themselves. As he himself once put it, Bierce detested "bad readers—readers who, lacking the habit of analysis, lack also the faculty of discrimination, and take whatever is put before them, with the broad, blind catholicity of a slop-fed conscience of a parlor pig".[7] Farquhar was duped by a Federal scout—and cursory readers on their part are successfully duped by the author who makes them think they are witnessing Farquhar's lucky escape from the gallows. Instead, they only witness the hallucination of such an escape taking place in the character's unconscious mind which is governed by the instinct of self-preservation.
Influence
The plot device of a long period of subjective time passing in an instant, such as the imagined experiences of Farquhar while falling, has been explored by several authors.[8] An early literary antecedent appears in the Tang dynasty tale, The Governor of Nanke, by Li Gongzuo. Another medieval antecedent is Don Juan Manuel's Tales of Count Lucanor, Chapter XII (c. 1335), "Of that which happened to a Dean of Santiago, with Don Illan, the Magician, who lived at Toledo", in which a life happens in an instant.[9][10] Charles Dickens's essay "A Visit to Newgate" wherein a man dreams he has escaped his death sentence has been speculated as a possible source for the story.[11] Bierce's story, in turn, may have influenced "The Snows of Kilimanjaro" by Ernest Hemingway and Pincher Martin by William Golding.[5]