The way I see it, our minds are the emergent result of all the little processes that make up our brains. Something greater than the sum of its parts (or at least something with an inflated ego: I think, therefore I'm special), whose role is, for the most part, conflict resolution between opposing impulses. Instinct isn't enough to handle a problem like "do you burn your fingers, or do you drop grandma's china and break it?", so the mind steps in, weighs the pros and cons, and makes a decision. It's a side effect of complex society and social interactions.
Although the exact mechanism is not completely understood, encoding occurs on several levels. The first step is the formation of short-term memory from ultra-short-term sensory memory; this is followed by conversion to long-term memory through a process of memory consolidation.
The process begins with the creation of a memory trace, or engram, in response to external stimuli. An engram is a hypothetical biophysical or biochemical change in the neurons of the brain, hypothetical in that no one has ever actually seen, or even proved the existence of, such a construct.
https://www.scientificamerican.com/arti ... universal/
Is Consciousness Universal?
Panpsychism, the ancient doctrine that consciousness is universal, offers some lessons in how to think about subjective experience today
By Christof Koch on January 1, 2014
For every inside there is an outside, and for every outside there is an inside; though they are different, they go together.
—Alan Watts, Man, Nature, and the Nature of Man, 1991…
All of these species—bees, octopuses, ravens, crows, magpies, parrots, tuna, mice, whales, dogs, cats and monkeys—are capable of sophisticated, learned, nonstereotyped behaviors that would be associated with consciousness if a human were to carry out such actions. Precursors of behaviors thought to be unique to people are found in many species. For instance, bees are capable of recognizing specific faces from photographs, can communicate the location and quality of food sources to their sisters via the waggle dance, and can navigate complex mazes with the help of cues they store in short-term memory (for instance, “after arriving at a fork, take the exit marked by the color at the entrance”). Bees can fly several kilometers and return to their hive, a remarkable navigational performance. And a scent blown into the hive can trigger a return to the site where the bees previously encountered this odor. This type of associative memory was famously described by Marcel Proust in À la Recherche du Temps Perdu. Other animals can recognize themselves, know when their conspecifics observe them, and can lie and cheat.
Some people point to language and the associated benefits as being the unique defining feature of consciousness. Conveniently, this viewpoint rules out all but one species, Homo sapiens (which has an ineradicable desire to come out on top), as having sentience. Yet there is little reason to deny consciousness to animals, preverbal infants [see “The Conscious Infant,” Consciousness Redux; Scientific American Mind, September/October 2013] or patients with severe aphasia, all of whom are mute.
None other than Charles Darwin, in the last book he published, in the year preceding his death, set out to learn how far earthworms “acted consciously and how much mental power they displayed.” Studying their feeding and sexual behaviors for several decades—Darwin was after all a naturalist with uncanny powers of observation—he concluded that there was no absolute threshold between lower and higher animals, including humans, that assigned higher mental powers to one but not to the other.
The nervous systems of all these creatures are highly complex. Their constitutive proteins, genes, synapses, cells and neuronal circuits are as sophisticated, variegated and specialized as anything seen in the human brain. It is difficult to find anything exceptional about the human brain. Even its size is not so special, because elephants, dolphins and whales have bigger brains. Only an expert neuroanatomist, armed with a microscope, can tell a grain-size piece of cortex of a mouse from that of a monkey or a human. Biologists emphasize this structural and behavioral continuity by distinguishing between nonhuman and human animals. We are all nature's children.
To be conscious, then, you need to be a single, integrated entity with a large repertoire of highly differentiated states. Even if the hard disk on my laptop exceeds in capacity my lifetime memories, none of its information is integrated. The family photos on my Mac are not linked to one another. The computer does not know that the boy in those pictures is my son as he matures from a toddler to an awkward teenager and then a graceful adult. To my computer, all information is equally meaningless, just a vast, random tapestry of 0s and 1s. Yet I derive meaning from these images because my memories are heavily cross-linked. And the more interconnected, the more meaningful they become.
These ideas can be precisely expressed in the language of mathematics using notions from information theory such as entropy. Given a particular brain, with its neurons in a particular state—these neurons are firing while those ones are quiet—one can precisely compute the extent to which this network is integrated. From this calculation, the theory derives a single number, Φ (pronounced “fi”) [see “A Theory of Consciousness,” Consciousness Redux; Scientific American Mind, July/August 2009]. Measured in bits, Φ denotes the size of the conscious repertoire associated with the network of causally interacting parts being in one particular state. Think of Φ as the synergy of the system. The more integrated the system is, the more synergy it has and the more conscious it is. If individual brain regions are too isolated from one another or are interconnected at random, Φ will be low. If the organism has many neurons and is richly endowed with synaptic connections, Φ will be high. Basically, Φ captures the quantity of consciousness. The quality of any one experience—the way in which red feels different from blue and a color is perceived differently from a tone—is conveyed by the informational geometry associated with Φ. The theory assigns to any one brain state a shape, a crystal, in a fantastically high-dimensional qualia space. This crystal is the system viewed from within. It is the voice in the head, the light inside the skull. It is everything you will ever know of the world. It is your only reality. It is the quiddity of experience. The dream of the lotus eater, the mindfulness of the meditating monk and the agony of the cancer patient all feel the way they do because of the shape of the distinct crystals in a space of a trillion dimensions—truly a beatific vision. The water of integrated information is turned into the wine of experience.
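Koch's description of Φ as an entropy-based quantity, high when a network's parts are informationally interdependent and low when they are isolated or random, can be illustrated with a toy calculation. The sketch below computes total correlation (the sum of marginal entropies minus the joint entropy) for a two-unit system. To be clear, this is only a crude stand-in for "integration," not Tononi's actual Φ, which involves searching over partitions of the system's cause-effect structure:

```python
import math
from itertools import product

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def total_correlation(joint):
    """Total correlation (multi-information) of two binary units A and B.

    joint[(a, b)] = P(A=a, B=b). The result is the sum of the marginal
    entropies minus the joint entropy: zero when the units are independent,
    larger the more their states constrain each other.
    """
    p_a = [sum(joint[(a, b)] for b in (0, 1)) for a in (0, 1)]
    p_b = [sum(joint[(a, b)] for a in (0, 1)) for b in (0, 1)]
    return entropy(p_a) + entropy(p_b) - entropy(joint.values())

# Two independent fair coins: the parts share no information.
independent = {(a, b): 0.25 for a, b in product((0, 1), repeat=2)}

# Two perfectly correlated units: knowing one fully determines the other.
correlated = {(0, 0): 0.5, (1, 1): 0.5, (0, 1): 0.0, (1, 0): 0.0}

print(total_correlation(independent))  # 0.0 bits
print(total_correlation(correlated))   # 1.0 bit
```

The independent system scores 0 bits and the correlated one scores 1 bit, mirroring Koch's point that randomly or loosely connected parts yield low integration while tightly interdependent parts yield high integration.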
Humanoid Robot Sophia Predicts Future for Elon Musk’s Brain Implant Neuralink
The RAW News
Published on Nov 15, 201
SpaceX and Tesla CEO Elon Musk is backing a brain-computer interface venture called Neuralink. The company, which is still in the earliest stages of existence and has no public presence whatsoever, is centered on creating devices that can be implanted in the human brain, with the eventual purpose of helping human beings merge with software and keep pace with advancements in artificial intelligence. These enhancements could improve memory or allow for more direct interfacing with computing devices.
These types of brain-computer interfaces exist today only in science fiction. In the medical realm, electrode arrays and other implants have been used to help ameliorate the effects of Parkinson’s, epilepsy, and other neurodegenerative diseases. However, very few people on the planet have complex implants placed inside their skulls, while the number of patients with very basic stimulating devices numbers only in the tens of thousands. This is partly because it is incredibly dangerous and invasive to operate on the human brain, and only those who have exhausted every other medical option choose to undergo such surgery as a last resort.
Director of the Birkbeck Institute for the Humanities, Slavoj Zizek: This technology, specifically what Elon Musk is proposing to do, directly linking our brains with (a) computer, with digitized space, is a very ambiguous and potentially dangerous operation. The problems are not just economic and political but also psychological - when our brains are directly wired to a computer, it’s not only that we humans become almost like gods - all-powerful. I think about something, the computer reads this, moves an object; I become like god: My thoughts can change reality.
But, it goes also the other way around: My thoughts themselves could be controlled. So there is a big psychological, economic and political question: Who will control this? Because are we aware that this will become reality - this direct link between our brain and digitized space… then, in a way, we will no longer be humans; because to be human means to have this minimal sense of separation between me - in my mind, and reality out there. Who knows what happens when this distance falls.
Elon Musk (in a clip from a Dubai conference): “To some degree we are already a cyborg… You think of the digital tools that you have - your phone and computer, the applications that you have; like the fact mentioned earlier, you can ask a question and instantly get an answer from Google, you know, and from other things… ”
Slavoj Zizek again: … At least in our experience, some kind of minimal gap remains, I still at least, maybe it’s an illusion, (but) I still perceive myself as if I am in my thoughts and there is a reality out there. I’m not directly immersed into external reality.
Are we even aware of what happens once this immersion becomes simply a fact? Because it’s not just me interacting with a screen. Digitized space will not be out there. It will be literally in our very heart, controlling, directing what we are doing, and so on and so on; and again, there is the big question of power: Who will be controlling this digital space? It’s a mega political question! I don’t believe in those dreamers, like Ray Kurzweil, who think we will become part of some collective brain, singularity and so on and so on… No! The only question for me is, and we don’t have a good answer: How will this affect our self-experience?
Will we still experience ourselves as free beings? Or will we be regulated by digital machinery - now comes the crucial point, without ever being aware that we are regulated?
They’re already doing a certain type of experiment… (accessing electrical signals through the hippocampus, recording them, and then shooting the message back into the hippocampus to remember the task). They can control your neurons so that you make a certain move, even move around in space, and they can direct you like a remote-control car, a toy.
“Now comes the crucial question: They then asked the subjects of this experiment… : How did you experience your life, your consciousness, when you were de facto directed, when I direct your life like a remote-control car? All subjects of the experiment answered, I thought I’m free. I didn’t even know that I was controlled.”
I’m not… (being) a pessimist here. It’s just that we should be aware, first, that something radically new is emerging which will affect our most basic experience of who we are as human beings. Literally, human nature is changing because, again, within our daily interactions we will have to learn to feel at the same time omnipotent and impotent; impotent because (we’re) totally exposed to the digital media, and omnipotent because we will be able to directly affect reality…
Philosopher Gray Scott: … The brain is truly the final frontier, and that is where we’re going now. We are looking at technology as a portal inward now, instead of outward. You know, it used to be that technology took us away from where we were, and now it’s actually going inward. We’re moving toward the unconscious mind, and this is just the first step in that evolution.
https://experiencelife.com/article/inte ... QgzfHA3X4E
Meet Your New Organ — the Interstitium
BY MICHAEL DREGNI | AUGUST 21, 2018
The interstitium might be the reason some complementary-medicine techniques, such as acupuncture and myofascial-release therapy, work, says one researcher.
Scientists have discovered what they believe is a new organ in our bodies — one that’s important to cell communication, and perhaps the spread of diseases like cancer. And it’s so big that it was easy to miss … until now.
The interstitium is a body-wide web of fluid-filled spaces residing beneath the top layer of skin and surrounding muscles, blood vessels, fascia, the gut, and other organs, the researchers state in the journal Scientific Reports.
The compartments likely serve as dynamic “shock absorbers” to protect other body components.
The organ also provides a “highway of moving fluid,” the scientists explain. By moving fluid through the network via peristalsis, it could play a key role in cell signaling and inflammation. The network is also the source of lymph, the fluid key to the immune system.
This latticework may play a role in the workings of techniques like acupuncture and myofascial release therapy. It may also be the conduit for injurious agents in the body, such as spreading tumor cells and allowing cancer to metastasize — a process that’s remained a mystery, says the study’s co-lead author Neil Theise, MD, a pathologist and professor at the New York University School of Medicine.
The interstitium was hiding in plain sight because the usual way of preparing microscope slides involved draining fluid, which had caused the formerly filled spaces to collapse. Images of the interstitium were captured by electron-microscopy probes.
We recently caught up with Theise to chat about the discovery. Here’s what he had to say:
Experience Life | What is the interstitium and what does it do?
Neil Theise | The interstitium has been defined historically as the “third space,” after the cardiovascular system and the lymphatics. It has generally been described as merely “the space between cells.” The concept that there is a larger interstitial space has occasionally been referred to, but its anatomic and histologic features have never been described with precision.
It is a space where “extracellular fluid” gathers, i.e., the fluid of the body that is not contained within cells. Some such spaces are obvious: the cardiovascular system containing the fluid of blood, the lymphatics themselves, the space within the skull and spine containing cerebrospinal fluid. These other spaces, however, are estimated to contain only about one quarter of the extracellular fluid. The majority — approximately 20 percent of the fluid volume of the body, comprising approximately 10 liters — is contained within the interstitium.
This interstitial fluid is conceived as being the “pre-lymph” that eventually becomes the fluid in the lymphatic system, and so the space is in direct continuity with the lymphatics and the lymph nodes.
Little else has been known of it until now.
EL | The interstitium was basically hiding in plain sight. How was it discovered?
NT | Doctors Petros Benias and David Carr-Locke, with whom I had a close working relationship, showed me pictures of the wall of the bile duct that they had obtained using a new kind of endoscope. Endoscopes are the snakelike tools that clinicians can use to reach into the body, examining internal organs such as the upper and lower digestive tracts, and to take samples of tissue as biopsies for diagnosis. As a pathologist, I had examined many of the specimens obtained by my two colleagues.
This new scope, however, had a new capacity: after injecting a little fluorescent dye into the vein of the person undergoing endoscopy, the scope could examine living tissue at the microscopic level, similar to what I do with the biopsies at the microscope.
This scope had a fixed focal length, meaning it could only look at one depth: about one-tenth of a millimeter. In most places they and other clinicians using the scope had looked — the esophagus, stomach, small and large intestines — nothing unexpected was revealed. But in the bile duct, a pattern of spaces was revealed that did not match any known anatomy of the bile duct.
So they came to me, as an expert in microscopic examination of tissues specializing in the liver and bile ducts, to see if I could explain what they saw. To my dismay, I could not.
We finally devised an approach observing the bile duct in patients who were about to have cancer operations in which some of their normal bile duct would be removed. Rather than process the sampled bile duct tissue as usual, with dehydration and chemical (formaldehyde) fixation to make slides, we quickly froze the tissue, keeping the resected piece as close to the normal living tissue as possible.
Then we saw the unexpected. The middle layer of the bile duct — long thought to be densely compacted connective tissue, a wall of dense collagen — was actually an open, fluid-filled space supported by a lattice made of thick collagen bundles.
After recognizing this surprise in the bile duct, I quickly realized that every dense connective tissue layer of the body — the linings of all the visceral organs, the dermis (second layer of the skin), all the fascia between and around muscles, all the connective tissue around every blood vessel (arteries and veins of every size) — was like this: open, fluid-filled spaces supported by a collagen-bundle lattice.
EL | Why wasn’t the interstitium identified before now?
NT | Standard processing of tissue for making slides usually involves dehydration. Just taking a bit of tissue from this space allows the fluid in the space to drain and the supporting collagen bundles to collapse like the floors of a collapsing building.
We would often see little “cracks” between collagen bundles in these layers. I was taught, and in turn taught many of my trainees, that these cracks were artifacts of processing: We had pulled the tissue too hard in preparing the slide and separations had formed. But these were not artifacts: These were the remnants of the collapsed spaces. They had been there all the time. But it was only when we could look at living tissue that we could see the real, not the artifact.
EL | Where was lymph thought to have come from previously?
NT | From between cells and the “third space,” whatever that was.
EL | What impact could a better understanding of the interstitium have on medicine?
NT | One can’t understand the mechanical properties of any tissue without understanding the lubricating and shock-absorber potential of the interstitium. These line or surround parts of the body that move: skin and muscles as you move your body, peristalsis as food moves from top to bottom through your GI tract, the expansion and contraction of your lungs with breathing, the squeezing of the bladder during urination, the pulsing of arteries and veins. We’ve never asked, “How do dense connective tissue layers survive such continual stress without tearing or rupturing?” Now we know: They are not dense connective tissue, they are distendable and compressible fluid-filled spaces.
We have known for decades that invasion of cancer into these layers is the moment cancer is at risk for spreading outside the organ, particularly to lymph nodes. Why would invasion into a dense wall of collagen potentiate that? Because that isn’t the anatomy. The space is a fluid-filled highway, often under pressure, that flows directly into the lymphatics and, thus, to the lymph nodes. Tumor metastasis is dependent on this space and its qualities.
Macrophages, the cleanup crew of white blood cells, traffic in this space. When one gets a tattoo, this is the layer in which the pigment deposits and is consumed by these cells. When some of the cells move from here they always wind up in the lymph nodes, like the tumor cells. But unlike the tumor cells, they are performing a normal immune function. Inflammatory cells of all kinds are likely to travel through this space during injury or disease; in direct connection to the lymphatics, they probably play an important role in inflammation.
There is a novel cell type in the organ as well: cells that mix features of the fibroblasts that make collagen (and scar) and endothelial cells that line vessels. But this hybrid combination seems unique to the interstitium. There are several lines of investigation that suggest they may be a long-sought but not-yet-identified source of scar in diseases where fibrosis plays a dominant role (e.g., idiopathic pulmonary fibrosis, scleroderma). These same cells also share features of the mesenchymal stem cell, an adult stem cell that can be isolated from nearly all tissues, but whose location in most tissues has remained a mystery.
EL | What qualities make the interstitium — or any organ, for that matter — an organ?
NT | The definition of “organ” is imprecise, but it usually implies that there is a unity and uniqueness of structure or function. This space has both: unique properties and structures not seen elsewhere and functions that are highly specific and dependent on the unique structures and cell types that form it.
Some people have pushed back, questioning how we can call it “new” if the interstitium has been discussed for more than a century. The reason is that the anatomy, cellular and matrix components, and bodily distribution of the macroscopic interstitium we are describing now have never been described in this detail. Dense connective-tissue layers of the body, re-visioned by this work, are not just “connective tissue” but a macroscopic organ. Detailed discussions of “interstitium” in most of the research literature focus on the microscopic spaces between cells and have not consistently investigated this newly recognized structure, either at the larger scale or in the full distribution throughout the body.
EL | Another “new” organ, the mesentery (a fold of membrane that attaches the stomach, small intestine, spleen, pancreas, and other organs to the posterior wall of the abdomen), was designated just last year. Is there a reason for this recent boom in new organs?
NT | New techniques for examining tissues always lead to new concepts not thought of before. For the mesentery, the tissue was recognized, but it was thought of as “just” fat, inert and uninteresting. New techniques for studying physiology revealed a highly organized functional organ.
In the case of the interstitium, this new ability to look microscopically at living tissue made all the difference.
EL | Can you speculate on other functions that the interstitium might affect?
NT | There are many complementary-medicine techniques that have been proven to have therapeutic efficacy but, in the absence of mechanistic explanations of the sort prized in Western medicine, remain poorly understood or even scoffed at altogether. Acupuncture, pulse diagnosis in Tibetan and Chinese medicine practices, and myofascial-release therapy, for example, are all techniques that may find some mechanistic explanations in the interstitium's structure and properties.
For example, some data suggests that sound waves through tissue are related to the placement of acupoint needles, but the nature of how such sound waves are propagated has been lacking. But the tips of those needles reach into the dermal interstitium. Could the arrangement of the collagen bundles dampen sound waves off the meridian and promote propagation along a meridian channel not previously viewed?
Likewise, the collagen bundles themselves are interesting. Collagen arrays are known not only to conduct electricity but to create electrical current when bent. As already mentioned, interstitial spaces are often, if not always, in movement: Does this generate electrical activity and communication through the network of the collagen lattice? What effects on that conduction occur with pressure or introduction of a moving or charged needle into the space?
These are both reasonable (and perhaps related) physiologies that may help to build a mechanistic understanding of acupuncture.
More questions arise than are answered, but that’s true of all the best, most exciting science!
EL | How does it feel to discover a new organ? A bit like discovering a new planet, perhaps?
NT | This is not my first time making a paradigm-shifting discovery. Eighteen years ago, I was one of the pioneers of adult-stem-cell plasticity that led to President George W. Bush’s 2001 stem-cell address to the nation. It’s quite a humbling experience, actually — not jump-up-and-down exciting in the moment, more like quiet awe.
Most discoveries I’ve made, I can see the implications of in two or three steps. The kind of work I do means a new diagnostic approach, a new therapeutic question raised — but you can see where it’s headed directly and it’s kind of limited, however valuable. These bigger kinds of discoveries are like you’re playing with dominos and you push the first and it hits the second, then the third, then the fourth, and then you look up and realize there are paths of dominos extending in all directions and out past the horizon and you can hear them falling and you realize it’s just so far beyond anything you can imagine in terms of impact. One can’t begin to imagine where it will lead.
http://rigint.blogspot.com/2006/12/do-y ... i-see.html
“Do You See What I See?”
If a space ship touched down
In my yard I would run
Right towards it, yelling
"Greetings, let's go have some fun!" - Arthur’s Songbook
Sixty-four years ago this month, six million Americans became unwitting subjects in an experiment in psychological warfare.
It was the night before Halloween, 1938. At 8 p.m. CST, the Mercury Theatre on the Air began broadcasting Orson Welles' radio adaptation of H. G. Wells' War of the Worlds. As is now well known, the story was presented as if it were breaking news, with bulletins so realistic that an estimated one million people believed the world was actually under attack by Martians. Of that number, thousands succumbed to outright panic, not waiting to hear Welles' explanation at the end of the program that it had all been a Halloween prank, but fleeing into the night to escape the alien invaders.
Later, psychologist Hadley Cantril conducted a study of the effects of the broadcast and published his findings in a book, The Invasion from Mars: A Study in the Psychology of Panic. This study explored the power of broadcast media, particularly as it relates to the suggestibility of human beings under the influence of fear. Cantril was affiliated with Princeton University's Radio Research Project, which was funded in 1937 by the Rockefeller Foundation. Also affiliated with the Project was Council on Foreign Relations (CFR) member and Columbia Broadcasting System (CBS) executive Frank Stanton, whose network had broadcast the program. Stanton would later go on to head the news division of CBS, and in time would become president of the network, as well as chairman of the board of the RAND Corporation, the influential think tank which has done groundbreaking research on, among other things, mass brainwashing.
Two years later, with Rockefeller Foundation money, Cantril established the Office of Public Opinion Research (OPOR), also at Princeton. Among the studies conducted by the OPOR was an analysis of the effectiveness of "psycho-political operations" (propaganda, in plain English) of the Office of Strategic Services (OSS), the forerunner of the Central Intelligence Agency (CIA). Then, during World War II, Cantril--and Rockefeller money--assisted CFR member and CBS reporter Edward R. Murrow in setting up the Princeton Listening Center, the purpose of which was to study Nazi radio propaganda with the object of applying Nazi techniques to OSS propaganda. Out of this project came a new government agency, the Foreign Broadcast Intelligence Service (FBIS). The FBIS eventually became the United States Information Agency (USIA), which is the propaganda arm of the National Security Council.
Thus, by the end of the 1940s, the basic research had been done and the propaganda apparatus of the national security state had been set up--just in time for the Dawn of Television ...
Experiments conducted by researcher Herbert Krugman reveal that, when a person watches television, brain activity switches from the left to the right hemisphere. The left hemisphere is the seat of logical thought. Here, information is broken down into its component parts and critically analyzed. The right brain, however, treats incoming data uncritically, processing information in wholes, leading to emotional, rather than logical, responses. The shift from left to right brain activity also causes the release of endorphins, the body's own natural opiates--thus, it is possible to become physically addicted to watching television, a hypothesis borne out by numerous studies which have shown that very few people are able to kick the television habit.
This numbing of the brain's cognitive function is compounded by another shift which occurs in the brain when we watch television. Activity in the higher brain regions (such as the neo-cortex) is diminished, while activity in the lower brain regions (such as the limbic system) increases. The latter, commonly referred to as the reptile brain, is associated with more primitive mental functions, such as the "fight or flight" response. The reptile brain is unable to distinguish between reality and the simulated reality of television. To the reptile brain, if it looks real, it is real. Thus, though we know on a conscious level it is "only a film," on an unconscious level we do not--the heart beats faster, for instance, while we watch a suspenseful scene. Similarly, we know the commercial is trying to manipulate us, but on an unconscious level the commercial nonetheless succeeds in, say, making us feel inadequate until we buy whatever thing is being advertised--and the effect is all the more powerful because it is unconscious, operating on the deepest level of human response. The reptile brain makes it possible for us to survive as biological beings, but it also leaves us vulnerable to the manipulations of television programmers.
It is not just commercials that manipulate us. On television news as well, image and sound are as carefully selected and edited to influence human thought and behavior as in any commercial. The news anchors and reporters themselves are chosen for their physical attractiveness--a factor which, as numerous psychological studies have shown, contributes to our perception of a person's trustworthiness. Under these conditions, then, the viewer easily forgets--if, indeed, the viewer ever knew in the first place--that the worldview presented on the evening news is a contrivance of the network owners--owners such as General Electric (NBC) and Westinghouse (CBS), both major defense contractors. By molding our perception of the world, they mold our opinions. This distortion of reality is determined as much by what is left out of the evening news as what is included--as a glance at Project Censored's yearly list of top 25 censored news stories will reveal. If it's not on television, it never happened. Out of sight, out of mind.
Under the guise of journalistic objectivity, news programs subtly play on our emotions--chiefly fear. Network news divisions, for instance, frequently congratulate themselves on the great service they provide humanity by bringing such spectacles as the September 11 terror attacks into our living rooms. We have heard this falsehood so often, we have come to accept it as self-evident truth. However, the motivation for live coverage of traumatic news events is not altruistic, but rather to be found in the central focus of Cantril's War of the Worlds research--the manipulation of the public through fear.
There is another way in which we are manipulated by television news. Human beings are prone to model the behaviors they see around them, and avoid those which might invite ridicule or censure, and in the hypnotic state induced by television, this effect is particularly pronounced. For instance, a lift of the eyebrow from Peter Jennings tells us precisely what he is thinking--and by extension what we should think. In this way, opinions not sanctioned by the corporate media can be made to seem disreputable, while sanctioned opinions are made to seem the very essence of civilized thought. And should your thinking stray into unsanctioned territory despite the trusted anchor's example, a poll can be produced which shows that most persons do not think that way--and you don't want to be different do you? Thus, the mental wanderer is brought back into the fold.
This process is also at work in programs ostensibly produced for entertainment. The "logic" works like this: Archie Bunker is an idiot, Archie Bunker is against gun control, therefore idiots are against gun control. Never mind the complexities of the issue. Never mind the fact that the true purpose of the Second Amendment is not to protect the rights of deer hunters, but to protect the citizenry against a tyrannical government (an argument you will never hear voiced on any television program). Monkey see, monkey do--or, in this case, monkey not do.
Notice, too, the way in which television programs depict conspiracy researchers or anti-New World Order activists. On situation comedies, they are buffoons. On dramatic programs, they are dangerous fanatics. This imprints on the mind of the viewer the attitude that questioning the official line or holding "anti-government" opinions is crazy, therefore not to be emulated.
Another way in which entertainment programs mold opinion can be found in the occasional television movie, which "sensitively" deals with some "social" issue. A bad behavior is spotlighted--"hate" crimes, for instance--in such a way that it appears to be a far more rampant problem than it may actually be, so terrible in fact that the "only" cure for it is more laws and government "protection." Never mind that laws may already exist to cover these crimes--the law against murder, for instance. Once we have seen the well-publicized murder of the young gay man Matthew Shepard dramatized in not one, but two, television movies in all its heartrending horror, nothing will do but we pass a law making the very thought behind the crime illegal.
People will also model behaviors from popular entertainment which are not only dangerous to their health and could land them in jail, but also contribute to social chaos. While this may seem to be simply a matter of the producers giving the audience what it wants, or the artist holding a mirror up to society, it is in fact intended to influence behavior.
Consider the way many films glorify drug abuse. When a popular star playing a sympathetic character in a mainstream R-rated film uses hard drugs with no apparent health or legal consequences (John Travolta's use of heroin in Pulp Fiction, for instance--an R-rated film produced for theatrical release, which now has found a permanent home on television, via cable and video players), a certain percentage of people--particularly the impressionable young--will perceive hard drug use as the epitome of anti-Establishment cool and will model that behavior, contributing to an increase in drug abuse. And who benefits?
As has been well documented by Gary Webb in his award-winning series for the San Jose Mercury News, former Los Angeles narcotics detective Michael Ruppert, and many other researchers and whistleblowers--the CIA is the main purveyor of hard drugs in this country. The CIA also has its hand in the "prison-industrial complex." Wackenhut Corporation, the largest owner of private prisons, has on its board of directors many former CIA employees, and is very likely a CIA front. Thus, films which glorify drug abuse may be seen as recruitment ads for the slave labor-based private prison system. Also, the social chaos and inflated crime rate which result from the contrived drug problem contribute to the demand from a frightened society for more prisons, more laws, and the further erosion of civil liberties. This effect is further heightened by television news segments and documentaries which focus on drug abuse and other crimes, thus giving the public the misperception that crime is even higher than it really is.
There is another socially debilitating process at work in what passes for entertainment on television these days. Over the years, there has been a steady increase in adult subject matter on programs presented during family viewing hours. For instance, it is common for today's prime-time situation comedies to make jokes about such matters as masturbation (Seinfeld once devoted an entire episode to the topic), or for daytime talk shows such as Jerry Springer's to showcase such topics as bestiality. Even worse are the "reality" programs currently in vogue. Each new offering in this genre seems to hit a new low. MTV, for instance, recently subjected a couple to a Candid Camera-style prank in which, after winning a trip to Las Vegas, they entered their hotel room to find an actor made up as a mutilated corpse in the bathtub. Naturally, they were traumatized by the experience and sued the network. Or, consider a new show on British television in which contestants compete to see who can infect each other with the most diseases--venereal diseases included.
It would appear, at the very least, that these programs serve as a shill operation to strengthen the argument for censorship. There may also be an even darker motive. These programs contribute to the general coarsening of society we see all around us--the decline in manners and common human decency and the acceptance of cruelty for its own sake as a legitimate form of entertainment. Ultimately, this has the effect of debasing human beings into savages, brutes--the better to herd them into global slavery.
For the first decade or so after the Dawn of Television, there were only a handful of channels in each market--one for each of the three major networks and maybe one or two independents. Later, with the advent of cable and more channels, the population pie began to be sliced into finer pieces--or "niche markets." This development has often been described as representing a growing diversity of choices, but in reality it is a fine-tuning of the process of mass manipulation, a homing in on particular segments of the population, not only to sell them specifically-targeted consumer products but to influence their thinking in ways advantageous to the globalist agenda.
One of these "target audiences" is that portion of the population which, after years of blatant government cover-up in areas such as UFOs and the assassination of John F. Kennedy, maintains a cynicism toward the official line, despite the best efforts of television programmers to depict conspiracy research in a negative light. How to reach this vast, disenfranchised target audience and co-opt their thinking? One way is to put documentaries before them which mix fact with disinformation, thereby confusing them. Another is to take the X Files approach.
The heroes of X Files are investigators in a fictitious paranormal department of the FBI whose adventures sometimes take them into parapolitical territory. On the surface this sounds good. However, whatever good X Files might accomplish by touching on such matters as MK-ULTRA or the JFK assassination is cancelled out by associating them with bug-eyed aliens and ghosts. Also, on X Files, the truth is always depicted as "out there" somewhere--in the stars, or some other dimension, never in brainwashing centers such as the RAND Corporation or its London counterpart, the Tavistock Institute. This has the effect of obscuring the truth, making it seem impossibly out-of-reach, and associating reasonable lines of political inquiry with the fantastic and otherworldly.
Not that there is no connection between the parapolitical and the paranormal. There is undoubtedly a cover-up at work with regard to UFOs, but if we accept uncritically the notion that UFOs are anything other than terrestrial in origin, we are falling headfirst into a carefully-set trap. To its credit, X Files has dealt with the idea that extraterrestrials might be a clever hoax by the government, but never decisively. The labyrinthine plots of the show somehow manage to leave the viewer wondering if perhaps the hoax idea is itself a hoax put out there to cover up the existence of extraterrestrials. This is hardly helpful to a true understanding of UFOs and associated phenomena, such as alien abductions and cattle mutilations.
Extraterrestrials have been a staple of popular entertainment since The War of the Worlds (both the novel and its radio adaptation). They have been depicted as invaders and benefactors, but rarely have they been unequivocally depicted as a hoax. There was an episode of Outer Limits which depicted a group of scientists staging a mock alien invasion to frighten the world's population into uniting as one--but, again, such examples are rare. Even in UFO documentaries on the Discovery Channel, the possibility of a terrestrial origin for the phenomenon is conspicuous by its lack of mention.
UFO researcher Jacques Vallee, the real-life model for the French scientist in Steven Spielberg's Close Encounters of the Third Kind, attempted to interest Spielberg in a terrestrial explanation for the phenomenon. In an interview on Conspire.com, Vallee said, "I argued with him that the subject was even more interesting if it wasn't extraterrestrials. If it was real, physical, but not ET. So he said, 'You're probably right, but that's not what the public is expecting--this is Hollywood and I want to give people something that's close to what they expect.'"
How convenient that what Spielberg says the people expect is also what the Pentagon wants them to believe.
In Messengers of Deception, Vallee tracks the history of a wartime British Intelligence unit devoted to psychological operations. Code-named (interestingly) the "Martians," it specialized in manufacturing and distributing false intelligence to confuse the enemy. Among its activities were the creation of phantom armies with inflatable tanks, simulations of the sounds of military ships maneuvering in the fog, and forged letters to lovers from phantom soldiers attached to phantom regiments.
Vallee suggests that deception operations of this kind may have extended beyond World War II, and that much of the "evidence" for "flying saucers" is no more real than the inflatable tanks of World War II. He writes: "The close association of many UFO sightings with advanced military hardware (test sites like the New Mexico proving grounds, missile silos of the northern plains, naval construction sites like the major nuclear facility at Pascagoula) and the bizarre love affairs ... between contactee groups, occult sects, and extremist political factions, are utterly clear signals that we must exercise extreme caution."
Many people find it fantastic that the government would perpetrate such a hoax, while at the same time having no difficulty entertaining the notion that extraterrestrials are regularly traveling light years to this planet to kidnap people out of their beds and subject them to anal probes.
The military routinely puts out disinformation to obscure its activities, and this has certainly been the case with UFOs. Consider Paul Bennewitz, the UFO enthusiast who began studying strange lights that would appear nightly over the Manzano Test Range outside Albuquerque. When the Air Force learned about his study, ufologist William Moore (by his own admission) was recruited to feed him forged military documents describing a threat from extraterrestrials. The effect was to confuse Bennewitz--even making him paranoid enough to be hospitalized--and discredit his research. Evidently, those strange lights belonged to the Air Force, which does not like outsiders inquiring into its affairs.
What the Air Force did to Bennewitz, it also does on a mass scale--and popular entertainment has been complicit in this process. Whether or not the filmmakers themselves are consciously aware of this agenda does not matter. The notion that extraterrestrials might visit this planet is so much a part of popular culture and modern mythology that it hardly needs assistance from the military to propagate itself.
It has the effect not only of obscuring what is really going on at research facilities such as Area 51, but of tainting UFO research in general as "kooky"--and does the job so thoroughly that one need only say "UFO" in the same breath with "JFK" to discredit research in that area as well. It also may, in the end, serve the same purpose as depicted in that Outer Limits episode--to unite the world's population against a perceived common threat, thus offering the pretext for one-world government.
The following quotes demonstrate that the idea has at least occurred to world leaders:
"In our obsession with antagonisms of the moment, we often forget how much unites all the members of humanity. Perhaps we need some outside, universal threat to make us realize this common bond. I occasionally think how quickly our differences would vanish if we were facing an alien threat from outside this world." (President Ronald Reagan, speaking in 1987 to the United Nations.
"The nations of the world will have to unite, for the next war will be an interplanetary war. The nations of the earth must someday make a common front against attack by people from other planets." General Douglas MacArthur, 1955)
"Someone remarked that the best way to unite all the nations on this globe would be an attack from some other planet. In the face of such an alien enemy, people would respond with a sense of their unity of interest and purpose." (John Dewey, Professor of Philosophy at Columbia University, speaking at a conference sponsored by the Carnegie Endowment for International Peace, 1917)
And where was this "alien threat" motif given birth? Again, we find the answer in popular entertainment, and again the earliest source is The War of the Worlds--both Wells' and Welles' versions.
Perhaps it is no coincidence that H. G. Wells was a founding member of the Round Table, the think tank that gave birth to the Royal Institute for International Affairs (RIIA) and its American cousin, the CFR. Perhaps Wells intentionally introduced the motif as a meme which might prove useful later in establishing the "world social democracy" he described in his 1939 book The New World Order. Perhaps, too, another purpose of the Orson Welles broadcast was to test the public's willingness to believe in extraterrestrials.
At any rate, the motif proved popular, paving the way for countless movies and television programs to come, and it has often proven a handy device for promoting the New World Order, whether the extraterrestrials are invaders or--in films like The Day the Earth Stood Still--benefactors who have come to Earth to warn us to mend our ways and unite as one, or be blown to bits.
We see the globalist agenda at work in Star Trek and its spin-offs as well. Over the years, many a television viewer's mind has been imprinted with the idea that centralized government is the solution for our problems. Never mind the complexities of the issue--never mind the fact that, in the real world, centralization of power leads to tyranny. The reptile brain, hypnotized by the flickering television screen, has seen Captain Kirk and his culturally diverse crew demonstrate time and again that the United Federation of Planets is a good thing. Therefore, it must be so.
It remains to be seen whether the Masters of Deception will, like those scientists in The Outer Limits, stage an invasion from space with anti-gravity machines and holograms, but, if they do, it will surely be broadcast on television, so that anyone out of range of that light show in the sky will be able to see it, and all with eyes to see will believe. It will be War of the Worlds on a grand scale.
Jack Kerouac once noted, while walking down a residential street at night, glancing into living rooms lit by the gray glare of television sets, that we have become a world of people "thinking the same thoughts at the same time."
Every day, millions upon millions of human beings sit down at the same time to watch the same football game, the same mini-series, the same newscast. And where might all this shared experience and uniformity of thought be taking us?
A recent report co-sponsored by the U.S. National Science Foundation and the Commerce Department calls for a broad-based research program to find ways to use nanotechnology, biotechnology, information technology, and cognitive sciences, to achieve telepathy, machine-to-human communication, amplified sensory experience, enhanced intellectual capacity, and mass participation in a "hive mind." Quoting the report: "With knowledge no longer encapsulated in individuals, the distinction between individuals and the entirety of humanity would blur. Think Vulcan mind-meld. We would perhaps become more of a hive mind--an enormous, single, intelligent entity."
There is no doubt that we have been brought closer to the "hive mind" by the mass media. For, what is the shared experience of television but a type of "Vulcan mind-meld"? (Note the terminology borrowed from Star Trek, no doubt to make the concept more familiar and palatable. If Spock does it, it must be okay.)
This government report would have us believe that the hive mind will be for our good--a wonderful leap in evolution. It is nothing of the kind. For one thing, if the government is behind it, you may rest assured it is not for our good. For another, common sense should tell us that blurring the line "between individuals and the entirety of humanity" means mass conformity, the death of human individuality. Make no mistake about it--if humanity is to become a hive, there will be at the center of that hive a Queen Bee, whom all the lesser "insects" will serve. This is not evolution--this is devolution. Worse, it is the ultimate slavery--the slavery of the mind.
And it is a horror first unleashed in 1938 when one million people responded as one--as a hive--to Orson Welles' Halloween prank.
In a sense, those people who fled the Martians that night were right to be afraid. They were indeed under attack. But they were wrong about who was attacking them. It was something far worse than Martians. Had they only known the true nature of the danger facing them, perhaps they would have gone to the nearest radio station with torches in hand like the villagers in those old Frankenstein movies and burned it to the ground, or at least commandeered the new technology and turned it towards another use--the liberation of humanity, instead of its enslavement.
THE HIVE MIND
US report foretells of brave new world
By Nathan Cochrane
July 23 2002
A draft government report says we will alter human evolution within 20 years by combining what we know of nanotechnology, biotechnology, IT and cognitive sciences. The 405-page report sponsored by the US National Science Foundation and Commerce Department, Converging Technologies for Improving Human Performance, calls for a broad-based research program to improve human performance leading to telepathy, machine-to-human communication, amplified personal sensory devices and enhanced intellectual capacity.
People may download their consciousnesses into computers or other bodies even on the other side of the solar system, or participate in a giant "hive mind", a network of intelligences connected through ultra-fast communications networks. "With knowledge no longer encapsulated in individuals, the distinction between individuals and the entirety of humanity would blur," the report says. "Think Vulcan mind-meld. We would perhaps become more of a hive mind - an enormous, single, intelligent entity."
Armies may one day be fielded by machines that think for themselves while devices will respond to soldiers' commands before their thoughts are fully formed, it says. The report says the abilities are within our grasp but will require an intense public-relations effort to "prepare key organisations and societal activities for the changes made possible by converging technologies", and to counter concern over "ethical, legal and moral" issues. Education should be overhauled down to the primary-school level to bridge curriculum gaps between disparate subject areas.
Professional societies should be open to practitioners from other fields, it says. "The success of this convergent-technologies priority area is crucial to the future of humanity," the report says. wtec.org/ConvergingTechnologies/Report/NBIC-pre-publication.pdf
12/14/2006 09:03:00 PM
DrEvil » 01 Jan 2019 04:55 wrote: Not sure if this has been mentioned before, but I recently came across a fascinating theory called the "free energy principle" * (which has nothing to do with cold fusion or zero point energy btw), by Karl Friston.
The simple version is that biological systems try to minimize the difference between their model of the world and their perceptions of the world. In other words, minimizing surprise. This can be done either by updating and improving the internal model, or by actively changing the world to conform to the internal model.
* Also known as "active inference" in neuroscience
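The "simple version" above can be sketched as a toy simulation. This is my own illustration with invented numbers, not Friston's actual formalism: an agent can shrink its "surprise" (here just the squared gap between its model and its observations) either by updating the model (perception) or by changing the world to match the model (action).

```python
# Toy sketch of the free energy principle's two routes to minimizing
# surprise. All names and numbers are invented for illustration.

def surprise(model, observation):
    """Squared prediction error as a stand-in for 'surprise'."""
    return (model - observation) ** 2

def perceive(model, observation, lr=0.5):
    """Route 1: update the internal model toward what was observed."""
    return model + lr * (observation - model)

def act(world, model, strength=0.5):
    """Route 2: change the world toward what the model expects."""
    return world + strength * (model - world)

world, model = 10.0, 0.0
for step in range(20):
    obs = world  # noiseless observation, for simplicity
    if step % 2 == 0:
        model = perceive(model, obs)   # revise beliefs
    else:
        world = act(world, model)      # revise the world
print(round(surprise(model, world), 4))  # prints 0.0
```

Alternating the two strategies, the gap between model and world halves at every step, so the agent settles into its "sweet spot of minimum surprise" regardless of which route does the work.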
DrEvil » 01 Jan 2019 17:19 wrote: ^^You're welcome, and happy new year.
My favorite part is that it apparently makes AIs creative when implemented. They start experimenting to get to that sweet spot of minimum surprise. Not sure how I feel about the "changing your surroundings to fit your model" bit in that context though.
The Genius Neuroscientist Who Might Hold the Key to True AI
Author: Shaun Raviv 11.13.18 06:00 am
When King George III of England began to show signs of acute mania toward the end of his reign, rumors about the royal madness multiplied quickly in the public mind. One legend had it that George tried to shake hands with a tree, believing it to be the King of Prussia. Another described how he was whisked away to a house on Queen Square, in the Bloomsbury district of London, to receive treatment among his subjects. The tale goes on that George’s wife, Queen Charlotte, hired out the cellar of a local pub to stock provisions for the king’s meals while he stayed under his doctor’s care.
More than two centuries later, this story about Queen Square is still popular in London guidebooks. And whether or not it’s true, the neighborhood has evolved over the years as if to conform to it. A metal statue of Charlotte stands over the northern end of the square; the corner pub is called the Queen’s Larder; and the square’s quiet rectangular garden is now all but surrounded by people who work on brains and people whose brains need work. The National Hospital for Neurology and Neurosurgery—where a modern-day royal might well seek treatment—dominates one corner of Queen Square, and the world-renowned neuroscience research facilities of University College London round out its perimeter. During a week of perfect weather last July, dozens of neurological patients and their families passed silent time on wooden benches at the outer edges of the grass.
On a typical Monday, Karl Friston arrives on Queen Square at 12:25 pm and smokes a cigarette in the garden by the statue of Queen Charlotte. A slightly bent, solitary figure with thick gray hair, Friston is the scientific director of University College London’s storied Functional Imaging Laboratory, known to everyone who works there as the FIL. After finishing his cigarette, Friston walks to the western side of the square, enters a brick and limestone building, and heads to a seminar room on the fourth floor, where anywhere from two to two dozen people might be facing a blank white wall waiting for him. Friston likes to arrive five minutes late, so everyone else is already there.
His greeting to the group is liable to be his first substantial utterance of the day, as Friston prefers not to speak with other human beings before noon. (At home, he will have conversed with his wife and three sons via an agreed-upon series of smiles and grunts.) He also rarely meets people one-on-one. Instead, he prefers to hold open meetings like this one, where students, postdocs, and members of the public who desire Friston’s expertise—a category of person that has become almost comically broad in recent years—can seek his knowledge. “He believes that if one person has an idea or a question or project going on, the best way to learn about it is for the whole group to come together, hear the person, and then everybody gets a chance to ask questions and discuss. And so one person’s learning becomes everybody’s learning,” says David Benrimoh, a psychiatry resident at McGill University who studied under Friston for a year. “It’s very unique. As many things are with Karl.”
At the start of each Monday meeting, everyone goes around and states their questions. Friston walks in slow, deliberate circles as he listens, his glasses perched at the end of his nose, so that he is always lowering his head to see the person who is speaking. He then spends the next few hours answering the questions in turn. "A Victorian gentleman, with Victorian manners and tastes," as one friend describes Friston, he responds to even the most confused questions with courtesy and rapid reformulation. The Q&A sessions--which I started calling "Ask Karl" meetings--are remarkable feats of endurance, memory, breadth of knowledge, and creative thinking. They often end when it is time for Friston to retreat to the minuscule metal balcony hanging off his office for another smoke.
Friston first became a heroic figure in academia for devising many of the most important tools that have made human brains legible to science. In 1990 he invented statistical parametric mapping, a computational technique that helps—as one neuroscientist put it—“squash and squish” brain images into a consistent shape so that researchers can do apples-to-apples comparisons of activity within different crania. Out of statistical parametric mapping came a corollary called voxel-based morphometry, an imaging technique that was used in one famous study to show that the rear side of the hippocampus of London taxi drivers grew as they learned “the knowledge.”
A study published in Science in 2011 used yet a third brain-imaging-analysis software invented by Friston—dynamic causal modeling—to determine if people with severe brain damage were minimally conscious or simply vegetative.
When Friston was elected a Fellow of the Royal Society in 2006, the academy described his impact on studies of the brain as "revolutionary" and said that more than 90 percent of papers published in brain imaging used his methods. Two years ago, the Allen Institute for Artificial Intelligence, a research outfit led by AI pioneer Oren Etzioni, calculated that Friston is the world's most frequently cited neuroscientist. He has an h-index--a metric used to measure the impact of a researcher's publications--nearly twice the size of Albert Einstein's. Last year Clarivate Analytics, which over more than two decades has successfully predicted 46 Nobel Prize winners in the sciences, ranked Friston among the three most likely winners in the physiology or medicine category.
What’s remarkable, however, is that few of the researchers who make the pilgrimage to see Friston these days have come to talk about brain imaging at all. Over a 10-day period this summer, Friston advised an astrophysicist, several philosophers, a computer engineer working on a more personable competitor to the Amazon Echo, the head of artificial intelligence for one of the world’s largest insurance companies, a neuroscientist seeking to build better hearing aids, and a psychiatrist with a startup that applies machine learning to help treat depression. And most of them had come because they were desperate to understand something else entirely.
For the past decade or so, Friston has devoted much of his time and effort to developing an idea he calls the free energy principle. (Friston refers to his neuroimaging research as a day job, the way a jazz musician might refer to his shift at the local public library.) With this idea, Friston believes he has identified nothing less than the organizing principle of all life, and all intelligence as well. “If you are alive,” he sets out to answer, “what sorts of behaviors must you show?”
First the bad news: The free energy principle is maddeningly difficult to understand. So difficult, in fact, that entire rooms of very, very smart people have tried and failed to grasp it. A Twitter account with 3,000 followers exists simply to mock its opacity, and nearly every person I spoke with about it, including researchers whose work depends on it, told me they didn't fully comprehend it.
But often those same people hastened to add that the free energy principle, at its heart, tells a simple story and solves a basic puzzle. The second law of thermodynamics tells us that the universe tends toward entropy, toward dissolution; but living things fiercely resist it. We wake up every morning nearly the same person we were the day before, with clear separations between our cells and organs, and between us and the world without. How? Friston’s free energy principle says that all life, at every scale of organization—from single cells to the human brain, with its billions of neurons—is driven by the same universal imperative, which can be reduced to a mathematical function. To be alive, he says, is to act in ways that reduce the gulf between your expectations and your sensory inputs. Or, in Fristonian terms, it is to minimize free energy.
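The "mathematical function" mentioned here is, in the standard formulations, variational free energy, which upper-bounds surprise (the negative log probability of what you sense). A numerical sketch of that bound, using a made-up two-state world of my own invention rather than anything from the article:

```python
import math

# Hedged illustration: variational free energy F upper-bounds surprise
# (-log evidence). Hidden state s is "safe" or "danger"; the observation
# o is a rustle in the grass. All probabilities are invented.

prior = {"safe": 0.7, "danger": 0.3}        # p(s)
likelihood = {"safe": 0.9, "danger": 0.2}   # p(o | s)

# Surprise: -log p(o), marginalizing over hidden states
evidence = sum(prior[s] * likelihood[s] for s in prior)
surprise = -math.log(evidence)

def free_energy(q_safe):
    """F = E_q[log q(s) - log p(o, s)] for a belief q over states."""
    q = {"safe": q_safe, "danger": 1 - q_safe}
    return sum(q[s] * (math.log(q[s]) - math.log(prior[s] * likelihood[s]))
               for s in q if q[s] > 0)

# F >= surprise for every belief q; the gap closes only when q matches
# the true posterior p(s | o) (here about 0.913 for "safe").
for q_safe in (0.5, 0.8, 0.913):
    assert free_energy(q_safe) >= surprise - 1e-9
```

Minimizing F over beliefs is perception (inference); minimizing it by changing which observations arrive is action, which is the sense in which the principle claims to cover both.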
To get a sense of the potential implications of this theory, all you have to do is look at the array of people who darken the FIL’s doorstep on Monday mornings. Some are here because they want to use the free energy principle to unify theories of the mind, provide a new foundation for biology, and explain life as we know it. Others hope the free energy principle will finally ground psychiatry in a functional understanding of the brain. And still others come because they want to use Friston’s ideas to break through the roadblocks in artificial intelligence research. But they all have one reason in common for being here, which is that the only person who truly understands Karl Friston’s free energy principle may be Karl Friston himself.
Friston isn't just one of the most influential scholars in his field; he’s also among the most prolific in any discipline. He is 59 years old, works every night and weekend, and has published more than 1,000 academic papers since the turn of the millennium. In 2017 alone, he was a lead or coauthor of 85 publications--which amounts to approximately one every four days.
But if you ask him, this output isn’t just the fruit of an ambitious work ethic; it’s a mark of his tendency toward a kind of rigorous escapism.
Friston draws a carefully regulated boundary around his inner life, guarding against intrusions, many of which seem to consist of “worrying about other people.” He prefers being onstage, with other people at a comfortable distance, to being in private conversations. He does not have a mobile phone. He always wears navy-blue suits, which he buys two at a time at a closeout shop. He finds disruptions to his weekly routine on Queen Square “rather nerve-racking” and so tends to avoid other human beings at, say, international conferences. He does not enjoy advocating for his own ideas.
At the same time, Friston is exceptionally lucid and forthcoming about what drives him as a scholar. He finds it incredibly soothing—not unlike disappearing for a smoke—to lose himself in a difficult problem that takes weeks to resolve. And he has written eloquently about his own obsession, dating back to childhood, with finding ways to integrate, unify, and make simple the apparent noise of the world.
Friston traces his path to the free energy principle back to a hot summer day when he was 8 years old. He and his family were living in the walled English city of Chester, near Liverpool, and his mother had told him to go play in the garden. He turned over an old log and spotted several wood lice—small bugs with armadillo-shaped exoskeletons—moving about, he initially assumed, in a frantic search for shelter and darkness. After staring at them for half an hour, he deduced that they were not actually seeking the shade. “That was an illusion,” Friston says. “A fantasy that I brought to the table.”
He realized that the movement of the wood lice had no larger purpose, at least not in the sense that a human has a purpose when getting in a car to run an errand. The creatures’ movement was random; they simply moved faster in the warmth of the sun.
Friston calls this his first scientific insight, a moment when “all these contrived, anthropomorphized explanations of purpose and survival and the like all seemed to just peel away,” he says. “And the thing you were observing just was. In the sense that it could be no other way.”
Friston’s father was a civil engineer who worked on bridges all around England, and his family moved around with him. In just his first decade, the young Friston attended six different schools. His teachers often didn’t know what to do with him, and he drew most of his fragile self-esteem from solitary problem solving. At age 10 he designed a self-righting robot that could, in theory, traverse uneven ground while carrying a glass of water, using self-correcting feedback actuators and mercury levels. At school, a psychologist was brought in to ask him how he came up with it. “You’re very intelligent, Karl,” Friston’s mother reassured him, not for the last time. “Don’t let them tell you you’re not.” He says he didn’t believe her.
When Friston was in his mid-teens, he had another wood-lice moment. He had just come up to his bedroom from watching TV and noticed the cherry trees in bloom outside the window. He suddenly became possessed by a thought that has never let go of him since. “There must be a way of understanding everything by starting from nothing,” he thought. “If I’m only allowed to start off with one point in the entire universe, can I derive everything else I need from that?” He stayed there on his bed for hours, making his first attempt. “I failed completely, obviously,” he says.
Toward the end of secondary school, Friston and his classmates were the subjects of an early experiment in computer-assisted advising. They were asked a series of questions, and their answers were punched into cards and run through a machine to extrapolate the perfect career choice. Friston had described how he enjoyed electronics design and being alone in nature, so the computer suggested he become a television antenna installer. That didn’t seem right, so he visited a school career counselor and said he’d like to study the brain in the context of mathematics and physics. The counselor told Friston he should become a psychiatrist, which meant, to Friston’s horror, that he had to study medicine.
Both Friston and the counselor had confused psychiatry with psychology, which is what he probably ought to have pursued as a future researcher. But it turned out to be a fortunate error, as it put Friston on a path toward studying both the mind and body, and toward one of the most formative experiences of his life—one that got Friston out of his own head.
After completing his medical studies, Friston moved to Oxford and spent two years as a resident trainee at a Victorian-era hospital called Littlemore. Founded under the 1845 Lunacy Act, Littlemore had originally been instituted to help transfer all “pauper lunatics” from workhouses to hospitals. By the mid-1980s, when Friston arrived, it was one of the last of the old asylums on the outskirts of England’s cities.
Friston was assigned a group of 32 chronic schizophrenic patients, the worst-off residents of Littlemore, for whom treatment mostly meant containment. For Friston, who recalls his former patients with evident nostalgia, it was an introduction to the way that connections in the brain were easily broken. “It was a beautiful place to work,” he says. “This little community of intense and florid psychopathology.”
Twice a week he led 90-minute group therapy sessions in which the patients explored their ailments together, reminiscent of the Ask Karl meetings today. The group included colorful characters who still inspire Friston’s thinking more than 30 years later. There was Hillary, who looked like she could play the senior cook on Downton Abbey but who, before coming to Littlemore, had decapitated her neighbor with a kitchen knife, convinced he had become an evil, human-sized crow.
There was Ernest, who had a penchant for pastel Marks & Spencer cardigans and matching plimsoll shoes, and who was “as rampant and incorrigible a pedophile as you could ever imagine,” Friston says.
And then there was Robert, an articulate young man who might have been a university student had he not suffered severe schizophrenia. Robert ruminated obsessively about, of all things, angel shit; he pondered whether the stuff was a blessing or a curse and whether it was ever visible to the eye, and he seemed perplexed that these questions had not occurred to others. To Friston, the very concept of angel shit was a miracle. It spoke to the ability of people with schizophrenia to assemble concepts that someone with a more regularly functioning brain couldn’t easily access. “It’s extremely difficult to come up with something like angel shit,” Friston says with something like admiration. “I couldn’t do it.”
After Littlemore, Friston spent much of the early 1990s using a relatively new technology—PET scans—to try to understand what was going on inside the brains of people with schizophrenia. He invented statistical parametric mapping along the way. Unusually for the time, Friston was adamant that the technique should be freely shared rather than patented and commercialized, which largely explains how it became so widespread. Friston would fly across the world—to the National Institutes of Health in Bethesda, Maryland, for example—to give it to other researchers. “It was me, literally, with a quarter of biometric tape, getting on an airplane, taking it over there, downloading it, spending a day getting it to work, teaching somebody how to use it, then going home for a rest,” Friston says. “This is how open source software worked in those days.”
Friston came to Queen Square in 1994, and for a few years his office at the FIL sat just a few doors down from the Gatsby Computational Neuroscience Unit. The Gatsby—where researchers study theories of perception and learning in both living and machine systems—was then run by its founder, the cognitive psychologist and computer scientist Geoffrey Hinton. While the FIL was establishing itself as one of the premier labs for neuroimaging, the Gatsby was becoming a training ground for neuroscientists interested in applying mathematical models to the nervous system.
Friston, like many others, became enthralled by Hinton’s “childlike enthusiasm” for the most unchildlike of statistical models, and the two men became friends.
Over time, Hinton convinced Friston that the best way to think of the brain was as a Bayesian probability machine. The idea, which goes back to the 19th century and the work of Hermann von Helmholtz, is that brains compute and perceive in a probabilistic manner, constantly making predictions and adjusting beliefs based on what the senses contribute. According to the most popular modern Bayesian account, the brain is an “inference engine” that seeks to minimize “prediction error.”
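The Helmholtz-Bayes idea can be made concrete with a toy example. The sketch below is an illustration only, not Friston’s or Hinton’s actual model, and all the probabilities are invented: a belief about a hidden cause is updated by Bayes’ rule after a noisy observation.

```python
# Toy illustration of Bayesian belief updating (all numbers invented).
# The "brain" holds a prior over hidden causes and revises it when the
# senses report an observation.

def bayes_update(prior, likelihoods, observation):
    """Return the posterior over hypotheses after one observation."""
    unnormalized = {h: prior[h] * likelihoods[h][observation] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two hypotheses about the world: it is raining, or it is not.
prior = {"rain": 0.2, "dry": 0.8}
# Probability of sensing "wet ground" under each hypothesis.
likelihoods = {
    "rain": {"wet": 0.9, "not_wet": 0.1},
    "dry":  {"wet": 0.2, "not_wet": 0.8},
}

posterior = bayes_update(prior, likelihoods, "wet")
print(round(posterior["rain"], 3))  # → 0.529
```

The gap between what the prior expected and what the senses delivered is, loosely, the “prediction error” the text refers to; the update shrinks it.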
In 2001, Hinton left London for the University of Toronto, where he became one of the most important figures in artificial intelligence, laying the groundwork for much of today’s research in deep learning.
Before Hinton left, however, Friston visited his friend at the Gatsby one last time. Hinton described a new technique he’d devised to allow computer programs to emulate human decisionmaking more efficiently—a process for integrating the input of many different probabilistic models, now known in machine learning as a “product of experts.”
The meeting left Friston’s head spinning. Inspired by Hinton’s ideas, and in a spirit of intellectual reciprocity, Friston sent Hinton a set of notes about an idea he had for connecting several seemingly “unrelated anatomical, physiological, and psychophysical attributes of the brain.” Friston published those notes in 2005—the first of many dozens of papers he would go on to write about the free energy principle.
Even Friston has a hard time deciding where to start when he describes the free energy principle. He often sends people to its Wikipedia page. But for my part, it seems apt to begin with the blanket draped over the futon in Friston’s office.
It’s a white fleece throw, custom-printed with a black-and-white portrait of a stern, bearded Russian mathematician named Andrei Andreyevich Markov, who died in 1922. The blanket is a gag gift from Friston’s son, a plush, polyester inside joke about an idea that has become central to the free energy principle. Markov is the eponym of a concept called a Markov blanket, which in machine learning is essentially a shield that separates one set of variables from others in a layered, hierarchical system. The psychologist Christopher Frith—who has an h-index on par with Friston’s—once described a Markov blanket as “a cognitive version of a cell membrane, shielding states inside the blanket from states outside.”
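In the machine-learning sense Frith alludes to, a node’s Markov blanket in a directed graphical model is the set of nodes that shields it from everything else: its parents, its children, and its children’s other parents. The sketch below computes that set for a tiny invented network; the graph and node names are made up for illustration.

```python
# A minimal sketch of a Markov blanket in a directed graphical model:
# parents + children + co-parents of children. Graph is invented.

def markov_blanket(node, parents):
    """parents maps each node to the set of its parent nodes."""
    blanket = set(parents.get(node, set()))                 # parents
    children = {c for c, ps in parents.items() if node in ps}
    blanket |= children                                     # children
    for c in children:
        blanket |= parents[c] - {node}                      # co-parents
    return blanket

# Tiny invented network:
#   environment -> organism,  {organism, environment} -> sensation
parents = {
    "environment": set(),
    "organism": {"environment"},
    "sensation": {"organism", "environment"},
}
print(sorted(markov_blanket("organism", parents)))
# → ['environment', 'sensation']
```

Conditioned on its blanket, a node is statistically independent of the rest of the graph, which is why the metaphor of a membrane separating inside from outside fits.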
In Friston’s mind, the universe is made up of Markov blankets inside of Markov blankets. Each of us has a Markov blanket that keeps us apart from what is not us. And within us are blankets separating organs, which contain blankets separating cells, which contain blankets separating their organelles. The blankets define how biological things exist over time and behave distinctly from one another. Without them, we’re just hot gas dissipating into the ether.
“That’s the Markov blanket you’ve read about. This is it. You can touch it,” Friston said dryly when I first saw the throw in his office. I couldn’t help myself; I did briefly reach out to feel it under my fingers. Ever since I first read about Markov blankets, I’d seen them everywhere. Markov blankets around a leaf and a tree and a mosquito. In London, I saw them around the postdocs at the FIL, around the black-clad protesters at an antifascist rally, and around the people living in boats in the canals. Invisible cloaks around everyone, and underneath each one a different living system that minimizes its own free energy.
The concept of free energy itself comes from physics, which means it’s difficult to explain precisely without wading into mathematical formulas. In a sense that’s what makes it powerful: It isn’t a merely rhetorical concept. It’s a measurable quantity that can be modeled, using much the same math that Friston has used to interpret brain images to such world-changing effect. But if you translate the concept from math into English, here’s roughly what you get: Free energy is the difference between the states you expect to be in and the states your sensors tell you that you are in. Or, to put it another way, when you are minimizing free energy, you are minimizing surprise.
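The claim that minimizing free energy minimizes surprise can be checked numerically on a toy discrete model. This is a generic variational-free-energy identity, not Friston’s full formalism, and all the probabilities below are invented: free energy always sits at or above the surprise (negative log probability) of an observation, and equals it exactly when the internal belief matches the true posterior.

```python
import math

# Toy numeric illustration of "free energy >= surprise" (numbers invented).
p_joint = {  # p(hidden state, observation)
    ("rain", "wet"): 0.18, ("rain", "not_wet"): 0.02,
    ("dry",  "wet"): 0.16, ("dry",  "not_wet"): 0.64,
}

def free_energy(q, obs):
    """F = sum_s q(s) * (log q(s) - log p(s, obs))."""
    return sum(q[s] * (math.log(q[s]) - math.log(p_joint[(s, obs)]))
               for s in q)

obs = "wet"
surprise = -math.log(sum(p for (s, o), p in p_joint.items() if o == obs))

# An arbitrary belief over hidden states puts F above the surprise...
f_bad = free_energy({"rain": 0.5, "dry": 0.5}, obs)
# ...while the true posterior q(s) = p(s|obs) makes F equal the surprise.
posterior = {"rain": 0.18 / 0.34, "dry": 0.16 / 0.34}
f_best = free_energy(posterior, obs)

assert f_bad >= surprise
assert abs(f_best - surprise) < 1e-9
```

Because the bound is tight only at the true posterior, driving free energy down doubles as a way of improving the model’s beliefs, which is the sense in which minimizing free energy minimizes surprise.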
According to Friston, any biological system that resists a tendency to disorder and dissolution will adhere to the free energy principle—whether it’s a protozoan or a pro basketball team.
A single-celled organism has the same imperative to reduce surprise that a brain does.
The only difference is that, as self-organizing biological systems go, the human brain is inordinately complex: It soaks in information from billions of sense receptors, and it needs to organize that information efficiently into an accurate model of the world. “It’s literally a fantastic organ in the sense that it generates hypotheses or fantasies that are appropriate for trying to explain these myriad patterns, this flux of sensory information that it is in receipt of,” Friston says. In seeking to predict what the next wave of sensations is going to tell it—and the next, and the next—the brain is constantly making inferences and updating its beliefs based on what the senses relay back, and trying to minimize prediction-error signals.
So far, as you might have noticed, this sounds a lot like the Bayesian idea of the brain as an “inference engine” that Hinton told Friston about in the 1990s. And indeed, Friston regards the Bayesian model as a foundation of the free energy principle (“free energy” is even a rough synonym for “prediction error”). But the limitation of the Bayesian model, for Friston, is that it only accounts for the interaction between beliefs and perceptions; it has nothing to say about the body or action. It can’t get you out of your chair.
This isn’t enough for Friston, who uses the term “active inference” to describe the way organisms minimize surprise while moving about the world. When the brain makes a prediction that isn’t immediately borne out by what the senses relay back, Friston believes, it can minimize free energy in one of two ways: It can revise its prediction—absorb the surprise, concede the error, update its model of the world—or it can act to make the prediction true. If I infer that I am touching my nose with my left index finger, but my proprioceptors tell me my arm is hanging at my side, I can minimize my brain’s raging prediction-error signals by raising that arm up and pressing a digit to the middle of my face.
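The two routes can be caricatured in a few lines of code. This is a cartoon under invented numbers, not Friston’s formal treatment: given a belief and a sensory reading, the agent shrinks the error either by updating the belief toward the evidence (perception) or by changing the world to match the belief (action).

```python
# Cartoon of the two ways to cancel a prediction error (numbers invented):
# perception revises the belief; action revises the world.

def minimize_surprise(belief, sensed, act=False, rate=0.5):
    error = sensed - belief
    if act:
        sensed = sensed - rate * error      # action: pull the world toward the belief
    else:
        belief = belief + rate * error      # perception: pull the belief toward the world
    return belief, sensed

belief, world = 1.0, 0.0   # "finger on nose" predicted vs. "arm at side" sensed
for _ in range(20):
    belief, world = minimize_surprise(belief, world, act=True)
print(round(world, 3))     # → 1.0  (the arm rises; the prophecy fulfills itself)
```

Run with `act=False` instead, and the same loop leaves the world alone and drags the belief down to it: the perceptual route.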
And in fact, this is how the free energy principle accounts for everything we do: perception, action, planning, problem solving. When I get into the car to run an errand, I am minimizing free energy by confirming my hypothesis—my fantasy—through action.
For Friston, folding action and movement into the equation is immensely important. Even perception itself, he says, is “enslaved by action”: To gather information, the eye darts, the diaphragm draws air into the nose, the fingers generate friction against a surface. And all of this fine motor movement exists on a continuum with bigger plans, explorations, and actions.
“We sample the world,” Friston writes, “to ensure our predictions become a self-fulfilling prophecy.”
So what happens when our prophecies are not self-fulfilling? What does it look like for a system to be overwhelmed by surprise? The free energy principle, it turns out, isn’t just a unified theory of action, perception, and planning; it’s also a theory of mental illness. When the brain assigns too little or too much weight to evidence pouring in from the senses, trouble occurs. Someone with schizophrenia, for example, may fail to update their model of the world to account for sensory input from the eyes. Where one person might see a friendly neighbor, Hillary might see a giant, evil crow. “If you think about psychiatric conditions, and indeed most neurological conditions, they are just broken beliefs or false inference—hallucinations and delusions,” Friston says.
Over the past few years, Friston and a few other scientists have used the free energy principle to help explain anxiety, depression, and psychosis, along with certain symptoms of autism, Parkinson’s disease, and psychopathy. In many cases, scientists already know—thanks to Friston’s neuroimaging methods—which regions of the brain tend to malfunction in different disorders and which signals tend to be disrupted. But that alone isn’t enough to go on. “It’s not sufficient to understand which synapses, which brain connections, are working improperly,” he says. “You need to have a calculus that talks about beliefs.”
So: The free energy principle offers a unifying explanation for how the mind works and a unifying explanation for how the mind malfunctions. It stands to reason, then, that it might also put us on a path toward building a mind from scratch.
A few years ago, a team of British researchers decided to revisit the facts of King George III’s madness with a new analytic tool. They loaded some 500 letters written by the king into a machine-learning engine and laboriously trained the system to recognize various textual features: word repetition, sentence length, syntactical complexity, and the like. By the end of the training process, the system was able to predict whether a royal missive had been written during a period of mania or during a period of sanity.
This kind of pattern-matching technology—which is roughly similar to the techniques that have taught machines to recognize faces, images of cats, and speech patterns—has driven huge advances in computing over the past several years. But it requires a lot of up-front data and human supervision, and it can be brittle. Another approach to AI, called reinforcement learning, has shown incredible success at winning games: Go, chess, Atari’s Breakout. Reinforcement learning doesn’t require humans to label lots of training data; it just requires telling a neural network to seek a certain reward, often victory in a game. The neural network learns by playing the game over and over, optimizing for whatever moves might get it to the final screen, the way a dog might learn to perform certain tasks for a treat.
But reinforcement learning, too, has pretty major limitations. In the real world, most situations are not organized around a single, narrowly defined goal. (Sometimes you have to stop playing Breakout to go to the bathroom, put out a fire, or talk to your boss.) And most environments aren’t as stable and rule-bound as a game is. The conceit behind neural networks is that they are supposed to think the way we do, but reinforcement learning doesn’t really get us there.
To Friston and his enthusiasts, this failure makes complete sense. After all, according to the free energy principle, the fundamental drive of human thought isn’t to seek some arbitrary external reward. It’s to minimize prediction error. Clearly, neural networks ought to do the same. It helps that the Bayesian formulas behind the free energy principle—the ones that are so difficult to translate into English—are already written in the native language of machine learning.
Julie Pitt, head of machine-learning infrastructure at Netflix, discovered Friston and the free energy principle in 2014, and it transformed her thinking. (Pitt’s Twitter bio reads, “I infer my own actions by way of Active Inference.”) Outside of her work at Netflix, she’s been exploring applications of the principle in a side project called Order of Magnitude Labs. Pitt says that the beauty of the free energy model is that it allows an artificial agent to act in any environment, even one that’s new and unknown. Under the old reinforcement-learning model, you’d have to keep stipulating new rules and sub-rewards to get your agent to cope with a complex world. But a free energy agent always generates its own intrinsic reward: the minimization of surprise. And that reward, Pitt says, includes an imperative to go out and explore.
In late 2017, a group led by Rosalyn Moran, a neuroscientist and engineer at King’s College London, pitted two AI players against one another in a version of the 3D shooter game Doom. The goal was to compare an agent driven by active inference to one driven by reward-maximization.
The reward-based agent’s goal was to kill a monster inside the game, but the free-energy-driven agent only had to minimize surprise. The Fristonian agent started off slowly. But eventually it started to behave as if it had a model of the game, seeming to realize, for instance, that when the agent moved left the monster tended to move to the right.
After a while it became clear that, even in the toy environment of the game, the reward-maximizing agent was “demonstrably less robust”; the free energy agent had learned its environment better. “It outperformed the reinforcement-learning agent because it was exploring,” Moran says. In another simulation that pitted the free-energy-minimizing agent against real human players, the story was similar. The Fristonian agent started slowly, actively exploring options—epistemically foraging, Friston would say—before quickly attaining humanlike performance.
Moran told me that active inference is starting to spread into more mainstream deep-learning research, albeit slowly. Some of Friston’s students have gone on to work at DeepMind and Google Brain, and one of them founded Huawei’s Artificial Intelligence Theory lab. “It’s moving out of Queen Square,” Moran says. But it’s still not nearly as common as reinforcement learning, which even undergraduates learn. “You don’t teach undergraduates the free energy principle—yet.”
The first time I asked Friston about the connection between the free energy principle and artificial intelligence, he predicted that within five to 10 years, most machine learning would incorporate free energy minimization. The second time, his response was droll. “Think about why it’s called active inference,” he said. His straight, sparkly white teeth showed through his smile as he waited for me to follow his wordplay. “Well, it’s AI,” Friston said. “So is active inference the new AI? Yes, it’s the acronym.” Not for the first time, a Fristonian joke had passed me by.
While I was in London, Friston gave a talk at a quantitative trading firm. About 60 baby-faced stock traders were in attendance, rounding out the end of their workday. Friston described how the free energy principle could model curiosity in artificial agents. About 15 minutes in, he asked his listeners to raise a hand if they understood what he was saying. He counted only three hands, so he reversed the question: “Can you put your hand up if that was complete nonsense and you don’t know what I was talking about?” This time, a lot of people raised their hands, and I got the feeling that the rest were being polite. With 45 minutes left, Friston turned to the organizer of the talk and looked at him as if to say, What the hell? The manager stammered a bit before saying, “Everybody here’s smart.” Friston graciously agreed and finished his presentation.
The next morning, I asked Friston if he thought the talk went well, considering that few of those bright young minds seemed to understand him. “There is going to be a substantial proportion of the audience who—it’s just not for them,” he said. “Sometimes they get upset because they’ve heard that it’s important and they don’t understand it. They think they have to think it’s rubbish and they leave. You get used to that.”
In 2010, Peter Freed, a psychiatrist at Columbia University, gathered 15 brain researchers to discuss one of Friston’s papers. Freed described what happened in the journal Neuropsychoanalysis: “There was a lot of mathematical knowledge in the room: three statisticians, two physicists, a physical chemist, a nuclear physicist, and a large group of neuroimagers—but apparently we didn’t have what it took. I met with a Princeton physicist, a Stanford neurophysiologist, and a Cold Spring Harbor neurobiologist to discuss the paper. Again blanks, one and all: too many equations, too many assumptions, too many moving parts, too global a theory, no opportunity for questions—and so people gave up.”
But for all the people who are exasperated by Friston’s impenetrability, there are nearly as many who feel he has unlocked something huge, an idea every bit as expansive as Darwin’s theory of natural selection. When the Canadian philosopher Maxwell Ramstead first read Friston’s work in 2014, he had already been trying to find ways to connect complex living systems that exist at different scales—from cells to brains to individuals to cultures. In 2016 he met Friston, who told him that the same math that applies to cellular differentiation—the process by which generic cells become more specialized—can also be applied to cultural dynamics. “This was a life-changing conversation for me,” Ramstead says. “I almost had a nosebleed.”
“This is absolutely novel in history,” Ramstead told me as we sat on a bench in Queen Square, surrounded by patients and staff from the surrounding hospitals. Before Friston came along, “We were kind of condemned to forever wander in this multidisciplinary space without a common currency,” he continued. “The free energy principle gives you that currency.”
In 2017, Ramstead and Friston coauthored a paper, with Paul Badcock of the University of Melbourne, in which they described all life in terms of Markov blankets. Just as a cell is a Markov-blanketed system that minimizes free energy in order to exist, so are tribes and religions and species.
After the publication of Ramstead’s paper, Micah Allen, a cognitive neuroscientist then at the FIL, wrote that the free energy principle had evolved into a real-life version of Isaac Asimov’s psychohistory, a fictional system that reduced all of psychology, history, and physics down to a statistical science.
And it’s true that the free energy principle does seem to have expanded to the point of being, if not a theory of everything, then nearly so. (Friston told me that cancer and tumors might be instances of false inference, when cells become deluded.) As Allen asked: Does a theory that explains everything run the risk of explaining nothing?
On the last day of my trip, I visited Friston in the town of Rickmansworth, where he lives in a house filled with taxidermied animals that his wife prepares as a hobby.
As it happens, Rickmansworth appears on the first page of The Hitchhiker’s Guide to the Galaxy; it’s the town where “a girl sitting on her own in a small café” suddenly discovers the secret to making the world “a good and happy place.” But fate intervenes. “Before she could get to a phone to tell anyone about it, a terrible stupid catastrophe occurred, and the idea was lost forever.”
It’s unclear whether the free energy principle is the secret to making the world a good and happy place, as some of its believers almost seem to think it might be. Friston himself tended to take a more measured tone as our talks went on, suggesting only that active inference and its corollaries were quite promising. Several times he conceded that he might just be “talking rubbish.” During the last group meeting I attended at the FIL, he told those in attendance that the free energy principle is an “as if” concept—it does not require that biological things minimize free energy in order to exist; it is merely sufficient as an explanation for biotic self-organization.
Friston’s mother died a few years ago, but lately he has been thinking back to her frequent reassurances during his childhood: You’re very intelligent, Karl. “I never quite believed her,” he says. “And yet now I have found myself suddenly being seduced by her argument. Now I do believe I’m actually quite bright.” But this newfound self-esteem, he says, has also led him to examine his own egocentricity.
Friston says his work has two primary motivations. Sure, it would be nice to see the free energy principle lead to true artificial consciousness someday, he says, but that’s not one of his top priorities. Rather, his first big desire is to advance schizophrenia research, to help repair the brains of patients like the ones he knew at the old asylum. And his second main motivation, he says, is “much more selfish.” It goes back to that evening in his bedroom, as a teenager, looking at the cherry blossoms, wondering, “Can I sort it all out in the simplest way possible?”
“And that is a very self-indulgent thing. It has no altruistic clinical compassion behind it. It is just the selfish desire to try and understand things as completely and as rigorously and as simply as possible,” he says. “I often reflect on the jokes that people make about me—sometimes maliciously, sometimes very amusingly—that I can’t communicate. And I think: I didn’t write it for you. I wrote it for me.”
Friston told me he occasionally misses the last train home to Rickmansworth, lost in one of those problems that he drills into for weeks. So he’ll sleep in his office, curled on the futon under his Markov blanket, safe and securely separated from the external world.
https://www.winterwatch.net/2019/01/gre ... ropaganda/
Gregory Bateson: The Master of Double-Bind Black Propaganda
January 18, 2019
Anthropologist Gregory Bateson (1904–1980) was a heavy hitter in social theories and propaganda. He was also the husband of Margaret Mead. In 1942, while working in black propaganda, he wrote about the war:

“… is now a life-or-death struggle over the role which the social sciences shall play in the ordering of human relationships. It is hardly an exaggeration to say that this war is ideologically about just this – the role of the social sciences. Are we to reserve the techniques and the right to manipulate peoples as the privilege of a few planning, goal-oriented and power-hungry individuals to whom the instrumentality of science makes a natural appeal? Now that we have techniques, are we, in cold blood, going to treat people as things?” (Bateson 1942, as quoted in Price)
After the war, Bateson answered his own “rhetorical” question. In a CIA website article titled “The Birth of Central Intelligence,” Bateson is quoted as follows:

“… the bomb would shift the balance of warlike and peaceful methods of international pressure. It would be powerless, he said, against subversive practices, guerrilla tactics, social and economic manipulation, diplomatic forces, and propaganda either black or white. The nations would therefore resort to those indirect methods of warfare. The importance of the kind of work the Foreign Economic Administration, the Office of War Information, and the Office of Strategic Services had been doing would thus be infinitely greater than it had ever been. The country could not rely upon the Army and Navy alone for defense. There should be a third agency to combine the functions and employ the weapons of clandestine operations, economic controls, and psychological pressures.”
Black propaganda is false information that purports to come from a source on one side of a conflict but actually originates with the opposing side. This, along with false-dialectic mind games (Clinton vs. Trump, red vs. blue, whites vs. blacks, etc.), is something the population is continually subjected to.
Bateson’s research focused on double-bind theory as a brainwashing and propaganda technique. A double bind is an emotionally distressing dilemma in which an individual (or group) receives two or more conflicting messages, and one message negates the other. This creates a situation in which a successful response to one message results in a failed response to the other (and vice versa), so that the person will automatically be wrong regardless of response. The double bind occurs when the person cannot confront the inherent dilemma, and, therefore, can neither resolve it nor opt out of the situation. I also like to think of this as a dead end.
Double-think is an adoption of this method and is the act of simultaneously accepting two mutually contradictory beliefs as correct. Double-think is notable due to a lack of cognitive dissonance — thus the person is completely unaware of any conflict or contradiction.
According to George Orwell’s book “1984,” double-think is: “To know and not to know, to be conscious of complete truthfulness while telling carefully constructed lies, to hold simultaneously two opinions which cancelled out, knowing them to be contradictory and believing in both of them, to use logic against logic, to repudiate morality while laying claim to it …”
The classic example given of a negative double-bind is of a mother telling her child that she loves him or her, while at the same time turning away in disgust, or inflicting corporal punishment as discipline. (“I’m spanking you because I love you!”) The words are socially acceptable, but the body language is in conflict with the message.
The field of neuro-linguistic programming (NLP) also makes use of the expression “double bind.” Here, a communication could be constructed with multiple messages, whereby the recipient of the message is given the impression of choice — although both options have the same outcome at a higher level of intention. This is known as a “double bind” in NLP terminology.
The mind controllers then meld double binds or dead ends with the concept of injunctions. According to Bateson, a “primary injunction” is imposed on the subject by the others in one of two forms:
▪ (a) “Do X, or I will punish you”
▪ (b) “Do not do X, or I will punish you”
▪ (or both a and b)
A “secondary injunction” is imposed on the subject, conflicting with the first at a higher and more abstract level. For example, “You must do X, but only do it because you want to.” It is unnecessary for this injunction to be expressed verbally.
If necessary, a “tertiary injunction” is imposed on the subject to prevent them from escaping the dilemma.
The punishment may include the withdrawing of love, the expression of hate and anger, or abandonment resulting from the authority figure’s expression of helplessness. A common tactic is gaslighting or shaming.
Typically, a demand is imposed upon the subject by someone whom he or she respects (or thinks he or she should respect), but the demand itself is inherently impossible to fulfill because some broader context forbids it. For example, this situation arises when a person in a position of authority imposes two contradictory conditions, but there exists an unspoken rule that one must never question authority.
Unlike the usual no-win situation, the subject has difficulty in defining the exact nature of the paradoxical situation in which he or she is caught. The contradiction may be invisible to external observers, only becoming evident when a prior communication is considered.
Growing up subjected to perpetual double binds can lead to learned patterns of confusion in thinking and communication. It can even induce societal schizophrenia and psychosis. Bateson and his colleagues hypothesized that schizophrenic thinking was a learned confusion in thinking and could be induced in whole populations.
Bateson had established a scholarly relationship with hypnotist Milton Erickson as early as 1932. Erickson’s research involved the idea that hypnotically effective trance states could be established in the course of ordinary life activities, such as reading, talking to a therapist or watching motion pictures, especially if intense and traumatic emotional states could be evoked by the experience. During such trance states, Erickson believed, the subconscious mind of the target could be accessed by means of hypnotic suggestion (Atwill).
The video below is an illustrative example of all these concepts in action. Audience members are interviewed as they leave a showing of “American Sniper,” an emotional and patriotic rendering produced by the icon and authority figure Clint Eastwood. The end of the movie shows the sniper being honored in a parade for his “heroic” conduct.
The paradox is that the sniper, Chris Kyle, loves his job of killing, yet the movie portrays this blood sport as heroic. The second paradox is that the people he kills in Iraq are arguably defending their own now-ruined cities and homeland. Yet the people interviewed espouse the notion that Kyle was defending the American homeland with his cold-blooded sniper attacks on Iraqi locals.
The interviewer (an Iraqi veteran) in the clip is very skilled at getting these jingoistic people to face the inconsistencies, circular logic, fuzzy thinking, dilemmas and paradoxes in their points of view. The reactions range from vague realization, to cognitive dissonance, to flat-out double-think. But above all, it illustrates firsthand just how twisted, warped and inverted large segments of American society have become as a result of the methods concocted by Gregory Bateson and his ilk.
Winter Watch January 19, 2019 8:47 AM
A rare high value comment at Reddit:
from Drooperdoo via /r/conspiracy sent 10 hours ago
What you’re describing is called cybernetics. The science of cybernetics is based on the feedback loop. The man who founded it, Norbert Wiener, was working on the first early primitive computers, and he noticed that humans (and all living things, for that matter) learn through feedback. As he was giving a speech, a physiologist named Warren McCulloch was in the audience. McCulloch had done the first neural map of the human brain, and up until hearing Wiener, he was confused as to why certain neurons were linked in loops. Only after learning about “feedback” as a process of learning did it make sense. McCulloch and Wiener collaborated on the first “electronic brain,” the precursor to the modern computer. Its programming was based on the inner workings of the human mind. Since neurons had only two positions [off and on] and existed in a state of negative or positive feedback, the first computers were given a similar set-up (with 1’s and 0’s in binary, to represent negative and positive feedback).
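The McCulloch side of that story rests on a real and very simple formal model: the McCulloch–Pitts threshold neuron, in which binary (on/off) inputs are summed and the unit fires only if the total reaches a threshold. A minimal sketch in Python — the particular weights and thresholds below are illustrative choices, not anything from the comment itself:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted sum of
    binary inputs reaches the threshold, otherwise stay off (0)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def and_gate(a, b):
    # Both inputs must be on: threshold equals the number of inputs.
    return mp_neuron([a, b], [1, 1], threshold=2)

def or_gate(a, b):
    # A single active input is enough to make the unit fire.
    return mp_neuron([a, b], [1, 1], threshold=1)
```

Wiring such all-or-nothing units into logic gates is exactly the sense in which early computer designers saw the brain's on/off neurons mirrored in binary 1's and 0's.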
Long story short, the guy who set up the Macy Conference (where Wiener and McCullough met) was none other than the Gregory Bateson mentioned in this article. He was at Ground Zero of the creation of cybernetics. Early on, the cyberneticians realized that you could manipulate society by creating echo chambers, “feedback loops,” whereby you gave the culture positive and negative signals.
The word “cybernetics,” by the way, derives from the Greek for “to steer, to govern.”
It’s the concept of having “a set goal and using feedback to ‘steer’ the society to the pre-determined target.”
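That steering concept is ordinary negative-feedback control: measure the gap between the current state and a pre-set goal, then apply a correction proportional to the error, over and over. A minimal sketch, with an arbitrary gain and step count assumed purely for illustration:

```python
def steer(state, goal, gain=0.3, steps=20):
    """Nudge `state` toward a pre-set `goal` by repeatedly
    correcting a fraction (`gain`) of the remaining error."""
    for _ in range(steps):
        error = goal - state      # the feedback signal
        state += gain * error     # the corrective action
    return state

# Starting far from the target, the loop converges on it.
final = steer(state=0.0, goal=10.0)
```

Each pass shrinks the error by a constant factor, so the state homes in on the goal — which is the point of the comment: the loop does nothing without a pre-set target to steer toward.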
You can read more about Gregory Bateson in “The Cybernetic Brain,” by Andrew Pickering. A fascinating read.
Footnote: The very word “cyberspace” comes from cybernetics. It was coined in 1982 by William Gibson, who said that he based it on the work of cybernetics founder Norbert Wiener. Aside from creating cybernetics, Wiener also wrote several papers that were foundational to the creation of the internet. (Read “The Internet Is Not the Answer,” by Andrew Keen, to learn more.) In summation: The very internet you’re on is a “cybernetic manipulation tool.” It’s a finely honed “echo chamber” that can be used to manipulate society. Look at Facebook getting busted last year for putting particular news stories in people’s feeds to manipulate their users’ emotions. Notice how we now have alternative media: left-wing media and right-wing media. What you’re looking at are echo chambers, dispensing positive and negative feedback to society. 1’s and 0’s. Don’t look now, but the culture is being “steered.” What the ultimate goal is, who can say? (But cybernetics doesn’t kick in unless it has a pre-set goal to start off with.)