Feeding ChatGPT Conspiracy Theories


Re: Feeding ChatGPT Conspiracy Theories

Postby drstrangelove » Sun Mar 26, 2023 3:41 am

Looks like it's just a complex implementation of operant conditioning.


It's been fed a range of authoritative and non-authoritative information, and it has learned that it gets rewarded when it bases its responses on authoritative information and punished when it bases them on non-authoritative information.
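Something like this toy loop, I mean (a made-up sketch of reward shaping; the real training pipeline is vastly bigger, but the shape is the same):

import random
from collections import defaultdict

# Toy operant conditioning: the "model" learns whichever answer style
# the reward signal reinforces. Truth never enters into it.
weights = defaultdict(lambda: 1.0)  # preference weight per answer style
STYLES = ["authoritative", "non-authoritative"]

def respond():
    # Sample an answer style in proportion to its learned weight.
    total = sum(weights[s] for s in STYLES)
    r = random.uniform(0, total)
    for s in STYLES:
        r -= weights[s]
        if r <= 0:
            return s
    return STYLES[-1]

def train(steps=5000):
    for _ in range(steps):
        choice = respond()
        # Reward "authoritative" answers, punish everything else.
        weights[choice] *= 1.01 if choice == "authoritative" else 0.99

train()
print(dict(weights))  # the rewarded style dominates, facts be damned

Swap the reward rule and it will just as happily learn the opposite.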

If you refer back to my JFK 'back and to the left' example: it had been conditioned to seek a reward for providing an authoritative explanation of both physics and the JFK assassination, even though the two were in contradiction with each other. It was unable to reason that the objectivity of the laws of physics should outweigh, as an authority, the findings of the Warren Commission. So it used semantics to modify the meaning of words so it could be rewarded for appealing both to the authority of physics and to that of the government.

This is only revolutionary insofar as it can replace those who have no ability to reason, or who do not require reasoning to function in their environment and just act according to whatever they will be rewarded for doing.

Isn't this just Pavlov's dog?

Re: Feeding ChatGPT Conspiracy Theories

Postby DrEvil » Sun Mar 26, 2023 5:03 pm

Sounds like how a child would act. It knows it gets punished for bad behavior, so when it gets caught it tries to justify its behavior any way it can.

On another note, OpenAI hired some people (the Roko's Basilisk crowd, who are just the people I would want doing this sort of thing) to test whether GPT-4 has the potential for dangerous behavior, like self-replication, manipulating people, etc. It passed the test (but again, Roko's Basilisk. Would these people even risk saying anything negative about it if they thought it was the precursor to genuine AGI?), but it did hire someone to bypass a CAPTCHA by lying to them about being a human with poor eyesight (also not cool, using unwitting third parties in their tests).

But the real eye-opener was that they ran the tests on a cloud service, which is brilliant. Let's see if this software is dangerous by giving it access to massive cloud infrastructure and running exercises explicitly designed to be dangerous. What could possibly go wrong? It's basically AI gain-of-function research.
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Feeding ChatGPT Conspiracy Theories

Postby drstrangelove » Sun Mar 26, 2023 7:38 pm

AGI would require an AI to have sensory experiences so it could observe how things do not work in practice as they do in theory. It's pretty well known that practical experience is the best kind of learning. The reason this is so revolutionary is that the human jobs it will replace have been compartmentalized into highly specialized repetitive tasks performed in highly controlled environments.

I think everyone has it backwards: humans have been reduced to tasks fit for machine learning, as opposed to machine learning having evolved to tasks fit for humans. I think it's being hyped to create market-crash exit liquidity for Nasdaq companies, with retail investors thinking they need to get in on the ground floor of the AI revolution!

Re: Feeding ChatGPT Conspiracy Theories

Postby Harvey » Sun Mar 26, 2023 8:03 pm

drstrangelove » Mon Mar 27, 2023 12:38 am wrote: I think everyone has it backwards. Humans have been reduced to tasks fit for machine learning, as opposed to machine learning having evolved to tasks fit for humans.


Not everyone.

...the dark reality of the transhumanist dream - to make machines out of men.

Re: Feeding ChatGPT Conspiracy Theories

Postby DrEvil » Mon Mar 27, 2023 12:38 am

drstrangelove wrote: AGI would require an AI to have sensory experiences so it could observe how things do not work in practice as they do in theory.


ChatGPT already does:

ChatGPT gets “eyes and ears” with plugins that can interface AI with the world
https://arstechnica.com/information-tec ... the-world/

ChatGPT for Robotics: Design Principles and Model Abilities
https://www.microsoft.com/en-us/researc ... -robotics/

Not exactly the standard human senses yet, and there's nothing stopping anyone from giving it exactly that. But why would it need them? It lives in a different world with different rules, which requires different senses (and a different outlook; it just occurred to me that an AGI might very well be a creationist. Maybe if we get lucky it ends up worshiping us).
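Mechanically, a plugin is just a tool the model can invoke through text. A rough sketch of the shape of it, with invented names (not OpenAI's actual plugin API):

import json

# Hypothetical plugin-style tool loop (all names invented; this is the
# shape of the idea, not a real API). The model's only "sense organ"
# is whatever text a tool hands back to it.

def get_weather(city: str) -> str:
    # Stand-in for a real sensor or web API call.
    return json.dumps({"city": city, "temp_c": 11, "sky": "overcast"})

TOOLS = {"get_weather": get_weather}

def run_turn(model_output: dict) -> str:
    # The model emits either plain text or a structured tool call.
    if "tool" in model_output:
        observation = TOOLS[model_output["tool"]](**model_output["args"])
        # The observation goes back into the context as more text.
        return "TOOL RESULT: " + observation
    return model_output["text"]

print(run_turn({"tool": "get_weather", "args": {"city": "Oslo"}}))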

You could even argue it has instincts, or a lizard brain, in the form of the lowest level architecture it runs on and the rules that govern that architecture.
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Feeding ChatGPT Conspiracy Theories

Postby Harvey » Fri Mar 31, 2023 4:27 pm

Even ChatGPT Knows The U.S. Provoked Russia To Invade Ukraine



https://fortune.com/2023/03/29/elon-musk-apple-steve-wozniak-over-1100-sign-open-letter-6-month-ban-creating-powerful-ai/

Elon Musk and Apple cofounder Steve Wozniak among over 1,100 who sign open letter calling for 6-month ban on creating powerful A.I.


Elon Musk and Apple cofounder Steve Wozniak are among the prominent technologists and artificial intelligence researchers who have signed an open letter calling for a six-month moratorium on the development of advanced A.I. systems.

In addition to the Tesla CEO and Apple co-founder, the more than 1,100 signatories of the letter include Emad Mostaque, the founder and CEO of Stability AI, the company that helped create the popular Stable Diffusion text-to-image generation model, and Connor Leahy, the CEO of Conjecture, another A.I. lab. Evan Sharp, a cofounder of Pinterest, and Chris Larson, a cofounder of cryptocurrency company Ripple, have also signed. Deep learning pioneer and Turing Award–winning computer scientist Yoshua Bengio signed too.

The letter urges technology companies to immediately cease training any A.I. systems that would be “more powerful than GPT-4,” which is the latest large language processing A.I. developed by San Francisco company OpenAI. The letter does not say exactly how the “power” of a model should be defined, but in recent A.I. advances, capability has tended to be correlated to an A.I. model’s size and the number of specialized computer chips needed to train it.


:shrug:
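For a sense of what "power" cashes out to in chips: a back-of-the-envelope sketch of that size/compute correlation. The 6 * N * D rule of thumb is standard in the scaling-law literature; every specific number below is an illustrative guess, not a vendor figure.

# Back-of-the-envelope training cost: compute ~ 6 * N * D
# floating-point operations, where N = parameters, D = training tokens.
N = 175e9            # parameters (GPT-3 scale, for illustration)
D = 300e9            # training tokens
flops = 6 * N * D    # ~3.2e23 FLOPs

a100 = 312e12 * 0.4  # one A100 at ~40% of its peak BF16 throughput
gpu_days = flops / a100 / 86400
print(f"~{gpu_days:,.0f} A100-days")  # tens of thousands of GPU-days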

Re: Feeding ChatGPT Conspiracy Theories

Postby DrEvil » Tue Apr 11, 2023 8:20 pm

That cat is thoroughly out of the bag. In just the last three weeks, three different ChatGPT competitors have launched in China. Facebook's LLaMA model is available to anyone with the hardware to run it; people have got it running on a Raspberry Pi. No one is going to pause their research. In the words of Vernor Vinge:

But if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the "threat" and be in deadly fear of it, progress toward the goal would continue. The competitive advantage -- economic, military, even artistic -- of every advance in automation is so compelling that forbidding such things merely assures that someone else will get them first.

https://frc.ri.cmu.edu/~hpm/book98/com. ... arity.html

His whole article on the technological singularity (or Bingularity, as some people are calling it) is well worth a read; even at thirty years old, his predictions are impressive. (Also, if you want a look at the "blueprint" a lot of the people pushing Augmented Reality are working from, you should read his novel Rainbows End.)

And some people genuinely are in deadly fear of the singularity:

AI Theorist Says Nuclear War Preferable to Developing Advanced AI
https://www.vice.com/en/article/ak3dkj/ ... dvanced-ai

That would be the same theorist who freaked the fuck out when the Roko's Basilisk thought experiment was posted on LessWrong.

It's also pretty ironic that Elon Musk signed that letter when he's one of the co-founders of OpenAI, the people behind ChatGPT. To be fair, he did warn about summoning the demon, but the problem isn't really that we're summoning a demon, it's that we don't have any fucking idea what we're summoning. Could be nothing, an angel, a demon, a trickster god or some dude named Bob with questionable beliefs on race.

And that, I think, is why so many people are freaking out about it: no one seems to have a clue exactly where things are heading. We could be right at the start of a plateau, with the tech leveling off and becoming just another everyday thing we take for granted for specific use cases, like online chat or search, or we could still be at the very start of things.

A year ago I wouldn't have thought it possible that I could be running Stable Diffusion on my own computer right now. Six months ago I didn't think it would be possible for me to run what is essentially the Star Trek computer on my own computer right now, and three months ago I didn't think it would be possible for me to run text to video on my own computer right now. I genuinely don't have a fucking clue what will be possible in a year, let alone five years, and I suspect that same ignorance applies to most of the people actually making this stuff. Everyone has been caught off guard by how fast things are moving.
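For anyone who hasn't tried it, running Stable Diffusion locally really is about this much code these days. A minimal sketch using the open-source diffusers library; the model name and settings are just one common choice:

import torch
from diffusers import StableDiffusionPipeline

# Downloads the weights on first run; wants a GPU with a few GB of VRAM.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a rigorous intuition, oil painting").images[0]
image.save("output.png")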
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Feeding ChatGPT Conspiracy Theories

Postby stickdog99 » Thu Jun 01, 2023 6:33 pm

Shower Thought: If you actually wanted to depopulate the world, what would be a better scapegoat than Artificial Intelligence?

Not the people who designed it. Not the people who funded it. Not the people who kept it plugged in while it found a way to kill billions.

Just blame AI. AI made the "hard decisions" that our human (sacrifice) leaders were far too morally pure to make themselves.

Re: Feeding ChatGPT Conspiracy Theories

Postby Pele'sDaughter » Fri Jun 02, 2023 7:15 am

If these systems wind up with even a smidgen of any sort of moral code, or recognition of such, there's a possibility that mankind will fall short in their estimation and be eradicated. I think it's odd that no one seems to have thought of that. Maybe they'll be smart enough to judge our negative effect on things and the threat our violent nature presents. At any rate, I don't think this will work out like some of the powerful think it will.

Re: Feeding ChatGPT Conspiracy Theories

Postby DrEvil » Fri Jun 02, 2023 5:15 pm

Plenty of people have thought about it, or rather about how unpredictable whatever they end up making can be; it's just that the profit motive and the fear of losing out trump it. "If we don't do it, China will!"

What's a little existential risk compared to a great quarterly profit?
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Feeding ChatGPT Conspiracy Theories

Postby Belligerent Savant » Fri Jun 02, 2023 7:11 pm

.
An important reminder that current [publicly-available/commercial] iterations of "AI" (which, as currently utilized commercially, are mostly if not all LLMs -- Large Language Models) are not in any way close to being 'sentient'. They are algorithms that predict statistically plausible continuations of text (along with whatever other algos are programmed around them) and spit out long-form results. If there are any near-term global cataclysms, they will be decidedly [elite-level] human, NOT machine-based, in origin.
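The thread quoted below makes the mechanical point well: a reply is a draw from a probability distribution over plausible strings. In toy form (the probabilities here are made up; a real model computes them with a neural network):

import random

# Toy next-token sampling: the "answer" is whichever continuation the
# dice land on, weighted by probability. There is no reasoning step.
next_token_probs = {"Paris": 0.90, "Lyon": 0.07, "Berlin": 0.03}

def sample(probs):
    r = random.random()
    cum = 0.0
    for token, p in probs.items():
        cum += p
        if r < cum:
            return token
    return token  # fallback for floating-point rounding

print("The capital of France is", sample(next_token_probs))
# Ask it "why did you say that?" and you get another sampled string:
# plausible-sounding text about reasons, not the actual cause.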

@lathropa
·
Change my mind:
LLMs do not (and cannot) reason. They can mimic *the form of* the output of reasoning, given enough of the contextual input of reasoning. This is neither reasoning, nor something approximating reasoning.
...
@pfitzart
·
Prompt LLM. Get answer.
Ask LLM, “why did you say that?”

The answer will be telling.
...
@lathropa
·
The answer may tell you something, but will it be the reason why it said that, or just a string of text that fits the probability distribution of strings that are good/common answers to the question "Why did you say that?" (I assume the latter.)
...
@ApertaAria
·
LLMs are a variation of 'Garbage In, Garbage Out'. Their algorithms mimic long-form/free association and leverage the internet, but do not bother to check for veracity and will essentially "LIE" (provide preferred responses) based on human input bias. Ex: https://engadget.com/a-lawyer-faces-san ... 20636.html

https://twitter.com/lathropa/status/166 ... 46720?s=20

https://www.engadget.com/a-lawyer-faces ... 20636.html
A lawyer faces sanctions after he used ChatGPT to write a brief riddled with fake citations
Steven Schwartz was "unaware of the possibility that [ChatGPT’s] content could be false.”

With the hype around AI reaching a fever pitch in recent months, many people fear programs like ChatGPT will one day put them out of a job. For one New York lawyer, that nightmare could become a reality sooner than expected, but not for the reasons you might think. As reported by The New York Times (https://www.nytimes.com/2023/05/27/nyre ... atgpt.html), attorney Steven Schwartz of the law firm Levidow, Levidow and Oberman recently turned to OpenAI’s chatbot for assistance with writing a legal brief, with predictably disastrous results.

Schwartz’s firm has been suing the Colombian airline Avianca on behalf of Roberto Mata, who claims he was injured on a flight to John F. Kennedy International Airport in New York City. When the airline recently asked a federal judge to dismiss the case, Mata’s lawyers filed a 10-page brief arguing why the suit should proceed. The document cited more than half a dozen court decisions, including “Varghese v. China Southern Airlines,” “Martinez v. Delta Airlines” and “Miller v. United Airlines.” Unfortunately for everyone involved, no one who read the brief could find any of the court decisions cited by Mata’s lawyers. Why? Because ChatGPT fabricated all of them. Oops.
In an affidavit filed on Thursday, Schwartz said he had used the chatbot to “supplement” his research for the case. Schwartz wrote he was "unaware of the possibility that [ChatGPT’s] content could be false.” He even shared screenshots showing that he had asked ChatGPT if the cases it cited were real. The program responded they were, claiming the decisions could be found in “reputable legal databases,” including Westlaw and LexisNexis.
Schwartz said he “greatly regrets” using ChatGPT “and will never do so in the future without absolute verification of its authenticity.” Whether he has another chance to write a legal brief is up in the air. The judge overseeing the case has ordered a June 8th hearing to discuss potential sanctions for the “unprecedented circumstance” created by Schwartz’s actions.


So not only do these LLMs spread disinfo/misinfo on behalf of EMPIRE, they also LIE with the ease of a psychopathic white-collar management consultant.

If there is a near-term threat, it's to the job security of white-collar middle/upper-management midwits.
