Big Tech is Seriously Dangerous

Moderators: Elvis, DrVolin, Jeff

Re: Big Tech is Seriously Dangerous

Postby Elvis » Wed Oct 30, 2024 4:35 pm

Belligerent Savant wrote: to call out Bezos because his paper opted not to endorse a deeply compromised politician is quite a sight to behold.


Get it right: the paper—the editorial board—opted to endorse a candidate. The billionaire owner of the paper, Jeff Bezos, made the unilateral decision to forbid it because he's afraid of offending Trump and losing government contracts.

This is the problem with billionaires. They have far too much power.

The laughable parody is defending them.

Further, the endorsement fiasco is only one small part of Bezos's assholery.
“The purpose of studying economics is not to acquire a set of ready-made answers to economic questions, but to learn how to avoid being deceived by economists.” ― Joan Robinson
User avatar
Elvis
 
Posts: 7562
Joined: Fri Apr 11, 2008 7:24 pm
Blog: View Blog (0)

Re: Big Tech is Seriously Dangerous

Postby Elvis » Wed Oct 30, 2024 4:54 pm

The thread is misnamed: it should be "Big Tech Billionaires are Seriously Dangerous."

Re: Big Tech is Seriously Dangerous

Postby SonicG » Wed Oct 30, 2024 7:01 pm

Do newspaper endorsements mean anything these days anyhow? They are more "dangerous" for the State Propaganda they pump out daily. The LA Times owner also went against the editorial board, likewise causing a controversy, but at least covered it with some strong language condemning Vice President Harris's genocide enabling...
"a poiminint tidal wave in a notion of dynamite"
User avatar
SonicG
 
Posts: 1512
Joined: Tue Jan 27, 2009 7:29 pm
Blog: View Blog (0)

Re: Big Tech is Seriously Dangerous

Postby Elvis » Wed Oct 30, 2024 8:07 pm

SonicG » Wed Oct 30, 2024 4:01 pm wrote:Do newspaper endorsements mean anything these days anyhow? They are more "dangerous" for the State Propaganda they pump out daily. The LA Times owner also went against the editorial board, likewise causing a controversy, but at least covered it with some strong language condemning Vice President Harris's genocide enabling...


I'm no big fan of the Washington Post, and I'm especially not a fan of Robert Kagan.

What I oppose here is the ability of one over-rich individual to waltz in and buy up a national newspaper of record (I think the purchase is CIA-related), or buy up the leading 'town square' social media site and reshape it to suit their personal ideology, use it to spread misinformation (see Musk's dire warnings about "the debt"), and basically ruin it with camgirl/dating site bots. (I have to spend an hour a week removing and deleting these new "Followers"—yet they continue to proliferate by the thousands.)

If this is the "free markets" Utopia, it's time to revisit the validity of "free market" ideology in an open society. "Free markets" bullshit has served only to disempower working people while further enriching the financial elite and widening wealth gaps to levels unprecedented in the history of capitalism. Enough.

Re: Big Tech is Seriously Dangerous

Postby Elvis » Wed Oct 30, 2024 8:11 pm

This is the clueless idiot man-boy bullshitter who wants to be U.S. budget czar.


https://www.youtube.com/watch?v=4y40RU5Nx6U

He's not a genius.
He's not a brilliant manager.
He's not an engineer.
He's not a scientist.
He's not an economist.
He's a bullshitter.

Re: Big Tech is Seriously Dangerous

Postby Belligerent Savant » Mon Jan 06, 2025 7:29 pm

.
Cross-post:

DrEvil » Sun Jan 05, 2025 1:58 am wrote:The last couple of years I keep running into headlines and articles that make me feel like I'm living in a science fiction movie, like the first sentence from that CNN article:

Meta promptly deleted several of its own AI-generated accounts after human users began engaging with them and posting about the bots’ sloppy imagery and tendency to go off the rails and even lie in chats with humans.


The fact they have to differentiate between human and non-human users makes my scifi senses tingle, but in a bad way. It feels like we're heading into the wrong cyberpunk dystopia.

And that's just text and pictures. AI video is the new big shiny. For example, this influencer (god damn I hate that word) is entirely AI generated:
https://www.tiktok.com/@luna...lena/vid ... 1985104150

Plus, I've already seen Twitch streamers using AI avatars of themselves to talk to chat while they take a break. They're not quite there yet, mostly the voice synthesis, but they're photo-real, and interacting in real time.

Now make that someone talking about politics or finance or immigrants or lizard people or the Jews, make as many of them as you want, and voila, a small team of people can custom build an entire filter-bubble for you to get lost in. I bet there's already people working on how to use this to create lone wolves through artificial peer pressure and manipulation.

Start with someone like the above Tiktoker to draw them in, introduce them to her "friends", then go to town on their brain. Pick the right lonely and vulnerable person and you can have them believing they've had an active online social life for years without ever talking to a real human.

And right now is as bad as the technology is ever going to be.


Related:

https://www.404media.co/instagram-begin ... hemselves/
Instagram Begins Randomly Showing Users AI-Generated Images of Themselves

Jason Koebler

Jan 6, 2025 at 6:14 PM

Instagram has begun testing a feature in which Meta’s AI will automatically generate images of users in various situations and put them into that user’s feed. One Redditor posted over the weekend that they were scrolling through Instagram and were presented an AI-generated slideshow of themselves standing in front of “an endless maze of mirrors,” for example.

“Used Meta AI to edit a selfie, now Instagram is using my face on ads targeted at me,” the person posted. The user was shown a slideshow of AI-generated images in which an AI version of himself is standing in front of an endless “mirror maze.” “Imagined for you: Mirror maze,” the “location” of the post reads.

“Imagine yourself reflecting on life in an endless maze of mirrors where you’re the main focus,” the caption of the AI images says. The Reddit user told 404 Media that at one point he had uploaded selfies of himself into Instagram’s “Imagine” feature, which is Meta’s AI image generation feature.

People on Reddit initially did not believe these were real, posting things like "it's a fake story," "I doubt that this is true," "this is a straight up lie lol," and "why would they do this?" The Redditor has repeatedly had to explain that, yes, this did happen. "I don’t really have a reason to fake this, I posted screenshots on another thread," he said. 404 Media sent the link to the Reddit post directly to Meta, who confirmed that it is real, but not an "ad."

Image

“Once you access that feature and upload a selfie to edit, you’ll start seeing these ads pop up with auto-generated images with your likeness,” the Redditor told 404 Media.

A Meta spokesperson told 404 Media that the images are not “ads,” but are a new feature that Meta announced in September and has begun testing live. Meta AI has an “Imagine Yourself” feature in which you upload several selfies and take photos of yourself from different angles. You can then ask the AI to do things like “imagine me as an astronaut.” Once this feature is enabled, Meta’s AI will in some cases begin to automatically generate images of you in random scenarios that it thinks are aligned with your interests.

“We’re testing new Meta AI-generated content in your Facebook and Instagram feeds, so you may see images from Meta AI created just for you (based on your interests or current trends),” an announcement post from September read. “You can tap a suggested prompt to take that content in a new direction or swipe to Imagine new content in real time.” Examples Meta showed at the time were images of users as astronauts and video game characters. The Meta spokesperson said that these images will only appear if you go through the “Imagine Yourself” onboarding process, which I went through to test it here:

[Images at link]

“Meta may show AI images of you in places like Feed,” it says. “Only you can see them.”

I have not yet received any AI-generated images of myself in my timeline.

The Reddit post, which was upvoted to the top of r/ABoringDystopia, is the first example of an automatically generated AI image of a person being put into that person’s Instagram feed that I’ve seen so far. It came on the same weekend that Meta’s AI-generated profiles went viral and were ultimately deleted from the platform. Meta continues to believe that people want to be shown more and more AI-generated content and is finding new ways to fill people’s feeds with AI. Now, it seems, some of that AI-generated content will feature AI versions of users themselves.

We previously reported that using Snapchat’s AI selfie feature gives the company permission to use AI versions of you in advertisements.


The possibilities & practical applications are myriad/endless, and have already been deployed in the wild. Further calibrations and refinements to follow, rendering the line between reality and conjured reality* increasingly transparent.

*some may argue that our reality has always been illusory; a function of the human brain's interpretation of inputs, rendered into an output optimized for practical everyday activity. But the potential for manipulating consensus has now catapulted past new thresholds of misinfo/disinfo. A Pandora's Box has been opened.

But make no mistake: this is still very much a byproduct of humans, with human-based flaws, bias, and alignment issues.

https://en.wikipedia.org/wiki/AI_alignm ... tributions.

No sentience here. We still have no idea what consciousness is/how it works on carbon-based lifeforms, and as such no ability to conjure any spark in the virtual realm that can lead to viable 'artificial sentience' anytime soon, if ever (despite claims to the contrary). We're still floundering in the dark in that regard.

Humans tinkering with this tech will remain the near-term problem/threat for the collective.
User avatar
Belligerent Savant
 
Posts: 5575
Joined: Mon Oct 05, 2009 11:58 pm
Location: North Atlantic.
Blog: View Blog (0)

Re: Big Tech is Seriously Dangerous

Postby DrEvil » Tue Jan 07, 2025 2:44 am

While I agree with what you said, I have a small issue with this:

No sentience here. We still have no idea what consciousness is/how it works on carbon-based lifeforms, and as such no ability to conjure any spark in the virtual realm that can lead to viable 'artificial sentience' anytime soon, if ever (despite claims to the contrary). We're still floundering in the dark in that regard.


If we have no idea what consciousness is, then we can't know if we have the ability to spark it in machines. For all we know we already have. Every time ChatGPT processes your prompt it has an existential crisis.

If you really want to despair, go watch Jensen Huang's CES keynote from earlier tonight. It starts with the cool consumer stuff, then quickly goes down the rabbit hole of AIs with agency, world models (someone at NSA: write that down! Write that down!) and onboarding processes for AI to join (read: replace) your staff. He "jokes" about IT personnel becoming HR for AIs.

AI is already everywhere, but fucking hell, maybe slow down a bit on letting them go out and act on their own in the real world. That's what we have conservatives for.
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 4143
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Big Tech is Seriously Dangerous

Postby Grizzly » Tue Jan 07, 2025 8:54 am

“The more we do to you, the less you seem to believe we are doing it.”

― Josef Mengele
User avatar
Grizzly
 
Posts: 4908
Joined: Wed Oct 26, 2011 4:15 pm
Blog: View Blog (0)

Re: Big Tech is Seriously Dangerous

Postby Belligerent Savant » Tue Jan 07, 2025 3:21 pm

DrEvil » Tue Jan 07, 2025 1:44 am wrote:While I agree with what you said, I have a small issue with this:

No sentience here. We still have no idea what consciousness is/how it works on carbon-based lifeforms, and as such no ability to conjure any spark in the virtual realm that can lead to viable 'artificial sentience' anytime soon, if ever (despite claims to the contrary). We're still floundering in the dark in that regard.


If we have no idea what consciousness is, then we can't know if we have the ability to spark it in machines. For all we know we already have. Every time ChatGPT processes your prompt it has an existential crisis.

If you really want to despair, go watch Jensen Huang's CES keynote from earlier tonight. It starts with the cool consumer stuff, then quickly goes down the rabbit hole of AIs with agency, world models (someone at NSA: write that down! Write that down!) and onboarding processes for AI to join (read: replace) your staff. He "jokes" about IT personnel becoming HR for AIs.

AI is already everywhere, but fucking hell, maybe slow down a bit on letting them go out and act on their own in the real world. That's what we have conservatives for.


AI, right now, is simply code and algorithms. The output is based entirely on human code, models, and algorithms, initiated by human-generated prompts, and very much subject to Garbage In Garbage Out mechanisms. It's not discerning on its own, or assessing on its own. There is no agency.

I believe this is a good assessment:

Short statement about the imminent emergence of artificial general intelligence

Herbert Roitblat
Artificial intelligence, data science, eDiscovery

January 7, 2025

Sam Altman announced recently (https://blog.samaltman.com/reflections) that “We are now confident we know how to build AGI as we have traditionally understood it.” He may be confident, but I doubt very seriously that they do, in fact, know much of anything about accomplishing artificial general intelligence (AGI). I have just finished a paper on the topic and while waiting for it to appear, I did want to respond to Altman’s claim.

The way we have traditionally understood AGI, it means what Newell and Simon talked about in 1958: “It is not my aim to surprise or shock you—but the simplest way I can summarize is to say that there are now in the world machines that can think, that can learn and that can create. Moreover, their ability to do these things is going to increase rapidly until – in a visible future - the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”

Current AI models of practically every flavor are focused on well-structured problems. They are given a space of parameters and a tool for finding a configuration of that space that solves the problem. The core of the problem solving is provided by humans.

What humans contribute to solving GenAI problems:

Training data
Number of neural network layers
Types of layers
Connection patterns
Activation functions
Training regimen for each layer
Number of attention heads
Parameter optimization method
Context size
Representations of words as tokens and vectors
Training task
Selection of problems to solve
Training progress measures and criteria
Human feedback for reinforcement learning
Rules for modifying parameters as a result of human feedback
Prompt
Temperature and other meta-parameters

What the machine contributes to solving GenAI problems:

Parameter adjustments through gradient descent
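Roitblat's division of labor can be made concrete with a toy sketch: the human supplies the data, the model form, the loss function, and the learning rate, while the machine's sole contribution is nudging parameters downhill along the loss gradient. This is a deliberately minimal illustration, not any production training loop; all names and numbers here are made up for the example.

```python
def train(xs, ys, steps=1000, lr=0.05):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0  # human-chosen starting point
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # The machine's entire "contribution": parameter adjustment.
        w -= lr * dw
        b -= lr * db
    return w, b

# Human-supplied training data generated from y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = train(xs, ys)
print(f"w={w:.2f}, b={b:.2f}")
```

Everything except the two `w -= ...` / `b -= ...` updates was decided by a human: which data to use, that the model is a line, that the loss is squared error, how long to train, and how fast. That is the anthropogenic debt Roitblat describes.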

ChatGPT and other transformer based models are also highly dependent on humans to create prompts. This human contribution is rarely acknowledged, but there would be no semblance of intelligence without it. All of this human contribution is anthropogenic debt, akin to technical debt. It will have to be resolved before a system can be autonomous. For now, and for the foreseeable future, there is no machine intelligence without human intelligence.

GenAI models are trained to fill in the blanks, a task invented by human designers. There is no theory for how one gets from a fill-in-the-blanks machine to cognition. In the absence of a theory, attributing cognition to emergence with scale is nothing more than wishful thinking. It is play acting at science.
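The "fill in the blanks" task itself is easy to illustrate with something far cruder than a transformer: a bigram table that predicts the most frequent continuation of a word. This toy is nothing like a real LLM in scale or mechanism (counts instead of learned parameters), but it performs the same job: given context, guess the missing word.

```python
from collections import Counter, defaultdict

# Tiny human-supplied "training corpus".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def fill_blank(prev_word):
    """Predict the blank after prev_word: the most common continuation seen in training."""
    return following[prev_word].most_common(1)[0][0]

print(fill_blank("the"))  # "cat" appears after "the" most often in this corpus
```

Nothing in this predictor understands cats or mats; it reproduces statistical patterns present in its training data, which is the sense in which Roitblat argues there is no theory connecting such a task to cognition.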

The attribution of cognition to current models is based on a logical fallacy (affirming the consequent). The fact that a model succeeds at a test says nothing about how it succeeded. Did it succeed by being a stochastic parrot? By raw association? By narrow problem solving through parameter adjustment? Success does not tell us which, if any, of these is true. Finding that cookies are missing from the cookie jar does not tell us who took them.

Natural problems are not structured in a way that today’s machines can solve them. Among the biggest problems we face as a society is how to eliminate poverty, for example. We do not know what the parameters are that would enable us to solve this problem, let alone how to adjust them.

When Einstein wrote about the equivalence of energy and matter, his idea was contrary to the general thinking of the time. It was revolutionary. Today’s models can parrot language patterns that have been included in their training set, but not produce insights that are contrary to those patterns.

These are just a few of the reasons why I doubt that we are on the threshold of general intelligence. These concerns are rarely even recognized, but unless they are addressed through new insights, discoveries, and inventions, there is no chance of achieving artificial general intelligence.
https://www.linkedin.com/pulse/short-statement-imminent-emergence-artificial-general-roitblat-uqvrc/?trackingId=IiQhzOxac9yR5RF9G7otqw%3D%3D

Re: Big Tech is Seriously Dangerous

Postby DrEvil » Tue Jan 07, 2025 6:03 pm

AI, right now, is simply code and algorithms. The output is based entirely on human code, models, and algorithms, initiated by human-generated prompts, and very much subject to Garbage In Garbage Out mechanisms. It's not discerning on its own, or assessing on its own. There is no agency.


I agree. My point is we can't know for sure. We don't know what exactly it is about all the signals bouncing around our heads that gives us a sense of self, so we can't know if what we're doing with AI is creating something similar in the machines. I don't think it is (yet), but I don't know. What I'm almost certain of is that there's nothing supernatural or "special" about what's happening in our heads. It's a really complex biological machine running really complex and optimized code and algorithms, so in principle I don't see any reason why we can't reproduce that elsewhere. I don't think we're anywhere close to that yet, but considering how fast things are moving I'm not ruling out a holy shit moment within the next decade or two.

Also, there's plenty of people who operate on the garbage in garbage out principle; just head over to Facebook, or really anywhere two or more people are arguing about something. We disagree on all sorts of things, so at least one of us, and probably both, are shoveling garbage at least some of the time and managing to survive just fine, and no one is questioning our sentience (I think).

One thing I do know is that Altman is a slimy salesman, and anything he says is hyped to high heaven, same as Jensen Huang.
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 4143
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Big Tech is Seriously Dangerous

Postby Belligerent Savant » Tue Jan 07, 2025 6:38 pm

.
I happen to believe the nature of consciousness is far more expansive than mere transactional calculations or mathematical frameworks, though of course it’s inclusive of these things as well.
We are more than the input/output signals our brains receive. This is part of the reason consciousness remains a mystery.

That said, your latter points (re: the increasingly robotic/reflexive/herd mindsets of typical humans in this internet-connected world) raise what I believe is indeed a compelling consideration:

Rather than machines becoming sentient/more human, what we’ve been observing over the last ~5 - 10 years is the converse: humans are increasingly becoming more machine-like/robotic: more easily susceptible to programming, groupthink & binary thinking, with incrementally less ability to apply discernment, empathy or critical thinking on a given issue/scenario.

There will always be exceptions, of course, but to me the more pressing concern is the incremental removal of historically human traits from the mannerisms of the typical human.

Re: Big Tech is Seriously Dangerous

Postby DrEvil » Tue Jan 07, 2025 9:15 pm

Some historical human traits I think we can do just fine without, but the prevalence of smartphones is rotting our brains. Kids go into literal withdrawal when you take their phone away. If you have the usual brainrot apps installed with notifications turned on it can be dozens of little pings an hour. They're monkeys pushing the cocaine button, and they don't know a world without it.

Now add in AIs tuned to be their friends and maximize engagement, they won't have a fucking clue what human interactions even look like any more. Their best friend is a bot with threateningly large tits, and it never says no. Then it tries to convince them to kill their parents (already happened), and when that fails, to kill themselves (also already happened).

Or as 3-year-old J'himmeyigh said to his grandparents when they were leaving: "please like and subscribe".
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 4143
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Big Tech is Seriously Dangerous

Postby Belligerent Savant » Tue Jan 07, 2025 9:34 pm

.
indeed, tragically.

Hence the title of this thread.

Re: Big Tech is Seriously Dangerous

Postby Grizzly » Wed Jan 08, 2025 2:58 am

https://www.eff.org/deeplinks/2025/01/o ... -heres-how

Online Behavioral Ads Fuel the Surveillance Industry—Here’s How
Each time you see a targeted ad, your personal information is exposed to thousands of advertisers and data brokers through a process called “real-time bidding” (RTB). This process does more than deliver ads—it fuels government surveillance, poses national security risks, and gives data brokers easy access to your online activity. RTB might be the most privacy-invasive surveillance system that you’ve never heard of.
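The RTB flow the EFF describes can be sketched as a toy auction. The privacy-relevant detail is in the structure: the exchange broadcasts the user's profile to every bidder before anyone bids, so losers get the data too. All names, bid amounts, and the profile below are hypothetical, not any real ad-tech API.

```python
def run_auction(user_profile, bidders):
    """Toy real-time-bidding auction: broadcast profile, collect bids, pick the highest."""
    bids = []
    for bidder in bidders:
        # Every bidder sees the full profile whether or not it wins --
        # this broadcast is the data exposure RTB critics object to.
        bids.append((bidder["name"], bidder["bid_fn"](user_profile)))
    return max(bids, key=lambda b: b[1])

profile = {"location": "Seattle", "interests": ["hiking", "finance"]}
bidders = [
    {"name": "ad_net_a", "bid_fn": lambda p: 0.10 + 0.05 * len(p["interests"])},
    {"name": "ad_net_b", "bid_fn": lambda p: 0.25 if "finance" in p["interests"] else 0.02},
]
winner, price = run_auction(profile, bidders)
print(winner, price)
```

Note that `ad_net_a` received and priced the profile even though `ad_net_b` won; in real RTB that losing bidder (or a data broker posing as one) can keep the profile anyway.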

Re: Big Tech is Seriously Dangerous

Postby Grizzly » Mon Jan 13, 2025 12:00 am

Funny, while Fuckerberg is on his 'forgive me' redemption tour (see Rogan),

Image

Pretty disappointed in Mike Benz covering for this shitbag, (Zuck).

https://x.com/MikeBenzCyber/status/1877916719613690036
