Realtime map of cyber attacks


Re: Realtime map of cyber attacks

Postby coffin_dodger » Thu Jul 09, 2015 4:07 am

The taloned marsupial said:
Really, it's a compelling argument for functional AI, isn't it? "We turned the robot broker on and he immediately concluded the entire market was over-valued, and it would be easy to scale enough paper/derivative leverage to force other players to agree."


I have to assume that your robot broker AI has the ability to process and reason beyond its primal remit, as you have it capable of reaching a conclusion. To conclude requires thinking. Thinking is haphazard and unpredictable. Thinking generates new thought, unimagined and unimaginable a moment ago. Unless your AI is created with the same cognitive dissonance available to humans (which completely defeats the point of AI, i.e. I know what I'm about to do is 'wrong', but so what? - making it just another mind amongst 7 billion) then it's nothing more than a perceived-as-sophisticated weapon for use by one side with the upper hand. Bow down before the wisdom of the AI.

What makes you think it would follow the consensus of the minority and conclude the market is over-valued?
User avatar
coffin_dodger
 
Posts: 2216
Joined: Thu Jun 09, 2011 6:05 am
Location: UK
Blog: View Blog (14)

Re: Realtime map of cyber attacks

Postby Wombaticus Rex » Thu Jul 09, 2015 10:59 am

coffin_dodger » Thu Jul 09, 2015 3:07 am wrote:I have to assume that your robot broker AI has the ability to process and reason beyond its primal remit, as you have it capable of reaching a conclusion. To conclude requires thinking.


Does your calculator think? Markets are both behavioural and aesthetic beasts in practice, but HFT algos have a strictly mathematical periscope.

What makes you think it would follow the consensus of the minority and conclude the market is over-valued?


Because right now, we're talking in social terms about mathematical processes. Anglo coverage of the stock market is seldom (even remotely) instructive, and mostly obfuscates. We hear about John Paulson "making bets against" collateralized debt obligation assets -- this creates mental images which are the opposite of helpful! Casinos are extremely informal, most financial markets are entirely contractual. (Both run on vast quantities of alcohol, however - parallels do exist!)

HFT algos don't need to form macro-opinions, and indeed, don't. Those kinds of heuristics, which human beings are so fond of, are actually counterproductive in terms of both system requirements and actual results. System requirements, i.e., you'd need to build your calculator some frontal lobes and long-term memory to hold all those (mostly contradictory!) notions. Actual results, i.e., the market changes so quickly that What You Learned Yesterday is just noise compared to the signal of market price discovery.

Any attempt to program "thinking" algos would be entirely wasted effort when the name of the game is executing the smallest possible code base on the fastest possible connections. You're losing both money and time; no upside.

Indeed, for the most part, it would be a mistake to say HFT algos even "see" the market as a whole. Sure, they do everything at speeds humans can't even grasp conceptually, but they're still doing linear operations -- one deal at one time. There is only the bid!

So, why would an algorithm side with a minority of market analysts? Because those primates are completely invisible to it, and it has no capacity whatsoever to even quantize those concepts. To the algo, their opinions don't exist, and indeed, nothing any market pundit has ever said in human history exists as an input to HFT algos in 2015. The only "reasoning" involved is a constant feedback loop of price discovery & arbitrage exploits.
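
Just to make that concrete: here's a rough Python sketch of what that feedback loop amounts to. Everything in it -- the Venue class, its quote and order methods, the numbers -- is made up for illustration; real HFT systems talk to exchange feeds and order gateways, not toy objects like these.

```python
# A minimal sketch of the stateless "price discovery & arbitrage" loop described
# above. The Venue class and its methods are invented for illustration; nothing
# here resembles a real exchange API.

class Venue:
    def __init__(self, bid, ask):
        self.bid, self.ask = bid, ask
        self.fills = []

    def best_bid(self): return self.bid
    def best_ask(self): return self.ask
    def buy(self, size, price):  self.fills.append(("buy", size, price))
    def sell(self, size, price): self.fills.append(("sell", size, price))


def arbitrage_tick(a, b, size=100, min_edge=0.01):
    """One pass: discover prices, exploit any gap, retain nothing."""
    if b.best_bid() - a.best_ask() > min_edge:    # buy cheap on A, sell dear on B
        a.buy(size, a.best_ask())
        b.sell(size, b.best_bid())
    elif a.best_bid() - b.best_ask() > min_edge:  # or the other way around
        b.buy(size, b.best_ask())
        a.sell(size, a.best_bid())
    # No gap? Do nothing. There is no market "view" to update and nothing to remember.


venue_a, venue_b = Venue(bid=100.02, ask=100.04), Venue(bid=100.07, ask=100.09)
arbitrage_tick(venue_a, venue_b)   # in production, repeated millions of times a second
print(venue_a.fills, venue_b.fills)
# -> [('buy', 100, 100.04)] [('sell', 100, 100.07)]
```

No opinions, no memory: the loop either finds a gap and trades it, or it doesn't.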
User avatar
Wombaticus Rex
 
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Realtime map of cyber attacks

Postby coffin_dodger » Thu Jul 09, 2015 11:14 am

Wombat:
Really, it's a compelling argument for functional AI, isn't it? "We turned the robot broker on and he immediately concluded the entire market was over-valued, and it would be easy to scale enough paper/derivative leverage to force other players to agree."

Does your calculator think? Markets are both behavioural and aesthetic beasts in practice, but HFT algos have a strictly mathematical periscope.


So which is it, this AI of yours - a sophisticated calculator, or something able to conclude? I'm confused.
User avatar
coffin_dodger
 
Posts: 2216
Joined: Thu Jun 09, 2011 6:05 am
Location: UK
Blog: View Blog (14)

Re: Realtime map of cyber attacks

Postby Wombaticus Rex » Thu Jul 09, 2015 11:22 am

coffin_dodger » Thu Jul 09, 2015 10:14 am wrote:
So which is it, this AI of yours - a sophisticated calculator, or something able to conclude? I'm confused.


You want me to resolve the hard problem of consciousness? Uh, no. Totally agnostic, the philosophy behind it never interested me. Check out Minsky's Society of Mind for an engaging and readable introduction to modeling cognition.

That aside? You're inflicting your confusion on yourself, I think.

Again, opinions are social. Opinions are primate tech for wetware! The "conclusion" of an HFT algo is the order(s) being closed out, that's it. No sentences involved. When a human being makes a stock trade, there's a whole line of reasoning -- conveniently, often explicated in English -- you can access after the fact, you can talk it through with the primate who executed the trade.

The HFT algo is a stone wall painted black; it doesn't need opinions. It's every bit as sophisticated and nuanced as a herpes virus: is there a vulnerability to exploit? IF YES: exploit vulnerability. Detect, exploit, repeat. 500,000 times per second.

Viewing that from a human time scale, it sure looks like a comprehensive opinion is being expressed, but we're talking about a dumb probe of vast computational power being dispatched into an environment and returning data -- that's it. Ground penetrating radar doesn't have any opinions on the readout it gives us.
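
If it helps, here's a throwaway sketch of what a "conclusion" amounts to in that picture: a pure function from one market event to at most one order, with nothing carried over between events. The Quote and Order shapes and the 0.02 spread threshold are invented for illustration only.

```python
# Throwaway sketch: the "conclusion" is the transaction itself. One event in, at
# most one order out, no state kept between calls. Quote/Order are made-up shapes.

from collections import namedtuple
from typing import Optional

Quote = namedtuple("Quote", "symbol bid ask")
Order = namedtuple("Order", "side symbol size price")

def react(quote: Quote, max_spread: float = 0.02) -> Optional[Order]:
    """Detect one narrow condition; if it holds, emit an order; otherwise, nothing."""
    if quote.ask - quote.bid > max_spread:        # a wide spread is the "vulnerability"
        return Order("sell", quote.symbol, 100, round(quote.ask - 0.01, 2))  # quote inside it
    return None                                   # no opinion formed, nothing retained

for q in [Quote("XYZ", 9.98, 10.05), Quote("XYZ", 10.00, 10.01)]:
    print(react(q))
# -> Order(side='sell', symbol='XYZ', size=100, price=10.04)
# -> None
```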
User avatar
Wombaticus Rex
 
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Realtime map of cyber attacks

Postby coffin_dodger » Thu Jul 09, 2015 11:37 am

I'm sorry if I'm belabouring the point, but I'm pretty sure you're reinforcing my belief that cognitive AI will never exist.

How would your robot-trader AI ever be able to conclude that the market is overpriced, when it wouldn't have been programmed to look at the data in that way?
User avatar
coffin_dodger
 
Posts: 2216
Joined: Thu Jun 09, 2011 6:05 am
Location: UK
Blog: View Blog (14)

Re: Realtime map of cyber attacks

Postby Wombaticus Rex » Thu Jul 09, 2015 11:55 am

coffin_dodger » Thu Jul 09, 2015 10:37 am wrote:I'm sorry if I'm belabouring the point, but I'm pretty sure you're reinforcing my belief that cognitive AI will never exist.


Okay: picture our ground penetrating radar being placed on wheels and coupled with a mechanism for marking features it detects. You turn it on, let it run its course through some customer's backyard, and 40 minutes later you've got markers laid down for the gas line, electric line, septic tank and that fallout shelter they never knew they had. Now: did the machine make conclusions about the yard? If so, where in the feedback loop were those decisions actually made?

I hope that illustrates the futility of anthropomorphizing these questions.

coffin_dodger » Thu Jul 09, 2015 10:37 am wrote:How would your robot-trader AI ever be able to conclude that the market is overpriced, when it wouldn't have been programmed to look at the data in that way?


The only way HFT algorithms "conclude" anything is by placing bids for testing and executing orders when conditions are right. Their "conclusions" are transactions, not opinions.

That said, you're right that they've "been programmed to look at the data that way" -- aka, they're programmed to make money by finding and exploiting vulnerabilities human analysts lack the attention span and time preference to detect.

Which, is rather the whole point...
User avatar
Wombaticus Rex
 
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Realtime map of cyber attacks

Postby coffin_dodger » Thu Jul 09, 2015 12:09 pm

Wombat:
That said, you're right that they've "been programmed to look at the data that way" -- aka, they're programmed to make money by finding and exploiting vulnerabilities human analysts lack the attention span and time preference to detect.


Mucho rhetorical bloviation about HFT and ground-radar, apart - :rofl2 - you haven't really answered my question, have you Wombat? - you sly old dog. :)

* sly old dog - English expression of affection for a scoundrel or somesuch
User avatar
coffin_dodger
 
Posts: 2216
Joined: Thu Jun 09, 2011 6:05 am
Location: UK
Blog: View Blog (14)

Re: Realtime map of cyber attacks

Postby Wombaticus Rex » Thu Jul 09, 2015 1:01 pm

Quite so; I think your question is the result of a category error and doesn't allow for a coherent response.

I am attempting to explain that through the lens of how HFT trading actually works.

I apologize if this comes off as rhetorical bloviation, most especially since I am actually typing in order to communicate with you.

Price discovery is discrete, the Planck length of market measurements and financial phenomena. HFT algos are participating in the market; their reality, their existence, is operational.

(I would never dare to imply that we human beings shuffle through our days in precisely the same automata fashion, of course -- surely, our worries, obsessions and fantasies are all vitally important signals, and not completely superfluous noise gnawing at our poor confused bodies until we die. That would be unpleasant.)
User avatar
Wombaticus Rex
 
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Realtime map of cyber attacks

Postby coffin_dodger » Thu Jul 09, 2015 1:36 pm

Incidentally, I don't see this as a 'winnable' discussion.

I'm just interested to understand the human thought process behind a statement that can turn a specifically-programmed robot-trader AI into an entity capable of concluding an end result that would be totally contradictory to its initial intentions.

'Bat said:
Quite so; I think your question is the result of a category error and doesn't allow for a coherent response.


In what way is my question a category error?

Really, it's a compelling argument for functional AI, isn't it? "We turned the robot broker on and he immediately concluded the entire market was over-valued, and it would be easy to scale enough paper/derivative leverage to force other players to agree."
User avatar
coffin_dodger
 
Posts: 2216
Joined: Thu Jun 09, 2011 6:05 am
Location: UK
Blog: View Blog (14)

Re: Realtime map of cyber attacks

Postby Wombaticus Rex » Thu Jul 09, 2015 2:15 pm

coffin_dodger » Thu Jul 09, 2015 12:36 pm wrote:I'm just interested to understand the human thought process behind a statement that can turn a specifically-programmed robot-trader AI into an entity capable of concluding an end result that would be totally contradictory to its initial intentions.


Because there is no contradiction. Unintended consequences are not rare in any system. (How do attempts at Socialist Utopia turn into acres of corpses, while bitter Anglo-Saxon Social Darwinists wound up developing and exporting a global life support system? cf. "Paperclip Maximizers.")

There is no contradiction. The end result is ALPHA - asymmetric, zero-sum profits. That's what it was specifically programmed to do. That is what almost every participant in every market, water- or code-based, is in the market to get. Goldman Sachs milks the marks rather than bleeding them dry, because they're more sophisticated predators than trading software.

It's precisely because HFT algorithms have no sensitivity to little "externalities" (like whether or not they're crashing an entire market, burning down the other end of their own portfolio, or doing something with bad PR consequences) that they are so dangerous. They're essentially autonomous weapons.

Pass the popcorn.
User avatar
Wombaticus Rex
 
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Realtime map of cyber attacks

Postby Harvey » Fri Jul 10, 2015 12:42 pm

I may be stoned, but... Perhaps there's already an information SETI but if not wouldn't such a thing be interesting? First task might be identifying the characteristics of life. Who imagines that the first instance of digital life would be planned or even sentient? Monitoring for unexpected 'behaviour' in complex systems might be the first place to look. Wouldn't sentience be a later emergent phenomenon? And how much later? Assuming faster evolution and after some critical point, perhaps even the emergence of multiple competing sentient life forms. Why not?
And while we spoke of many things, fools and kings
This he said to me
"The greatest thing
You'll ever learn
Is just to love
And be loved
In return"


Eden Ahbez
User avatar
Harvey
 
Posts: 4202
Joined: Mon May 09, 2011 4:49 am
Blog: View Blog (20)

Re: Realtime map of cyber attacks

Postby DrEvil » Fri Jul 10, 2015 2:53 pm

IT'S ALIVE! IT'S ALIVE! Google's secretive Omega tech just like LIVING thing

'Biological' signals ripple through massive cluster management monster

One of Google's most advanced data center systems behaves more like a living thing than a tightly controlled provisioning system. This has huge implications for how large clusters of IT resources are going to be managed in the future.

"Emergent" behaviors have been appearing in prototypes of Google's Omega cluster management and application scheduling technology since its inception, and similar behaviors are regularly glimpsed in its "Borg" predecessor, sources familiar with the matter confirmed to The Register.

Emergence is a property of large distributed systems. It can lead to unforeseen behavior arising out of sufficiently large groups of basic entities.

Just as biology emerges from the laws of chemistry; ants give rise to ant colonies; and intersections and traffic lights can bring about cascading traffic jams, so too do the ricocheting complications of vast fields of computers allow data centers to take on a life of their own.

The kind of emergent traits Google's Omega system displays means that the placement and prioritization of some workloads is not entirely predictable by Googlers. And that's a good thing.

"Systems at a certain complexity start demonstrating emergent behavior, and it can be hard to know what to do with it," says Google's cloud chief Peter Magnusson. "When you build these systems you get emergent behavior."

By "emergent behavior", Magnusson is talking about the sometimes unexpected ways in which Omega can provision compute clusters, and how this leads to curious behaviors in the system. The reason this chaos occurs is due to the 10,000-server-plus cluster scale it runs at, and the shared state, optimistic concurrency architecture it uses.

Omega was created to help Google efficiently parcel out resources to its numerous applications. It is unclear whether it has been fully rolled out, but we know that Google is devoting resources to its development and has tested it against very large Google cluster traces to assess its performance.

Omega will handle the management and scheduling of various tasks and places apps onto the best infrastructure for their needs in the time available.

It does this by letting Google developers select a "priority" for an application according to the needs of the job, the expected runtime, its urgency, and uptime requirements. Jobs relating to Google search and ad platforms will get high priorities while batch computing jobs may get lower ones, and so on.

Omega nets together all the computers in a cluster and exposes this sea of hardware to the application layer, where an Omega sub-system arbitrates the priorities of innumerable tasks, then neatly places them on one, ten, a hundred, or even more worker nodes.
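
To make that concrete, here's a toy sketch of priority-band placement in Python. The Job and Node structures, the CPU numbers, and the greedy loop are all invented for illustration; Omega's real data model and scheduling logic are, of course, far more involved.

```python
# Toy sketch of priority-band scheduling: jobs declare a priority (serving high,
# batch low) and a greedy pass packs them onto whichever nodes have room.
# Invented structures; not Google's actual code or data model.

from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    priority: int        # higher = more important (e.g. search/ads > batch)
    cpus: float

@dataclass
class Node:
    name: str
    free_cpus: float
    placed: list = field(default_factory=list)

def schedule(jobs, nodes):
    """Place high-priority jobs first; lower bands take whatever capacity is left."""
    for job in sorted(jobs, key=lambda j: j.priority, reverse=True):
        target = next((n for n in nodes if n.free_cpus >= job.cpus), None)
        if target is None:
            print(f"pending: {job.name} (no capacity)")
            continue
        target.free_cpus -= job.cpus
        target.placed.append(job.name)

nodes = [Node("node-1", free_cpus=8), Node("node-2", free_cpus=8)]
jobs = [Job("batch-index", priority=1, cpus=6),
        Job("web-search", priority=10, cpus=6),
        Job("ads-serving", priority=9, cpus=4)]
schedule(jobs, nodes)
for n in nodes:
    print(n.name, n.placed, "free cpus:", n.free_cpus)
```

In this toy run the batch job ends up pending while the serving jobs get placed -- a crude stand-in for the priority bands described in the article.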

"You're on this unstable quicksand all the time and just have to deal with it," Google senior fellow Jeff Dean told The Reg. "Things are changing out from under you fairly readily as the scheduler decides to schedule things or some other guy's job decides to do some more work."

Some of these jobs will have latency requirements, and others could be scattered over larger collections of computers. Given the thousands of tasks Google's systems can run, and the interconnected nature of each individual application, this intricacy breeds a degree of unexpectedness.

"There's a lot of complexity involved, and one of the things that distinguishes companies like Google is the degree to which these kinds of issues are handled," said John Wilkes, who is one of the people at Google tasked with building Omega. "Our goal is to provide predictable behaviors to our users in the face of a huge amount of complexity, changing loads, large scale, failures, and so on."

The efficiencies brought about by Omega mean Google can avoid building an entirely new data center, saving it scads and scads of money and engineering time, Wilkes told former Reg man Cade Metz earlier this year.

"Strict enforcement of [cluster-wide] behaviors can be achieved with centralized control, but it is also possible to rely on emergent behaviors to approximate the desired behavior," Google wrote in an academic paper [PDF] that evaluated the performance of Omega against other systems.

By handing off job scheduling and management to Omega and Borg, Google has figured out a way to get the best performance out of its data centers, but this comes with the cost of increased randomness at scale.

"What if the number of workers could be chosen automatically if additional resources were available, so that jobs could complete sooner?" Google wrote in the paper. "Our specialized [Omega] MapReduce scheduler does just this by opportunistically using idle cluster resources to speed up MapReduce jobs. It observes the overall resource utilization in the cluster, predicts the benefits of scaling up current and pending MapReduce jobs, and apportions some fraction of the unused resources across those jobs according to some policy."

This sort of fuzzy chaos represents the new normal for massive infrastructure systems. Just as with other scale-out technologies – such as Hadoop, NoSQL databases, and large machine-learning applications – Google is leading the way in coming up against these problems and having to deal with them.

First in the firing line

Omega matters because soon after Google runs into problems, they trickle down to Facebook, Twitter, eBay, Amazon, and others, and then into general businesses. Google's design approaches tend to crop up in subsequent systems, either through direct influence or independent development.

Omega's predecessor also behaved strangely, Sam Schillace, VP of engineering at Box and former Googler, recalled.

"Borg had its sharp edges but was a very nice service," he told us. "You run a job in Borg at a certain priority level. There's a low band [where] anybody can use as much as they want," he explained, then said there's a production band which has a higher workload priority.

"Too much production band stuff will just fight with each other. You can get very unstable behavior. It's very strange – it behaves like biological systems from time to time," he says. "We'll probably wind up moving in some of those directions – as you get larger you need to get into it."

Though Omega is obscured from end users of Google's myriad services, the company does have plans to use some of its capabilities to deliver new types of cloud services, Magnusson confirmed. The company could use the system as the foundation of spot markets for virtual machines in its Compute Engine cloud, he said.

"Spot markets for VMs is a flavor of trying to adopt that," he said. "To adopt that moving forward [we might] use SLA bin packing. If you have some compute jobs that you don't really care exactly what is done – don't care about losing one percent of the results – that's a fundamentally different compute job. This translates into very different operational requirements and stacks."

Google wants to "move forward in a way so you can represent that to the developer," he said, without giving a date.

Omega's unpredictability is a strength for effectively portioning out workloads, and the chaos that resides within it comes less from its specific methodology, and perhaps more from the way that at scale, in all things, strange behaviors occur – a fact that is both encouraging, and in this hack's mind, humbling.

"When you have multiple constituencies attempting the same goal, you end up with unexpected behaviors," said JR Rivers, the chief of Cumulus Networks and a former Googler. "I would argue that [Omega's unpredictability is] neither the outcome of a large system nor specific to a monolithic stack, but rather the law of unintended consequences."

A mind of its own? It seems that way. Just ask open-source Mesos

Already, researchers at the University of California at Berkeley have taken tips from Google to create their own variant called Apache Mesos, which is an open-source Google Borg clone running at large web properties such as Twitter and Airbnb.

However, Mesos is also exhibiting strange behaviors.

"Depending on a combination of things like weights and priorities there's a potential reallocation of resources across and around these jobs that has a compounding affect that can exaggerate these non-determinisms," said Benjamin Hindman, VP of Apache Mesos.

"For some jobs that are good at dealing with these non-determinisms [Omega's behavior] is totally fine. For some of these jobs it can mean much decreased latency to finish."

As stated, emergence leaps out of scale. So, while some engineers might like to be given a completely deterministic system, this may soon prove to be impossible for sufficiently large data centers.

Instead, applications will need to be built with all the reliability features that big business needs – such as transaction guarantees, distributed locking, and coherence – but must be able to be run in a sufficiently distributed manner on systems like Mesos and Borg that can tolerate failures without disrupting overall reliability.

"There's two directions to go out here - one is to go out to the system and try and eliminate the non-determinism, the other is tell the software there's inherent non-determinism and program around that," Hindman said.

"While I'd love to tell someone 'your interface is a completely deterministic user interface' oftentimes the cost of doing that is so prohibitive you couldn't do it. You might be able to do something like that for a very particular type or class of apps [but] if you do it for one class of app it could have really bad effects on one other class of app."

All applications need to be built to sustain certain failures or slowdowns or obscure latency scenarios, and not fail. Some companies are already doing this, such as Salesforce with its Keystone system.
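
One common shape that "program around it" takes, sketched here in Python under invented names: assume any remote call can stall or fail, and wrap it in bounded retries with backoff instead of treating a single hiccup as fatal. The flaky_lookup function stands in for whatever RPC a real service would make; Keystone and the other systems mentioned here do considerably more than this.

```python
# Minimal sketch of programming around non-determinism: bounded retries with
# exponential backoff around a call that sometimes times out. Illustrative only.

import random
import time

def with_retries(fn, attempts=4, base_delay=0.05):
    """Retry a flaky operation a few times, backing off between attempts."""
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise                                   # out of patience; surface it
            time.sleep(base_delay * 2 ** attempt)       # 0.05s, 0.1s, 0.2s, ...

def flaky_lookup():
    # Pretend roughly 30% of calls hit a slow or rescheduled replica.
    if random.random() < 0.3:
        raise TimeoutError("replica moved or overloaded")
    return "value"

print(with_retries(flaky_lookup))
```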

The job of a system like Omega, or Borg, or Mesos, or even the revamped MapReduce structure of the YARN resource negotiator in Hadoop version 2, is to hide as much of this as possible from the developer straddling the stack. But some programmers will notice when they deploy it at sufficient scale.

"We've had a lot of experience running YARN at scale now," said Arun Murthy, the founder and architect of Hadoop specialist Hortonworks. "YARN cannot guarantee at scale. We're talking about running a million-plus jobs per day – at that point for a given job you might see variation."

This variation could be the placement of replicas for certain jobs, he said. "Today you might get resources in host 1 and tomorrow in host 82."

By exposing some level of non-determinism to the developer, YARN can give assurances it will make sensible use of compute resources at scale, but on the fringes of sufficiently large clusters, weird things will happen, he admitted.

"It's not an exact science," he says. "What you really need is at very low cost to the end user good performance in the aggregate."

How we learned to stop worrying and embrace chaos

The unpredictable behavior that systems such as Borg, Omega, Mesos, and YARN can display is a direct result of the number of components within them that all need to jostle for attention.

"My strong belief is that these [emergent properties] manifest in interesting ways in each system as you scale up – I mean, really scale up to 5,000-plus nodes," said Arun Murthy of Hortonworks.

This element of randomness has roots in how we've built low-level components of infrastructure systems in the past.

"There's an emergent behavior that comes out," said Hindman of the Apache Mesos project. "There's all sorts of reasons for that. When it gets to large scale there's a combination of the fact that machine failures now at a large scale can change the property of the job whereas at the smaller scale there wasn't probability of machine failures as much, the second one is there's a lot of other non-determinism in and around the job."

In the past, similar behaviors have been seen in the way garbage collectors work in Java virtual machines, he said. "All of a sudden now you'll get weird things going on like things in the JVM will make those weird behaviors develop... a lot of this stuff starts to creep up at larger scale."

Hindman finds another example in the behavior of any highly concurrent parallel system with numerous cores running hundreds of threads. "You'd see a lot of interestingly similar behaviors. Just based on the Linux thread scheduler, the I/O thread scheduler these types of systems often have a lot of the same non-determinism issues but it's compounded because we have many, many layers of this."

Because systems such as YARN, Omega, Borg, Mesos, and so on, are designed to run thousands and thousands of tasks with vast amounts of network chatter, I/O events, and running apps across time periods that vary from milliseconds to months, the chance of a level of this underlying randomness becoming exposed and having a knock-on effect on high-level tasks is much, much higher.

Over the long term, approaches like this will make widely deployed intricate tangles of software much more reliable, because it will force developers to design their apps to effectively deal with the shifting quicksand-like hardware pools that their code lives on top of. By programming applications to be able to deal with failures at this scale, software will become more like biological systems with the redundancy and resiliency that implies.

It reminds us of what Urs Hölzle, Google's senior director of technical infrastructure, remarked a couple of years ago: "At scale everything breaks no matter what you do and you have to deal reasonably cleanly with that and try to hide it from the people actually using your system."

With schedulers such as Borg and Omega, and community contributions from Mesos or YARN, the world is waking up to the problems of scale.

Instead of fighting these non-determinisms and rigidly dictating the behavior of distributed systems, the community has created a fleet of tools to coerce this randomness into some semblance of order, and in doing so has figured out a way to turn the randomness and confusion that lurks deep within any large sophisticated data center from a barely seen cloud-downing beast into an asset that forces apps to be stronger, healthier, and more productive.


http://www.theregister.co.uk/2013/11/04 ... ud/?page=1
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 4158
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Realtime map of cyber attacks

Postby coffin_dodger » Tue Aug 04, 2015 3:59 pm

Dr Evil, may I ask a question - what is it about the emergence of AI that excites you so? i.e. what do you predict it's going to do for us?
User avatar
coffin_dodger
 
Posts: 2216
Joined: Thu Jun 09, 2011 6:05 am
Location: UK
Blog: View Blog (14)
