Meshnet - Creating A Redundant Decentralized Internet

Moderators: Elvis, DrVolin, Jeff

Meshnet - Creating A Redundant Decentralized Internet

Postby General Patton » Mon Oct 15, 2012 8:36 pm

My experience on the darknet is limited, but most of the people on the Tor onion sites seemed to be crazy as fuck; Alex Jones is tame by comparison.

There isn't much of value beyond the black market; the most resilient site so far has been Silk Road:

http://arstechnica.com/tech-policy/2012/08/study-estimates-2-million-a-month-in-bitcoin-drug-sales/
Silk Road sellers have collectively had around $1.9 million of sales per month in recent months. Almost 1,400 sellers have participated in the marketplace, and they have collectively earned positive ratings from 97.8 percent of buyers. And the service is growing, with Silk Road's estimated commission revenue roughly doubling between March and July of this year.


$1.9 million a month shows you that this market is still in its infancy compared to its real-world equivalent, which nets hundreds of billions per year in profits within a GDP in the tens of trillions. The same is true for most of the pseudo stock exchanges and bitcoin mining operations: the economy is tiny compared to real-world activity. Software piracy is most widely practiced in countries where people can't afford the software anyway:
http://www.nationmaster.com/graph/cri_s ... iracy-rate
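To put the quoted figures in perspective, here is a back-of-the-envelope comparison in Python. The $1.9M/month number comes from the study quoted above; the ~$300B/year figure for the offline drug trade is an assumed round number standing in for "hundreds of billions per year", used only to show the order of magnitude.

```python
# Rough scale comparison. $1.9M/month is from the study quoted above;
# the $300B/year offline figure is an assumed round number, used only
# to illustrate the order of magnitude, not a sourced estimate.
silk_road_monthly = 1.9e6
silk_road_yearly = silk_road_monthly * 12      # about $22.8M per year

offline_market_yearly = 300e9                  # assumption, for scale
share = silk_road_yearly / offline_market_yearly

print(f"Silk Road, annualized: ${silk_road_yearly / 1e6:.1f}M")
print(f"Share of assumed offline market: {share:.4%}")
```

On these assumptions the online market is under a hundredth of a percent of its offline counterpart, which is the "infancy" point being made here.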

So what if, in a magical fantasy land, the government decides to shut down or dramatically censor the internet because of this tiny but growing sliver of black market activity?*

Beyond TOR, there are other plans to decentralize and ensure anonymity in communications:
https://projectmeshnet.org/

https://wiki.projectmeshnet.org/Getting_Started
In order to understand how cjdns works, it is important to understand how the existing internet works. When you send a packet, at each "intersection in the road" the router reads the address on the packet and decides which turn it should take. In the cjdns net, a packet goes to a router and the router labels the packet with directions to a router which will be able to best handle it. That is, a router which is nearby in physical space and has an address which is numerically close to the destination address of the packet. The directions which are added to the packet allow it to go through a number of routers without much handling; they just read the label and bounce the packet wherever the next bits in the label tell them to. Routers have a responsibility to "keep in touch" with other routers that are numerically close to their address and also routers which are physically close to them.
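The "numerically close" forwarding the quote describes can be sketched in a few lines of Python. This is a toy: real cjdns runs a Kademlia-style DHT over IPv6 addresses and encodes the hops into a compact switch label, whereas here the addresses are small integers, the metric is XOR distance, and the topology is hand-made, so every detail below is an illustrative assumption.

```python
# Toy sketch of greedy "numerically closest neighbor" forwarding.
# Real cjdns uses a Kademlia-style DHT over IPv6 addresses and compact
# switch labels; the integer addresses and topology here are made up.

def xor_distance(a, b):
    """Kademlia-style closeness metric between two numeric addresses."""
    return a ^ b

def build_route(topology, start, dest):
    """Greedily hop to whichever neighbor is numerically closest to dest,
    recording the hops -- loosely analogous to the label of directions a
    cjdns router attaches to a packet."""
    route, current = [start], start
    while current != dest:
        nxt = min(topology[current], key=lambda n: xor_distance(n, dest))
        if xor_distance(nxt, dest) >= xor_distance(current, dest):
            break  # no neighbor is closer; greedy routing is stuck
        route.append(nxt)
        current = nxt
    return route

# Tiny mesh: node -> set of directly linked neighbors.
topology = {
    1: {2, 8}, 2: {1, 3}, 3: {2, 7},
    7: {3, 8, 15}, 8: {1, 7, 15}, 15: {7, 8},
}
print(build_route(topology, 1, 15))
print(build_route(topology, 2, 15))
```

Each router only needs to know its own neighbors, which is what lets a mesh route without any central map of the network.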




https://commotionwireless.net/
Should Commotion be used in a particular location?
It is expected that the mesh networks will grow, shrink, and move as necessary and according to available resources.

Will this project be backed by a satellite Internet service?
A local mesh network, as used by Commotion, may be supported by any available traditional Internet connection, up to and including satellite service.

If we have access to this service will we still need to use a locally available Internet Service for specific purposes such as Internet purchases or payments within the country?
It would certainly be possible to set up local services on a Commotion-based network, allowing citizens and regional visitors to communicate and advertise effectively within the bounds of the network. However, any Internet-based services, such as credit card verification, will still require a connection to the Internet. The Commotion network can be used to more effectively distribute access to the public Internet in order to grant more people access to those services.


Satellite launch costs are going down over time; eventually we will be able to use small satellites to create intranets and darknets as backup networks.

*In all likelihood the surveillance system(s) will push "certain types of people" out of the cities into suburban/rural areas. If anyone were to shut down whatever constitutes the internet by that time, it would be guerrilla forces.
штрафбат вперед ("penal battalion, forward")
User avatar
General Patton
 
Posts: 959
Joined: Thu Nov 16, 2006 11:57 am
Blog: View Blog (0)

Re: Meshnet - Creating A Redundant Decentralized Internet

Postby Hammer of Los » Tue Oct 16, 2012 6:40 am

...

Quick reply of the kind that says brilliant great work, might be important, hope not, keep it up!

thumbs up smiley.

...
Hammer of Los
 
Posts: 3309
Joined: Sat Dec 23, 2006 4:48 pm
Blog: View Blog (0)

Re: Meshnet - Creating A Redundant Decentralized Internet

Postby General Patton » Tue Apr 23, 2013 1:38 pm

http://www.nature.com/nature/journal/v4 ... 378a0.html
Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network1. Complex communication networks2 display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web3, 4, 5, the Internet6, social networks7 and cells8. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.




Here is the critical difference between random failure and a directed attack: the attack becomes more effective as it prunes high-degree nodes, regardless of the network type, even when only a very small fraction of nodes is removed.
a, Comparison between the exponential (E) and scale-free (SF) network models, each containing N = 10,000 nodes and 20,000 links (that is, k = 4). The blue symbols correspond to the diameter of the exponential (triangles) and the scale-free (squares) networks when a fraction f of the nodes are removed randomly (error tolerance). Red symbols show the response of the exponential (diamonds) and the scale-free (circles) networks to attacks, when the most connected nodes are removed. We determined the f dependence of the diameter for different system sizes (N = 1,000; 5,000; 20,000) and found that the obtained curves, apart from a logarithmic size correction, overlap with those shown in a, indicating that the results are independent of the size of the system. We note that the diameter of the unperturbed ( f = 0) scale-free network is smaller than that of the exponential network, indicating that scale-free networks use the links available to them more efficiently, generating a more interconnected web. b, The changes in the diameter of the Internet under random failures (squares) or attacks (circles). We used the topological map of the Internet, containing 6,209 nodes and 12,200 links (k = 3.4), collected by the National Laboratory for Applied Network Research http://moat.nlanr.net/Routing/rawdata/. c, Error (squares) and attack (circles) survivability of the World-Wide Web, measured on a sample containing 325,729 nodes and 1,498,353 links3, such that k = 4.59.
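The experiment in that figure is easy to reproduce in miniature. The sketch below grows a small preferential-attachment ("scale-free") graph in pure Python, then compares the largest surviving connected component after removing 5% of nodes at random versus removing the 5% highest-degree nodes. All parameters are toy choices, far smaller than the paper's N = 10,000 networks.

```python
# Miniature random-failure vs. targeted-attack experiment on a small
# preferential-attachment graph. Parameters (n, m, removal fraction)
# are toy choices, much smaller than in the paper.
import random

def barabasi_albert(n, m, seed=42):
    """Grow a graph where each new node links to m existing nodes chosen
    with probability proportional to degree (preferential attachment)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets, repeated = list(range(m)), []
    for new in range(m, n):
        for t in set(targets):
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)       # endpoints weighted by degree
        repeated.extend([new] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def largest_component(adj, removed):
    """Size of the biggest connected component after deleting `removed`."""
    alive, seen, best = set(adj) - removed, set(), 0
    for start in alive:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adj[node]:
                if nb in alive and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best

n, m, frac = 500, 2, 0.05
adj = barabasi_albert(n, m)

rng = random.Random(0)
random_removed = set(rng.sample(sorted(adj), int(frac * n)))   # "error"
hubs = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
attack_removed = set(hubs[: int(frac * n)])                    # "attack"

lc_random = largest_component(adj, random_removed)
lc_attack = largest_component(adj, attack_removed)
print("largest component after random failures:", lc_random)
print("largest component after targeted attack:", lc_attack)
```

Random removal barely dents the giant component, while pruning the hubs shrinks it noticeably, which is the robust-yet-fragile behavior the paper describes.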


Note that this applies to a generic set of hubs, not the entire internet. The error in the Nature paper above is that it generalizes instead of taking into account the performance-optimization qualities of routers. It tries to view the internet as a natural system instead of an engineered one.

http://scenic.princeton.edu/network20q/ ... s'_heel%3F
Topology Accuracy

There is no complete, accurate map of the Internet. To estimate the topology of the graph of routers, researchers use algorithms such as trace-route. However, trace-route measurements are prone to biased sampling. Furthermore, Internet exchange points and shortcuts are not detected by trace-route, so its resulting graphs may miss as much as 50% of links. Since the Achilles' Heel argument depends upon a specific topology, the shortcomings of trace-route seriously challenge the argument.

Moreover, as elaborated below, the Preferential Attachment model of the Internet is inaccurate. The Constraint Optimization model is more accurate, and it does not have an Achilles' heel.

Topology Shortcomings

The Achilles' heel argument also oversimplifies the issue by focusing solely on topology. In reality, if certain routers were destroyed, other routers can use special protocols to respond. The surviving routers would not continue to helplessly communicate with destroyed routers. The Achilles' heel argument ignores the functionality of routers, and does not address this shortcoming.


However, there are also economic and technological considerations that shape the Internet into a Constrained Optimization Graph.

The setup and running costs of building and connecting a router increase with the distance of the connections. This is especially true of routers in the backbone. The cost of these links can dominate the cost structure, so the Internet must optimize physical distance between routers too. Thus, the Preferential Attachment graph is an inaccurate model, because the high costs prohibit central routers from having several long-distance connections. It is more economical to follow the Constrained Optimization graph.

Moreover, the cost structure encourages networks to minimize the length and number of links. A graph that is highly connected at all levels, not just a few core nodes, more accurately follows these economic considerations.

There are also technological limitations on the Preferential Attachment graph. The number of packets a router can handle is fundamentally limited by technological advances. As such, the highly centralized routers in the Preferential Attachment model do not exist, because no router could, given technological limitations, actually handle the bandwidth required to support this model.

A model that more closely follows such technological and economic goals is the HOT (highly optimized tradeoffs) model. HOT models can maximize multiple objective functions under multiple constraints, and also include an uncertainty component. The uncertainty component is important because it more accurately reflects the distributed nature of building the Internet. A centralized model for building the Internet (such as the Power Law model) is therefore less accurate than the HOT model. In certain special cases of the HOT model, the power law for nodes can actually be generated.

The HOT model can consider technological constraints such as maximum router connectivity and the prohibitive cost of very long router connections. It also considers the economic costs of building routers in general. The HOT model is also robust when highly centralized nodes are removed; it cannot collapse if only a few routers are taken out. Thus, there is no Achilles' Heel if the HOT model is accurate, and theoretical and empirical evidence suggests it is indeed more accurate.

Properties

The topology of the CO graph more closely resembles the Internet. It is scale-free, and the edge nodes have high degrees; the central nodes tend to have small degrees. It does not suffer from the Achilles' Heel problem of the PA graph. The CO graph is the result of deliberate human engineering. A randomly constructed graph is not likely to become a CO graph; a PA graph is much more likely. The Achilles' Heel argument falsely assumes the Internet is a PA graph because that is what a randomly constructed graph would most likely be. However, the Internet was not randomly constructed but deliberately designed to maximize total throughput, making it a CO graph.


http://www.pnas.org/content/102/41/14497.full#sec-2
A popular case study for complex networks has been the Internet, with a central issue being the extent to which its design and evolution have made it “robust yet fragile” (RYF), that is, unaffected by random component failures but vulnerable to targeted attacks on its key components. One line of research portrays the Internet as “scale-free” (SF) with a “hub-like” core structure that makes the network simultaneously robust to random losses of nodes yet fragile to targeted attacks on the highly connected nodes or “hubs” (1–3). The resulting error tolerance with attack vulnerability has been proposed as a previously overlooked “Achilles' heel” of the Internet. The appeal of such a surprising discovery is understandable, because SF methods are quite general and do not depend on any details of Internet technology, economics, or engineering (4, 5).

One purpose of this article is to explore how this SF depiction compares with the real Internet and explain the nature and origin of some important discrepancies. Another purpose is to suggest that a more coherent perspective on the Internet as a complex network, and in particular its RYF nature, is possible in a way that is fully consistent with Internet technology, economics, and engineering. A complete exposition relies on the mathematics of random graphs and statistical physics (6), which underlie the SF theory, as well as on the very details of the Internet ignored in the SF formulation (7). Nevertheless, we aim to show here that the essential issues can be readily understood, if not rigorously proven, by using less technical detail, and the lessons learned are relevant well beyond either the Internet or SF-network models (8–10).


The most significant SF claims for the Internet are that the router graph has power-law degree sequences that give rise to hubs, which by SF definition are highly connected vertices that are crucial to the global connectivity of the network and through which most traffic must pass (3). The SF assertion (later formalized in ref. 12) is that such hubs hold the network together, giving it "error tolerance" to random vertex failures, because most vertices have low connectivity (i.e., are nonhubs), but also "attack vulnerability" to targeted hub removal, a previously overlooked Achilles' heel. The rationale for this claim can be illustrated by using the toy networks shown in Fig. 1, all of which have the identical scaling-degree sequence D shown in Fig. 1e. Fig. 1a shows a graph (size issues notwithstanding) that is representative of the type of structure typically found in graphs generated by SF models, in this case preferential attachment (PA). This graph is drawn in two ways: the left and right visualizations emphasize the growth process and Internet properties, respectively. Clearly, the highest-degree nodes are essential for graph connectivity, and this feature can be seen even more clearly for the more idealized SF graph shown in Fig. 1b. Thus, the SF claims would certainly hold if the Internet looked at all like Figs. 1a and b. As we will see, the Internet looks nothing like these graphs and is much closer to Fig. 1d, which has the same degree sequence D but is otherwise completely different, with high-degree vertices at the periphery of the network, where their removal would have only local effects. Thus, although scaling-degree sequences imply the presence of high-degree vertices, they do not imply that such nodes necessarily form "crucial hubs" in the SF sense.


Diversity among graphs having the same degree sequence D. (a) RNDnet: a network consistent with construction by PA. The two networks represent the same graph, but the figure on the right is redrawn to emphasize the role that high-degree hubs play in overall network connectivity. (b) SFnet: a graph having the most preferential connectivity, again drawn both as an incremental growth type of network and in a form that emphasizes the importance of high-degree nodes. (c) BADNet: a poorly designed network with overall connectivity constructed from a chain of vertices. (d) HOTnet: a graph constructed to be a simplified version of the Abilene network shown in Fig. 2. (e) Power-law degree sequence D for networks shown in a-d. Only d_i > 1 is shown.
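The contrast between the hub-in-the-core graphs (Fig. 1 a and b) and the HOTnet-style graph (Fig. 1d) can be made concrete with two hand-built toy graphs. These are illustrative only: they do not reproduce the paper's degree sequence D, they just place a high-degree node in the core of one graph and at the periphery of the other, and then remove it.

```python
# Toy illustration: whether a high-degree node sits in the core or at
# the periphery determines how much damage its removal does. Both
# graphs are hand-made and do NOT share the paper's degree sequence.

def reachable(adj, start, removed):
    """Nodes reachable from start after deleting `removed` (simple BFS)."""
    seen, stack = {start}, [start]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in removed and nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return seen

# "SFnet-like": one hub in the core holds everything together.
core_hub = {0: set(range(1, 13))}
for leaf in range(1, 13):
    core_hub[leaf] = {0}

# "HOTnet-like": a low-degree ring core (nodes 0-3); each core node
# feeds a high-degree access node at the periphery serving 8 leaves.
periphery = {i: {(i + 1) % 4, (i - 1) % 4} for i in range(4)}
leaf = 100
for i in range(4):
    access = 10 + i                     # high-degree periphery node
    periphery[i].add(access)
    periphery[access] = {i}
    for _ in range(8):
        periphery[access].add(leaf)
        periphery[leaf] = {access}
        leaf += 1

# Remove the single highest-degree node from each graph.
hub = max(core_hub, key=lambda v: len(core_hub[v]))
acc = max(periphery, key=lambda v: len(periphery[v]))
survivors_core = len(reachable(core_hub, 1, {hub}))
survivors_hot = len(reachable(periphery, 0, {acc}))
print("core hub removed, still reachable from a leaf:", survivors_core)
print("periphery hub removed, still reachable from core:", survivors_hot)
```

Removing the core hub strands every leaf, while removing the busiest periphery node cuts off only its own leaves and leaves the rest of the 40-node graph connected, which is the article's point about Fig. 1d.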
штрафбат вперед ("penal battalion, forward")
User avatar
General Patton
 
Posts: 959
Joined: Thu Nov 16, 2006 11:57 am
Blog: View Blog (0)

Re: Meshnet - Creating A Redundant Decentralized Internet

Postby DrEvil » Tue Apr 23, 2013 2:18 pm

Thanks General! I was daydreaming about how to decentralize the internet yesterday, and today you post this. It's almost enough to make me a synchronicity fan.

The thing that stumped me yesterday was how to extend an ad-hoc network to rural areas. Urban areas shouldn't really be too hard because of the density.
Only thing I could come up with was either balloons or drones with a tether and a permanent power-supply to relay the signal.

Taking it one step further - in urban areas where data traffic often congests the system, you could use a small fleet of relay-drones. They swoop in and daisy-chain the signal to a base-station with less traffic. It's probably a really stupid idea, but it would look awesome! :yay
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3971
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Meshnet - Creating A Redundant Decentralized Internet

Postby General Patton » Wed May 29, 2013 3:51 pm

DrEvil » Tue Apr 23, 2013 1:18 pm wrote:Thanks General! I was daydreaming about how to decentralize the internet yesterday, and today you post this. It's almost enough to make me a synchronicity fan.

The thing that stumped me yesterday was how to extend an ad-hoc network to rural areas. Urban areas shouldn't really be too hard because of the density.
Only thing I could come up with was either balloons or drones with a tether and a permanent power-supply to relay the signal.

Taking it one step further - in urban areas where data traffic often congests the system, you could use a small fleet of relay-drones. They swoop in and daisy-chain the signal to a base-station with less traffic. It's probably a really stupid idea, but it would look awesome! :yay


Check 'em:
http://www.techthefuture.com/technology ... ork-video/
Earlier this week The Pirate Bay announced it will experiment with sending out small drones to serve as Low Orbit Server Stations. In the next step in the cat-and-mouse game between the most resilient bittorrent site and those opposing the file-sharing age, TPB is looking to get its machines off land and into the air to make raiding their servers even more difficult. The idea got a lot of attention but was generally considered too far out, even for TPB.

But not to Liam Young of the London-based think tank called Tomorrows Thoughts Today. He and his fellows already built an airborne pirate internet, TorrentFreak reported.

Inspired by the notion that human interaction in cities is decreasingly dependent on permanent infrastructure like streets and squares and is moving onto the virtual plane of digital networks, the team developed the drones. On the one hand they want to visualize digital human interaction: as the drones hover above a crowd they light up and break formation when they relay data, creating a dance of light and movement. But the mesh network also has a darker inspiration, serving as a fail-safe against internet kill-switching, as happened in 2011 during the Egyptian uprising.


http://www.wired.co.uk/news/archive/201 ... gle-blimps
Search giant Google is intending to build huge wireless networks across Africa and Asia, using high-altitude balloons and blimps.

The company is intending to finance, build and help operate networks from sub-Saharan Africa to Southeast Asia, with the aim of connecting around a billion people to the web.

To help enable the campaign, Google has been putting together an ecosystem of low-cost smartphones running Android on low-power microprocessors. Rather than traditional infrastructure, Google's signal will be carried by high-altitude platforms - balloons and blimps - that can transmit to areas of hundreds of square kilometres.

Google has also considered using satellites to achieve the same goal. "There's not going to be one technology that will be the silver bullet," an unnamed source told the Wall St Journal. A Google spokesperson declined to comment.

&
http://google-africa.blogspot.se/2013/0 ... al-in.html
штрафбат вперед ("penal battalion, forward")
User avatar
General Patton
 
Posts: 959
Joined: Thu Nov 16, 2006 11:57 am
Blog: View Blog (0)

Re: Meshnet - Creating A Redundant Decentralized Internet

Postby tapitsbo » Thu Dec 10, 2015 10:08 pm

Hey General Patton,

My understanding is that authorities are really going after attempts to set up meshnets and the like. What's your take on the current state of all this?
tapitsbo
 
Posts: 1824
Joined: Wed Jun 12, 2013 6:58 pm
Blog: View Blog (0)


Return to Data & Research Compilations

Who is online

Users browsing this forum: No registered users and 8 guests