11-16-2010, 10:43 PM
u100, I think this is a valuable topic. Thanks for introducing it.
Based on my research into software and network engineering, I am more optimistic than you are. You may already be familiar with all of these engineering points - if so, let's discuss them - but I'm including them for full context, since this is directly at the heart of my professional expertise.
In my career developing and supporting online systems, I have read many of the fundamental technical papers on Internet technologies, from the 1960s to the present day, along with memoirs by those involved and reports by analysts and journalists who have studied the history.
The original design goals for what became the Internet were well defined in a research paper from the RAND Corporation. As with everything I mention here, I can look up the original reference if you're not able to find it yourself.
The original end: a communications system that could survive a nuclear attack, even if many communication lines and switching stations were destroyed. The original means: replace circuit switching (which requires a switch to hold a line open for the duration of a call) with packet switching, dynamically routing each chunk of a message through whatever route happens to be open.
Even if some links or switches go up and down in a war, each individual packet uses whatever means are available, moment to moment, to get one hop closer to its destination. At the receiving end, any missed packets are requested again.
This design is still used in today's Internet. The packets are anywhere from a few dozen to a few thousand bytes.
From an engineering point of view, Ali is right that there is no difference between a node that disappears because it was hit by a bomb, and a node that disappears because the government censored it. All remaining available nodes automatically participate in rerouting the traffic.
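To make the hop-by-hop idea concrete, here is a minimal sketch - not any real router's code, just the shape of the idea - of a node forwarding packets through whichever neighbors are still up, with the receiver noticing gaps and asking for resends. All of the names and structures below are invented for illustration; real routers choose next hops with protocols like OSPF and BGP, and TCP handles the retransmissions.

```typescript
// Toy illustration of hop-by-hop forwarding with retransmission requests.

interface Packet {
  seq: number;       // sequence number, so the receiver can spot gaps
  ttl: number;       // hop limit, as in real IP, so a stray packet doesn't loop forever
  dest: string;      // destination node id
  payload: string;
}

interface RouterNode {
  id: string;
  neighbors: RouterNode[];          // only the links that are currently up
  received: Map<number, Packet>;
}

// Move a packet one hop closer to its destination, using whichever neighbor
// happens to be reachable right now. A node knocked out by a bomb or by a
// censor simply no longer appears in anyone's `neighbors` list.
function forward(node: RouterNode, packet: Packet): void {
  if (node.id === packet.dest) {
    node.received.set(packet.seq, packet);
    return;
  }
  if (packet.ttl <= 0) return;      // drop it; the receiver will ask for a resend
  const next = node.neighbors.find(n => n.id === packet.dest) ?? node.neighbors[0];
  if (next) forward(next, { ...packet, ttl: packet.ttl - 1 });
}

// At the receiving end, gaps in the sequence numbers are requested again.
function missingSequences(node: RouterNode, expected: number): number[] {
  const missing: number[] = [];
  for (let seq = 0; seq < expected; seq++) {
    if (!node.received.has(seq)) missing.push(seq);
  }
  return missing;
}
```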
Dynamically routed, peer-to-peer packet switching has overtaken circuit switching in one application after another.
Inside computers, memory and peripheral interconnect buses use interleaved packet transactions, often with no central controller - PCI Express, for instance, has no single authority on the bus after autoconfiguration at boot time.
Between computers, Ethernet has no central authority on the wire or in the air, but uses collision detection: everyone may talk at any time, and if there's interference, everyone waits a random amount of time and tries again.
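If it helps, that random-wait rule is simple enough to show as code. This is a toy sketch of the idea (collision detection with exponential random backoff); the constants are illustrative, not the exact values from the Ethernet standard.

```typescript
// Toy sketch of Ethernet-style collision handling: try to talk, and if you
// collide with someone else, wait a random number of slot times - with the
// range of possible waits doubling after each collision - then try again.
async function sendWithBackoff(
  transmit: () => boolean,          // returns true if the frame went out cleanly
  maxAttempts = 10,
  slotMs = 1                        // illustrative slot time, not the real 802.3 value
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    if (transmit()) return true;    // no collision detected
    const slots = Math.floor(Math.random() * 2 ** Math.min(attempt + 1, 10));
    await new Promise(resolve => setTimeout(resolve, slots * slotMs));
  }
  return false;                     // give up after too many collisions
}
```

No central authority ever grants permission to speak; the random backoff alone keeps the shared medium usable.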
Between networks, the Internet's TCP/IP protocol stack, originally a lowest common denominator between networks, is now widely used inside networks.
VoIP is destroying the economics of circuit-switched telephone networks. Linux development uses distributed version control. Downloads are optimized with content distribution networks and peer-to-peer torrent software. With protocols such as ZFS over iSCSI to multiple storage nodes, content could be securely and automatically copied worldwide.
You're right that deep packet inspection can be used against proxies, but that approach only works on packets sent in cleartext. Over the next few years we will see much more widespread use of encryption in routine traffic.
Faster JavaScript runtimes will make it practical to download code to the browser that encrypts and decrypts traffic on the fly, inside the user's session, leaving no trace in the operating system after the session ends.
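As a rough sketch of what that looks like - assuming a browser that exposes the standard SubtleCrypto API (a pure-JavaScript crypto library could play the same role) - the key below is generated inside the page, marked non-extractable, and simply evaporates with the session:

```typescript
// Sketch: encrypting a message entirely inside the browser session.
// Only the ciphertext ever crosses the wire; the key never touches disk.
async function encryptInSession(
  plaintext: string
): Promise<{ key: CryptoKey; iv: Uint8Array; ciphertext: ArrayBuffer }> {
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    false,                              // not extractable: cannot be exported or saved
    ["encrypt", "decrypt"]
  );
  const iv = crypto.getRandomValues(new Uint8Array(12));   // fresh nonce per message
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  return { key, iv, ciphertext };
}
```

A deep packet inspector looking at that traffic sees only random-looking bytes inside a session it cannot open.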
Dynamic languages like Smalltalk and Lisp are now available with JavaScript back-ends - the Lively Kernel, Clamato, and Parenscript, for example - and on the most common, already-installed browser plug-ins, such as Vista Smalltalk for Silverlight and Mono's Moonlight, and OpenLaszlo for Flash. With a dynamic language and a mobile runtime engine, objects, functions, and code can migrate between client and server automatically. The flexibility of these systems makes for faster time to market, reduced maintenance, and improved modularity. A side effect is that it is impossible to statically inspect such code and be certain exactly what it will do at runtime - or even which functions will be executed, let alone where they will run.
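A toy example of why static inspection falls short: the code that actually runs may not even exist until runtime. Everything here is invented purely for illustration.

```typescript
// The "program" arrives as data. A filter reading the page's source can never
// know in advance which functions this will call or where they will run,
// because the function is only constructed at the moment it executes.
function runMobileCode(sourceFromServer: string, input: unknown): unknown {
  const fn = new Function("input", sourceFromServer);   // compile received text into code
  return fn(input);
}

// Example: a snippet of code migrates from the server and runs in the client.
runMobileCode("return input * 2;", 21);                 // => 42
```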
The weak centralized point is the domain name system, but as long as any proxy outside the firewall can be reached, the true address and content of any server can still be obtained. The new WebSocket protocol will allow routine cross-domain proxy use for popular services like Google and Facebook.
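Here is the shape of that idea as a sketch. The proxy address and the message format are hypothetical, but the WebSocket API shown is the standard browser one; the point is that the blocked name is never resolved or contacted locally.

```typescript
// Sketch: reaching a blocked site through one reachable proxy over a WebSocket.
// "proxy.example.org" and the JSON message format are made up for illustration.
function fetchViaProxy(blockedUrl: string, onResponse: (body: string) => void): void {
  const ws = new WebSocket("wss://proxy.example.org/tunnel");
  ws.onopen = () => {
    // Ask the proxy, which sits outside the firewall, to fetch on our behalf.
    ws.send(JSON.stringify({ fetch: blockedUrl }));
  };
  ws.onmessage = event => {
    onResponse(String(event.data));   // the content arrives inside the encrypted tunnel
    ws.close();
  };
}
```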
The current sticking point for an encrypt-everything world is the runtime performance of the necessary math code in the browser. With Google's Native Client sandbox (now in development), plus OpenCL's use of graphics chips as general-purpose, massively parallel number crunchers, encryption and steganography can be done in real time.
All of this means that a user's text can be scrambled, hidden inside slightly noisy image and sound files, and rerouted through YouTube, Google, and Facebook, all inside the same encrypted channels that governments will soon urge everyone to use to avoid identity theft.
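The "hidden inside noise" part is, at its simplest, least-significant-bit steganography. A minimal sketch of the principle follows; real tools spread the bits far more carefully and survive recompression, but this shows why the result is visually indistinguishable from the original file.

```typescript
// Toy LSB steganography: hide each message bit in the lowest bit of a pixel
// channel value, which changes the image by at most 1/255 per channel.
function embed(pixels: Uint8Array, message: Uint8Array): Uint8Array {
  if (message.length * 8 > pixels.length) throw new Error("cover image too small");
  const out = new Uint8Array(pixels);                 // work on a copy
  message.forEach((byte, i) => {
    for (let bit = 0; bit < 8; bit++) {
      const idx = i * 8 + bit;
      out[idx] = (out[idx] & 0xfe) | ((byte >> bit) & 1);  // overwrite the low bit
    }
  });
  return out;
}

// Recover `length` hidden bytes from the low bits of the pixel data.
function extract(pixels: Uint8Array, length: number): Uint8Array {
  const msg = new Uint8Array(length);
  for (let i = 0; i < length; i++) {
    let byte = 0;
    for (let bit = 0; bit < 8; bit++) {
      byte |= (pixels[i * 8 + bit] & 1) << bit;
    }
    msg[i] = byte;
  }
  return msg;
}
```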
This puts authoritarian regimes in a bind. If they block the technology used by the most popular services worldwide, their economies and their people will immediately notice the disappearance of those services, and identity-theft crime will increase. In a world of redirectable WebSockets carrying encrypted, embedded traffic that may or may not include runtime objects that create new code and forward it, an attempt at a national firewall may even lock the leaders out of their own offshore bank accounts!
I see these technological trends as unstoppable. Within five years it will be technically impossible for a government to effectively block, filter, or censor portions of the Internet. Any attempt to do so will be seen by the people, and more importantly by the governments themselves, as a futile exercise in cutting off the nose to spite the face.
The human desire for interpersonal communication and personal growth is ultimately more powerful and resourceful than the manipulations of the dark cabal. I believe we will really see this tide turning in the next few years.