Trends in Broadband
* What broadband techniques will emerge in the future?
There are a couple of emerging technologies that could significantly change the paradigm of Internet connectivity. WiMAX and Broadband over Power Lines (BPL) are two of them. Both forms of broadband delivery could pose a real threat to the established DSL and cable operators. However, neither of them has been fully deployed yet.
The names of the standards are relatively unimportant, but these technologies contain conceptual ideas that will survive regardless of which standard embodies them. For example, WiMAX provides long-range, high-speed, low-power wireless Internet connections. Even if WiMAX fails as a standard (because of problems in the standard, a lack of supporting sponsors, etc.), another standard employing the same idea under a different name will emerge.
WiMAX and BPL have one thing in common: they try to reduce the infrastructure needed for broadband. WiMAX uses the airwaves as its infrastructure, whereas BPL uses the electrical wiring that a computer needs to run anyway. If the infrastructure requirement for broadband disappears, accessibility, and therefore penetration, will increase significantly.
* Will the emerging Broadband over Powerline (BPL) technology be successful?
BPL proposes using the existing electrical grid as a giant data network that could form an immediate alternative to cable and DSL. So far, attempts to introduce this service have not been successful. However, it is an ingenious concept and might represent a real threat to cable and phone companies. BPL companies have started to develop alliances and partnerships with different types of companies to try to bring the concept to market.
Although technical problems like interference still exist, it is only a matter of time until they are solved, assuming big players keep supporting and sponsoring the technology.
* What are the future opportunities of broadband?
Future broadband satellite networks: next-generation multi-beam broadband satellite systems will target high throughput, data transparency, high routing flexibility, small granularity, etc., at a low cost per delivered bit. Challenges: the long distances data must travel, the irreducible delay imposed by the speed of light (see the latency sketch below), and high implementation cost.
WiMAX: a wireless technology enabling broadband over a large area (on the order of 5 km per base station) without any significant infrastructure investment. Challenges: security, and penetrating buildings and transport infrastructure.
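Regarding the satellite speed-of-light delay mentioned above, a back-of-the-envelope sketch (assuming a geostationary satellite at the usual altitude of about 35,786 km) shows why it cannot be engineered away:

```python
# Rough latency estimate for a geostationary broadband satellite.
# Assumptions: GEO altitude ~35,786 km, signal travels at the speed of light,
# and one request/response needs four hops (user -> satellite -> gateway and back).

SPEED_OF_LIGHT_KM_S = 299_792.458   # kilometres per second
GEO_ALTITUDE_KM = 35_786            # typical geostationary orbit altitude

one_hop_s = GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S   # user to satellite
round_trip_s = 4 * one_hop_s                        # up, down, and back again

print(f"One hop:    {one_hop_s * 1000:.0f} ms")     # roughly 119 ms
print(f"Round trip: {round_trip_s * 1000:.0f} ms")  # roughly 477 ms
```

Whatever the throughput, no satellite operator can push the round-trip time below this physical floor, which is why interactive applications suffer more than bulk downloads.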
* Is there a growing gap between countries with a well-developed broadband system and countries with an underdeveloped one?
As we can infer from the graph below, there is a significant correlation (0.6) between GDP per capita and broadband penetration. Further, we can notice that the countries on the left side of the graph (highest penetration) are the ones known for the wide deployment of their broadband infrastructure.
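For readers who want to see how such a figure is obtained, the following is a minimal sketch of the calculation. The country values are purely illustrative placeholders, not the data behind the 0.6 correlation quoted above.

```python
# Sketch of a GDP-per-capita vs. broadband-penetration correlation.
# The numbers below are hypothetical placeholders for illustration only.
from statistics import correlation  # Pearson's r; requires Python 3.10+

gdp_per_capita = [55_000, 48_000, 40_000, 30_000, 12_000, 5_000]   # hypothetical USD
penetration    = [32, 30, 25, 18, 8, 3]                            # hypothetical subscribers per 100 people

print(f"Pearson correlation: {correlation(gdp_per_capita, penetration):.2f}")
```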
* Will the limited availability of broadband in under-developed countries limit their development?
Developing countries can lack information compared to developed countries. In countries where access to drinking water and electricity is not trivial, broadband is often considered a luxury. Nevertheless, broadband should not be viewed as a luxury but as a necessity in an increasingly information-based society. Providing broadband access opens a new door to a knowledge-based economy, which in turn will promote developing regions' social and economic development.
* What issues will mass multimedia downloading and sharing raise as broadband grows?
* When will hardware bottlenecks limit the growth of broadband?
Because the traditional way of using the Internet (e-mail, surfing, etc.) does not depend on broadband, hardware limits such as processing speed used to sit far above the speed of the connection. However, as transfer speeds keep increasing, the gap between the two bottlenecks gets smaller and smaller. Streaming a high-quality DVD is not possible if the receiving computer can't process that amount of data in the given time, and downloading something is impossible if the hard drive has no storage capacity left.
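The sketch below illustrates that receiving-end bottleneck with ballpark figures; the bitrate is a typical assumed value for DVD-quality video, not a measurement of any particular system.

```python
# Rough illustration of the receiving-end bottleneck: a broadband link can deliver
# a DVD-quality stream, but the machine must decode and store it just as fast.

stream_mbps = 8                              # ballpark DVD-quality video bitrate (megabits/s)
mb_per_second = stream_mbps / 8              # megabytes arriving every second
gb_per_hour = mb_per_second * 3600 / 1000    # storage consumed by one hour of video

print(f"The machine must handle ~{mb_per_second:.1f} MB of data every second")
print(f"One hour of such a stream fills ~{gb_per_hour:.1f} GB of disk space")
```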
The whole Internet is designed to be backward compatible, which means devices (routers, hubs) installed in the early, slow stages of the Internet are still running somewhere out there. It can take 5 to 10 years until the limit is reached and the necessary investments are made to replace old hardware with broadband-capable versions.
There are also limitations in the existing Internet Protocol (IP) structure, because it limits the number of possible users (and therefore the penetration). IPv6 is the expected solution to these limitations; however, some network devices have IP-related functionality embedded in hardware, and those devices need to be replaced or upgraded before moving to IPv6.
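To put the address-space limitation in concrete numbers, a quick sketch:

```python
# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses (~4.3 billion)")
print(f"IPv6: {ipv6_addresses:.3e} addresses")
print(f"IPv6 space is {ipv6_addresses // ipv4_addresses:.3e} times larger")
```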
* When will the marginal utility of increased bandwidth be negligible?
In the booming years of computer speed, processing power doubled roughly every 24 months (Moore's Law [1]). That was the main driver of the evolution of computers from basic fast calculators to today's life-support units. However, the exponential growth of the technology didn't stop because of technological limits, but because common demand ran out. Desktop computers of 2006 can address almost every complicated user need, from DVD editing to broadband applications, at a satisfactory speed. The marginal utility of additional speed is minimal, because it practically doesn't matter whether the user runs an application in 0.1 seconds or 0.05 seconds.
The same pattern is also observable in communication. The first connections offered so little bandwidth that they could only carry minimal amounts of data per day. As the Internet boomed and bandwidth capacities increased, more applications were developed that pushed the limits of bandwidth consumption. The demand for more complicated applications that require more bandwidth still exists (mobile video telephony, on-demand online DVD rental). However, it is bound to stop sometime in 10 to 20 years, when bandwidth increases to more-than-satisfactory speeds for every common application.
This effectively means that the market in every related sector will reach a saturation point, where satisfactory standards emerge and "a next technological wave never comes". Assuming that all competitors are technologically capable enough to imitate each other in terms of technological sophistication, efficiency will be the main driver of competition in such a market. It may be the case that the marginal utility of new standards is so small that nobody would bother to switch to new applications.
* What security problems arise from the combination of ever-faster computation and broadband communications?
All encryption systems are based on making the attacker's job hard by relying on mechanisms that are computationally "easy to construct, too hard to deconstruct". Nevertheless, there is one kind of attack that can solve (given enough time and processing power) any security riddle: brute force.
Revolutionary computing methods and unimaginable data transfer speeds allow attackers to use brute force much more often. This renders some old methods of security obsolete, because "too hard to solve" questions are not too hard anymore. Quantum computing in particular, which can loosely be thought of as exploring many candidate solutions at once, threatens many of the known forms of encryption; a quantum computer can, in effect, try millions of secret combinations in one step. If future broadband gives those computers the bandwidth to try a billion combinations against a remote server, it is a matter of seconds until even the most promising defenses are broken.
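As a toy sketch of what brute force means in code, the attacker simply enumerates every key until one works. The "cipher" below is a deliberately trivial XOR over a tiny key space, not any real cryptosystem; real systems make the key space astronomically larger so that this loop could never finish in useful time.

```python
from itertools import product

def xor_encrypt(message: bytes, key: bytes) -> bytes:
    """XOR each byte of the message with the repeating key (encrypting and decrypting are the same operation)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))

secret_key = b"qz"                                  # the attacker does not know this
ciphertext = xor_encrypt(b"broadband", secret_key)

# Brute force: enumerate every 2-letter lowercase key and test it against the
# known plaintext. Only 26*26 = 676 candidates here, hence the instant break.
for candidate in product(b"abcdefghijklmnopqrstuvwxyz", repeat=2):
    key = bytes(candidate)
    if xor_encrypt(ciphertext, key) == b"broadband":
        print(f"Key recovered by exhaustive search: {key!r}")
        break
```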
* What problems have to be solved before broadband communication can substitute for other ways of communication?
There are two kinds of problems limiting the substitution of classical communication methods with broadband based Internet applications: technological and sociological.
Technological problems are mainly Quality of Service issues. The Internet works in a completely different way than telephones, TVs, or magazines. For example, if two friends call each other on the phone, the phone line is dedicated to them during their whole conversation. Even if they remain silent for half an hour, the line stays connected and reserved, and a fee is charged. Although this reduces efficiency dramatically, it ensures that both parties always remain connected, don't get disconnected randomly, and communicate instantly. Communication over the Internet, however, does not use dedicated lines. Every piece of data transmitted over the Internet is put in an envelope with the recipient's address and literally thrown into the Internet jungle, in the hope that it will find its way. This procedure does not guarantee delivery, gives no control over delivery time, and data can arrive out of order. Quality of Service is traded for efficiency and easy connectivity.
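A minimal sketch of that "envelope" model, using a UDP datagram socket; the destination address below is just an illustrative placeholder from the documentation range, not a real server.

```python
import socket

# A UDP datagram carries the recipient's address but comes with no delivery
# guarantee, no ordering, and no acknowledgement from the network.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # datagram (packet) socket
destination = ("203.0.113.10", 5000)                      # placeholder example address

# sendto() returns as soon as the packet is handed to the network; whether it
# arrives, arrives once, or arrives in order is entirely up to the network.
sock.sendto(b"hello over best-effort delivery", destination)
sock.close()
```

Live services such as streaming TV or Internet telephony typically run over exactly this kind of best-effort delivery, which is why they face the Quality of Service problems described next.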
Successful mechanisms are able to correct for those shortcomings when downloading, surfing, or requesting mail. However, live services like streaming TV or Skype play by completely different rules. It is hard to imitate the dedicated phone-line structure on top of a completely different Internet architecture.
The backward-compatibility requirement does not help either. Any technology trying to replace the existing standards without backward compatibility is doomed to fail, because the current standards are so widespread. A completely new design might solve all the technological problems, but it is worth nothing if people will not abandon their old systems to move to the new one.
Sociological issues include a lack of motivation and demand for additional technology, and technological scepticism. People have been using telephones for many years; they find them easy to use and 99.9% reliable. Therefore, people are not willing to abandon their existing methods of communication unless they are convinced the new technology will be as easy, as cheap, and as reliable as the old ones. Current VoIP and wireless streaming media technologies are far from convincing in ease of use and reliability.