Monday, August 22, 2011
The advent of autonomic systems, the multiplication of networks, the presence of huge storage capacity at the edges of the network (most notably in terminals: cell phones, media centers, and so on), and the growing intelligence outside the network will change networking paradigms significantly. Efforts over the past 30 years have focused on exploiting the progressive penetration of computers into the network to make the network itself more intelligent.
A simple economic drive motivated this evolution: the network is a central resource whose cost can be split among its users, so it makes more sense to invest in the network to provide better services to low-cost, low-intelligence edges. The intelligent network finds its economic justification in that fact.
The first dramatic shift happened with cell phones, with the mobile network. If you were to develop a network from scratch and decided to use a fixed-line network to provide services, you would have to pay almost 100% of the investment yourself. If, on the other hand, you delivered the same services using the mobile network approach, the overall cost would be split roughly 30% in the network and 70% in the terminals (and this latter part is likely to be sustained by customers). This reflects the shift of processing, storage, and intelligence from the network to the edges, to the terminals using it. A possible, and likely, vision for the network of the future is a set of very high-capacity pipes of several Tbps each, arranged in a meshed structure to ensure high reliability and to decrease the need for maintenance, in particular responsive maintenance, which is the most costly kind and the one that most affects service quality. This network terminates in local wireless drops arranged in a geographical hierarchy: very small radio coverage created dynamically by devices through local wireless networks; very small cells (femtocells and picocells); cells on the order of tens of meters (WiFi); larger cells belonging to a planned coverage such as LTE, 3G, and the remnants of GSM (originally Groupe Spécial Mobile, now Global System for Mobile Communications) or the like; still larger cells covering rural areas (such as WiMAX when used to bridge the digital divide); and the largest coverage of all, provided by satellites (as shown in the illustration).
In this vision the crucial aspect is ensuring seamless connectivity and services across a variety of ownership domains (each drop may, in principle, be owned by a different party), and vertical roaming in addition to horizontal roaming (across different hierarchy layers rather than just between cells in the same layer). Authentication and identity management are crucial. This kind of evolution requires ever greater transparency of the network to services. The overall communication environment will consist of millions of data and service hubs connected by very effective (fast and cheap) links. How could there be “millions” of data and service hubs? The trend is toward shrinking the number of data centers, and storage and processing technology makes it theoretically possible to have just one data center for the whole world. Reliability requires replicating it several times in different locations, but we would still be talking about a handful of sites. The fact is that the future will see data pulverized in terms of storage: every cell phone can be seen as a data hub, and any media center in any home becomes a data hub. Put all this together, and millions of data hubs is actually a very low estimate.
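To make the vertical-roaming idea concrete, here is a minimal sketch of how a terminal might pick a drop in the coverage hierarchy described above. The layer names, the ownership tags, and the preference rule (prefer the smallest adequate cell) are illustrative assumptions, not something the article specifies:

```python
# Hypothetical sketch: vertical roaming across a layered wireless hierarchy.
# Layers ordered from smallest cells to largest coverage (assumption).
LAYERS = ["femto", "pico", "wifi", "lte", "wimax", "satellite"]

def pick_drop(visible_drops):
    """Among all drops currently in range (possibly owned by different
    parties), prefer the smallest-cell layer, assumed cheapest/fastest.
    visible_drops: list of (layer, owner) tuples."""
    return min(visible_drops, key=lambda d: LAYERS.index(d[0]))

# A terminal seeing three overlapping coverage layers picks the WiFi drop:
drops = [("satellite", "sat-operator"), ("lte", "mno-a"), ("wifi", "cafe")]
print(pick_drop(drops))  # ('wifi', 'cafe')
```

A real implementation would also weigh signal quality, cost, and the authentication/identity relationships with each owner, which is exactly why seamless roaming across ownership domains is the hard part.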
How can one dare to place on the same level 1 TB of storage in a cell phone, 10 TB in a media center, and several EB in a network (service) data center? From an economic point of view, if we do the multiplication, the total storage capacity present in terminals far exceeds that present in the network service data centers (1 TB ∗ a billion terminals = 1,000 EB). The economics of value is also on the side of the terminals: the data we have in our cell phone is worth much more (to us) than the data in any other place. People will consider local data as “The Data” and the data in the network as a very important backup. Synchronization of data will take care of reliability, while asynchronous (push) synchronization from the network and service databases (DBs) to the terminals will make those centralized DBs perceptually invisible. The same is happening for services. Services are produced everywhere; they make use of other services, of data, and of connectivity, and they are perceived “locally” by the users. They are bought (or acquired for free, possibly because some indirect business model is in place to generate revenues for the service creator and to cover its operational cost). Services can be discovered on the open Web or found in specific aggregator places; the aggregator usually puts some sort of markup on the service, but at the same time provides some sort of assurance to the end user (see, for example, the Apple App Store). We’ll come back to this in a moment.
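The back-of-the-envelope multiplication above checks out, assuming decimal units and a billion ("G") terminals:

```python
# The text's multiplication made explicit: 1 TB per terminal, across
# a billion terminals, expressed in exabytes.
TB = 10**12        # bytes in a terabyte (decimal SI units)
EB = 10**18        # bytes in an exabyte
terminals = 10**9  # "Gterminals": a billion cell phones

edge_storage = 1 * TB * terminals
print(edge_storage / EB)  # 1000.0 -> 1,000 EB of aggregate edge storage
```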
Once we have a network that conceptually consists of interconnected data and service hubs, one of which is in our hand and possibly another in our home, what communications paradigms apply? Point-to-point communication, calling a specific number, is going to be replaced by person-to-person or person-to-service (data-embedding) communications. This is quite a departure from today: we are no longer calling a specific termination (identified by a telephone number); rather, we are connected to a particular value point (a person, a service). Conceptually we are always connected to that value point; we just decide to do something on that existing connection. That such a decision might involve setting up a path through the network(s) is irrelevant to the user, particularly if it involves no cost to the user. The concept of the number disappears, and with it a strong asset of today’s operators.
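A hedged sketch of what number-less, value-point addressing could look like: the user names a person or a service, and a resolver maps that identity to whatever network endpoint currently serves it. All names and endpoints below are invented for illustration:

```python
# Hypothetical identity resolver: the user addresses a value point
# (a person or a service), never a telephone number or termination.
registry = {
    "person:alice": "wifi://home-hub/alice",      # invented endpoint
    "service:weather": "https://example.net/wx",  # invented endpoint
}

def connect(value_point):
    """Resolve a person/service identity to its current termination.
    Setting up the underlying network path stays invisible to the user."""
    endpoint = registry[value_point]
    return f"session to {value_point} via {endpoint}"

print(connect("person:alice"))
```

The point of the sketch is that the mapping from identity to termination lives in the resolver, not in the user's head, which is why the number, and the operator asset it represents, can disappear.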
The value of contextualized personal information finds its mirror in the “sticker” communication paradigm. A person, or a machine, asks, implicitly or explicitly, to be always connected with certain information. Most of it might reside on the terminal, but some can relate to the particular place in which the terminal is operating, or to new information being generated somewhere else. The communication operates in the background, ensuring that relevant information is at one’s fingertips when needed. It is more than just pushing information: it requires continuous synchronization of the user’s profile, presence or location, and ongoing activities. It embeds concepts such as mashups of services and information, metadata, and metaservice generation. It requires value tracking and sharing. It might require shadowing (tracking data generated through that terminal or other terminals with which that person or machine interacts).
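The sticker paradigm can be sketched as a local cache that background pushes keep in sync, so that reads never wait on the network. This is a minimal sketch under that assumption; the class and field names are illustrative, not from the article:

```python
# Minimal sketch of "sticker" communication: asynchronous pushes from
# network/service DBs keep a local copy current, so the user only ever
# reads locally ("The Data") and the centralized DBs stay invisible.
class StickyCache:
    def __init__(self):
        self.data = {}  # the locally held, always-fresh copy

    def push(self, key, value):
        # Background push from the network; the user never asks for it.
        self.data[key] = value

    def read(self, key):
        # Reads never touch the network: the sync already happened.
        return self.data.get(key)

cache = StickyCache()
cache.push("profile", {"lang": "en"})                    # user profile sync
cache.push("local_info", "pharmacy 200 m ahead")         # place-dependent push
print(cache.read("local_info"))  # pharmacy 200 m ahead
```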
The variety of devices available for communication in a given environment, some belonging to a specific user, some shared by several users (e.g., a television), and some that might be “borrowed” for a time by someone other than the usual owner, can be clustered to provide ambient-to-ambient communications, mirrored by the “cluster” paradigm.
This kind of communication will be at once more spontaneous (simpler) for the parties involved and more complex for the communications manager to execute. The communications manager can, in principle, reside anywhere, but network operators may be the ones to offer this communication service. Contextualized communication is going to be the norm in the future, and it is a significant departure from the communications model we are used to.
This post was written by: Alex Wanda