Path to Self-managing Services: A Case for Deploying Managed Intelligent Services Using Dumb Infrastructure in a Stupid Network

“WETICE 2012 Convergence of Distributed Clouds, Grids and their Management Conference Track is devoted to transform current labor intensive, software/shelf-ware-heavy, and knowledge-professional-services dependent IT management into self-configuring, self-monitoring, self-protecting, self-healing and self-optimizing distributed workflow implementations with end-to-end service management by facilitating the development of a Unified Theory of Computing.”

“In recent history, the basis of telephone company value has been the sharing of scarce resources — wires, switches, etc. – to create premium-priced services. Over the last few years, glass fibers have gotten clearer, lasers are faster and cheaper, and processors have become many orders of magnitude more capable and available. In other words, the scarcity assumption has disappeared, which poses a challenge to the telcos’ “Intelligent Network” model. A new type of open, flexible communications infrastructure, the “Stupid Network,” is poised to deliver increased user control, more innovation, and greater value.”

                     — Isenberg, D. S. (1998). "The dawn of the stupid network." ACM netWorker 2(1), 24-31.

Much has changed since the late 1990s, when the telcos essentially abandoned their drive for supremacy in the business of intelligent services creation, delivery and assurance, and took a back seat in the information services market, managing the 'stupid network' that merely carries those services. One only has to look at the demise of major R&D companies such as AT&T Bell Labs, Lucent, Nortel and Alcatel, and the rise of a new generation of services platforms from Apple, Amazon, Google, Facebook, Twitter, Oracle and Microsoft, to notice the sea change that has occurred in a short span of time. The data center has replaced the central office as the hub from which myriad voice, video and data services are created and delivered on a global scale. However, the management of these services, which determines their resiliency, efficiency and scaling, is another matter.

While the data center's value has likewise been the sharing of expensive resources – processor speed, memory, network bandwidth, storage capacity, throughput and IOPS – to create premium-priced services, over the last couple of decades the complexity of the infrastructure and of its management has exploded. It is estimated that up to 70% of the total IT budget now goes to managing infrastructure rather than to developing new services (www.serverdesignsummit.com). It is important to define which TCO (total cost of ownership) we are talking about, because TCO is often used to justify very different solutions. Figure 1 shows three different TCO views presented by three different speakers at the Server Design Summit in November 2011. Each graph, while accurate, represents a different view: the first shows the server infrastructure and its management cost, the second the power infrastructure and its management, and the third both server infrastructure and power management. As you can see, total power and its management, while steadily increasing, is only a small fraction of the total infrastructure management cost. In addition, these views do not even show the network and storage infrastructure and their management. It is also interesting to see the explosion of management cost over the last two decades shown in the third view. Automation has certainly improved the number of servers that can be managed by a single person by orders of magnitude; this is borne out by the labor cost in the left-hand (Intel) picture, which puts it at about 13% of the TCO from a server viewpoint. But this does not tell the whole story.

Figure 1: Three different views of data center TCO presented at the Server Design Summit conference in November 2011 (http://www.serverdesignsummit.com/English/Conference/Proceedings_Chrono.html). These views do not include the storage, network and application/service management costs, in terms of either software systems or labor.

A more revealing picture can be obtained by using the TCO calculator from one of the virtualization infrastructure vendors. Figure 2 shows the percentage of five-year Total Cost of Ownership (TCO) contributed by each component, for a 1500-server data center, with and without virtualization.

Figure 2: Five-year TCO of virtualization according to a vendor ROI calculator. While virtualization reduces the TCO from 35% to 25%, the saving is almost offset by the software, services and training costs.

While virtualization introduces many benefits, such as consolidation, multi-tenancy on a physical server, real-time business continuity and elastic scaling of resources to meet wildly fluctuating workloads, it adds another layer of management systems on top of the existing computing, network, storage and application management systems. Figure 3 shows a roughly 50% reduction in five-year TCO with virtualization. A virtual machine density of about 13 yields great savings in hardware costs, which are somewhat offset by the new software, training and services costs of virtualization.

Figure 3: Five-year TCO with virtualization of 1500 servers at 13 VMs per server. While the infrastructure and administration costs drop, the saving is almost offset by the software, services and training costs.
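As a back-of-the-envelope check on where the savings and offsets in Figures 2 and 3 come from, the sketch below reruns the consolidation arithmetic for the same scenario (1500 workloads, roughly 13 VMs per host). The per-server and per-license dollar figures are illustrative assumptions, not the vendor calculator's actual inputs:

```python
# Back-of-the-envelope consolidation arithmetic for a 1500-server scenario.
# All dollar figures below are illustrative assumptions, not vendor data.

workloads = 1500              # server workloads before virtualization
vm_density = 13               # VMs per physical host (from the vendor calculator)
cost_per_server = 6000        # assumed acquisition cost per physical server ($)
virt_sw_per_host = 3500       # assumed virtualization license + support per host ($)

hosts_before = workloads
hosts_after = -(-workloads // vm_density)    # ceiling division: ~116 hosts

hw_before = hosts_before * cost_per_server
hw_after = hosts_after * cost_per_server
new_sw_cost = hosts_after * virt_sw_per_host

print(f"physical hosts: {hosts_before} -> {hosts_after}")
print(f"hardware spend: ${hw_before:,} -> ${hw_after:,}")
print(f"hardware savings of ${hw_before - hw_after:,} are partly offset by "
      f"${new_sw_cost:,} of new virtualization software and support")
```

The point of the arithmetic is simply that the hardware line shrinks by an order of magnitude while a brand-new software line appears, which is exactly the offset the figure captions describe.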

In addition, there is the cost of the new complexity of optimizing the 13 or so VMs within each server to match resources (network bandwidth, storage capacity, IOPS and throughput) to application workload characteristics, business priorities and latency constraints. According to storage consultant Jon Toigo, “Consumers need to drive vendors to deliver what they really need, and not what the vendors want to sell them. They need to break with the old ways of architecting storage infrastructure and of purchasing the wrong gear to store their bits: Deploying a “SAN” populated with lots of stovepipe arrays and fabric switches that deliver less than 15% of optimal efficiency per port is a waste of money that bodes ill for companies in the areas of compliance, continuity, and green IT.”

Resource-management-based data center operations miss an important feature of services/applications management: all services are not created equal. They have different latency and throughput requirements, different business priorities, and different workload characteristics and fluctuations. What works for the goose does not work for the gander. Figure 4 shows a classification of different services based on their throughput and latency requirements, presented by Dell at the Server Design Summit. The applications are characterized by their need for throughput, latency and storage capacity. In order to take advantage of the differing priorities and characteristics of the applications, additional layers of services management are introduced that focus on service-specific resource management. Various appliance- or software-based solutions are added to the already complex resource management suites for servers, networks and storage to provide service-specific optimization. While this approach is well suited to generating recurring revenues for vendors, it is not well suited to lowering the customer's final TCO once all the piece-wise TCOs are added up. Over time, most of these appliances and software packages end up as shelf-ware while the vendors tout yet more TCO-reducing solutions. For example, a well-known solution vendor makes more annual revenue from maintenance and upgrades than from new products or services that help its customers really reduce TCO.

Figure 4: Various services/applications characterized by their throughput and latency requirements. The current resource-management-based data center does not optimally exploit resources based on application/service priority, workload variations and latency constraints. It is easy to see the inefficiency of deploying a "one size fits all" infrastructure. It would be more efficient to tailor "dumb" infrastructure and "stupid network" pools specialized for different latency and throughput characteristics, and let intelligent services provision themselves with the right resources based on their own business priorities, workload characteristics and latency constraints. This requires visibility and control of service specification, management and execution at run time, which necessitates a search for new computing models.
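To make the "tailored pools" idea in the caption above concrete, here is a minimal sketch of how a provisioning layer might bin services into latency- and throughput-specialized pools. The service entries, thresholds and pool names are illustrative assumptions, not values from Figure 4:

```python
# Minimal sketch: route services to specialized resource pools by their
# declared latency and throughput needs. Thresholds and entries are assumed.

SERVICES = {
    "trading":         {"latency_ms": 1,    "throughput_mbps": 500},
    "video_stream":    {"latency_ms": 100,  "throughput_mbps": 2000},
    "batch_analytics": {"latency_ms": 5000, "throughput_mbps": 8000},
    "web_frontend":    {"latency_ms": 50,   "throughput_mbps": 200},
}

def pick_pool(profile):
    """Map a service's declared requirements to a 'dumb' pool specialized for them."""
    if profile["latency_ms"] <= 5:
        return "low-latency pool (flash, local fabric)"
    if profile["throughput_mbps"] >= 1000:
        return "high-throughput pool (streaming-optimized storage)"
    return "general-purpose pool (commodity servers)"

for name, profile in SERVICES.items():
    print(f"{name:15s} -> {pick_pool(profile)}")
```

The classification logic is trivial by design: the intelligence lives in the service declaring its needs, not in the pool.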

In addition to the current complexity and cost of resource management to assure service availability, reliability, performance and security, there is an even more fundamental issue that plagues current distributed systems architecture. A distributed transaction that spans multiple servers, networks and storage devices in multiple geographies uses resources spread across multiple data centers. Managing the fault, configuration, accounting, performance and security (FCAPS) behavior of such a transaction requires end-to-end connection management, much like a telecommunication service spanning distributed resources. Therefore, focusing only on resource management inside a data center, without visibility and control of all the resources participating in the transaction, will not assure service availability, reliability, performance and security.
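One way to picture the gap is to ask what an end-to-end transaction record would have to carry: FCAPS state for every hop the transaction touches, across data centers, rather than each resource manager's local view. The following data-structure sketch is an assumption for illustration, not part of any existing system:

```python
# Sketch of an end-to-end transaction record carrying per-hop FCAPS state.
# Field names and status values are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class HopFCAPS:
    resource: str                 # e.g. "web-server@dc-east", "san-array@dc-west"
    fault: str = "ok"             # fault status reported by the hop
    configuration: str = ""       # configuration version in effect at the hop
    accounting: float = 0.0       # resource units consumed by this transaction
    performance_ms: float = 0.0   # latency contributed by the hop
    security: str = "authenticated"

@dataclass
class DistributedTransaction:
    txn_id: str
    hops: List[HopFCAPS] = field(default_factory=list)

    def end_to_end_latency(self) -> float:
        return sum(h.performance_ms for h in self.hops)

    def healthy(self) -> bool:
        return all(h.fault == "ok" for h in self.hops)

txn = DistributedTransaction("txn-42", [
    HopFCAPS("web-server@dc-east", performance_ms=3.2),
    HopFCAPS("db-primary@dc-west", performance_ms=11.5, fault="degraded"),
])
print(txn.end_to_end_latency(), txn.healthy())
```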

Distributed transactions transcend the stored-program-control implementation of the Turing machine that is at the heart of the atomic computing element in today's computing infrastructure. Communication and control are not an integral part of this atomic computing unit in the stored-program-control implementation of the Turing machine. Distributed transactions require interaction that integrates computing, control and communication, providing the ability to specify and execute highly temporal and hierarchical event flows. According to Goldin and Wegner, interactive computation is inherently concurrent: the computation of interacting agents or processes proceeds in parallel. Hoare, Milner and other founders of concurrency theory had long realized that Turing machines (TMs) do not model all of computation (Wegner and Goldin, 2003). However, when their theory of concurrent systems was first developed in the late '70s, it was premature to openly challenge TMs as a complete model of computation. Their theory positions interaction as orthogonal to computation, rather than a part of it. By separating interaction from computation, the question of whether the models for CCS and the Pi-calculus went beyond Turing machines and algorithms was avoided. The resulting divide between the theory of computation and concurrency theory runs very deep. The theory of computation views computation as a closed-box transformation of inputs to outputs, completely captured by Turing machines. By contrast, concurrency theory focuses on the communication aspect of computing systems, which is not captured by Turing machines: both the communication between computing components in a system and the communication between the computing system and its environment. As a result of this division of labor, there has been little in common between these fields and their communities of researchers. According to Papadimitriou (Papadimitriou, 1995), such a disconnect within the theory community is a sign of a crisis and of the need for a Kuhnian paradigm shift in our discipline.

Kuhnian paradigm shift or not, a new computing model called the DIME computing model (discussed in WETICE 2010) provides a convergence of these two disciplines by addressing computing and communication in a single computing entity: a managed Turing machine. The DIME network architecture provides a fractal (recursive) composition scheme to create an FCAPS-managed network of DIMEs implementing business workflows as DAGs that support both hierarchical and temporal event flows. The DIME computing model supports only those computations that can be specified as managed DAGs, where a management signaling network overlay allows execution of managed computing tasks (carried out by a computing unit called the MICE) in each Turing machine node, which is endowed with self-management using parallel computing threads. The MICE (see the video referenced in this blog for a description of the DIME and its use in distributed computing and its management) constitutes the atomic Turing machine controlled by the FCAPS manager in a DIME, which configures, executes and manages the MICE to load and run a well-specified computing workflow along with its FCAPS management. The MICE, under parallel real-time control of the DIME FCAPS manager aided by the signaling network overlay, gives the manager control over the start, stop, read and write abstractions of the Turing machine. Two implementations have provided an existence proof for the DIME network architecture.
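As a rough, assumed rendering of the division of labor just described (a toy sketch, not the reference DIME implementation): an FCAPS manager supervises a MICE worker through a signaling channel that is independent of the computing task itself, retaining start/stop control over it.

```python
# Toy sketch of a DIME-like node: an FCAPS manager controls a MICE worker
# via a signaling queue. Names and behavior are illustrative assumptions.

import threading, queue, time

class MICE(threading.Thread):
    """The managed computing element: runs its task until told to stop."""
    def __init__(self, task):
        super().__init__(daemon=True)
        self.task = task
        self._stop = threading.Event()

    def run(self):
        while not self._stop.is_set():
            self.task()
            time.sleep(0.1)

    def stop(self):
        self._stop.set()

class DimeNode:
    """FCAPS manager wrapping a MICE; reacts to signals on a control channel."""
    def __init__(self, task):
        self.signals = queue.Queue()       # signaling overlay (control path)
        self.mice = MICE(task)             # computing element (computing path)

    def manage(self):
        self.mice.start()
        while True:
            signal = self.signals.get()    # e.g. "fault", "reconfigure", "stop"
            if signal == "stop":
                self.mice.stop()
                break
            print(f"FCAPS manager handling signal: {signal}")

node = DimeNode(task=lambda: None)         # placeholder workload
manager = threading.Thread(target=node.manage, daemon=True)
manager.start()
node.signals.put("fault")                  # signaling is independent of the task
node.signals.put("stop")
manager.join()
```

The essential point the toy preserves is that the control (signaling) path and the computing path run in parallel and are separate, so management actions do not depend on the managed task cooperating.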

Figure 5 shows a DIME network implementing a Linux, Apache, MySQL and PHP/Perl/Python (LAMP) web services delivery and assurance infrastructure.

Figure 5: The GUI showing the configuration of a LAMP cloud (Mikkilineni, Morana, Zito, Di Sano, 2012). Each Apache and DNS instance is DIME-aware, running in a DIME-aware Linux operating system that transforms a process into a managed element in the DIME network. A video describes the implementation of auto-failover, auto-scaling and performance management of the DIME-aware LAMP cloud.

Look Ma! No Hypervisor or VM in My Cloud (See Video)

The prototype implementations demonstrate a side effect of the DIME network architecture, which combines the computing and communication abstractions at an atomic level: it decouples services management from the underlying hardware infrastructure management. This makes it possible to implement highly resilient distributed transactions with auto-scaling, self-repair, state-aware migration and self-protection, in short, end-to-end transaction FCAPS management, based on business priorities, workload fluctuations and latency constraints. No hypervisors or VMs are required. The intelligent management of services workflows with resilient distributed transactions offers a new architecture for the data center infrastructure. For the first time it will be possible to stop embedding service management intelligence in the infrastructure management layer through myriad expensive appliances and software systems. It will be possible to design new tiers of dumb infrastructure pools (of servers, storage and network devices) with different latency and throughput characteristics, and the services will be able to manage themselves, based on policies, by requesting appropriate resources according to their specifications. They will be able to self-migrate when quality-of-service levels are not met. The case for dumb infrastructure on a stupid network with intelligent services management rests on the following advantages:

  1. Separation of concerns: The network, storage and server hardware provide hardware infrastructure management with signaling-enabled FCAPS management. They do not encapsulate service management as current-generation equipment does.
  2. Specialization: The hardware is designed to meet specific latency and throughput characteristics, simplifying its design through specialization. Different hardware with FCAPS management and signaling provides plug-and-play components at run time.
  3. End-to-end service connection FCAPS management using the signaling network overlay allows dynamic service FCAPS management, facilitating self-repair, auto-scaling, self-protection, state-aware migration and end-to-end transaction security assurance (a minimal sketch follows this list).
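Here is that minimal sketch of how advantage 3 might look from the service's side, under assumed policy values and an assumed signaling hook: the service monitors its own response times and raises a migration request over the signaling overlay when its latency policy is violated.

```python
# Sketch of a service-side QoS policy check driving self-migration.
# The policy value and the signaling hook are illustrative assumptions.

LATENCY_POLICY_MS = 50        # assumed business policy for this service

def request_migration(signaling, reason):
    """Assumed hook: ask the signaling overlay for a better-fitting resource pool."""
    signaling.append({"action": "migrate", "reason": reason})

def check_qos(recent_latencies_ms, signaling):
    observed = sum(recent_latencies_ms) / len(recent_latencies_ms)
    if observed > LATENCY_POLICY_MS:
        request_migration(signaling,
                          f"avg latency {observed:.1f}ms > {LATENCY_POLICY_MS}ms")
    return observed

signals = []                  # stand-in for the signaling overlay
check_qos([42, 61, 78], signals)
print(signals)                # -> one migration request
```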

Figure 6 shows an example design of a possible storage device using a simple storage architecture enabled with FCAPS management over a signaling overlay. It can easily be built with commercial off-the-shelf (COTS) hardware. This design separates services management from storage device management and eliminates a host of storage management software systems, thus simplifying the data center infrastructure.

Figure 6: A gedanken design of autonomic storage and autonomic storage service deployment using the new DIME network architecture. The signaling overlay and FCAPS management are used to provide dynamic service management. Each service can request, using standard Linux OS services at run time, services from the storage device based on business priorities, workload fluctuations and latency constraints.
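To illustrate how a service might negotiate with such an FCAPS-managed storage pool at run time, here is a sketch in which the request format, pool catalogue and selection rule are all assumptions made for illustration:

```python
# Sketch: a service asks an FCAPS-managed storage pool for resources that
# match its priority, workload and latency constraints. All names are assumed.

STORAGE_POOLS = [
    {"name": "flash-tier", "latency_ms": 0.5, "free_gb": 2000},
    {"name": "sas-tier",   "latency_ms": 5.0, "free_gb": 50000},
]

def provision_storage(request):
    """Grant the slowest pool that still meets latency and capacity needs."""
    candidates = [p for p in STORAGE_POOLS
                  if p["latency_ms"] <= request["max_latency_ms"]
                  and p["free_gb"] >= request["capacity_gb"]]
    if not candidates:
        return {"granted": False, "reason": "no pool meets constraints"}
    # pick the slowest qualifying pool, saving faster tiers for stricter requests
    chosen = max(candidates, key=lambda p: p["latency_ms"])
    chosen["free_gb"] -= request["capacity_gb"]
    return {"granted": True, "pool": chosen["name"]}

# A high-priority, latency-sensitive service making a run-time request:
print(provision_storage({"service": "oltp-db", "priority": "high",
                         "capacity_gb": 500, "max_latency_ms": 1.0}))
```

In this model the storage stays "dumb": it only advertises latency and capacity, while the service's own priorities decide what to ask for.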

It is easy to see that the service connection model eliminates the need for clustering and provides new ways to achieve transaction resilience, with features such as service call forwarding, service call waiting, data broadcast and an 800-number-style service call model. It is equally easy to see how, with many-core servers, the DIME network architecture eliminates the inefficiencies of communication between Linux images within the same container (e.g., TCP/IP), and how simple SAS and flash storage can replace current-generation appliance-based storage strategies and their myriad management systems. Looking at the trends, a paradigm shift will soon be in play to transform data centers from their current role of managed server, networking and storage hosting centers (whether physical or virtual) into true service switching centers with telecom-grade trust. The emphasis will shift from resource switching and resource connection management to service switching and service connection management, instead of current efforts that replicate today's data center complexity inside the many-core servers themselves. With the resulting decoupling of services management from infrastructure management, the next generation of data centers will perhaps look more like the central offices of the old telcos, switching service connections.

Obviously the new computing model is in its infancy. It requires participation from academicians who can validate or reject its theoretical foundation; from VCs who can see beyond current approaches and are not satisfied with measuring data center efficiency by how many servers a single administrator can manage (as one Silicon Valley VC claimed as progress at the Server Design Summit); and from architects who exploit new paradigms to disrupt the status quo. By allowing Linux processes to be converted into a DIME network transcending physical boundaries, the DIME computing model allows easy migration from the current infrastructure to the new one without abandoning legacy applications, as the LAMP cloud prototype demonstrates.

In closing, I would like to point out that there have been many calls for a new computing model that combines computing and communication at the level of the atomic computing element, something which, as discussed above, the Turing machine does not provide. However, without high-bandwidth communication and exploitation of the parallelism that is abundant in the new generation of hardware, such new computing models are not very useful in practice. It seems that hardware advances have outpaced software advances, and perhaps it is time for computer scientists to take a serious second look at addressing this software shortfall in dealing with distributed transactions. As the following fable illustrates, it may be futile to look for parallel breakthrough solutions in a serial boat.

“When Master Foo and his student Nubi journeyed among the sacred sites, it was the Master’s custom in the evenings to offer public instruction to UNIX neophytes of the towns and villages in which they stopped for the night.  On one such occasion, a methodologist was among those who gathered to listen.  “If you do not repeatedly profile your code for hot spots while tuning, you will be like a fisherman who casts his net in an empty lake,” said Master Foo.
“Is it not, then, also true,” said the methodology consultant, “that if you do not continually measure your productivity while managing resources, you will be like a fisherman who casts his net in an empty lake?”
“I once came upon a fisherman who just at that moment let his net fall in the lake on which his boat was floating,” said Master Foo. “He scrabbled around in the bottom of his boat for quite a while looking for it.”  “But,” said the methodologist, “if he had dropped his net in the lake, why was he looking in the boat?”  “Because he could not swim,” replied Master Foo.
Upon hearing this, the methodologist was enlightened”        — Master Foo and the Methodologist
                                                                   (http://www.catb.org/esr/writings/unix-koans/methodology-consultant.html)

If you have transformational research results, or want to make a real difference in computer science research, see Call for Papers at:

www.workshop.kawaobjects.com and http://WETICE.org
