From Being to Becoming: Function, Structure and Fluctuations – Incremental versus Leap-frog Innovation in Datacenters
February 16, 2015

“We grow in direct proportion to the amount of chaos we can sustain and dissipate” ― Ilya Prigogine, Order out of Chaos: Man’s New Dialogue with Nature

Abstract

According to Gartner “Alpha organizations aggressively focus on disruptive innovation to achieve competitive advantage. Characterized by unknowns, disruptive innovation requires business and IT leaders to go beyond traditional management techniques and implement new ground rules to enable success.”

While there is a lot of buzz about “game changing” technologies and “disruptive innovation”, real game changers and disruptive innovators are few and far between. Leap-frog innovation is more like a “phase transition” in physics. A system is composed of individual elements with a well-defined function which interact with each other and the external world through a well-defined structure. The system usually exhibits normal equilibrium behavior that is predictable, and when there are small fluctuations, incremental innovation allows it to adjust and maintain the equilibrium predictably. Only when external forces inflict large or wildly unexpected fluctuations on the system is the equilibrium threatened; the system then exhibits emergent behavior in which unstable equilibrium introduces unpredictability into the evolution dynamics of the system. A phase transition occurs when the structure of the system is reconfigured through an architectural transformation, resulting in order out of chaos.

The difference between “Kaizen” (incremental improvement) and “disruptive innovation” lies in dealing with stable equilibrium under small fluctuations versus dealing with meta-stable equilibrium under large, destabilizing fluctuations. The current datacenter is in a similar transition from “being” to “becoming”, driven both by hyper-scale structure and by the fluctuations that the hardware and software systems delivering business processes experience, caused by rapidly changing business priorities on a global scale, workload variations and latency constraints. Is the current von Neumann stored program control implementation of the Turing machine reaching its limit? Is the datacenter poised for a phase transition from current ad-hoc distributed computing practices to a new theory-driven self-* architecture? In this blog we discuss a non-von Neumann managed Turing oracle machine network with a control architecture as an alternative.

“From Being to Becoming” – What Does It Mean?

The representation of the dynamics of a physical system as a linear, reversible (hence deterministic), temporal order of states requires that, in a deep sense, physical systems never change their identities through time; hence they can never become anything radically new (e.g., they can at most merely rearrange their parts, parts whose being is fixed). However, as elements interact with each other and their environment, the system dynamics can change dramatically when large fluctuations in the interactions induce a structural transformation, leading to chaos and the eventual emergence of a new order out of chaos. This is denoted as “becoming”. In short, the dynamics of near-equilibrium states with small-scale fluctuations in a system represent the “being”, while large deviations from equilibrium, the emergence of an unstable equilibrium and the final restoration of order in a new equilibrium state represent the “becoming”. According to Plato, “being” is absolute, independent, and transcendent. It never changes and yet causes the essential nature of the things we perceive in the world of “becoming”. The world of becoming is the physical world we perceive through our senses. This world is always in movement, always changing. The two aspects – the static structures and the dynamics of their evolution – are two sides of the same coin. Dynamics (becoming) represents time, and the static configuration at any particular instant represents the “being”. Prigogine applied this concept to understand the chemistry of matter, phase transitions and the like. Individual elements represent function and the groups (constituting a system) represent structure with dynamics. Fluctuations caused by the interactions within the system and between the system and its environment cause the dynamics of the system to induce transitions from being to becoming. Thus, function, structure and fluctuations determine the system and its dynamics, defining the complexity, chaos and order.

Why is it Relevant to Datacenters?

Datacenters are dynamic systems where software working with hardware delivers information processing services that allow modeling, interaction, reasoning, analysis and control of the environment external to them. Figure 1 shows the hardware, the software and their interactions among themselves and with the external world. There are two distinct systems interacting with each other to deliver the intent of the datacenter, which is to execute specific computational workflows that model, monitor and control the external world processes using the computing resources:

  1. Service workflows modeling the process dynamics of the system depicting the external world and its interactions. Usually this consists of the functional requirements of the system under consideration, such as business logic and sensor and actuator monitoring and control (the computed). The model consists of various functions captured in a structure (e.g., a directed acyclic graph, or DAG) and its evolution in time. This model does not include the computing resources required to execute the process dynamics. It is assumed that the resources (CPU, memory, time etc.) will be available for the computation.
  2. The non-functional requirements that address the resources required to execute the functions as a function of time and of fluctuations, both in the interactions with the external world and in the computing resources available to accomplish the intent defined in the functional requirements. The computation as implemented in the von Neumann stored program control model of the Turing machine requires time (impacted by CPU speed, network latency, bandwidth, storage IOPs, throughput and capacity) and memory. The computing model assumes unbounded resources, including time for completing the computation. Today, these resources are provided by a cluster of servers and other devices containing multi-core CPUs and memory, networked with different types of storage. The computations are executed in a server or device by allocating the resources through an operating system, which is itself software that mediates the resources to various computations. (A minimal sketch of this separation follows the list.)
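As a rough illustration of the separation described in the two items above, the sketch below (Python, with purely hypothetical task and field names) captures a service workflow as a directed acyclic graph of functions, alongside a separate, non-functional specification of the resources each step is assumed to need. Neither part references the other's implementation.

    # Minimal sketch: a service workflow (functional) and its resource needs (non-functional).
    # All names here are illustrative, not part of any existing system.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str                                  # a business function (the "computed")
        depends_on: list = field(default_factory=list)

    @dataclass
    class ResourceSpec:                            # non-functional requirements per task
        vcpus: int
        memory_gb: float
        max_latency_ms: float

    # Functional model: a DAG of business-logic steps.
    workflow = [
        Task("validate_order"),
        Task("charge_payment", depends_on=["validate_order"]),
        Task("update_inventory", depends_on=["validate_order"]),
        Task("notify_customer", depends_on=["charge_payment", "update_inventory"]),
    ]

    # Non-functional model: resources assumed to be available for each step.
    resources = {
        "validate_order":   ResourceSpec(vcpus=1, memory_gb=0.5, max_latency_ms=50),
        "charge_payment":   ResourceSpec(vcpus=2, memory_gb=1.0, max_latency_ms=100),
        "update_inventory": ResourceSpec(vcpus=1, memory_gb=1.0, max_latency_ms=100),
        "notify_customer":  ResourceSpec(vcpus=1, memory_gb=0.5, max_latency_ms=500),
    }

The point of keeping the two models apart is that the DAG can evolve (new steps, new edges) without touching the resource specification, and the resource specification can be re-negotiated with the infrastructure without touching the business logic.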

On the right-hand side of Figure 1, we depict the computing resources required to execute the functions in a given structure, whether distributed or not. In the middle, we represent the application workflows composed of various components constituting an application area network (AAN) that is executed in a distributed computing cluster (DCC) made up of the hardware resources with specified service levels (CPU, memory, network bandwidth, cluster latency, storage IOPs, throughput and capacity). The left-hand side shows a desired end-to-end process configuration and evolution monitoring and control mechanism. When all is said and done, the process workflows need to execute various functions using the computing resources made available in the form of a distributed cluster providing the required CPU, memory, network bandwidth, latency, storage IOPs, throughput and capacity. The structure is determined by the non-functional requirements such as resource availability, performance, security and cost. Fluctuations evolve the process dynamics and require adjusting the resources to meet the needs of applications coping with those fluctuations.

Figure 1: Decoupling service orchestration and infrastructure orchestration to deliver function, structure and dynamic process flow to address the fluctuations both in resource availability and service demand


There are two ways to match the available resources to the computing nodes, connected by links, that execute the business process dynamics. The first approach is the current state of the art; the second is an alternative based on extensions to the current von Neumann stored program implementation of the Turing machine.

Current State of the Art

The infrastructure is infused with intelligence about various applications and their evolving needs and adjusts the resources (time of computation, affected by CPU, network bandwidth, latency, storage capacity, throughput and IOPs, and the memory required for the computation). Current IT has evolved from a model where the resources are provisioned anticipating peak workloads and the structure of the application network is optimized for coping with deviations from equilibrium. Conventional computing models using physical servers (often referred to as bare metal) cannot cope with wild fluctuations when new server provisioning times are much larger than the onset time of the fluctuations and when their magnitude cannot be predicted well enough to pre-plan the provisioning of additional resources. Virtualization of the servers and on-demand provisioning of virtual machines reduce provisioning times substantially, making it possible to institute auto-scaling, auto-failover and live migration across distributed resources using virtual machine image mobility. However, it comes with a price:

    1. The virtual image is still tied to the infrastructure (network, storage and computing resources supporting the VM), and moving a VM involves manipulating a multitude of distributed resources, often owned or operated by different parties, and touches many infrastructure management systems, increasing the complexity and cost of management.
    2. If the distributed infrastructure is homogeneous and supports VM mobility, the solution is simpler, but it forces vendor lock-in and does not allow one to take advantage of commodity infrastructure offered by multiple suppliers.
    3. If the distributed infrastructure is heterogeneous, VM mobility now must depend on myriad management systems and most often, these management systems themselves need other management systems to manage their resources.
    4. VM mobility and management also increase bandwidth and storage requirements and lead to a proliferation of point solutions and tools for moving across heterogeneous distributed infrastructure, which adds operational complexity and cost.

The current state of the art based on the mobility of VMs and infrastructure orchestration is summarized in Figure 2.


Figure 2: The infrastructure orchestration based on second guessing the application quality of service requirements and its dynamic behavior

It clearly shows the futility of orchestrating service availability, performance, compliance, cost and security in a highly distributed and heterogeneous environment where scale and fluctuations dominate. The cost and complexity of navigating multiple infrastructure service offerings often outweigh the benefits of commodity computing. This is one reason why enterprises complain that 70% of their budget is often spent on keeping the service lights on.

Alternative Approach: A Clean Separation of Business Logic Implementation and the Operational Realization of Non-functional Requirements

Another approach is to decouple application and business process workflow management from distributed infrastructure mobility by placing the applications on the infrastructure that has the right resources, monitoring the evolution of the applications and proactively managing the infrastructure to add or delete resources with predictability based on history. Based on the RPO and RTO, the application structure is adjusted to create active/passive or active/active nodes to manage application QoS and workflow/business process QoS. This approach requires a top-down method of business process implementation: the specification of the business process intent followed by a hierarchical and temporal specification of process dynamics with the context, constraints, communication and control of the group and its constituents, and the initial conditions for the equilibrium quality of service (QoS). The details include:

  1. Non-functional requirements that specify availability, performance, security, compliance and cost constraints, with the policies specified as hierarchical and temporal process flows. The intent at the higher level is translated into the downstream intent of the computing nodes contributing to the workflow.
  2. A structure, distributed or otherwise, of a network of networks providing the computing nodes with specified SLAs for resources (CPU, memory, network bandwidth, latency, storage IOPs, throughput and capacity).
  3. A method to implement autonomic behavior with visibility and control of application components so that they can be managed with the defined policies. When scale and fluctuations demand a change in the structure to transition to a new equilibrium state, the policy implementation processes proactively add or subtract computing nodes, or find existing nodes to replicate, repair, recombine or reconfigure the application components. The structural change implements the transition from being to becoming (see the sketch after this list).
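A minimal sketch of the kind of policy loop described in item 3, in Python with entirely hypothetical names and thresholds: it compares measured QoS against the blueprint's intent and selects one of the structural actions (replicate, repair, recombine, reconfigure) when the deviation warrants it.

    # Hypothetical policy loop for item 3 above; names and thresholds are illustrative only.
    def reconcile(intent, observed):
        """Return structural actions when observed QoS deviates from the blueprint intent."""
        actions = []
        if observed["availability"] < intent["availability"]:
            actions.append(("replicate", "add an active or passive copy of the component"))
        if observed["error_rate"] > intent["max_error_rate"]:
            actions.append(("repair", "restart or replace the faulty component"))
        if observed["latency_ms"] > intent["max_latency_ms"]:
            actions.append(("reconfigure", "move the component closer to its data or callers"))
        if observed["utilization"] > intent["max_utilization"]:
            actions.append(("recombine", "re-partition the workload across more nodes"))
        return actions or [("none", "equilibrium holds; keep monitoring")]

    intent = {"availability": 0.999, "max_error_rate": 0.01,
              "max_latency_ms": 200, "max_utilization": 0.8}
    observed = {"availability": 0.995, "error_rate": 0.002,
                "latency_ms": 450, "utilization": 0.6}

    for action, rationale in reconcile(intent, observed):
        print(action, "->", rationale)

In this toy run the availability and latency deviations trigger "replicate" and "reconfigure"; in a real system the actions would be carried out by the control plane, not by the application itself.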

A New Architecture to Accommodate Scale and Fluctuations: Toward the Oneness of the Computer and the Computed

There is a fundamental reason why the current Turing/von Neumann stored program computing model cannot address large-scale distributed computing with fluctuations both in resources and in computation workloads without increasing complexity and cost (Mikkilineni et al. 2012). As von Neumann put it, “It is a theorem of Gödel that the description of an object is one class type higher than the object.” An important implication of Gödel’s incompleteness theorem is that it is not possible to have a finite description with the description itself as a proper part. In other words, it is not possible to read yourself or process yourself as a process. In short, Gödel’s theorems prohibit “self-reflection” in Turing machines. According to Alan Turing, “Gödel’s theorems show that every system of logic is in a certain sense incomplete, but at the same time it indicates means whereby from a system L of logic a more complete system L′ may be obtained. By repeating the process we get a sequence L, L1 = L′, L2 = L1′ … each more complete than the preceding. A logic Lω may then be constructed in which the provable theorems are the totality of theorems provable with the help of the logics L, L1, L2, … Proceeding in this way we can associate a system of logic with any constructive ordinal. It may be asked whether such a sequence of logics of this kind is complete in the sense that to any problem A, there corresponds an ordinal α such that A is solvable by means of the logic Lα.”
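Schematically, and only as a shorthand for the passage just quoted (not Turing's exact construction), the hierarchy of logics can be written as

    L_0 = L, \qquad L_{n+1} = L_n', \qquad L_\omega = \bigcup_{n < \omega} L_n

with the construction carried on through the constructive ordinals, so that the completeness question becomes whether for every problem A there is an ordinal α such that A is solvable in Lα.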

This observation along with his introduction of the oracle-machine influenced many theoretical advances including the development of generalized recursion theory that extended the concept of an algorithm. “An o-machine is like a Turing machine (TM) except that the machine is endowed with an additional basic operation of a type that no Turing machine can simulate.” Turing called the new operation the ‘oracle’ and said that it works by ‘some unspecified means’. When the Turing machine is in a certain internal state, it can query the oracle for an answer to a specific question and act accordingly depending on the answer. The o-machine provides a generalization of the Turing machines to explore means to address the impact of Gödel’s incompleteness theorems and problems that are not explicitly computable but are limit computable using relative reducibility and relative computability.

According to Mark Burgin, an Information processing system (IPS) “has two structures—static and dynamic. The static structure reflects the mechanisms and devices that realize information processing, while the dynamic structure shows how this processing goes on and how these mechanisms and devices function and interact.”

The software contains the algorithms (à la the Turing machine) that specify information processing tasks, while the hardware provides the resources required to execute the algorithms. The static structure is defined by the association of software and hardware devices, and the dynamic structure is defined by the execution of the algorithms. The meta-knowledge of the intent of the algorithm, the association of a specific algorithm execution with a specific device, and the temporal evolution of information processing and exception handling when the computation deviates from the intent (whether because of software behavior, hardware behavior or their interaction with the environment) is outside the software and hardware design and is expressed in non-functional requirements. Mark Burgin calls this the Infware, which contains the description and specification of the meta-knowledge and can also be implemented using hardware and software to enforce the intent with appropriate actions.

The implementation of the Infware using Turing machines introduces the same dichotomy mentioned by Turing, leading to the manager-of-managers conundrum. This is consistent with the observation of Cockshott et al. (2012): “The key property of general-purpose computer is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves.”

The goals of the distributed system determine the resource requirements and the computational process definition of individual service components based on their priorities, workload characteristics and latency constraints. The overall system resiliency, efficiency and scalability depend on the individual service component workloads and the latency characteristics of their interconnections, which in turn depend on the placement of these components (configuration) and the available resources. Resiliency (fault, configuration, accounting, performance and security, often denoted by FCAPS) is measured with respect to a service’s tolerance to faults, fluctuations in contention for resources, performance fluctuations, security threats and changing system-wide priorities. Efficiency refers to optimal resource utilization. Scaling addresses end-to-end resource provisioning and management with respect to increasing the number of computing elements required to meet service needs.

A possible solution to address resiliency with respect to scale and fluctuations is an application network architecture based on increasing the intelligence of computing nodes, presented at the Turing Centenary Conference (2012) for improving the resiliency, efficiency and scaling of information processing systems. In essence, the distributed intelligent managed element (DIME) network architecture extends the conventional computational model of information processing networks, allowing improvement of the efficiency and resiliency of computational processes. This approach is based on organizing the process dynamics under the supervision of intelligent agents. The DIME network architecture utilizes the DIME computing model with a non-von Neumann parallel implementation of a managed Turing machine with a signaling network overlay and adds cognitive elements to evolve super-recursive information processing. The DIME network architecture introduces three key functional constructs to enable process design, execution, and management to improve both the resiliency and the efficiency of application area networks delivering distributed service transactions using both software and hardware (Burgin and Mikkilineni):

  1. Machines with an Oracle: Executing an algorithm, the DIME basic processor P performs the {read -> compute -> write} instruction cycle or its modified version the {interact with a network agent -> read -> compute -> interact with a network agent -> write} instruction cycle. This allows the different network agents to influence the further evolution of computation, while the computation is still in progress. We consider three types of network agents: (a) A DIME agent. (b) A human agent. (c) An external computing agent. It is assumed that a DIME agent knows the goal and intent of the algorithm (along with the context, constraints, communications and control of the algorithm) the DIME basic processor is executing and has the visibility of available resources and the needs of the basic processor as it executes its tasks. In addition, the DIME agent also has the knowledge about alternate courses of action available to facilitate the evolution of the computation to achieve its goal and realize its intent. Thus, every algorithm is associated with a blueprint (analogous to a genetic specification in biology), which provides the knowledge required by the DIME agent to manage the process evolution. An external computing agent is any computing node in the network with which the DIME unit interacts.
  2. Blueprint or policy-managed fault, configuration, accounting, performance and security monitoring and control (FCAPS): The DIME agent, which uses the blueprint to configure, instantiate, and manage the DIME basic processor executing the algorithm, employs concurrent DIME basic processors (with their own blueprints specifying their evolution) to monitor the vital signs of the managed DIME basic processor and implements various policies to assure non-functional requirements such as availability, performance, security and cost management while the managed DIME basic processor is executing its intent. This approach integrates the evolution of the execution of an algorithm with the concurrent management of available resources to assure the progress of the computation.
  3. DIME network management control overlay over the managed Turing oracle machines: In addition to read/write communication of the DIME basic processor (the data channel), other DIME basic processors communicate with each other using a parallel signaling channel. This allows the external DIME agents to influence the computation of any managed DIME basic processor in progress based on the context and constraints. The external DIME agents are DIMEs themselves. As a result, changes in one computing element could influence the evolution of another computing element at run time without halting its Turing machine executing the algorithm. The signaling channel and the network of DIME agents can be programmed to execute a process, the intent of which can be specified in a blueprint. Each DIME basic processor can have its own oracle managing its intent, and groups of managed DIME basic processors can have their own domain managers implementing the domain’s intent to execute a process. The management DIME agents specify, configure, and manage the sub-network of DIME units by monitoring and executing policies to optimize the resources while delivering the intent.
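The three constructs above can be caricatured in a few lines of Python. Everything here is hypothetical and greatly simplified (it is not the actual DIME implementation): a basic processor runs the {interact -> read -> compute -> interact -> write} cycle, while a DIME agent, reachable over a separate signaling channel, can steer the computation between steps without halting it.

    # Toy sketch of a managed oracle-machine cycle with a parallel signaling channel.
    # All names are illustrative; this is not the DIME reference implementation.
    import queue

    class DimeAgent:
        """Plays the oracle role: knows the blueprint (intent, context, constraints,
        communication, control) and answers queries from the basic processor."""
        def __init__(self, blueprint):
            self.blueprint = blueprint
            self.signals = queue.Queue()        # signaling channel (control plane)

        def advise(self, state):
            # Consume any pending signal and decide how the computation should proceed.
            signal = None if self.signals.empty() else self.signals.get()
            if signal == "pause" or state["step"] > self.blueprint["max_steps"]:
                return "stop"
            return "continue"

    class DimeBasicProcessor:
        """Executes the algorithm as an {interact -> read -> compute -> interact -> write} cycle."""
        def __init__(self, agent, data):
            self.agent, self.data, self.step = agent, data, 0

        def run(self):
            while True:
                if self.agent.advise({"step": self.step}) != "continue":   # interact
                    break
                value = self.data.pop(0) if self.data else None            # read
                result = None if value is None else value * value          # compute
                if self.agent.advise({"step": self.step}) != "continue":   # interact
                    break
                print("write:", result)                                    # write
                self.step += 1

    agent = DimeAgent(blueprint={"max_steps": 3})
    DimeBasicProcessor(agent, data=[1, 2, 3, 4, 5]).run()
    # A manager DIME could also inject agent.signals.put("pause") from another thread,
    # influencing the computation while it is still in progress.

The essential point, which the sketch only hints at, is that the data channel (read/write) and the signaling channel (advise/pause) are distinct, so policy can act on a running computation without stopping its Turing machine.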

The result is a new computing model, a management model and a programming model which infuse self-awareness, using an intelligent Infware, into a group of software components deployed on a distributed cluster of hardware devices while enabling the monitoring and control of the dynamics of the computation to conform to the intent of the computational process. The DIME network architecture (DNA) based control architecture configures the software and hardware components appropriately to execute the intent. As the computation evolves, the control agents monitor the evolution and make appropriate adjustments to maintain an equilibrium conforming to the intent. When the fluctuations create conditions for unstable equilibrium, the control agents reconfigure the structure in order to create a new equilibrium state that conforms to the intent based on policies.

Figure 3 shows the Infware, hardware and software executing a web service using DNA.


Figure 3: Hardware and software networks with a process control Infware orchestrating the life-cycle evolution of a web service deployed on a Distributed Computing Cluster

The hardware components are managed dynamically to configure an elastic distributed computing cluster (DCC) to provide the required resources to execute the computations. The software components are organized as managed Turing oracle machines with a control architecture to create AANs that can be monitored and controlled to execute the intent using the network management abstractions of replication, repair, recombination and reconfiguration. With DNA, the datacenters are able to evolve from being to becoming.

It is important to note that DNA has been implemented (Mikkilineni et al. 2012, 2014) to demonstrate a couple of functions that cannot be accomplished today with the current state of the art:

  1. Migrating a workflow being executed in a physical server (a web service transaction including a web server, application server and a database) to another physical server without a reboot or losing transactions to maintain recovery time and recovery point objectives. No virtual machines are required although they can be used just as if they were bare-metal servers.
  2. Providing workflow auto-scaling, auto-failover and live migration with retention of application state using distributed computing clusters with heterogeneous infrastructure (bare-metal servers, private and public clouds etc.) without infrastructure orchestration to accomplish them (e.g., without moving virtual machine images or LXC container based images).

The approach using DNA allows the implementation of the above functions without requiring changes to existing applications, OSs or current infrastructure because the architecture non-intrusively extends the current Turing computing model to a managed Turing oracle machine network with control network overlay. It is not a coincidence that similar abstractions are present in how cellular organisms, human organizations and telecommunication networks self-govern and deliver the intent of the system (Mikkilineni 2012).

Only time will tell if the DNA implementation of Infware is an incremental or leap-frog innovation.

Acknowledgements

This work originated from discussions started at IEEE WETICE 2009 to address the complexity, security and compliance issues in cloud computing. The work of Dr. Giovanni Morana, the C3DNA Team and the theoretical insights from Professor Eugene Eberbach, Professor Mark Burgin and Pankaj Goyal are behind the current implementation of DNA.


SDDC, SDN, NFV, SFV, ACI, Service Governor, Super Recursive Algorithms and All That Jazz:
October 27, 2014

“It’s very likely that on the basis of philosophy that every error has to be caught, explained, and corrected, a system of the complexity of the living organism would not run for a millisecond.“
—– von Neumann, Papers of John von Neumann on Computing and Computer Theory, Hixon Symposium, September 20, 1948, Pasadena, CA, The MIT Press, 1987.

Communication, Collaboration and Commerce at the Speed of Light:

With the advent of many-core servers, high-bandwidth network technologies connecting these servers, and a new class of high-performance storage devices that can be optimized to meet workload needs (IOPs-intensive, throughput-sensitive or capacity-hungry workloads), the Information Technology (IT) industry is looking at a transition from its server-centric, low-bandwidth, client-server origins to geographically distributed, highly scalable and resilient composed-service creation, delivery and assurance environments that meet rapidly changing business priorities, latency constraints, fluctuations in workloads and the availability of required resources. Distributed service composition and delivery bring new challenges with scale and with fluctuations both in demand and in the availability of resources. New approaches are emerging to improve the resiliency and efficiency of distributed system design, deployment, management and control.

The Jazz Metaphor:

The quest for this transition is best described by the Jazz metaphor aptly summarized by Holbrook [1]: “Specifically, creativity in all areas seems to follow a sort of dialectic in which some structure (a thesis or configuration) gives way to a departure (an antithesis or deviation) that is followed, in turn, by a reconciliation (a synthesis or integration that becomes the basis for further development of the dialectic). In the case of jazz, the structure would include the melodic contour of a piece, its harmonic pattern, or its meter…. The departure would consist of melodic variations, harmonic substitutions, or rhythmic liberties…. The reconciliation depends on the way that the musical departures or violations of expectations are integrated into an emergent structure that resolves deviation into a new regularity, chaos into a new order, and surprise into a new pattern as the performance progresses.”

The Thesis:

The thesis in the IT evolution is the automation of business processes and service delivery using client-server architectures. It served well as long as the service scale and the fluctuations in service delivery infrastructure resources were within bounds that allowed available resources to be increased or decreased to meet the fluctuating demand. In addition, the resiliency of the service has always been adjusted by improving the resiliency (availability, performance and security) of the infrastructure through various appliances, processes and tools. This introduced a timescale for meeting the resiliency required for various applications, expressed in terms of recovery time objectives and recovery point objectives. The resulting management “time constant” (defined as the time to recover a service to meet customer satisfaction) has been continuously decreasing with the use of newer technologies, tools and process automation.
However, with the introduction of the high-speed Internet, access to mobile technology and the globalization of e-commerce, the scale and fluctuations in service demand have changed radically, putting challenging demands on provisioning resources within shorter and shorter periods of time. Figure 1 summarizes the key drivers that are forcing the drastic reduction of the management time constant.


Figure 1: Global communication, collaboration and commerce at the speed of light is forcing the drastic reduction in IT resource management time constant

The Anti-Thesis:

The result is the anti-thesis (the word is not used pejoratively; in the Jazz metaphor it denotes innovation, creativity and a touch of anti-establishment rebellion): virtualize infrastructure management (compute, storage and network resources) and provide intelligent resource management services that utilize commodity infrastructure connected by fat pipes. Software-defined data center (SDDC) is used to represent the dynamic provisioning of server clusters connected by a network and attached to the required storage, all meeting the service levels required by the applications that are composed to create a service transaction. The idea is to monitor the resource utilization of these service components and adjust the resources as required to meet the Quality of Service (QoS) needs of the service transaction (in terms of CPU, memory, network bandwidth, latency, storage throughput, IOPs and capacity). Network function virtualization (NFV) denotes the dynamic provisioning and management of network services such as routing, switching and controlling commodity hardware that is solely devoted to connecting various devices to assure the desired network bandwidth and latency. Storage function virtualization (SFV) similarly denotes the dynamic provisioning and management of commodity storage hardware with the required IOPs, throughput and capacity. ACI denotes application-centric infrastructure, which is sensitive to the needs of a particular application and dynamically adjusts the resources to provide the right CPU, memory, bandwidth, latency, storage IOPs, throughput and capacity. The drive to move away from proprietary network and storage equipment to commodity high-performance hardware made ubiquitous with open interface architectures is intended to foster competition and innovation in both hardware and software. The open software is supposed to match the needs of the application by tuning the resources dynamically using the compute, network and storage management functions made available as open-source software.

Unfortunately, the anti-thesis brings its own issues in transforming the current infrastructure, which has evolved over a few decades, to the new paradigm.

  1. The new approach has to accommodate current infrastructure and applications and allow seamless migration to the new paradigm without vendor lock-in on the new infrastructure. A fork-lift strategy that involves time, money and service interruption will not work.
  2. Current infrastructure is designed to provide low-latency, high-performance application quality of service with various levels of security. For mission-critical applications to migrate to the new paradigm, these requirements have to be met without compromise.
  3. The new paradigm should not require a new way of developing applications; it must support current development languages and processes without new methodology lock-in. An application is defined both by functional requirements that dictate the specific domain functions and logic and by non-functional requirements that define operational constraints related to service availability, reliability, performance, security and cost, dictated by business priorities, workload fluctuations and resource latency constraints. A non-functional requirement specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture. The architecture for non-functional requirements plays a key role in whether the open systems approach will succeed or fail. An architecture that defines a plug-and-play approach requires a composition scheme, which leads to the next issue.
  4. There must be a way to compose applications developed by different vendors without having to look inside their implementation. In essence there must be a composition architecture that allows applications to be developed independently but can be composed to create new applications without having to modify the original components. Even when you have open-sourced applications, integrating them and creating new workflows and services is a labor intensive and knowledge sensitive task. The efficiency will be thwarted by the need for service engagements, training and maintenance of integrated workflows.

Current approaches suggested in the anti-thesis movement embracing virtual machines (VMs), open-sourced applications and cloud computing fail on all these accounts by increasing complexity or requiring vendor, API and architecture dependency. The result is increased operational cost from integration and from dependency on ad-hoc software and services.

The increase in complexity with scale and distribution is primarily an issue of architecture, and it is not addressed by throwing more ad-hoc software at it in the form of managers of managers, point solutions and tools. It has more to do with the limitations of the current computing architecture than with a lack of good ad-hoc software approaches.

Server virtualization creates a virtual machine image that can be replicated easily on different physical servers with shared resources. The introduction of the hypervisor to virtualize hardware resources (CPU and memory) allows multiple virtual machine images to share the resources of a physical server. NFV and SFV provide management functions to control the underlying commodity hardware. OpenStack and other infrastructure provisioning mechanisms have evolved through the anti-thesis movement to integrate VM provisioning with NFV and SFV provisioning to create clusters of VMs on which applications can deliver the service transactions. Figure 2 shows an OpenStack implementation of such a service provisioning process. A cluster of VMs required for a service delivery can be provisioned with the required service level agreements to assure the right CPU, memory, bandwidth, latency, storage IOPs, throughput and capacity. It is also important to note that OpenStack can provision not only a VM cluster but also a physical server cluster or a mixture of the two. It allows adding, deleting or tuning a VM on demand. In addition, OpenStack allows applications themselves to be part of the image, and snapshots can be reused to replicate the VM on any server. Clusters with the appropriate applications and dependencies, with connectivity and firewall rules, can be provisioned and replicated. This allows for orchestration of VM images to provide auto-failover, auto-scaling, live migration and auto-protection for service delivery.
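As a concrete, if simplified, example of this kind of provisioning, the sketch below uses the openstacksdk Python client to request a small cluster of servers from a flavor that encodes the vCPU/RAM SLA. The cloud name, image, flavor and network names are placeholders for whatever the local OpenStack deployment actually defines.

    # Sketch using the openstacksdk client; "mycloud", the image, flavor and network
    # names are placeholders for values defined by the local OpenStack deployment.
    import openstack

    conn = openstack.connect(cloud="mycloud")          # credentials come from clouds.yaml

    image = conn.compute.find_image("ubuntu-22.04")
    flavor = conn.compute.find_flavor("m1.medium")     # the flavor encodes the vCPU/RAM SLA
    network = conn.network.find_network("app-private")

    cluster = []
    for i in range(2):                                 # a two-node cluster for the service
        server = conn.compute.create_server(
            name=f"app-node-{i}",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        cluster.append(conn.compute.wait_for_server(server))

    for node in cluster:
        print(node.name, node.status)

Scaling the cluster up or down is then a matter of creating or deleting servers against the same flavor, which is the elasticity described in the paragraph above.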


Figure 2: OpenStack is used to provision infrastructure with required service level agreements to assure cpu, memory, bandwidth, storage IOPs, throughput, storage capacity of individual virtual machine (VM) and the network latency of the VM cluster

Unfortunately, the anti-thesis movement depends solely on infrastructure mobility and management through VMs and the associated plumbing, which requires lock-in on the availability of the same OpenStack environment across the distributed infrastructure or on complex image orchestration add-ons. More recently, instead of moving the whole virtual image containing the OS, run-time environments and applications along with their configurations, a mini-OS image (using a subset of operating system services) is created with the applications and their configurations. LXC containers and Docker containers are examples. The use of VM or container mobility to move applications from one infrastructure to another, in order to manage the infrastructure SLAs that meet the QoS needs of an application, has created a plethora of ad-hoc solutions adding to the complexity. Figure 3 shows the current state of the art.


Figure 3: Current state-of-the-art that provides application QoS through Virtual Machine mobility or container mobility where container is also an image

While this approach provides a solution to meet application scaling and fluctuations needs as long as the infrastructure meets certain requirements, there are certain shortcomings in distributed heterogeneous infrastructures provided by different vendors:

  1. Multiple Orchestrators are required when different architectures and infrastructure management systems are involved
  2. Too many infrastructure management tools, point solutions and integration services increase cost and complexity
  3. Manager of Managers create complexity
  4. Cannot scale across distributed infrastructures belonging to different service providers to leverage commodity infrastructure, resulting in vendor lock-in
  5. VM image mobility creates additional VM image management overhead and runaway bandwidth and storage consumption with the proliferation of VM instances
  6. Lack of end-to-end service security visibility and control when services span across multiple service infrastructures.
  7. Managing low-latency transactions in distributed environments increases cost and complexity

Figure 4 shows the complexity involved in scaling services across distributed heterogeneous infrastructures with different owners using different infrastructure management systems. Integrating multiple distributed infrastructures with disparate management systems is not a highly scalable solution without increasing complexity and cost.

Obviously, if scale, distribution and fluctuations (both in demand and in resources) are not a requirement, then the thesis will do well. Today, there are still many mainframe systems providing high transaction rates, albeit at a higher cost. The anti-thesis is born out of the need for a high degree of scalability and distribution, and for handling fluctuations with higher efficiency. Big data analysis and large-scale collaboration systems are examples. However, there is a large class of services that would like to leverage commodity infrastructure and gain resiliency, security and application QoS management without vendor lock-in or a high cost of complexity.

There are three stakeholders in an enterprise who want different things from infrastructure to provide QoS assurance:

  1. The Line of business owners and the CIO want:
    1. Service Level Quality (availability, performance, security and cost) Assurance
    2. End-to-end service visibility and control
    3. Precise resource accounting
    4. Regulatory Compliance
  2. The IT infrastructure providers want:
    1. Provide “Cloud-like Services” in private datacenters
    2. Advantage of commodity infrastructure without vendor lock-in
    3. Ability to “migrate service” or “tune infrastructure SLAs” based on Policies and application demand
    4. Ability to burst into cloud without vendor-lock-in
  3. The developers want:
    1. Focus on business logic coding and specification of run-time requirements for resources (application intent, context, communications, control and constraints) without worrying about run-time infrastructure configurations
    2. Converged DevOps to develop, test and deploy with agility
    3. Service deployment architecture decoupling non-functional and functional requirements
    4. Service composition tools for reuse
    5. End-to-end visibility and profiling at run-time across the stack for Debugging

In essence, service developers want to focus on functional requirement fulfillment without having to worry about resource availability in a fluctuating environment. Monitoring resource utilization and acting on the non-deterministic impact of scaling and fluctuations should be supported by a common architecture that decouples application execution from the underlying resource management, distributed or not.


Figure 4: Complexity in a distributed infrastructure where scaling and fluctuations are increasing

The Synthesis:

The synthesis depends on addressing the scaling and fluctuation issues without vendor lock-in or architecture lock-in that prevents developers from using their current environments; it also requires accommodating current infrastructure while allowing new infrastructure with NFV and SFV to integrate seamlessly. For example, the anti-thesis solutions require certain features in their OSs, and new middleware must run in distributed environments; this leaves a host of legacy systems out.

A call for the synthesis is emerging from two quarters:

  1. Industry analysts such as Gartner who predict that a service governor will emerge in due time. “A service governor [2] is a runtime execution engine that has several inputs: business priorities, IT service descriptions (and dependency model), service quality and cost policies. In addition, it takes real-time data feeds that assess the performance of user transactions and the end-to-end infrastructure, and uses them to dynamically optimize the consumption of real and virtual IT infrastructure resources to meet the business requirements and service-level agreements (SLAs). It performs optimization through dynamic capacity management (that is, scaling resources up and down) and dynamically tuning the environment for optimum throughput given the demand. The service governor is the culmination of all technologies required to build the real-time infrastructure (RTI), and it’s the runtime execution management tool that pulls everything together.”
  2. From the academic community who recognize the limitations of Turing’s formulation of computation in terms of functions to process information using simple read, compute (change state) and write instructions combined with the introduction of program, data duality by von Neumann which has allowed information technology (IT) to model, monitor, reason and control any physical system. Prof. Mark Burgin [3] in his 2005 book on super recursive algorithms states “it is important to see how different is functioning of a real computer or network from what any mathematical model in general and a Turing machine,(as an abstract, logical device), in particular, reputedly does when it follows instructions. In comparison with instructions of a Turing machine, programming languages provide a diversity of operations for a programmer. Operations involve various devices of computer and demand their interaction. In addition, there are several types of data. As a result, computer programs have to give more instructions to computer and specify more details than instructions of a Turing machine. The same is true for other models of computation. For example, when a finite automaton represents a computer program, only certain aspects of the program are reflected. That is why computer programs give more specified description of computer functioning, and this description is adapted to the needs of the computer. Consequently, programs demand a specific theory of programs, which is different from the theory of algorithms and automata.”
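A service governor of the kind Gartner describes in item 1 above is, at its core, a feedback loop over business priorities, policies and real-time telemetry. The sketch below (Python, hypothetical names and thresholds throughout) is one way to picture that loop; it is not any vendor's product or API.

    # Hypothetical service-governor loop: policies plus telemetry drive capacity decisions.
    def govern(services, policies, telemetry):
        """Return a capacity decision per service from its policy and latest measurements."""
        decisions = {}
        for svc in services:
            policy, metrics = policies[svc], telemetry[svc]
            if metrics["p95_latency_ms"] > policy["target_latency_ms"]:
                decisions[svc] = "scale_up"        # add instances or tune infrastructure SLAs
            elif metrics["cpu_utilization"] < policy["scale_down_below"]:
                decisions[svc] = "scale_down"      # release capacity to reduce cost
            else:
                decisions[svc] = "hold"
        return decisions

    policies = {"checkout": {"target_latency_ms": 250, "scale_down_below": 0.25}}
    telemetry = {"checkout": {"p95_latency_ms": 410, "cpu_utilization": 0.8}}
    print(govern(["checkout"], policies, telemetry))   # {'checkout': 'scale_up'}

In a real-time infrastructure the decisions would be fed back into the provisioning layer continuously, which is what distinguishes a governor from a one-off capacity plan.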

In short, the programs (or functions) developers develop to code business logic do not contain knowledge about how compute, storage and network devices interact with each other (structure) and how to deal with changing business priorities, workload variations and latency constraints (fluctuations that force changes to structure). This knowledge has to be incorporated in the architecture of the new computing, management and programming model.

Non-functional requirements specify criteria that can be used to judge the operation of a system, rather than specific behaviors. They should be contrasted with functional requirements that define specific behaviors or functions dealing with algorithms or business logic. The plan for implementing functional requirements is detailed in the system design; the plan for implementing non-functional requirements is detailed in the system architecture. These requirements include availability, reliability, performance, security, scalability and efficiency at run-time. The new architecture must encapsulate the intent of the program and its operational requirements such as the context, connectivity to other components, constraints and control abstractions that are required to manage the non-functional requirements. Figure 5 shows an architecture where the service management architecture is decoupled from the infrastructure management systems monitoring and managing distributed resources that may belong to different providers with different incentives.
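One way to make the decoupling concrete is to keep the non-functional requirements as a declarative specification that travels with the program but is interpreted by the architecture, not by the business logic. The fragment below is purely illustrative; the field names follow the intent/context/communication/constraints/control vocabulary used in this post rather than any existing product.

    # Illustrative declarative blueprint: non-functional requirements kept outside the code
    # that implements the business logic, to be enforced by the service control plane.
    order_service_blueprint = {
        "intent": "process retail orders end to end",
        "context": {"tier": "mission-critical", "data_residency": "EU"},
        "communication": {"upstream": ["web-frontend"], "downstream": ["payments", "inventory"]},
        "constraints": {"availability": "99.95%", "p99_latency_ms": 300,
                        "monthly_budget_usd": 4000, "compliance": ["PCI-DSS"]},
        "control": {"on_latency_breach": "reconfigure", "on_node_failure": "replicate",
                    "on_security_event": "quarantine"},
    }

The functional code never reads this structure; the control plane does, which is what allows the same business logic to run unchanged while the architecture absorbs scale and fluctuations.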


Figure 5: A cognition infused service composition architecture that decouples distributed heterogeneous multi-vendor infrastructure management

The infrastructure control plane provides automation, monitoring and management of the infrastructure required for applications to execute their intent. Its output is a cluster of physical or virtual servers, with an operating system in each server, providing well-defined computing resources in terms of total CPU, memory, network bandwidth, latency, storage IOPs, throughput and capacity. The infrastructure control plane is able to provide the required clusters on demand and to elastically scale the nodes, or the individual node resources, on demand. The elastic on-demand resources use automation processes or NFV and SFV resources connected to virtual or physical servers.

As Professor Mark Burgin points out, the intent and the application monitoring needed to process information, apply knowledge, and change the circumstance must be part of the service management knowledge, independent of the distributed infrastructure management systems, in order to provide true scalability, distribution and resiliency and to avoid vendor, infrastructure, architecture or API lock-in. In addition, the service control plane must support recursive service composition in order to have end-to-end service visibility and control, and to avail itself of the best resources wherever they are available, to meet the quality of service dictated by business priorities, latency constraints and workload fluctuations. The application quality of service must not be dictated or limited by the limitations of the infrastructure. Only then can we predictably deploy highly reliable services even on not-so-reliable distributed infrastructure and increase efficiency to meet demand that is far less predictable.

Borrowing from biological and intelligent systems, which specialize in exploiting architectures that provide predictability, we can argue that infusing cognition into service management will provide such an architecture. Cognition [4] is associated with intent and its accomplishment through various processes that monitor and control a system and its environment. Cognition is associated with a sense of “self” (the observer) and the systems with which it interacts (the environment or the “observed”). Cognition [4] extensively uses time, history and reasoning in executing and regulating the tasks that constitute a cognitive process. There is a fundamental reason why the current Turing/von Neumann stored program computing model cannot address large-scale distributed computing with fluctuations both in resources and in computation workloads without increasing complexity and cost. As von Neumann [5] put it, “It is a theorem of Gödel that the description of an object is one class type higher than the object.” An important implication of Gödel’s incompleteness theorem is that it is not possible to have a finite description with the description itself as a proper part. In other words, it is not possible to read yourself or process yourself as a process. In short, Gödel’s theorems prohibit “self-reflection” in Turing machines. Turing’s o-machine was designed to provide information that is not available in the computing algorithm executed by the TM. More recently, the super-recursive algorithms proposed by Mark Burgin [3] point a way to model the knowledge about the hardware and software needed to reason and act in order to self-manage. He proves that super-recursive algorithms are more efficient than plain Turing computations, which assume unbounded resources.

Perhaps we should look for “synthesis” solutions not in the familiar places where we feel comfortable with more ad-hoc software and services that are labor and knowledge intensive. We should look for clues in biology, human organizational networks and even telecommunication networks to transform current datacenters from infrastructure management systems into the services switching centers of the future [6]. This requires a search for new computing, management and programming models that do not disturb current applications, operating systems or infrastructure while facilitating a smooth migration to a more harmonious melody of orchestrated services on a global scale with high efficiency and resiliency.

References:

[1] Holbrook, Morris B. 2003. "Adventures in Complexity: An Essay on Dynamic Open Complex Adaptive Systems, Butterfly Effects, Self-Organizing Order, Coevolution, the Ecological Perspective, Fitness Landscapes, Market Spaces, Emergent Beauty at the Edge of Chaos, and All That Jazz." Academy of Marketing Science Review [Online] 2003 (6). Available: http://issuu.com/gfbertini/docs/adventures_in_complexity_-_an_essay_on_dynamic_ope/search

[2] https://www.gartner.com/doc/2075838/infrastructure-service

[3] M. Burgin, Super-recursive Algorithms, New York: Springer, 2005.

[4] Mikkilineni, R. (2012). Applied Mathematics, 3, 1826-1835. doi:10.4236/am.2012.331248. Published online November 2012 (http://www.SciRP.org/journal/am)

[5] Aspray W., and Burks A., 1987. Editors, Papers of John von Neumann on Computing and Computer Theory. In Charles Babbage Institute Reprint Series for the History of Computing, MIT Press. Cambridge, MA, p409, p.474.

[6] Rao Mikkilineni, “Designing a New Class of Distributed Systems” Springer, New York, 2011

Changing Landscape of Backup and Disaster Recovery
September 16, 2012

“Consumers need to drive vendors to deliver what they really need, and not what the vendors want to sell them.”

——  Jon Toigo (http://www.datastorageconnection.com/doc.mvc/Jon-Toigo-Exposes-More-About-Data-Storage-Ven-0001 )

Starting from the mainframe datacenters, where applications were accessed using narrow-bandwidth networks and dumb terminals, and evolving to client-server and peer-to-peer distributed computing architectures which exploit higher-bandwidth connections, business process automation has contributed significantly to reducing the TCO. With the Internet, global e-commerce was enabled, and the resulting growth in commerce led to an explosion of storage. Storage networking and the resulting NAS (network attached storage) and SAN (storage area network) technologies have further changed the dynamics of enterprise IT infrastructure in a significant way to meet business process automation needs. Storage backup and recovery technologies have further improved the resiliency of service delivery processes by improving the time it takes to respond in case of service failure. Figure 1 shows the evolution of the recovery time objective. (The recovery point objective (RPO) is the point in time to which you must recover data as dictated by business needs; the recovery time objective (RTO) is the period of time after an outage in which the application and its data must be restored to a predetermined state defined by the RPO.) The RTO has dropped from days to minutes and seconds. While the productivity, flexibility and global connectivity made possible by this evolution have radically transformed the business economics of information systems, the complexity of heterogeneous and multi-vendor solutions has created a high dependence on specialized training and service expertise to assure the availability, reliability, performance and security of various business applications.
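To make the two objectives concrete, here is a small, self-contained check (with hypothetical numbers) of whether a given backup interval and restore time satisfy a stated RPO and RTO.

    # Worked example: does a backup/restore scheme meet the stated RPO and RTO?
    def meets_objectives(backup_interval_h, restore_time_h, rpo_h, rto_h):
        worst_case_data_loss = backup_interval_h      # data written since the last backup
        return worst_case_data_loss <= rpo_h and restore_time_h <= rto_h

    # Nightly backups with a 6-hour restore meet a 24 h RPO / 8 h RTO ...
    print(meets_objectives(backup_interval_h=24, restore_time_h=6, rpo_h=24, rto_h=8))    # True
    # ... but not a 1 h RPO / 15-minute RTO, which pushes toward replication or live migration.
    print(meets_objectives(backup_interval_h=24, restore_time_h=6, rpo_h=1, rto_h=0.25))  # False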

Figure 1: The evolution of Recovery Time Objective. Virtualization of server technology provides an order of magnitude improvement in the way applications are backed-up, recovered and protected against disasters.

Successful implementation must integrate various server-, network- and storage-centric products and their local optimization best practices into end-to-end optimization strategies. While each vendor attempts to assure their success with more software and services, small and medium enterprises often cannot afford the escalating software and service expenses associated with these optimization strategies and become vulnerable. The exponential growth in service demand for voice, data and video in the consumer market has also introduced severe strains on current IT infrastructures. There are three main issues that are currently driving distributed computing solutions to seek new approaches:

  1. Current IT datacenters have evolved to meet business service needs in an evolutionary fashion, from server-centric application design to client-server networking to storage area networking, without an end-to-end optimized architectural transformation along the way. The server, network and storage vendors optimized management in their own local domains, often duplicating functions from other domains to compete in the marketplace. For example, cache memory is used to improve the performance of service transactions by improving response time. However, the redundancy of cache management in servers, storage and even network switches makes tuning the response time a complex task requiring multiple management systems. Application developers have also started to introduce server, storage and network management within their applications. For example, Oracle is not just a database application. It is also a storage manager and a network manager, as well as an application manager. It tries to optimize all its resources for performance tuning. No wonder it takes an army of experts to keep it going. The result is an over-provisioned datacenter with multiple functions duplicated many times by the server, storage and networking vendors. Large enterprises with big profit margins throw human bodies, tons of hardware and a host of custom software and shelf-ware packages at their needs. Some datacenter managers do not even know what assets they have — of course, yet another opportunity for vendors to sell an asset management system to discover what is available, and services to provide asset management using such an asset manager. Another example is de-duplication software that finds multiple copies of the same files and removes the duplicates. This shows how expensive it is to clean up after the fact.
  2. Heterogeneous technologies from multiple vendors that are supposed to reduce IT costs actually increase complexity and management costs.  Today, many CFOs consider IT a black hole that sucks in expensive human consultants and continually demands capital and operational expenses to add hardware and software, which often end up as shelf-ware because of their complexity.  Even for mission-critical business services, enterprise CFOs are starting to question the productivity and effectiveness of current IT infrastructures.  It becomes even more difficult to justify the cost and complexity of supporting the massive scalability and wild fluctuations in workloads demanded by consumer services.  The price point is set low for the mass market, but the demand for massive scalability is high (a relatively simple, but massive, service like Facebook is estimated to use about 40,000 servers, and Google is estimated to run a million servers to support its business).
  3. More importantly, Internet-based consumer services such as social networking, e-mail and video streaming have introduced new elements: wild fluctuations in demand and massive scale of delivery to a diverse set of customers.  The result is an increased sensitivity to the economics of service creation, delivery and assurance. Unless the cost structure of the IT management infrastructure is addressed, mass-market needs cannot be met profitably.  Large service providers such as Amazon, Google and Facebook have understandably implemented alternatives to cope with wildly fluctuating workloads, massive scaling of customers and latency constraints required to meet demanding response-time requirements.

Cloud computing technology has evolved to meet the needs of massive scaling, wild fluctuations in consumer demand and response-time control of distributed transactions spanning multiple systems, players and geographies.  More importantly, cloud computing drastically changes backup and Disaster Recovery (DR) strategies, reducing the RTO to minutes and seconds and doing much better than SAN/NAS-based server-less backup and recovery strategies. Live migration is accomplished as follows (a simplified sketch appears after the list):

  1. The entire state of a virtual machine is encapsulated by a set of files stored on shared storage such as Fibre Channel or iSCSI Storage Area Network (SAN) or Network Attached Storage (NAS).
  2. The active memory and precise execution state of the virtual machine are rapidly transferred over a high-speed network, allowing the virtual machine to switch almost instantaneously from running on the source host to the destination host. This entire process can take less than a few seconds on a Gigabit Ethernet network.
  3. The networks being used by the virtual machine are virtualized by the underlying host. This ensures that even after the migration, the virtual machine network identity and network connections are preserved.
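As noted above, here is a deliberately simplified Python sketch of the pre-copy approach behind step 2; the function, the stun threshold and the random stand-in for dirty-page tracking are my own assumptions for illustration, not the mechanism of any particular hypervisor.

    import random

    def live_migrate(vm_memory: dict, stun_threshold: int = 8) -> dict:
        """Simulate pre-copy live migration of a VM's memory pages.

        vm_memory maps page ids to contents on the source host. The VM's disk
        files are assumed to sit on shared SAN/NAS storage, so only memory and
        execution state move. Returns the destination host's copy of memory.
        """
        destination = {}
        dirty = set(vm_memory)                      # first pass copies every page
        while len(dirty) > stun_threshold:          # iterative pre-copy rounds
            for page in dirty:
                destination[page] = vm_memory[page]
            # Pages touched by the still-running VM become dirty again; a random
            # subset stands in here for real dirty-page tracking.
            dirty = set(random.sample(sorted(vm_memory), k=stun_threshold // 2))
        # Brief stun: pause the VM, copy the residual dirty pages and CPU state,
        # then resume on the destination; the virtual network identity is kept
        # by the virtual switch, so existing connections survive the move.
        for page in dirty:
            destination[page] = vm_memory[page]
        return destination

    if __name__ == "__main__":
        memory = {page_id: f"contents-{page_id}" for page_id in range(256)}
        assert live_migrate(memory) == memory
        print("migration complete; destination memory matches source")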

While virtual machines improve resiliency and live migration reduces the RTO, the increased complexity of hypervisors, their orchestration, virtual machine images and their management adds a further burden in the datacenter. Figure 2 shows the evolution of current datacenters from the mainframe days to the cloud computing transformation.  The cost of creating and delivering a service has continuously decreased with the increased performance of hardware and software technologies. New services that used to take months or years to develop and deliver now take weeks or even hours. On the other hand, as service demand increased with ubiquitous access over the Internet and broadband networks, the need for resiliency (availability, reliability, performance and security management), efficiency and scaling also put new demands on service assurance, and hence on the need for continuous reduction of RTO and RPO. The introduction of SAN server-less backup and virtual machine migration has in turn increased the complexity, and hence the cost, of managing service transactions during delivery while reducing the RTO and RPO.

Figure 2: Cost of Service Creation, Delivery and Assurance with the Evolution of Datacenter Technologies. The management cost has exploded because a myriad of point-solution appliances, software and shelf-ware are cobbled together from multiple vendors. Any future solution that addresses the datacenter management conundrum must provide end-to-end service visibility and control transcending multiple service provider resource management systems. The future datacenter focus will be on a transformation from Resource Management to Services Switching to provide telecom-grade “trust”.

The increased complexity of managing services implemented using the von Neumann serial computing model executing a Turing machine turns out to be more a fundamental architectural issue, related to Gödel’s prohibition of self-reflection in Turing machines, than a software design issue. Cockshott et al. conclude their book “Computation and its Limits” with the paragraph: “The key property of general-purpose computer is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves.” While the last statement is not strictly correct (for example, current operating systems facilitate incorporating computing resources and their management, interspersed with the computations that attempt to model a physical system, for execution in a Turing machine), it still points to a fundamental limitation of current Turing machine implementations of computations using the serial von Neumann stored program control computing model. The universal Turing machine allows a sequence of connected Turing machines to synchronously model a physical system as a description specified by a third party (the modeler). The context, constraints, communication abstractions and control of various aspects during the execution of the model (which specifies the relationship between the computer acting as the observer and the computed acting as the observed) cannot also be included in the same description of the model, because of Gödel’s incompleteness theorems and the associated undecidability results. Figure 3 shows the evolution of computing from mainframe/client-server computing, where management was labor-intensive, to the cloud computing paradigm, where the management services (which include the computers themselves in the model controlling the physical world) are automated.

 Figure 3: Evolution of Computing with respect to Resiliency, Efficiency and Scaling.

The first phase (conventional computing) depended on manual operations and served well as long as service transaction times and service management times could be far apart and did not affect service response times. As service demands increased, service management automation helped reduce the gap between the two at the expense of increased complexity and the resulting cost of management. It is estimated that 70% of today’s IT budget goes to maintaining existing infrastructure and only 30% to new service development. Figure 4 shows the current layers of systems contributing to cloud management.

Figure 4: Services and their management complexity

The origin of the complexity is easy to understand. Current ad-hoc distributed service management practices originated from server-centric operating systems and narrow-bandwidth connections. The need to address end-to-end service transaction management, along with the resource allocation and contention resolution required to respond to changing circumstances (which depend on business priorities, latency constraints and workload fluctuations), was accommodated as an afterthought. In addition, an open, competitive marketplace has driven server-centric, network-centric and storage-centric devices and appliances to multiply. The resulting duplication of many management functions in multiple devices, without an end-to-end architectural view, has contributed greatly to the cost and complexity of management. For example, storage volume management is duplicated in server, network and storage devices, leading to a complex web of performance optimization strategies. Special-purpose appliance solutions have sprouted to provide application, network, storage and server security, often duplicating many of the same functions. The lack of an end-to-end architectural framework has led to point solutions that dominate the service management landscape, often negating the efficiency improvements in service development and delivery made possible by hardware performance improvements (Moore’s law) and by software technologies and development frameworks.

The escape from this conundrum is to re-examine the computation models and circumvent the computational limit by going beyond Turing machines and the serial von Neumann computing model. A recently proposed computing model, implemented in the DIME network architecture (Designing a New Class of Distributed Systems, Springer 2011), attempts to provide a new approach based on the oracle machine proposed by Turing in his thesis. Phase 3 in Figure 3 shows the new computing model: a non-von Neumann managed Turing machine implementing hierarchical self-management of temporal computing processes. The implementation exploits the parallel threads and high bandwidth available with many-core processors and provides auto-scaling, live migration, performance optimization and end-to-end transaction security by providing FCAPS (fault, configuration, accounting, performance and security) management of each Linux process; a network of such Linux processes provides a distributed service transaction. This eliminates the need for hypervisors and virtual machines and their management while reducing complexity. Since a Linux process is virtualized instead of a virtual machine, backup and DR operate at the process level and also cover the network of processes providing the service. Hence this approach is much more lightweight than VM-based backup and DR.

In its simplest form, the DIME computing model modifies the stored program control (SPC) implementation of the Turing machine by exploiting the parallelism and high bandwidth available in today’s infrastructure.

Figure 5: The DIME Computing Model – A Managed Turing Machine with Signaling, incorporating the spirit of the oracle machine Turing proposed in his thesis.

Figure 5 shows the transition from the TM to a managed TM by incorporating three attributes (a minimal sketch of such a managed element follows the list):

  1. Before any read or write, the computing element checks the fault, configuration, accounting, performance and security (FCAPS) policies assigned to it;
  2. The computing element is endowed with self-management by introducing a parallel FCAPS manager that sets the policies the computing element obeys; and
  3. An overlay signaling network provides an FCAPS monitoring and control channel, which allows the composition of managed networks of TMs implementing managed workflows.
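The sketch below illustrates these three attributes in Python under stated assumptions: the class names, the policy fields and the queue standing in for the signaling channel are hypothetical, not the published DIME implementation. The computing element consults its FCAPS policies before every read or write, while a parallel manager thread applies policy changes received over the signaling channel.

    import queue
    import threading

    class FCAPSPolicies:
        """Fault, configuration, accounting, performance and security policies
        (illustrative fields only)."""
        def __init__(self):
            self.max_operations = 1000      # accounting/performance budget
            self.writes_allowed = True      # a crude security/configuration knob
            self.lock = threading.Lock()

    class ManagedComputingElement:
        """A computing element that checks its FCAPS policies before every read
        or write, while a parallel manager adjusts those policies through a
        signaling channel (here, a queue)."""

        def __init__(self):
            self.policies = FCAPSPolicies()
            self.signals = queue.Queue()    # stand-in for the signaling overlay
            self.tape = {}
            self.operations = 0

        def _check(self, is_write: bool):
            with self.policies.lock:
                if self.operations >= self.policies.max_operations:
                    raise RuntimeError("accounting/performance policy exceeded")
                if is_write and not self.policies.writes_allowed:
                    raise PermissionError("security policy forbids writes")
                self.operations += 1

        def read(self, cell):
            self._check(is_write=False)
            return self.tape.get(cell)

        def write(self, cell, symbol):
            self._check(is_write=True)
            self.tape[cell] = symbol

        def manager(self):
            """Parallel FCAPS manager: applies policy-change signals."""
            while True:
                name, value = self.signals.get()
                if name == "stop":
                    break
                with self.policies.lock:
                    setattr(self.policies, name, value)

    element = ManagedComputingElement()
    manager_thread = threading.Thread(target=element.manager, daemon=True)
    manager_thread.start()

    element.write(0, "1")                       # allowed by the initial policies
    element.signals.put(("writes_allowed", False))
    element.signals.put(("stop", None))
    manager_thread.join()
    try:
        element.write(1, "1")                   # rejected after the policy change
    except PermissionError as exc:
        print("write blocked:", exc)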

Figure 6 shows the services architecture with DIME network management providing end-to-end service FCAPS management.

Figure 6: Service Management with DIME Networks

The resulting decoupling of services management from infrastructure management provides a new approach to service management, including backup and DR. While the DIME computing model is in its infancy, two prototypes have already demonstrated its usefulness: one with a LAMP stack and another with a new native OS designed for many-core servers. Unlike virtual machine based backup and DR, the DIME network architecture supports auto-provisioning, auto-scaling, self-repair, live migration, secure service isolation and end-to-end distributed transaction security across multiple devices at the process level in an operating system. Therefore, this approach not only avoids the complexity of hypervisors and virtual machines (although it still works with virtual servers) but also allows existing applications to adopt live migration without requiring changes to their code. In addition, it offers a new approach in which the hardware infrastructure is simpler, without the burden of anticipating service-level requirements, and the intelligence of services management resides in the services infrastructure, leading to the deployment of intelligent self-managing services using a dumb infrastructure on stupid networks.

In conclusion, we emphasize that the DIME network architecture works with or without hypervisors and the associated virtual machine, IaaS and PaaS complexity, and allows uniform service assurance across hybrid clouds independent of the service providers’ management systems. Only the virtual server provisioning commands are required to configure a just-enough OS and the DIMEX libraries and to execute service components using the DIME network architecture (DNA).

The power of the DIME network architecture is easy to understand. By introducing parallel management to the Turing machine, we convert a computing element into a managed computing element. In current operating systems, the managed element is a process; in the new native operating system (Parallax OS) we have demonstrated, it is a core in a many-core processor. A managed element provides plug-in dynamism to the service architecture.

Figure 7 shows a service deployment in a hybrid cloud with integrated service assurance across the private and public clouds, without using the service providers’ management infrastructure. Only the local operating system is utilized in DIME service network management.

Figure 7: A DNA-based service deployment and assurance in a hybrid cloud. The decoupling of dynamic service provisioning and management from infrastructure resource provisioning and management (server, network and storage administration), enabled by DNA, makes static provisioning of resource pools possible, while dynamic migration allows services to seek the right resources at the right time based on workloads, business priorities and latency constraints.

As mentioned earlier, the DIME network architecture is still in its infancy, and researchers are developing both the theory and the practice to validate its usefulness in mission-critical environments. Hopefully, in this year of the Turing centenary celebration, some new approaches will address the limits of computation pointed out by Cockshott et al. in their book. Paraphrasing Turing (who was unimpressed by Wilkes’s EDSAC design, commenting that it was “much more in the American tradition of solving one’s difficulties by means of much equipment rather than by thought”), a lot of appliances or code is often not a sustainable substitute for thoughtful architecture.

Cloud Computing, Management Complexity, Self-Organizing Fractal Theory, Non Equilibrium Thermodynamics, DIME networks, and all that Jazz
May 5, 2012

“There are two kinds of creation myths: those where life arises out of the mud, and those where life falls from the sky. In this creation myth, computers arose from the mud and code fell from the sky.”

— George Dyson, “Turing’s Cathedral: The Origins of the Digital Universe”, New York: Random House, 2012.

“The DIME network architecture arose out of the need to manage the ephemeral nature of life in the Digital Universe”

— Rao Mikkilineni (2012)

Abstract:

The explosion of current cloud computing software offerings (both open-source and proprietary) to create public, private and hybrid clouds raises a question. Is it resulting in higher resiliency, efficiency and scaling of service offerings, or is it increasing complexity by introducing more components into an already crowded datacenter deploying myriad appliances, management frameworks, tools and people, all claiming to help lower the total cost of operation? As the reliability, availability, performance, security and efficiency of the total system depend on both the number of components and their configuration, the architecture of a system plays an important role in defining the overall system resiliency, efficiency and scaling. We discuss the current cloud computing architecture and the resulting complexity, and investigate possible solutions using self-organizing fractal theory and non-equilibrium thermodynamics. Evolution has taught us that when complexity increases, an architectural transformation often occurs to lower the overall system entropy. Is a phase transition about to occur in our datacenters, seeded by the new many-core servers and high-bandwidth communications?

Introduction:

According to Holbrook (Holbrook 2003), “Specifically, creativity in all areas seems to follow a sort of dialectic in which some structure (a thesis or configuration) gives way to a departure (an antithesis or deviation) that is followed, in turn, by a reconciliation (a synthesis or integration that becomes the basis for further development of the dialectic). In the case of jazz, the structure would include the melodic contour of a piece, its harmonic pattern, or its meter…. The departure would consist of melodic variations, harmonic substitutions, or rhythmic liberties…. The reconciliation depends on the way that the musical departures or violations of expectations are integrated into an emergent structure that resolves deviation into a new regularity, chaos into a new order, surprise into a new pattern as the performance progresses.” He goes on to explain exquisitely what “all that jazz” means and what it has to do with Dynamic Open Complex Adaptive System or DOCAS.

I borrow the jazz metaphor to understand the current state of affairs in cloud computing. Cloud computing started innocently enough as an attempt to automate the systems administration tasks of computing systems to improve the resiliency (availability, reliability, performance and security), efficiency and scaling of services provided by web-hosting datacenters. Before the advent of global web e-commerce, enabled by broadband networks and ubiquitous access to high-powered computing, workload fluctuations were not wild enough to demand very fast provisioning responses. While enterprise datacenters were not pushed to deal with the wild fluctuations that some web-services companies were, companies such as Amazon, Google, Facebook and Twitter, dealing with uncertain (non-deterministic) workload fluctuations, took a different approach to improve resiliency and scaling. They took advantage of the increased power of blade servers, high-bandwidth networks and virtualization technologies to create virtual machine (VM) based systems administration, with multiple VMs in a physical device consolidating workloads that are managed with dynamic resource provisioning. This has become known as cloud computing. Strictly speaking, a VM is not essential for automation to improve scaling, auto-failover and live migration of applications and their data; companies such as Google have chosen their own automation strategies without using VMs. On the other hand, many other enterprises have taken a more conservative approach by not adopting a cloud strategy, avoiding the risk of impacting their highly tuned, mission-critical application availability, performance and security. They are probably correct, given the continued occasional outages, security breaches and cost escalation in managing complexity with many public clouds.

Amazon and Google went one step further by offering their flexible infrastructures to developers outside their companies, renting out the resources with which developers could build, deploy and operate their own applications, thus unleashing a new class of developers. Startups could substitute OPEX for CAPEX to obtain the resources required for new product and service development. The resulting explosion of applications and services has created new demand for more clouds and more automation of systems administration to extend resiliency and provide a high degree of isolation among multiple tenants sharing resources while resolving the resulting contentions. The result is a complex web of Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) offerings to meet the needs of developers, service providers and service consumers. To be sure, these offerings are not independent. On the contrary, each layer influences the others in a complex set of interactions, often in a non-deterministic way, based on workloads, business priorities and latency constraints. Figure 1 shows an example of these relationships.

Figure 1: Complex relationships of information flow between nested layers and information flows between components in each layer. The complexity is only compounded by multi-vendor offerings in each layer (not shown here)

The origin of the complexity is easy to understand. While attempting to solve the issues of multi-tenancy and agility, the introduction of virtual machines gives rise to the additional complexity of virtual image management and sprawl control. To address VM mobility, recent efforts introduce application-level mobility using other container constructs, such as Gears and Cartridges in the case of the Red Hat PaaS (or Dynos in the case of Heroku, the Salesforce PaaS), which in turn introduce yet another layer of management of Gears and Cartridges (or Dynos). Another example is the Eucalyptus Infrastructure as a Service, which goes to great lengths to provide High Availability (HA) of the infrastructure platform but fails to guarantee HA of applications; it is left to the applications to fend for themselves. These ad-hoc approaches to automating management have multiplied the software required, increased the learning curve and made operation and maintenance even more complex. While all platforms demonstrate drag-and-drop software with pretty displays that allow developers to easily create new services, there is no guarantee that, if something goes wrong, one will be able to debug the system and find the root cause. Nor is there any assurance that, when multiple services and applications are deployed on the same platform, the feature interactions and shared resource management provided by a plethora of independently designed management systems will cooperate to provide the required reliability, availability, performance and security at the service level. More importantly, when services cross server, datacenter and geographical boundaries, there is no visibility and control of end-to-end service connections and their FCAPS management. Obviously, the platform vendors are only too eager to provide professional services and additional software to resolve the issues, but without end-to-end service connection visibility and control spanning multiple modules, systems, geographies and management systems, troubleshooting expenses could outweigh the realized benefits. What we probably need is not more “code” but an intelligent architecture that results in a synthesis of computing services and their management, and a decoupling of end-to-end service connection and service component management from underlying resource (server, network and storage) management.

Self-organizing Fractals and Non-equilibrium Thermodynamics:

Fortunately, the self-organizing fractal theory (SOFT) and non-equilibrium thermodynamics (NET) (Kurakin 2011) provide a way to analyze complex systems and identify solutions. A very good glimpse into the theory can be found in the video (http://www.scivee.tv/node/4994). According to the SOFT-NET theory, the process of self-organization is scale-invariant and proceeds through sequential organizational state transitions, in a manner characteristic of far-from-equilibrium systems, with macrostructure-processes emerging via phase transition and self-organization of microstructure-processes. Once they have emerged as a result of an organizational transition, newborn structure-processes strive to persist and expand, growing in size/number, diversity, complexity and order while feeding on pre-existing energy/matter gradients. Economic competition among alternatively organized structure-processes feeding on the same energy/matter gradients leads to the elimination of economically deficient or inferior structure-processes and the improvement, diversification and specialization of the survivors, who are forced to fill and exploit all the available resource niches (the Darwinian phase of self-organization) (Kurakin 2007). Promoted by mutually profitable exchanges of energy/matter, the self-organization of specializing survivors (structure-processes) into larger-scale structure-processes transforms (mostly) competing alternatives into (mostly) cooperating complements. As a result, Darwinian competition is transferred onto a larger spatiotemporal scale, where it commences among alternative organizations of self-organized survivors (the organizational phase) (Kurakin 2007). Such an economy-driven, scale-invariant process of self-organization leads to the emergence of increasingly long-lived, multi-scale, hierarchical organizations (structure-processes) that expand over increasingly larger scales of space and time, feeding on available energy/matter gradients and eventually destroying them. Yet because energy/matter exists as a non-equilibrium system of interdependent gradients and conjugated fluxes of interconverting energy/matter forms, new gradients and fluxes are created and become dominant as old gradients and fluxes are consumed and destroyed. Such processes are responsible for the continuous birth, death and transformation of energy/matter forms.

Obviously, cloud computing systems (or, for that matter, distributed computing systems in general, based on Turing machines) are not living organisms and thus are not susceptible to self-organization. However, if you substitute information for energy/matter, there are many similarities between the structure and dynamics of computing systems and those of living self-organizing systems. The nested computing layers, the meta-stable organizational patterns (both macro- and micro-structures) in each layer, and process evolution through inter-layer interaction are the same features that contribute to self-organization. So one can ask what is missing for cloud computing environments to become self-organizing. The answer lies in two observations:

  1. The first is Gödel’s prohibition of self-reflection by the computing element that forms the fundamental building block of the computing domain, the Turing machine (TM) (Samad and Cofer, 2001).
  2. The second is the lack of the scale-invariant macro- and micro-structure-processes mentioned above for the organization of computing components and their management across the various nested layers, a consequence of the current ad-hoc implementation of computing processes using the serial von Neumann implementation of the Turing machine.

I have discussed both these deficiencies elsewhere (Mikkilineni 2011, 2012). The DIME network architecture proposed there attempts to address both these deficiencies.

The DIME Network Architecture:

In its simplest form, a DIME comprises a policy manager (handling the fault, configuration, accounting, performance and security aspects, often denoted FCAPS); a computing element called the MICE (Managed Intelligent Computing Element); and two communication channels. The FCAPS elements of the DIME provide setup, monitoring, analysis and reconfiguration based on workload variations, system priorities derived from policies, and latency constraints. They are interconnected and controlled using a signaling channel that overlays a computing channel providing I/O connections to the MICE (the computing element) (Mikkilineni 2011). The DIME computing element acts like the oracle machine Turing introduced in his thesis and circumvents the halting and undecidability issues raised by Gödel’s and Turing’s results by separating the computing from its management and pushing the management to a higher level. Figure 2 shows the DIME computing model.

Figure 2: The DIME Computing Model. For details on the different implementations of DIME networks (a LAMP stack without VMs and a native Parallax OS) visit http://www.youtube.com/kawaobjects

In addition, the introduction of signaling in the DIME network architecture allows a fractal composition scheme for the DIME network, creating a recursive distributed computing engine with scale-invariant FCAPS management of the computing workflow at the node, sub-network and network levels. Figure 3 shows the comparison between living organisms with self-organizing fractal attributes and a cloud computing infrastructure organized to exhibit self-managing fractal attributes.

Figure 3: Comparison of the nested hierarchical organization of living organisms and DIME network architecture.
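A minimal sketch of the fractal composition idea follows, assuming hypothetical class names and a deliberately crude aggregation rule: a DIME node carries its own FCAPS view and managed computation (MICE), a network of DIMEs is itself a DIME, and the same monitor-and-respond loop is applied recursively at the node, sub-network and network levels.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class FCAPSStatus:
        faults: int = 0
        performance: float = 1.0   # 1.0 = meeting its service-level objective

    @dataclass
    class DIME:
        """A DIME node: a managed computation (MICE) plus an FCAPS view.
        A network of DIMEs is itself a DIME, giving the scale-invariant,
        fractal composition described above."""
        name: str
        mice: Callable[[], None] = lambda: None       # the managed computation
        members: List["DIME"] = field(default_factory=list)
        status: FCAPSStatus = field(default_factory=FCAPSStatus)

        def fcaps(self) -> FCAPSStatus:
            """Aggregate the FCAPS state recursively over the sub-network."""
            views = [self.status] + [m.fcaps() for m in self.members]
            return FCAPSStatus(
                faults=sum(v.faults for v in views),
                performance=min(v.performance for v in views),
            )

        def manage(self):
            """The same monitor-and-respond loop at every scale."""
            view = self.fcaps()
            if view.faults:
                print(f"{self.name}: repairing {view.faults} fault(s)")
            for member in self.members:
                member.manage()

    workers = [DIME(name=f"worker-{i}") for i in range(3)]
    workers[1].status.faults = 1
    subnet = DIME(name="sub-network", members=workers)
    root = DIME(name="service-network", members=[subnet])
    root.manage()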

While both models exhibit the genetic transactions of replication, repair, recombination and reconfiguration (Stanier and Moore, 2006) (Mikkilineni 2011), there is a fundamental difference between the two. The DIME network architecture is not self-organizing; it is self-managing, based on initial policies and constraints defined at the root levels of the hierarchies. These policies can be modified at run time, but only through the influence of agents external to the computing element whose behavior is under modification (at the DIME node, sub-network and network levels).

At each level, FCAPS management defines the initial conditions and policy constraints (a meta-model, if you will, denoting the context and defining the destiny of the ensuing process workflow) that determine the information flows and workflows executed by the DIME network downstream. The resulting metastable configurations are monitored and managed by the managers upstream. This model exhibits the three-step process that provides self-management in living organisms – establish a routine, monitor cues and respond with corrective action – based on FCAPS parameters at every level. Figure 4 shows the metastable configuration entropy of the whole system. The monitored FCAPS parameters provide the measure of system entropy shown, and reconfiguration moves the system from a higher-entropy state to a lower-entropy one, providing a “measure” of the stable pattern.

Figure 4: System Entropy as a function of time
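The post does not pin down the entropy measure behind Figure 4; as one hedged illustration (an assumption on my part, not the actual metric), the sketch below treats each monitored FCAPS parameter as a probability of policy violation and sums the per-parameter binary entropies, so a reconfiguration that brings the parameters back within policy lowers the score.

    from math import log2

    def binary_entropy(p: float) -> float:
        """Entropy of a single yes/no policy-violation indicator."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    def configuration_entropy(violation_probabilities: dict) -> float:
        """Sum the per-parameter entropies of the monitored FCAPS indicators."""
        return sum(binary_entropy(p) for p in violation_probabilities.values())

    before = {"fault": 0.4, "configuration": 0.2, "accounting": 0.1,
              "performance": 0.5, "security": 0.3}
    after_reconfiguration = {k: 0.02 for k in before}   # back within policy

    print(round(configuration_entropy(before), 2))                 # higher disorder
    print(round(configuration_entropy(after_reconfiguration), 2))  # lower disorder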

The SOFT-NET theories provide a path to re-examine the way we design distributed computing systems. Perhaps living organisms, with their self-organizing properties, can show us a way to bring self-management to cloud computing configurations to improve resiliency, efficiency and scaling. The DIME network architecture is a baby step toward implementing a recursive distributed computing engine that executes managed workflows constituting hierarchical and temporal sequences of events executing business workflows.

The DIME network architecture raises some interesting questions about Turing machines and their management. How is it related to the Universal Turing Machine (UTM)? It is important to point out that I do not claim that DIME networks are the answer to cloud computing woes, or that the UTM can or cannot do what a DIME network does. While communicating Turing machines are modeled by a UTM (Penrose 1989), can managed Turing machine networks also be modeled by the UTM? Are the scale-invariant organizational macro- and micro-structure-processes discussed in the SOFT-NET theory essential for self-organizing systems? What are the differences between living self-organizing systems and self-managing networks? I leave this to the experts. I only point out that the DIME is inspired by the oracle machine discussed by Turing in his thesis and implements the architectural resiliency of cellular organisms in a distributed computing infrastructure by introducing parallel management of both the computing elements and their networks. While its feasibility has been demonstrated (Mikkilineni, Morana and Seyler, 2012), the DIME network architecture is still in its infancy and presents an opportunity, on the eve of Turing’s centenary celebration, to investigate its usefulness and theoretical soundness. Only time will tell whether the DIME network architecture is useful in mission-critical environments. Figure 5 shows a comparison of physical server based computing, virtual machine based cloud computing and a DIME network implementation in a Linux server that eliminates the hypervisors and virtual machines.

Figure 5: Comparison between conventional, cloud and DIME network computing paradigms. The DIME network architecture requires no hypervisors, virtual machines, IaaS or PaaS. Linux processes are FCAPS-managed and networked using a middleware library without any changes to the operating system.

The DIME network architecture, with its self-management, parallel signaling network overlay and recursive distributed computing engine model, supports all the features that current cloud computing provides, and more, while eliminating the need for hypervisors, virtual machines, IaaS and PaaS. The DNA offers simplicity by providing FCAPS management of a Linux process through a middleware library using the standard services of the Linux operating system and the parallelism available in a multi-core/many-core processor.

Conclusion:

I conclude with one lesson from the past (Mikkilineni and Sarathy, 2009) that I take away from working in POTS (Plain Old Telephone Service), PANS (Pretty Amazing New Services enabled by the Internet), SANs and clouds. It is that wherever there is networking, switching always trumps other approaches. When services are executed by a network of distributed components, service switching and end-to-end service connection management are the ultimate meta-stable structure-processes, and it seems that cellular organisms, telephone networks and human network eco-systems have all figured this out. Signaling and nested FCAPS management structure-processes seem to be the common ingredients. Therefore, I predict that eventually the datacenters that are currently computing resource management centers will transform themselves into services switching centers, just as in telephony. Perhaps computer scientists should look to telephony, neuroscience and organizational dynamics for answers rather than engaging in hackathons and coding ad-hoc complex systems to manage distributed computing resources. The SOFT-NET theories seem to be pointing in the right direction. The solution may lie in discovering scale-invariant micro- and macro-structure-processes that provide nested FCAPS management and self-managed local and global policy enforcement. Perhaps Holbrook’s “All that Jazz” metaphor is an appropriate one for cloud computing research. The time may be ripe for the reconciliation (the synthesis of the thesis of implementing services and the antithesis of services management).

References:

Holbrook, Morris B. (2003). “Adventures in Complexity: An Essay on Dynamic Open Complex Adaptive Systems, Butterfly Effects, Self-Organizing Order, Coevolution, the Ecological Perspective, Fitness Landscapes, Market Spaces, Emergent Beauty at the Edge of Chaos, and All That Jazz.” Academy of Marketing Science Review [Online] 2003 (6). Available: http://www.amsreview.org/articles/holbrook06-2003.pdf

Kurakin, A., Theoretical Biology and Medical Modelling, 2011, 8:4. http://www.tbiomed.com/content/8/1/4

Kurakin A: The universal principles of self-organization and the unity of Nature and knowledge. 2007 [http://www.alexeikurakin.org/text/thesoft.pdf ].

Mikkilineni, R., Sarathy, V., (2009), “Cloud Computing and the Lessons from the Past,” in 2009 18th IEEE International Workshops on Enabling Technologies: Infrastructures for Collaborative Enterprises (WETICE ’09), pp. 57-62, June 29-July 1, 2009. doi: 10.1109/WETICE.2009.

Mikkilineni, R., (2011). Designing a New Class of Distributed Systems. New York, NY: Springer. (http://www.springer.com/computer/information+systems+and+applications/book/978-1-4614-1923-5)

Mikkilineni, R., (2012). Turing Machines, Architectural Resilience of Cellular Organisms and DIME Network Architecture (http://www.computingclouds.wordpress.com)

Mikkilineni, R., Morana, G., and Seyler, I., (2012), “Implementing Distributed, Self-managing Computing Services Infrastructure using a Scalable, Parallel and Network-centric Computing Model” Chapter in a Book edited by Villari, M., Brandic, I., & Tusa, F., Achieving Federated and Self-Manageable Cloud Infrastructures: Theory and Practice (pp. 1-374). doi:10.4018/978-1-4666-1631-8

Penrose, R., (1989) “The Emperor’s New Mind: Concerning Computers, Minds, And The Laws of Physics” New York, Oxford University Press pp. 48

Samad, T., Cofer, T., (2001). Autonomy and Automation: Trends, Technologies, Tools. In Gani, R., Jørgensen, S. B. (Eds.), European Symposium on Computer Aided Process Engineering, volume 11, Amsterdam, Netherlands: Elsevier Science B.V., p. 10

Stanier, P., Moore, G., (2006) “Embryos, Genes and Birth Defects”, (2nd Edition), Edited by Patrizia Ferretti, Andrew Copp, Cheryll Tickle, and Gudrun Moore, London, John Wiley & Sons