Cellular Resiliency, Turing Machines, New Computing Models and the Zen of Consciousness

“WETICE 2012 Convergence of Distributed Clouds, Grids and their Management Conference Track is devoted to transform current labor intensive, software/shelf-ware-heavy, and knowledge-professional-services dependent IT management into self-configuring, self-monitoring, self-protecting, self-healing and self-optimizing distributed workflow implementations with end-to-end resource management by facilitating the development of a Unified Theory of Computing.”

Here is more food for thought…

Abstract:

Cellular biology has evolved to capture dynamic representations of the self and its surroundings, together with a systemic view of monitoring and control of both, to optimize the organism's chances of survival. Signaling plays a key role in shaping the structure and behavior of cellular organisms, which exhibit a high degree of resiliency by monitoring and controlling their own activity and their interactions with the outside environment with a Zen-like oneness of the observer and the observed. Evolution has invented the genetic transactions of replication, repair, recombination and reconfiguration to support the survival of living cells, which organize themselves to execute coordinated sets of activities, and signaling provides the vehicle for managing this system-wide behavior.

By introducing signaling and self-management in a Turing node, and a signaling network as an overlay over the computing network, the current von Neumann computing model is evolved to bring the architectural resiliency of cellular organisms to the computing infrastructure. The new approach introduces the genetic transactions of replication, repair, recombination and reconfiguration to program self-resiliency into distributed computing systems executing a managed workflow. Perhaps the injection of parallelism and the network-based composition of a "self" identity are the first steps in introducing the elements of homeostasis and self-management required for developing consciousness in the computing infrastructure.

Introduction:

As recent advances in neuroscience throw new light on the evolution of cellular computing models, it is becoming clear that the communication and collaboration mechanisms of distributed computing elements, and end-to-end distributed transaction management, played a crucial role in the development of the self-resiliency, efficiency and scaling exhibited by diverse forms of life, from cellular organisms to highly evolved human beings. According to Antonio Damasio (Damasio, 2010), managing and safekeeping life is the fundamental premise of biological value, and this biological value has influenced the evolution of brain structures. "Life regulation, a dynamic process known as homeostasis for short, begins in unicellular living creatures, such as a bacterial cell or a simple amoeba, which do not have a brain but are capable of adaptive behavior. It progresses in individuals whose behavior is managed by simple brains, as is the case with worms, and it continues its march in individuals whose brains generate both behavior and mind (insects and fish being examples)…." Homeostasis is the property of a system that regulates its internal environment and tends to maintain a stable, constant condition of properties, like temperature or chemical parameters, that are essential to its survival. System-wide homeostasis goals are accomplished through a representation of the current state, a desired state, a comparison process and control mechanisms.
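
The last sentence describes, in effect, a feedback control loop. A minimal sketch in Python of that comparison-and-correction cycle might look like the following; the set point, tolerance and correction rule are illustrative assumptions, not anything taken from biology or from the article.

```python
# A minimal sketch (not from the article) of homeostasis as a control loop:
# a current-state reading is compared with a desired set point and a corrective
# action is applied when the deviation exceeds a tolerance.

def homeostasis_step(current_state, desired_state, tolerance, correct):
    """Compare the observed state with the set point and apply a correction."""
    deviation = current_state - desired_state
    if abs(deviation) > tolerance:
        current_state = correct(current_state, deviation)
    return current_state

# Illustrative run: regulate a temperature-like parameter toward 37.0
state = 39.5
for _ in range(10):
    state = homeostasis_step(state, desired_state=37.0, tolerance=0.1,
                             correct=lambda s, d: s - 0.5 * d)
print(round(state, 2))  # converges toward the set point
```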

He goes on to say that "consciousness came into being because of biological value, as a contributor to more effective value management. But consciousness did not invent biological value or the process of valuation. Eventually, in human minds, consciousness revealed biological value and allowed the development of new ways and means of managing it." The governance of life's processes is present even in single-celled organisms that lack a brain, and it has evolved into the conscious awareness that is the hallmark of highly evolved human behavior. "Deprived of conscious knowledge, deprived of access to the byzantine devices of deliberation available in our brains, the single cell seems to have an attitude: it wants to live out its prescribed genetic allowance. Strange as it may seem, the want, and all that is necessary to implement it, precedes the explicit knowledge and deliberation regarding life conditions, since the cell clearly has neither. The nucleus and the cytoplasm interact and carry out complex computations aimed at keeping the cell alive. They deal with the moment-to-moment problems posed by the living conditions and adapt the cell to the situation in a survivable manner. Depending on the environmental conditions, they rearrange the position and distribution of molecules in their interior, and they change the shape of sub-components, such as microtubules, in an astounding display of precision. They respond under duress and under nice treatment too. Obviously, the cell components carrying out those adaptive adjustments were put into place and instructed by the cell's genetic material." This vivid insight brings to light a cellular computing model that:

  1. Spells out the computational workflow components as a stable sequence of patterns that accomplishes a specific purpose,
  2. Implements a parallel management workflow with another sequence of patterns that assures the successful execution of the system's purpose (the computing network that assures biological value through management and safekeeping),
  3. Uses a signaling mechanism that controls the execution of the workflow for gene expression (the regulatory network), and
  4. Assures real-time monitoring and control (homeostasis) to execute genetic transactions of replication, repair, recombination and reconfiguration (Stanier, Moore, 2006).
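
Items 2 and 3 amount to a computational workflow running alongside a management workflow that watches it over a signaling channel. A rough sketch in Python (threads standing in for true parallelism, a queue standing in for the signaling mechanism, and hypothetical task names) is:

```python
# A minimal sketch: a computational workflow and a parallel management workflow
# connected by a signaling channel. Task names and the fault condition are
# hypothetical stand-ins.

import queue
import threading
import time

signals = queue.Queue()  # the signaling channel

def computational_workflow(tasks):
    for task in tasks:
        signals.put(f"started {task}")
        time.sleep(0.01)                  # stand-in for real work
        if task == "bad-task":
            signals.put(f"fault {task}")  # report a malfunction
        else:
            signals.put(f"finished {task}")
    signals.put("done")

def management_workflow():
    while True:
        event = signals.get()
        if event.startswith("fault"):
            print("manager: corrective action for", event)  # repair/reconfigure
        if event == "done":
            break

worker = threading.Thread(target=computational_workflow, args=(["t1", "bad-task", "t2"],))
manager = threading.Thread(target=management_workflow)
worker.start(); manager.start()
worker.join(); manager.join()
```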

The efficient managing and safekeeping of life are evident at the lowest level of biological architecture, which provides the resiliency von Neumann was discussing in his Hixon lecture (von Neumann, 1987): "The basic principle of dealing with malfunctions in nature is to make their effect as unimportant as possible and to apply correctives, if they are necessary at all, at leisure. In our dealings with artificial automata, on the other hand, we require an immediate diagnosis. Therefore, we are trying to arrange the automata in such a manner that errors will become as conspicuous as possible, and intervention and correction follow immediately." Comparing computing machines and living organisms, he points out that computing machines are not as fault-tolerant as living organisms. He goes on to say: "It's very likely that on the basis of the philosophy that every error has to be caught, explained, and corrected, a system of the complexity of the living organism would not run for a millisecond."

The connection between consciousness and computing models is succinctly summarized by Samad and Cofer (Samad, Cofer, 2001). While there is no accepted precise definition of the term consciousness, "it is generally held that it is a key to human (and possibly other animal) behavior and to the subjective sense of being human. Consequently, any attempt to design automation systems with humanlike autonomous characteristics requires designing in some elements of consciousness. In particular, the property of being aware of one's multiple tasks and goals within a dynamic environment and of adapting behavior accordingly." They point to two theoretical limitations of formal systems that may inhibit the implementation of computational consciousness and hence limit our ability to design human-like autonomous systems. "First, we know that all digital computing machines are "Turing-equivalent" - they differ in processing speeds, implementation technology, input/output media, etc., but they are all (given unlimited memory and computing time) capable of exactly the same calculations. More importantly, there are some problems that no digital computer can solve. The best known example is the halting problem; we know that it is impossible to realize a computer program that will take as input another, arbitrary, computer program and determine whether or not the program is guaranteed to always terminate.

Second, by Gödel’s proof, we know that in any mathematical system of at least a minimal power there are truths that cannot be proven. The fact that we humans can demonstrate the incompleteness of a mathematical system has led to the claims that Gödel’s proof does not apply to humans.”

An important implication of Gödel's incompleteness theorem is that it is not possible to have a finite description that contains the description itself as a proper part. In other words, it is not possible to read yourself or process yourself as a process. In short, Gödel's theorems prohibit "self-reflection" in Turing machines. Louise Barrett (Barrett, 2011) highlights the difference between Turing machines implemented using the von Neumann architecture and biological systems. "Although the computer analogy built on von Neumann architecture has been useful in a number of ways, and there is also no doubt that work in classic artificial intelligence (or, as it is often known, Good Old Fashioned AI: GOFAI) has had its successes, these have been somewhat limited, at least from our perspective here as students of cognitive evolution." She argues that Turing machines based on algorithmic symbol manipulation using the von Neumann architecture gravitate toward those aspects of cognition, like natural language, formal reasoning, planning, mathematics and playing chess, that involve the processing of abstract symbols in a logical fashion, and leave out other aspects of cognition that deal with producing adaptive behavior in a changeable environment. Unlike the approach in which perception, cognition and action are clearly separated, she suggests that the dynamic coupling between the various elements of the system, where each change in one element continually influences every other element's direction of change, has to be accounted for in any computational model that includes the system's sensory and motor functions along with analysis. To be fair, such couplings in the observed can be modeled and managed using a Turing machine network, and the Turing network itself can be managed and controlled by another serial Turing network. What is not possible is the tight integration of the models of the observer/manager and the observed/managed with a description of the "self" (or a specification of the manager) using the parallelism and signaling that are the norm, not the exception, in biology.

A more interesting controversy that has erupted regarding the need for new computing models (Wegner, Eberbach, 2004; Cockshott, Michaelson, 2007; Goldin, Wegner, 2008) throws new light on the need for re-examining Turing machines, Gödel's prohibition of self-reflection and von Neumann's conjecture. An even more recent discussion of the need for new computing models was presented in the Ubiquity symposium (ACM Ubiquity, 2011). As we describe later, these authors are attempting to address how to model computational problems that cannot be solved by a single Turing machine but can be solved using a set of Turing machines interacting with each other. In particular, the property of being aware of one's multiple tasks and goals within a dynamic environment and of adapting behavior accordingly, which is related to the consciousness mentioned earlier, is one such problem that a single Turing machine cannot solve. The insights from biology suggest that modeling the temporal dynamics of the observer and the observed, while also assuring the safekeeping of the observer (with a "self" identity), requires modifications to the Turing machine to accommodate changes to its behavior while a computation is still in progress.

Self, Consciousness, and Emotions – The Dynamic Representation of the Observer and the Observed:

Self-reflection, setting expectations, monitoring deviations and taking corrective action are essential for managing the business of life through homeostasis, and evolution has figured out how to encapsulate the right descriptions to execute life's processes using the genetic transactions of replication, repair, recombination and reconfiguration by exploiting parallelism and signaling. As Jonah Lehrer (Lehrer, 2010) describes in his book "How We Decide", "Dopamine neurons automatically detect the subtle patterns that we would otherwise fail to notice; they assimilate all the data that we can't consciously comprehend. And, then, once they come up with a set of refined predictions about how the world works, they translate these predictions to emotions." Emotions, it seems, are the instinctual, localized, component-level suggestions for corrective action based on local experience. Conscience [1], on the other hand, is the adult that correlates the instinctual suggestions with a much larger perspective and makes decisions based on global priorities.

It is becoming clear from recent advances in neuroscience that self-reflection is a key component of living organisms. Homeostasis is not possible without a dynamic and active representation of the observer and the observed.

A cellular organism is the simplest form of life that maintains an internal environment supporting its essential biochemical reactions, despite changes in the external environment. A selectively permeable plasma membrane surrounding a concentrated aqueous solution of chemicals is therefore a feature of all cells, and the cell is capable of self-replication and self-repair. Organisms may be unicellular or multicellular. Unicellular organisms perform all the functions of life. Multicellular organisms contain several different cell types that are specialized to perform specific functions. The cell adapts to its environment by recognition and transduction of a broad range of environmental signals, which in turn activate response mechanisms by regulating the expression of the proteins that take part in the corresponding processes. The nucleus of the cell houses deoxyribonucleic acid (DNA), the genetic blueprint of the organism, which determines the structure and function of the organism as a whole. The DNA serves two functions. First, it contains the instructions for assembling the structural and enzymatic proteins of the cell. Cellular enzymes in turn control the formation of other cellular structures and also determine the functional activity of the cell by regulating the rate at which metabolic reactions proceed. Second, by replicating (making copies of itself), DNA perpetuates the genetic blueprint within all new cells formed within the body and is responsible for passing on genetic information from survivors to successors.

A gene is a stretch of DNA that contains the instructions, or code, for a particular function such as synthesizing a protein or dictating the assembly of amino acids. A unique set of genes is packaged into chromosomes in complex organisms. A gene regulatory network represents relationships between genes that can be established by measuring how the expression level of each one affects the expression levels of the others. In any global cellular network, genes do not interact directly with other genes. Instead, gene induction or repression occurs through the action of specific proteins, which are in turn products of other genes. In essence, gene networks are abstract models that display causal relationships between gene activities and are represented by directed graphs. Nearly all of the cells of a multicellular organism contain the same DNA, yet this same genetic information yields a large number of different cell types. The fundamental difference between a neuron and a liver cell, for example, is which genes are expressed. The regulatory gene network forms a cellular control circuitry defining the overall behavior of the various cells. According to Antonio Damasio (Damasio, 2010), the brain architecture is an evolutionary aid to the business of managing life, which consists of managing the body; the management gains precision and efficiency with the presence of circuits of neurons assisting it. In describing the role of neurons, he says that "neurons are about life and managing life in other cells of the body, and that that aboutness requires two-way signaling. Neurons act on other body cells, via chemical messages or excitation of muscles, but in order to do their job, they need inspiration from the very body they are supposed to prompt, so to speak. In simple brains, the body does its prompts simply by signaling to subcortical nuclei. Nuclei are filled with "dispositional know-how," the sort of knowledge that does not require detailed mapped representations. But in complex brains, the map-making cerebral cortices describe the body and its doings in so much explicit detail that the owners of those brains become capable, for example, of "imaging" the shape of their limbs and their positions in space, or the fact that their elbows hurt or their stomach does".

The complex network of neural connections and signaling mechanisms collaborate to create a dynamic, active and temporal representation of both the observer and the observed with myriad patterns, associations and constraints among their components. It seems that the business of managing life is more than mere book-keeping that is possible with a Turing machine. It involves the orchestration of an ensemble with a self-identity both at the group and the component level contributing to the system’s biological value. It is a hierarchy of individual components where each node itself is a sub-network with its own identity and purpose which is consistent with the system-wide purpose. To be sure, each component is capable of book-keeping and algorithmic manipulation of symbols. In addition, identity and representations of the observer and the observed at both the component and group level make system-wide self-reflection possible.

In short, the business of managing life is implemented by a system consisting of a network of networks with multiple parallel links that transmit both control information and the mission critical data required to sense and to control the observed by the observer. The data and control networks provide the capabilities to develop an internal representation of both the observer and the observed along with the processes required to implement the business of managing life. The organism is made up of autonomic components making up an ensemble collaborating and coordinating a complex set of life’s processes that are executed to sense and control both the observer and the observed.  In this sense, the brain and the body are part of a collaborating system that has a unique identity and a structure that preserves the interrelationships.  The system consists of:

  1. Components, each with a purpose within a larger system (specialization),
  2. All of a system's parts must be present for the system to carry out its purpose optimally,
  3. A system’s parts must be arranged in a specific way for the system to carry out its purpose (separation of concerns),
  4. Systems change in response to feedback (collect information, analyze information and control environment using specialized resources), and
  5. Systems maintain their stability (in accomplishing their purpose) by making adjustments based on feedback (homeostasis).

[1] According to Antonio Damasio (Damasio, 2010), consciousness pertains to the knowing of any object or action attributed to a self, while conscience pertains to the good or evil to be found in actions or objects. The identity of self and its safekeeping are essential parts of life processes.  “The non-conscious neural signaling of an individual organism begets the proto-self which permits core self and core consciousness, which allow for an auto-biographical self which permits extended consciousness. At the end of the chain, extended consciousness permits conscience.”

Figure 1 shows the model of core conscience, its relationship to the observed, and extended conscience (Damasio, 1999), proposed by Damasio based on his studies in neuroscience.

Figure 1: The mapping of the observer, the observed and myriad models, associations and processes executed using parallel signaling and data exchange networks.  Each component itself is a sub-network with a purpose defined by its own internal models.

The literature is filled with discussions of Gödel's prohibition of self-reflection in Turing machines and of why consciousness cannot emerge from brain models that depend on Turing machines. There are many theories on how the human brain is unique and may even involve quantum phenomena or gravity waves (Scott, 1995; Davis, 1992). However, Damasio (Damasio, 2010) takes an evolutionary approach to discuss genomic unconsciousness, the feeling of conscious will, educating the cognitive conscious, the reflective self and its consequences. He goes on to say that "in one form or another, the cultural developments manifest the same goal as the form of automated homeostasis." "They respond to a detection of the imbalance in the life process, and seek to correct it within the constraints of human biology and of the physical and social environment."

Instead of adding to the already existing controversy (Scott, 1995) about consciousness, we take a different route, using Damasio's emphasis on homeostasis along with the dynamic representation of the observer and the observed. We apply them to extend the Turing machine and its von Neumann serial computing implementation. We ask how we can utilize the abstractions that assist in the business of managing life in cellular organisms, discussed above, to enhance the resiliency of distributed computing systems. In the next section we analyze the current implementation of Turing machines and suggest adding some of the abstractions that have proven useful in managing life's processes, to develop a computing model that addresses the problem of being aware of one's multiple tasks and goals within a dynamic environment and of adapting behavior accordingly.

Turing Machines, Super Turing Machines and DIME Networks:

While a single stored program control (SPC) node lacks the self-reflection prohibited by Gödel's theorems, networks of Turing machines have been successfully used to implement business workflows that observe and manage the external world. This is accomplished by modeling the observed (external to the computing infrastructure) and orchestrating the temporal dynamics of the observed. It has helped us develop complex control systems in which the observed can be monitored and controlled with the resiliency of cellular organisms.

However, what is missing is the same resiliency in the infrastructure (the observer) that implements the control of the observed. Learning from Damasio's analysis, in order to introduce consciousness we must introduce the "self" identity of the observer, along with awareness of the observer's multiple tasks and goals within a dynamic environment and the ability to adapt behavior accordingly. The "self" specification must include a hierarchy of goals and execution mechanisms, encompassing his concepts of the "core" and "extended" selves.

The evolution of computing seems to follow a path similar to that of cellular organisms, in the sense that it began as an individual computing element (the von Neumann stored program implementation of the Turing machine) and evolved into today's networks of managed computing elements executing complex workflows that monitor and control the external environment.

The Turing machine originally started as a static, closed system (Goldin, Wegner, 2008), analogous to a single cell. It was designed for computing algorithms that correspond to a mathematical world view. This is the case with assembler-language programming, where a CPU is programmed directly and the Turing machine is implemented using the von Neumann stored program control computing model, as shown in Figure 2.

Figure 2: A Turing machine with von Neumann Stored Program Control implementation in its simplest form

The Church-Turing thesis stipulates that "Turing machines can compute any effective (partially recursive) functions over naturals (strings)." Goldin and Wegner argue that the Church-Turing thesis applies only to effective computations, rather than to computation by arbitrary physical machines, dynamical systems or humans.

To address this issue, we stipulate that "all computations can be represented as workflows specified by a directed acyclic graph (DAG). Algorithms are a subset of all computations. An algorithm can be viewed as a workflow of instructions executed by a stored program control (SPC) computing unit (constituting an atomic unit of computation). Then, based on the programming paradigm of one's choice, one can compose other computing units such as procedures, functions, objects etc., to execute the specified workflow." This reconciles the operating system conundrum: operating systems do not terminate, as Turing machine computations are required to. As soon as an operating system is introduced, the Turing machine SPC implementation immediately becomes a workflow of computations implementing a process, where each process now behaves as a new Turing machine with an SPC implementation. It is as if the operating system is a manager (implementing a management workflow using a group of management Turing machines dedicated to this purpose) controlling a series of other computing Turing machines based on policies set in the operating system. The operating system instructions and the computation-dependent instructions are mixed to serially execute each process and a sequence of processes. This is analogous to the evolution of multi-cellular organisms, where individual cells establish a common management protocol to execute their goals with shared resources. The individual processes may or may not have a common goal, but they share the same resources. The operating system communicates with the processes to exert its role using shared memory, as shown in Figure 3. While the individual processes do not have fault, configuration, accounting, performance and security management of self, the operating system provides these functions using the signaling abstractions of addressing, alerting, mediation and supervision.
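
As a sketch of this stipulation, the fragment below expresses a computation as a DAG of atomic tasks and executes it in dependency order using Python's standard-library topological sorter; the graph and the task bodies are hypothetical.

```python
# A minimal sketch: a computation represented as a workflow over a DAG,
# executed in dependency order. Node names and task bodies are illustrative.

from graphlib import TopologicalSorter  # Python 3.9+

def run_workflow(dag, tasks):
    """dag maps node -> set of prerequisite nodes; tasks maps node -> callable."""
    for node in TopologicalSorter(dag).static_order():
        tasks[node]()  # each node is an atomic unit of computation

dag = {"load": set(), "transform": {"load"}, "store": {"transform"}}
tasks = {name: (lambda n=name: print("executing", n)) for name in dag}
run_workflow(dag, tasks)
# executing load -> executing transform -> executing store
```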

Figure 3: Operating system implements the managed Turing processes.

Since then, multi-threading in a single processor, networked computing and interactive computing have influenced computation. In a network, concurrency and the influence of one node on another (the impact of the environment on the computation) are the new elements that have to be addressed. The Pi-calculus and super-Turing models (Eberbach, Wegner, Goldin, 2011) are attempts to address these aspects. While these attempts are embroiled in controversy (Cockshott, Michaelson, 2007), what is not in dispute is that a network of computers represents a network of organized Turing machines, where each node is a group of Turing machines managed locally. See Figure 4.

Figure 4: A networked set of Turing machines provides distributed computing services. However, this does not provide coordination and management across the two sets of Turing machines.

In such a network, the local operating systems cannot provide fault, configuration, accounting, performance and security (FCAPS) management of the system as a whole. The disciplines of distributed computing and distributed systems management evolved to address the FCAPS management of the system in an ad hoc manner, without a formal computing model for the system as a whole. This is even more complicated when the system as a whole acts in unison with a system-wide purpose, where one element can influence other elements, as pointed out by Louise Barrett (Barrett, 2011).

In this case, the description of the functions performed and the influence of one computation on another have to be encoded at compile time, and each computing element does not have the ability to change its behavior at run time. In addition, the operating system's function is to allocate resources appropriately to the consumers (processes running applications), and the applications themselves do not have any influence over the resources at run time. For example, if the workload fluctuates, an application has no way of monitoring and controlling its resources.

Figure 5: A network of Turing machines implementing a service workflow that manages the external environment (the observed). The management of the observer is also implemented using the same serial Turing machines: in some nodes the management of the observer and of the observed is mixed in serial fashion, while other nodes are exclusively devoted to managing the observer.

If multiple applications are contending for resources, external policies have to be implemented as yet other Turing machines, and the applications themselves are not aware of these external influences. In order to manage a distributed set of Turing machines, another set of Turing machines is introduced to provide service management and to improve the fault, configuration, accounting, performance and security characteristics of the distributed system. See Figure 5.

The DIME computing model allows the specification and execution of a recursive composition model in which each computing unit at any level specifies and executes the workflow at the level below. The specification at a higher level eliminates the self-reflection prohibition of Gödel's theorems on computational units. The parallel implementation of the management workflow and the computational workflow at each level allows one component in the workflow to influence another component at the lower level. At any level, the computing unit specifies and assures the execution of the lower-level workflow; it thus becomes the observer, observing and controlling the workflow execution at the lower level (the observed).
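
One way to picture this recursive composition in code is sketched below: a unit at one level holds the specification of the workflow at the level beneath it, instantiates that workflow, and supervises its execution. The class name and the flat specification format are illustrative assumptions, not the DIME implementation.

```python
# A minimal sketch of recursive composition: a managing unit at one level
# instantiates and observes the workflow at the level below. Names are
# illustrative only.

class ManagedUnit:
    def __init__(self, name, spec):
        self.name = name
        self.spec = spec  # either a callable task or a list of (name, spec) children

    def execute(self):
        if callable(self.spec):          # leaf: an atomic computation
            self.spec()
        else:                            # observer of the level below
            for child_name, child_spec in self.spec:
                child = ManagedUnit(child_name, child_spec)
                print(f"{self.name} supervising {child.name}")
                child.execute()          # the observed workflow

top = ManagedUnit("root", [
    ("stage-1", lambda: print("  running stage-1")),
    ("stage-2", [("stage-2a", lambda: print("  running stage-2a"))]),
])
top.execute()
```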

This model eliminates the problem of the separation of communication between the computing system components within a system from the communication between the computing system and its environment. In current computing models of systems design, treating them as two separate issues has created the current disconnect in distributed systems theories (Goldin, Wegner, 2008, p. 22).

Figure 6 shows the new computing model, which we call the Distributed Intelligent Managed Element (DIME) network computing model; the resulting computing infrastructure is designed with the DIME network architecture.

Figure 6: A Distributed Intelligent Managed Element (DIME) with local management of the Turing computing node and a signaling channel. The FCAPS attributes of the Turing node are continuously monitored and controlled based on local policies. In addition, the signaling channel allows coordination with global policies.

The DIME network architecture (Mikkilineni 2011) consists of four components:

  1. A DIME node, which encapsulates the von Neumann computing element with self-management of FCAPS,
  2. Signaling capability that allows intra-DIME and inter-DIME communication and control,
  3. An infrastructure that allows implementing distributed service workflows as a set of tasks, arranged or organized as a DAG and executed by a managed network of DIMEs, and
  4. An infrastructure that assures DIME network management using the signaling network overlay over the computing workflow.

The self-management and the task execution (using the DIME component called MICE, the managed intelligent computing element) are performed in parallel using stored program control computing devices. The DIME encapsulates the "dispositional know-how." Each DIME is programmable to control the MICE and provide continuous supervision of the execution of the programs run by the MICE. The DIME FCAPS management makes it possible to model and represent the dynamic behavior of each DIME, the state of the MICE and its evolution as a function of time, based on both internal and external stimuli. The parallel management architecture allows the observer (a network of sub-networks forming a group) to monitor and control itself while facilitating the monitoring and control of the observed in the external environment. Parallelism allows dynamic information flow both in the signaling channel and in the external I/O channels of the Turing computing nodes.
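
The skeleton below is one way to picture a DIME node in code: a MICE thread executes the service while a parallel FCAPS manager supervises it over a signaling queue and applies a local repair policy on failure. The class, method and policy names are illustrative assumptions, not the published DIME implementation.

```python
# A minimal sketch of a DIME-like node: a MICE executing the task and a parallel
# FCAPS manager connected by a signaling channel. All names are illustrative.

import queue
import threading

class DimeNode:
    def __init__(self, service, policies):
        self.service = service        # the task the MICE executes
        self.policies = policies      # local FCAPS policies
        self.signals = queue.Queue()  # intra-/inter-DIME signaling channel
        self.status = "configured"

    def mice(self):
        """Managed Intelligent Computing Element: runs the service."""
        self.status = "running"
        try:
            self.service()
            self.status = "finished"
        except Exception:
            self.status = "faulted"
        self.signals.put(("status", self.status))

    def fcaps_manager(self):
        """Parallel supervisor: monitors the MICE and applies local policies."""
        _, status = self.signals.get()
        if status == "faulted" and "repair" in self.policies:
            print("DIME manager: fault detected, loading replacement service")
            self.service = self.policies["repair"]  # repair transaction
            self.mice()

    def run(self):
        worker = threading.Thread(target=self.mice)
        manager = threading.Thread(target=self.fcaps_manager)
        worker.start(); manager.start()
        worker.join(); manager.join()

def flaky_service():
    raise RuntimeError("simulated fault")

node = DimeNode(service=flaky_service,
                policies={"repair": lambda: print("replacement service running")})
node.run()
```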

There are three special features of the DIME network architecture that contribute to self-resiliency:

  1. Each Turing computing node is controlled by the FCAPS policies set in its DIME; each read and write is dynamically configurable based on those policies.
  2. Each node can itself be a sub-network of DIMEs, with goals set by the sub-network policies.
  3. Signaling allows dynamic connection management to reconfigure the DIME network, thus changing its policies and behavior.
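
The first feature can be illustrated with a small sketch in which the node's read and write operations pass through policy checks that can be changed at run time (for example, by a signal from the DIME manager); the policy keys and the tape abstraction are hypothetical.

```python
# A minimal sketch: reads and writes gated by FCAPS-style policies that can be
# reconfigured while the node runs. Policy keys and the tape model are illustrative.

class ManagedTape:
    def __init__(self, policies):
        self.cells = {}
        self.policies = policies  # mutable at run time via signaling

    def read(self, position):
        if not self.policies.get("read_enabled", True):
            raise PermissionError("read blocked by current policy")
        return self.cells.get(position, 0)

    def write(self, position, symbol):
        if not self.policies.get("write_enabled", True):
            raise PermissionError("write blocked by current policy")
        self.cells[position] = symbol

policies = {"read_enabled": True, "write_enabled": True}
tape = ManagedTape(policies)
tape.write(0, 1)
print(tape.read(0))                # 1
policies["write_enabled"] = False  # a signal reconfigures the policy at run time
```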

It is easy to show that the DIME network architecture supports the genetic transactions of replication, repair, recombination and reconfiguration. Figure 7 shows a single-node execution of a service in a DIME network.

Figure 7: Single node execution of a DIME

A single DIME node that can execute a workflow by itself, or by instantiating a sub-network, provides a way to implement a managed DAG executing a workflow. Replication is implemented by executing the same service again, as shown in Figure 8.


Figure 8: DIME Replication

By defining service S2 to execute itself, we replicate the S2 DIME. Note that S2 is a service that can be programmed to stop instantiating itself further when resources are not available. In addition, dynamic FCAPS management (parallel service monitoring and control) allows changing the behavior of a node while it runs: the ability to execute control commands in parallel permits dynamic replacement of services at run time. For example, by stopping service S2 and loading and executing service S1, we dynamically change the service at run time. We can also redirect I/O dynamically at run time. Any DIME can also instantiate and control a sub-network, as shown in Figure 9. The workflow orchestrator instantiates the worker nodes, monitors the heartbeat and performance of the workers, and implements fault-tolerance, recovery and performance-management policies.

Figure 9: Dynamic Service Replication & Reconfiguration

It can also implement accounting and security monitoring and management using the signaling channel. Redirection of I/O allows dynamic reconfiguration of worker input and output, thus providing control of the computational network.
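
A condensed sketch of the replication and run-time reconfiguration just described follows; the service names, classes and print statements are illustrative stand-ins for real instantiation and control.

```python
# A minimal sketch: replication by instantiating the same service again, and
# reconfiguration by stopping one service and loading another at run time.

class ManagedService:
    def __init__(self, name, body):
        self.name, self.body = name, body

    def run(self):
        print(f"running {self.name}")
        self.body()

class DimeWorker:
    def __init__(self, service):
        self.service = service

    def replicate(self):
        """Replication: instantiate another worker executing the same service."""
        return DimeWorker(self.service)

    def reconfigure(self, new_service):
        """Reconfiguration: stop the current service and load a new one."""
        print(f"stopping {self.service.name}, loading {new_service.name}")
        self.service = new_service

s2 = DimeWorker(ManagedService("S2", lambda: None))
clone = s2.replicate()                              # S2 replicates itself
s2.reconfigure(ManagedService("S1", lambda: None))  # run-time replacement
s2.service.run()                                    # the worker now runs S1
```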

Figure 10: A DIME sub-network implementing service composition, fault management and performance management.

Figure 10 shows a DIME sub-network implementing service composition, fault management and performance management. A video (http://youtu.be/Ft_W4yBvrVg) provides an animated explanation of the DIME network architecture supporting the genetic transactions of software services implemented using the stored program control implementation of the Turing machine.

In summary, the dynamic configuration at the DIME node level, and the ability to implement at each node a managed directed acyclic graph using a DIME sub-network, provide a powerful paradigm for designing and deploying managed services that are decoupled from the hardware infrastructure management. Figure 11 shows a workflow implementation of monitoring and controlling an external environment (temperature monitoring and fan control to keep the temperature within a range) using a self-managed DIME network with a signaling network overlay.

Figure 11: A workflow implementation using a DIME network. There are two FCAPS management workflows, one managing the observer (the computing infrastructure) and the other managing the observed (the thermometer and the fan).
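
As a rough sketch of the workflow in Figure 11, the fragment below keeps a simulated temperature within a range by switching a simulated fan (managing the observed), while a second loop retries the control task if it fails (a much simplified stand-in for managing the observer); none of this is taken from the actual implementation.

```python
# A minimal sketch of the Figure 11 workflow: one loop manages the observed
# (thermometer and fan), another manages the observer (the control task itself).
# The sensor, fan and failure handling are simulated stand-ins.

class Environment:
    def __init__(self):
        self.temperature, self.fan_on = 28.0, False

    def step(self):
        self.temperature += -0.6 if self.fan_on else 0.4  # simulated physics

def control_observed(env, low=24.0, high=26.0):
    """Manage the observed: keep the temperature within [low, high]."""
    if env.temperature > high:
        env.fan_on = True
    elif env.temperature < low:
        env.fan_on = False

def manage_observer(control_task, env):
    """Manage the observer: detect a failed control task and re-run it."""
    try:
        control_task(env)
    except Exception:
        print("observer fault detected, re-running the control task")
        control_task(env)

env = Environment()
for _ in range(20):
    manage_observer(control_observed, env)
    env.step()
print(f"temperature now {env.temperature:.1f}, fan on: {env.fan_on}")
```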

While the DIME network architecture provides food for thought about Turing machines, new computing models and the role of the representations of the observer and the observed in consciousness, it also has practical utility in developing software that exploits the parallelism and performance of many-core servers (Mikkilineni et al., 2011). Some results demonstrating self-repair and auto-scaling to control the response time of a web server were presented at the Server Design Summit (Mikkilineni, 2011).

Conclusion:

The limitation of Turing machines as a complete model of computation was pointed out by Wegner and Eberbach (2004); their argument was challenged by Cockshott and Michaelson (2007) and rebutted by Goldin and Wegner (2008). The main argument for a new computing model is the need to account for the interaction between conventional algorithmic computation and the environment outside the computing element. The Turing model of algorithms is closed and static and does not address changes affecting the computation from outside while the computation is in progress. In order to account for networked systems in which each change in one element continually influences every other element's direction of change, more expressive computing models are required. The von Neumann implementation of the Turing machine, with its serial processing and its mixing of algorithmic computation and interaction across a network of von Neumann computing nodes, has given rise to complex management infrastructures that make it difficult to implement the architectural resiliency of cellular organisms in our IT infrastructure.

The DIME computing model, by implementing a parallel management infrastructure to monitor and control the Turing machine at the atomic level, allows the read and write functions of the conventional Turing machine to be influenced by external interaction. The hierarchical, network-based composition model of the DIME network architecture (where a node itself can be a sub-network) allows the identification of the "self" (the observer) at various levels and the representation of the interaction between the observer and the observed.

The beauty of the DIME computing model is that it does not impact the current implementation of the service workflow using von Neumann SPC nodes (the monitoring and control of the observed external systems). But by introducing parallel control and management of the service workflow, the DIME network architecture provides the required scaling, agility and resilience both at the node level and at the network level (integrating the management and control of the self, the observer). The signaling-based, network-level control of a service workflow that spans multiple nodes allows end-to-end, connection-level quality of service management, independent of the hardware infrastructure management systems, which do not provide any meaningful visibility or control over the end-to-end service transaction at run time. The only requirement for the DIME infrastructure provider is to assure that the node OS provides the services required for the service controller to load the Service Regulator and the Service Execution Packages to create and execute the DIME.

The network management of DIME services allows hierarchical scaling using the network composition of sub-networks. Each DIME, with its autonomy over local resources through FCAPS management and its network awareness through signaling, can keep its own history and provide negotiated services to other DIMEs, thus enabling collaborative workflow execution.

Each node has a unique identity and supports local behavior and its control using local policies that are programmable with conventional von Neumann SPC Turing machines. Each sub-network and network has a group identity (a group self) and supports group behavior and control. The resulting network of networks enables the system-wide, resilient business of managing both the self and the services that monitor and control external behavior. The parallel control network allows dynamic connection management of component functions to create dynamic workflows that accommodate a changing environment.

The cellular implementation of the business of managing life may also show us the way to the business of managing our computing infrastructure, which has already proven valuable in implementing the business of managing our lives and our environment, transcending the body and mind of a single individual. As von Neumann remarked (von Neumann, 1966), there is "a theorem of Gödel that the next logical step, the description of an object, is one class type higher than the object and is therefore asymptotically longer to describe." He admitted to twisting the theorem a little while describing the evolution of a diversifying computational ecology from simple strings of 0s and 1s (von Neumann, 1987). Perhaps the recursive nature of a network containing sub-networks as nodes, along with FCAPS management both at the node and at the network level, offers the definition of "self-identity" at various levels. While self-reflection at any level is prohibited by Gödel, a higher-level "self" provides the required management and control to the lower levels. A parallel signaling network, which allows dynamic replication, repair, recombination and reconfiguration, provides a degree of resiliency, efficiency and scaling that is not possible with a network of serial von Neumann implementations of Turing machines alone. This may well be a prescription for injecting the property of being aware of one's multiple tasks and goals within a dynamic environment and of adapting behavior accordingly.

References:

ACM Ubiquity Symposium, (2011) http://ubiquity.acm.org/symposia.cfm

Barrett, L., (2011). Beyond the Brain: How Body and Environment Shape Animal and Human Minds. Princeton, New Jersey: Princeton University Press, pp. 116, 122.

Cockshott, P., Michaelson, G., (2007). Are There New Models of Computation? Reply to Wegner and Eberbach. The Computer Journal, vol. 50, no. 2, pp. 232-247.

Damasio, A., (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York, NY: Harcourt & Company.

Damasio, A. (2010). Self Comes to Mind: Constructing the Conscious Brain. New York: Pantheon Books, p. 25 and p. 35.

Dyson, G. B., (1997). Darwin among the Machines: the evolution of global intelligence. Massachusetts: Helix books, p. 189.

Eberbach, E., Wegner, P., Goldin, D., (2011). Our Thesis: Turing Machines Do Not Model All Computations. (private communication of an unpublished paper)

Goldin, D., Wegner, P., (2008). Refuting the Strong Church-Turing Thesis: The Interactive Nature of Computing. Minds and Machines, 18:1, March, pp. 17-38.

Lehrer, J., (2010) How We Decide. Boston, MA: Mariner Books, p. 50

Mikkilineni, R., (2011). Designing a New Class of Distributed Systems. New York,NY: Springer. (http://www.springer.com/computer/information+systems+and+applications/book/978-1-4614-1923-5)

Mikkilineni, R., Morana, G., Zito, D., Di Sano, M., (2011). Service Virtualization using a non-von Neumann Parallel, Distributed & Scalable Computing Model: Fault, Configuration, Accounting, Performance and Security Management of Distributed Transactions. (Preprint)

Mikkilineni, R., (2011). Service Virtualization using a non-von Neumann Computing Model, Server Design Summit (www.serverdesignsummit.com), San Jose, November 29. (A video of the presentation is available at http://www.kawaobjects.com/presentations/ServerDesignSummitVideo.wmv.)

Samad, T., Cofer, T., (2001). Autonomy and Automation: Trends, Technologies, Tools. In Gani, R., Jørgensen, S. B., (Eds.), European Symposium on Computer Aided Process Engineering, volume 11. Amsterdam, Netherlands: Elsevier Science B.V., p. 10.

Stanier, P., & Moore, G., (2006).  The Relationship Between Genotype and Phenotype: Some Basic Concepts. In Ferretti, P., Copp, A., Tickle, C., & Moore, G., (Ed.), Embryos, Genes and Birth Defects, London: John Wiley, p. 5

Scott, A., (1995). The Controversial New Science of Consciousness: Stairway to the Mind. New York, NY: Copernicus, Springer-Verlag, p. 184.

"At the hierarchical level of human conscience it is not possible to report a consensus of the scientific community because there is none. Materialists, functionalists, and dualists are - according to a recent issue of the popular science magazine Omni (October 1993) - engaged in

slinging mud and hitting low like politicians arguing about tax hikes. Although the epithets are more rarefied - here it is "obscurantist" and "crypto-Cartesian" rather than "liberal" and "right wing" - recent exchanges between neuroscientists and philosophers of mind (and in each group among themselves) feature the same sort of relentless defensiveness and stark opinionated name calling we expect from irate congressmen or trash-talking linebackers.

To the extent that this is a true appraisal of the current status of consciousness, it is unfortunate. Like life, the phenomenon of consciousness is intimately related to several levels of the scientific hierarchy, so the appropriate scientists - cytologists, electrophysiologists, neuroscientists, anesthesiologists, sociologists and ethnologists - should be working together. It is difficult to see how this elusive phenomenon might otherwise be understood."

Davis, P., (1992). The Mind of God: The Scientific Basis for a Rational World. New York, NY: Simon and Schuster.

von Neumann, J., (1966). Theory of Self-Reproducing Automata. Burks, A. W. (Ed.), Urbana, Illinois: University of Illinois Press.

von Neumann, J., (1987). Papers of John von Neumann on Computing and Computing Theory, Hixon Symposium, September 20, 1948, Pasadena, CA. The MIT Press, pp. 454, 457.

Wegner, P., Eberbach, E., (2004). New Models of Computation. The Computer Journal, vol. 47, no. 1, pp. 4-9.
