“The difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind” – Charles Darwin (1871) (quote taken from Frans de Waal (2016), “Are We Smart Enough to Know How Smart Animals Are?”, W. W. Norton.)
In biological systems, natural intelligence evolved over a long period of time to include managed information-processing methods that transcend what genes and neurons do. As history accumulates with experience, it is processed to create new best practices and gain new predictive insight, which organisms use to configure, reconfigure, monitor and control themselves and their environment in a harmonious way, assuring safety and survival while executing their intent most efficiently. Genes transmit encoded information from the survivor to the successor. Neurons organize themselves into a nervous system, with or without a brain (as in some organisms), to deal with real-time sensory information and motor coordination. In both cases, an overlay control architecture allows the coordination of integrated information from different subsystems. Together, the body (consisting of sensors and motors), the nervous system (including the brain) coordinating sensory and motor activity through 4E (embedded, enacted, extended and embodied) cognition, and an overlay supervisory control architecture (the mind?) integrate information from various quarters.
The result is a being with a sense of self-identity, intelligence (the ability to acquire and apply knowledge and skills), sentience (the capacity to feel, perceive or experience) and resilience (the capacity to recover quickly from difficulties without requiring a reboot) – all the ingredients for developing various degrees of consciousness.
Current-generation neural-network-based systems, despite all the marketing hype, fall short of mimicking even the lowest degree of consciousness found in biological systems, let alone humans. Before we implement consciousness in the digital universe, we must first understand the true nature of cognition and how to implement it, because cognition is a prerequisite for consciousness and culture to evolve.
Both Alan Turing and John von Neumann pointed out in 1948 that symbolic computing and neural networks are two sides of the same coin of information processing. Turing, while discussing how to organize unorganized machinery, says: “If we are trying to produce an intelligent machine, and are following the human model as closely as we can, we should begin with a machine with very little capacity to carry out elaborate operations or to react in a disciplined manner to orders (taking the form of interference). Then by applying appropriate interference, mimicking education, we should hope to modify the machine until it could be relied on to produce definite reactions to certain commands. This would be the beginning of the process. I will not attempt to follow it further now.” Here he is sowing the seeds of a process for infusing cognition into machines.
John von Neumann says (Hixon Lecture 1948) “It has often been claimed that the activities and functions of the human nervous system are so complicated that no ordinary mechanism could possibly perform them. It has also been attempted to name specific functions which by their nature exhibit this limitation. It has been attempted to show that such specific functions, logically, completely described, are per se unable of mechanical, neural realization. The McCulloch-Pitts result puts an end to this. It proves that anything that can be exhaustively and unambiguously described, anything that can be completely and unambiguously put into words, is ipso facto realizable by a suitable finite neural network. Since the converse statement is obvious, we can therefore say that there is no difference between the possibility of describing a real or imagined mode of behavior completely and unambiguously in words, and the possibility of realizing it by a finite formal neural network.”
This is an important insight that brings out the closeness of the genetic and brain computing modes on one hand, and Turing-machine-based computing and neural networks on the other. “The two concepts are co-extensive. A difficulty of principle embodying any mode of behavior in such a network can exist only if we are also unable to describe that behavior completely.” He asserts that there is an equivalence between logical principles and their embodiment in a neural network: while in the simpler cases the principles might furnish a simplified expression of the network, it is quite possible that in cases of extreme complexity the reverse is true.
Cognition is the ability to process information, apply knowledge, and change the circumstance. Cognition is associated with intent and its accomplishment through various processes that monitor and control a system and its environment. It is associated with a sense of “self” (the observer) and the systems with which it interacts (the environment, or the “observed”), and it makes extensive use of time and history in executing and regulating the tasks that constitute a cognitive process. However, as Cockshott et al. point out (Computation and Its Limits, Oxford University Press, 2012), Turing’s system is limited to single, sequential processes. It is not amenable to expressing dynamic concurrent processes, in which changes in one process can influence other processes while their computations are still in progress, and this is an essential requirement for describing cognitive processes. Concurrent task execution and regulation require a systemic view of the context, constraints, communication and control, in which the identities, autonomic behaviors and associations of the individual components must also be part of the description. However, an important implication of Gödel’s incompleteness theorem is that a finite description cannot contain itself as a proper part. In other words, it is not possible to read yourself or process yourself as a process.
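The concurrency point can be made concrete with a small sketch (all names and values here are illustrative assumptions, not from the source): two threads share state, and a change made by one alters what the other computes while its computation is still in progress, an interaction a single sequential tape does not model.

```python
import threading
import time

# Shared state visible to both concurrent processes.
shared = {"setpoint": 10}

def controller():
    # The first process: changes the shared state mid-run.
    time.sleep(0.02)
    shared["setpoint"] = 20

def worker(results):
    # The second process: its successive readings are influenced by the
    # controller's change while this loop is still executing.
    for _ in range(50):
        results.append(shared["setpoint"])
        time.sleep(0.002)

results = []
t_worker = threading.Thread(target=worker, args=(results,))
t_ctrl = threading.Thread(target=controller)
t_worker.start()
t_ctrl.start()
t_ctrl.join()
t_worker.join()

# Early readings reflect the original setpoint; later ones reflect the
# change made concurrently by the other process.
print(results[0], results[-1])
```

The timing values are arbitrary; the point is only that the worker's trajectory depends on an intervention that happens while it is still computing.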
The last paragraph of the last chapter of Computation and Its Limits brings out the limitation of current computing models in infusing the cognition that yields intelligence, sentience and resilience with self-identity: “The key property of general-purpose computers is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves.”
Autonomic computing, by definition, implies two components in the system: 1) the observer (the “self”) and 2) the observed (the environment), with which the observer interacts by monitoring and controlling the aspects that matter. It also implies that the observer is aware of systemic goals, in terms of best practices, against which it measures and controls its interaction with the observed. Autonomic computing systems attempt to model system-wide actors and their interactions in order to monitor and control various domain-specific goals, again in terms of best practices. Cellular organisms, however, take a more selfish view, defining their models around how they interact with their environment. The autonomic behavior in living organisms is attributed to the “self” and “consciousness”, which contribute to defining one’s multiple tasks to reach specific goals within a dynamic environment and to adapting behavior accordingly.
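The observer/observed split can be sketched as a simple monitor-analyze-plan-execute loop (a hypothetical illustration; the class and method names are invented, and the "best practice" is reduced to a numeric setpoint):

```python
class Observed:
    """The environment: exposes sensors and accepts control actions."""
    def __init__(self, load):
        self.load = load

    def sense(self):
        return self.load

    def actuate(self, delta):
        self.load += delta


class Observer:
    """The 'self': knows a systemic goal and monitors/controls the observed."""
    def __init__(self, goal):
        self.goal = goal       # best practice expressed as a setpoint
        self.history = []      # accumulated experience of past measurements

    def step(self, env):
        measurement = env.sense()          # Monitor
        error = self.goal - measurement    # Analyze
        action = 0.5 * error               # Plan: simple proportional rule
        env.actuate(action)                # Execute
        self.history.append(measurement)


env = Observed(load=100.0)
self_model = Observer(goal=50.0)
for _ in range(20):
    self_model.step(env)

# The observer has driven the observed toward its goal of 50.0.
print(round(env.sense(), 3))
```

The error halves on every step, so after twenty iterations the environment sits essentially at the goal; the `history` list stands in for the experience an autonomic system accumulates about the observed.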
The autonomy in cellular organisms comes from at least three sources:
1) Genetic knowledge, transmitted by the survivor to its successor in the form of executable workflows and control structures that describe stable patterns for optimally deploying the available resources to assure the organism’s safekeeping in interacting with its environment;
2) The ability to dynamically monitor and control the organism’s own behavior, along with its interaction with its environment (neural networks), using the genetic descriptions; and
3) The development of a history, by memorizing transactions and identifying new associations through analysis.
In short, the genetic computing model allows the formulation of descriptions of workflow components that capture not only the content of how to accomplish a task but also the context, constraints, control and communication needed to assure the systemic coordination that accomplishes the overall purpose of the system.
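Such a workflow-component description might be sketched as follows (a minimal, hypothetical data structure; the field names, the constraint check and the example task are all invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowComponent:
    """A task description carrying content plus context, constraints,
    control and communication, per the genetic computing analogy."""
    name: str
    task: callable                                   # content: how to do it
    context: dict = field(default_factory=dict)      # where/when it applies
    constraints: dict = field(default_factory=dict)  # resource limits
    controls: list = field(default_factory=list)     # supervisory hooks

    def run(self, **inputs):
        # Enforce declared constraints before doing the work.
        for key, limit in self.constraints.items():
            if inputs.get(key, 0) > limit:
                raise ValueError(f"{self.name}: constraint on {key} violated")
        result = self.task(**inputs)
        # Communicate the outcome to the overlay (supervisory) control.
        for hook in self.controls:
            hook(self.name, result)
        return result


log = []
component = WorkflowComponent(
    name="metabolize",
    task=lambda energy: energy * 0.9,
    constraints={"energy": 100},
    controls=[lambda name, result: log.append((name, result))],
)
print(component.run(energy=50))
```

The point of the sketch is structural: the "how" (the task body) travels together with the conditions under which it may run and the channels through which its outcome is reported upward.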
A path to cognitive software is to first integrate the symbolic and neural-network computing models, and then infuse a sense of self together with a composition and control scheme, creating autonomous behavior that allows the system to know its intent and to configure, monitor and control its own behavior in the face of the non-deterministic impact of its interactions, both internal and with its environment. Cognition is the first step from computing and communication toward consciousness and culture (the ability of individuals to learn habits from one another, resulting in behavioral diversity between groups), which together provide a global information-processing system. Infusing cognition into computing and communications software requires us to first understand the true nature of cognition: how 4E cognition evolved from single cells, multi-cellular organisms, plants, and animals to humans.
It would also allow us to design next generation software systems with various degrees of consciousness that are appropriate to different tasks.
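One way to picture the proposed integration of the two computing models is a toy pipeline (the weights, threshold and action names below are invented assumptions, not a prescription from the text): a sub-symbolic scorer produces a judgment, and an explicit symbolic rule layer turns it into an intent-driven action.

```python
import math

def neural_score(features, weights):
    """Sub-symbolic side: a one-unit 'network', weighted sum squashed
    into [0, 1] by a sigmoid."""
    s = sum(f * w for f, w in zip(features, weights))
    return 1.0 / (1.0 + math.exp(-s))

def symbolic_policy(score, intent):
    """Symbolic side: an explicit rule encoding the system's intent."""
    if score >= intent["threshold"]:
        return intent["on_high"]
    return intent["on_low"]

intent = {"threshold": 0.5, "on_high": "engage", "on_low": "stand_by"}
score = neural_score([1.0, 0.5], [2.0, -1.0])  # sigmoid(1.5), about 0.82
print(symbolic_policy(score, intent))
```

The division of labor is the point: the numeric layer supplies graded perception, while the rule layer keeps intent inspectable and controllable, which is what a supervisory (overlay) architecture would monitor.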
I am following two venues that focus on future computing paradigms going beyond Turing machines and neural networks. First, at the last IS4SI conference in 2017, many papers were devoted to these ideas. Second, the first workshop on Software Engineering for Cognitive Services, held in 2018 at ACM ICSE, paved a path for new ideas in software engineering for cognitive services that bring together symbolic computing and neural networks.