“Computer science is concerned with information in much the same sense that physics is concerned with energy… The computer scientist is interested in discovering the pragmatic means by which information can be transformed.”
— “An Undergraduate Program in Computer Science-Preliminary Recommendations, A Report from the ACM Curriculum Committee on Computer Science.” Comm. ACM 8(9), Sep. 1965, pp. 543-552.
“Computation is what happens dynamically to information from one moment to the next.”
— Susan Stuart and Gordana Dodig-Crnkovic (Eds.), “Computation, Information, Cognition: The Nexus and the Liminal.” Cambridge Scholars Publishing, Newcastle, 2007.
There are four major trends shaping the future of information technologies:
- The advent of multi-core and many-core processors has upended computing device design, yielding orders-of-magnitude improvements in price/performance. According to Intel, by 2015 a typical processor chip will likely consist of dozens to hundreds of cores; parts of each core will be dedicated to specific functions such as network management, graphics, encryption, and decryption, while the majority of cores will be available to application programs.
- As the number of processors deployed to meet global demand increases (Google alone expects to deploy 10 million servers), the cost of the electricity needed to run a company’s servers will soon far exceed their purchase price. In addition, the scale of computing elements involved, and the resulting component failure probabilities, demand new ways to address the resiliency and efficiency of information-processing services.
- Management spending has steadily increased over the last decade and now consumes around 70% of the total IT budget in an organization; that is, an estimated 70 cents of every IT dollar goes to “keeping the lights on.” For every dollar spent on developing software, another $1.31 is spent on its operation and maintenance. To be sure, this spending has improved the reliability, availability, performance and security of deployed services through automation, but it has also increased complexity, requiring high-maintenance software/hardware appliances along with costly, highly skilled professional-services personnel to assure distributed transaction execution. More automation is producing point solutions and tool fatigue.
- The demand for large-scale, web-based multimedia services with high availability and security is putting pressure on reducing the time from system development to production deployment. The demand for resiliency, efficiency and scaling of these services is driving the need to share distributed computing resources among service developers, service operators and service users in real time, at run time, to meet changing business priorities, workload fluctuations and latency constraints.
What has not improved is our understanding of large-scale distributed computing systems and how they must evolve to support ever-increasing global communication, collaboration and commerce at an ever-faster pace. Three major issues must be addressed to improve the resiliency, efficiency and scaling of next-generation large-scale distributed computing systems, so that they can process information in real time at the scale and scope of global communication, collaboration and commerce:
- The current trend of designing application-aware infrastructure cannot scale to provide end-to-end service visibility and control at run time when the infrastructure is distributed, heterogeneous, owned by different operators and designed by different vendors with conflicting profit motives. Any solution that embeds application awareness in the infrastructure, and requires the infrastructure to be manipulated at run time to meet changing business priorities, workload fluctuations and latency constraints, will only increase complexity, reduce transparency, and fail to scale across distributed environments.
- The converse trend of embedding infrastructure awareness in applications, so that they control resources at run time, suffers the same fate of not scaling across different distributed infrastructures: either developers must embed knowledge about the infrastructure in their applications, or a myriad of orchestrators must integrate the distributed, heterogeneous infrastructure.
- In a large-scale, dynamic distributed computation supported by myriad infrastructure components, the increased component failure probabilities introduce non-determinism (for example, Google has observed emergent behavior when scheduling distributed computing resources in large numbers) that must be addressed by a service control architecture that decouples the functional and non-functional aspects of computing.
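To make the third point concrete, a back-of-the-envelope calculation (not from the CFP, just a standard independence argument) shows why component failure stops being an exception and becomes the steady state at scale: with n independent components each failing with small probability p in a given interval, the chance that at least one fails is 1 − (1 − p)^n, which approaches certainty as n grows.

```python
# Illustrative sketch: aggregate failure probability at scale, assuming
# n independent components with per-interval failure probability p.

def p_any_failure(n: int, p: float) -> float:
    """Probability that at least one of n independent components fails."""
    return 1.0 - (1.0 - p) ** n

# Even with a very reliable component (p = 1e-5), a million-component
# system sees at least one failure almost surely in every interval.
for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9}: P(at least one failure) = {p_any_failure(n, 1e-5):.4f}")
```

At n = 100 the aggregate failure probability is still negligible, but at a million components it is effectively 1, which is why resiliency must be designed into the service control architecture rather than bolted onto individual components.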
In essence, current datacenter and cloud computing paradigms, with their server-centric and narrow-bandwidth origins, focus on embedding intelligence (mostly automation through ad-hoc programming) in the resource managers. The opportunity exists, however, to discover new post-hypervisor computing models that decouple service management from infrastructure management systems at run time, assuring end-to-end distributed service transaction safety and survival while avoiding the current complexity cliff and tool fatigue. As Cockshott et al. observed, “the key property of general-purpose computers is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves.” (Cockshott, P., MacKenzie, L. M., and Michaelson, G. (2012). Computation and its Limits. Oxford University Press, Oxford.) Any new computing, management and programming models that integrate computers with the computations they perform must support concurrency, mobility and synchronization in order to execute distributed processes with global policy-based management and local control. We must find ways to cross the Turing barrier, encompassing both the computer and the computed, to deal with dynamic distributed processes (concurrent and asynchronous) at large scale. This should also address the need to bring development and operations (DevOps) together, with a new approach that integrates the functional and non-functional requirements of dynamic process management.
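The separation argued for above can be sketched in miniature. The following is a hypothetical illustration (the names `Policy` and `managed` are ours, not an API from the text): the functional task contains no resiliency logic at all, while a generic manager enforces a globally set policy through local control, so the non-functional behavior can change without touching the application code.

```python
# Minimal sketch of decoupling functional code from non-functional management:
# the task knows nothing about retries; a generic manager applies the policy.

from dataclasses import dataclass
from typing import Callable, Optional, TypeVar

T = TypeVar("T")

@dataclass
class Policy:
    """Non-functional intent: set globally, enforced locally."""
    max_retries: int = 3

def managed(task: Callable[[], T], policy: Policy) -> T:
    """Execute a purely functional task under a resiliency policy."""
    last_error: Optional[Exception] = None
    for _ in range(policy.max_retries + 1):
        try:
            return task()          # functional aspect: unchanged application code
        except Exception as e:     # non-functional aspect: handled by the manager
            last_error = e
    raise RuntimeError("policy exhausted") from last_error

# Usage: a flaky task succeeds under the policy without embedding retry logic.
attempts = {"n": 0}
def flaky() -> str:
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("transient failure")
    return "ok"

print(managed(flaky, Policy(max_retries=3)))  # prints "ok"
```

The point of the sketch is the division of labor: tightening or relaxing the policy requires no change to `flaky`, which is the property the paragraph above asks of service management decoupled from the managed computation.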
As Gordana Dodig-Crnkovic points out in her paper “Alan Turing’s Legacy: Info-Computational Philosophy of Nature” (in Gordana Dodig-Crnkovic and Raffaela Giovagnoli (Eds.), Natural Computing/Unconventional Computing and its Philosophical Significance, AISB/IACAP World Congress 2012, Birmingham, UK, 2-6 July 2012), information and computation are two complementary concepts representing structure and process, being and becoming. New ideas in information technology must integrate both structure and process to deal with the real-world interactions of concurrent, asynchronous computational processes, which are the most general representation of information dynamics and go beyond the current Turing machine computing model.
The CDCGM track has a proven track record of addressing some of these issues and is looking for new ideas, to be presented in Parma, Italy, during June 23-25, 2014.
Call For Papers
* Track web site: http://cdcgm.dieei.unict.it
* Conference web site: http://www.wetice.org
=== CDCGM CFP ===
Cloud Computing is becoming the reference model in the field of Distributed Service Computing. The wide adoption of virtualization technology and Service-Oriented Architecture (SOA) within powerful, widely distributed data centers has allowed developers and consumers to access a wide range of services and computational resources through a pay-per-use model.
There is an ever-increasing demand to share computing resources among cloud service developers, users and service providers (operators) to create, consume and assure services at larger and larger scale. To meet web-scale demand, current computing, management and programming models are evolving to address the complexity, and resulting tool fatigue, of current distributed datacenters and clouds. New unified computing theories and implementations are required to address the resiliency, efficiency and scale of global web-scale service creation, delivery and assurance. While current-generation virtualization technologies, which focus on infrastructure management and infusing application awareness into the infrastructure, have served us well, they cannot scale in a distributed environment where multiple owners provide infrastructure that is heterogeneous and rapidly evolving. New architectures must evolve that focus on intelligent services deployed over dumb infrastructure on fat, stupid pipes.
The goal of CDCGM 2014 is to attract young researchers, Ph.D. students, practitioners, and business leaders to contribute in the area of distributed Clouds, Grids and their management, especially in the development of computing, management and programming models, technologies, frameworks, and middleware.
You are invited to submit research papers to the following areas:
Discovering new application scenarios, proposing new operating systems, programming abstractions and tools with particular reference to distributed Grids and Clouds and their integration.
Identifying the challenging problems that still need to be solved such as parallel programming, scaling and management of distributed computing elements.
Reporting results and experiences gained in building dynamic Grid-based middleware, Computing Cloud and workflow management systems.
New infrastructures, middleware, frameworks for Grid and Cloud computing platforms.
Architectural and Design patterns for building unified and interoperable Grid and Cloud platforms.
Virtualization of Grid resources and their management within private/hybrid Cloud.
Programming Models and Paradigms to deploy, configure and control Cloud and Grid Services.
Infrastructures, models and frameworks for Service Migration to/from Cloud and Grid platforms.
Integration of suitable trust systems within Grid and Cloud platforms.
Security, Privacy, Trust for Public, Private, and Hybrid Clouds.
Policies, algorithms and frameworks for Service Level Agreement (SLA) within Public, Private and Hybrid Clouds.
New models and technologies for management, configuration, accounting of computational resources within Cloud and Grid systems.
Cloud/Grid Composition, Federation and Orchestration.
High Performance Cloud Computing.
Interoperability between native High Performance Infrastructures and Public/Hybrid Cloud.
Workflow Management within Clouds, Grid and unified computing infrastructures.
Green Cloud Computing.
Big Data models and frameworks, with particular emphasis on the integration of existing systems into Public/Hybrid Clouds.
Cloud Storage and Data management within Cloud platforms.
=== Important dates ===
** Paper submissions deadline: February 4, 2014 **
** Notification to authors: March 14, 2014 **
** IEEE Camera Ready: April 11, 2014 **
=== Steering committee ===
Prof. Antonella Di Stefano ….. University of Catania, ITALY
Dr. Rao Mikkilineni, PhD ……. C3DNA, California, USA
Prof. Giuseppe Pappalardo ….. University of Catania, ITALY
Prof. Corrado Santoro ……… University of Catania, ITALY
Prof. Emiliano Tramontana ….. University of Catania, ITALY
=== Program committee ===
Mauro Andreolini……… University of Modena and Reggio Emilia, Italy
Cosimo Anglano ………. University of Piemonte Orientale, Italy
Mario Cannataro ……… University of Calabria, Italy
Giancarlo Fortino ……. University of Calabria, Italy
Pilar Herrero ……….. Universidad Politecnica de Madrid, Spain
Maciej Koutny ……….. University of Newcastle upon Tyne, UK
Carlo Mastroianni ……. ICAR-CNR, Italy
Rani Mikkilineni …….. Santa Clara University, CA, USA
Agostino Poggi ………. University of Parma, Italy
Eric Renault ………… Institut Mines-Télécom — Télécom SudParis, France
Ilias Savvas ………… Higher Tech. Edu. Institute of Larissa, Greece
Brian Vinter ………… University of Copenhagen, Denmark
Luca Vollero ………… CINI ITeM, Italy
Ian Welch …………… Victoria University of Wellington, New Zealand
Luyi Wang …………… West Virginia University, USA
Zhuzhong Qian ……….. Nanjing University, P.R.China
Lucas Nussbaum ………. Université de Lorraine, France
Rosario Giunta ………. Università di Catania, Italy
Giuseppe M.L. Sarné ….. Università Mediterranea di Reggio Calabria, Italy
Domenico Rosaci ……… Università Mediterranea di Reggio Calabria, Italy
=== Submission instructions ===
All submitted papers will be peer-reviewed, and selected papers will be published in the WETICE 2014 Conference Proceedings by IEEE.
Participants are expected to submit an original research paper or a position paper, not submitted or published elsewhere, through the EasyChair web site: https://www.easychair.org/account/signin.cgi?conf=wetice2014
If you do not have an EasyChair account, you can submit the paper to firstname.lastname@example.org, email@example.com or firstname.lastname@example.org.
Submissions should follow the IEEE format (single-spaced, two columns, 10pt, Times font), at most SIX PAGES including figures.
Papers should be in either PS or PDF format; papers in other formats may not be considered for review. Please also check that the submission includes:
The title of the paper.
The names and affiliations of the authors.
A 150-word abstract.
At most eight keywords.
=== Indexing ===
All papers published in the WETICE 2014 Proceedings will be indexed, among others, in the following DBs: