DISRUPTIVE interactive end-user devices are reaching the consumer markets. Following the release by Microsoft of HoloLens (a headset which projects holographic images into real space) and the $4.5 billion valuation of Magic Leap (a startup producing head-mounted displays for augmented reality applications), similar technologies are now available in the consumer market, with virtual reality headsets commercialized by HTC, LG, Oculus VR, Samsung, Sony, and others. Simultaneously, the very fast growth of the Internet of Things brings a wide range of new products, ranging from low-throughput sensor/actuator devices such as thermostats and connected toasters to complex systems such as coordinated sets of surveillance cameras covering an entire neighborhood.

We therefore expect the very fast emergence of fog applications, defined as applications that build a seamless interactive continuum between the physical and digital worlds, blending atoms and bits. Besides the obvious usage of these technologies for entertainment purposes, numerous applications are also expected in domains as diverse as live event coverage, health care, engineering, real estate, military, retail, and education.

Following the current trend of ever-increasing importance of mobile devices and infrastructures, end users will expect fog computing applications to be mobile. However, applications will require massive computation and storage resources to remain continuously available and responsive while processing potentially very large volumes of data.

Yet, although large quantities of computing resources are readily available in cloud platforms, traditional cloud infrastructures are ill-equipped to fully address the challenges of future fog computing applications. Because public cloud providers rely on a handful of very large data centers, their resources are often located very far from the end users, which does not match the requirements of fog applications:

  • Interactive fog applications require ultra-low network latencies. To guarantee an “instantaneous” feeling for the users, applications such as augmented reality require that end-to-end latencies (including all networking and processing delays) do not exceed 10-20 ms. However, measured latencies to the closest available data centers typically range from 20-30 ms over high-quality wired networks up to 50-150 ms over 4G mobile networks, making traditional cloud resources unsuited for demanding applications.
  • Throughput-oriented fog applications require local computations. The sources of application input data and the destination of computation results are often located geographically close to each other. However, with only a handful of data centers from which to choose the location of data processing, input and output data are often transferred needlessly over long distances. This wastes long-distance networking resources, and may even create legal issues if the data centers are not located in the same country as the end users.
  • Dependable fog applications must tolerate poor network connections. Although cloud data centers are typically provisioned with excellent network connectivity, the same is not always true on the client side. Depending on the application, the availability of stable high-bandwidth wireless network connectivity may not always be guaranteed in locations relevant to the application. For example, an application dedicated to supporting emergency services during their operation must work seamlessly regardless of the location of the emergency (underground, in rural areas, etc.) where 4G connectivity is often unreliable or even totally unavailable.

To address these challenges, the mobile networking industry is heavily investing in fog computing platforms located at the edge of the networks, in immediate proximity to the end users [14, 15, 16]. Instead of treating the mobile operator’s network as a high-latency dumb pipe between the end users and the external service providers, fog platforms aim at deploying cloud functionalities in immediate proximity to the end users, inside or close to the mobile access points. Doing so will deliver added value to the content providers and the end users by enabling new types of user experience. Simultaneously, it will generate extra revenue streams for the mobile network operators, by allowing them to position themselves as fog computing operators and to rent their already-deployed infrastructure to content and application providers.

Achieving this vision will require an active fog computing technology/innovation ecosystem as well as a strongly-skilled workforce capable of designing future fog computing platforms and exploiting them to their fullest extent. The FogGuru project will foster the emergence of a new generation of researchers and professionals, able to work at the edge between science and innovation to effectively design the technologies necessary to deploy and operate scalable fog computing infrastructures, and to develop innovative fog computing applications. The FogGuru research will be carried out by eight talented Early-Stage Researchers (ESRs) who will jointly develop the missing technologies while training themselves to become fog computing gurus.

The ESRs’ work will be organized along the following major Research Objectives (RO) which are poorly addressed by current research efforts:

RO1: To Manage Resources and Applications in Scalable Fog Platforms. While traditional clouds are composed of many powerful machines located in a handful of data centers and interconnected by very high-speed networks, fog computing platforms are composed of a very large number of points-of-presence, each hosting a few weak and potentially unreliable servers, interconnected with each other by commodity long-distance networks. This broad geographical distribution creates difficult challenges such as optimizing the usage of resources whose distribution may not always match the distribution of user demands, migrating computations and data in the presence of end-user mobility, and automatically detecting and correcting anomalies.

RO2: To Adapt Stream-Processing Middleware Systems for Fog Applications. To enable the development of innovative applications which fully exploit the specificities of fog computing platforms, new programming abstractions will be necessary. We strongly believe that the Real-Time Stream Processing model, which was initially developed by the Data Analytics community, is also extremely well-suited for fog computing applications: it provides developers with an easy-to-understand development environment, while harnessing the full capacity of fog infrastructures to achieve extremely high performance. Designing an application as a workflow of operators offers a simple yet powerful abstraction which facilitates application deployment and run-time management in complex distributed environments. This research objective aims at designing and developing the missing functionalities to adapt stream-processing middleware systems to fog computing environments.
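To make the "workflow of operators" abstraction concrete, the following is a minimal, hypothetical sketch (not the project's actual middleware API): a stream application expressed as a chain of operators, each consuming one stream and producing another. In a fog deployment, each operator could in principle be placed on a different edge node, close to its data source. All operator names and parameters here are illustrative assumptions.

```python
def source(readings):
    """Source operator: emits raw sensor readings one by one."""
    yield from readings

def filter_valid(stream, lo=-40.0, hi=60.0):
    """Filter operator: drops implausible temperature readings."""
    for r in stream:
        if lo <= r <= hi:
            yield r

def window_avg(stream, size=3):
    """Aggregation operator: average over tumbling windows of `size` items."""
    window = []
    for r in stream:
        window.append(r)
        if len(window) == size:
            yield sum(window) / size
            window = []

# Compose the workflow: source -> filter -> windowed average.
readings = [21.5, 22.0, 999.0, 22.5, 23.0, 21.0, 24.0]
pipeline = window_avg(filter_valid(source(readings)), size=3)
print(list(pipeline))  # one average per complete window of valid readings
```

The key property this sketch illustrates is that each operator is independent of its neighbors and communicates only through its input/output streams, which is what makes it feasible for a middleware to deploy, migrate, or replicate individual operators across geo-distributed fog nodes.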

RO3: To Develop Blueprints for Innovative Fog Applications. Fog computing enables a whole new range of IoT-driven applications: (i) latency-critical applications which require client-server latencies of the order of milliseconds to ensure a smooth user experience; and (ii) context-aware and geo-distributed applications where processing should be moved closer to the data source in order to reduce network traffic and enhance scalability. The first class includes, e.g., virtual reality gaming applications, hyper-interactive shopping apps, and remote treatment applications in healthcare. The second class spans from adaptive traffic-aware traffic light control systems to IoT big data analytics services, and from decentralized analysis of security video streams to monitoring of wind farms. This research objective will deliver blueprints for both types of applications, in the form of templates running on top of the stream-processing middleware developed in RO2 and further verticalized for experimentation in the smart city of Valencia.