ECHO Platform & Analytics
Pushkara Ravindra, Aakash Khochare, Prateeksha Varshney, Siva Prakash Reddy
The growth of the Internet of Things (IoT) is providing unprecedented access to observational data about physical infrastructure as well as social lifestyles. Such data streams are integrated with historic data and analytics models to make intelligent decisions. Traditionally, all this decision making and analytics has taken place in the Cloud due to its easy, service-oriented access to seemingly infinite resources. Data is streamed from the edge devices and sensors to the data center, and control decisions are communicated back from the Cloud analytics to the edge for enactment. This, however, has several downsides. The network bandwidth needed to send high-fidelity video streams to the Cloud can be punitive, and the round-trip latency required to move data from the edge to the Cloud and control signals back can be high. Clouds also have a pay-as-you-go model where data transfers, compute, and storage are all billed.
An integral part of IoT deployments are Edge and Fog devices that serve as gateways to interface with sensors and actuators in the field. These are typically collocated with, or within a few network hops of, the sensors, and have non-trivial compute capacity. Rather than having them merely relay data and control signals between the field devices and the Cloud, these Edge and Fog resources should be actively considered as first-class computing platforms that complement the Cloud-centric model to reduce network transfer time and costs.
We propose ECHO, an adaptive orchestration platform for hybrid dataflows across Cloud, Fog and Edge resources. With this platform, we aim to orchestrate IoT applications with hybrid data sources across edge resources of diverse capabilities and varying network connectivity. The platform exposes a RESTful service to the user that allows interfacing with the edge resources and the applications that run on them. ECHO also supports many native runtime engines, such as TensorFlow, Apache Storm and Apache Edgent, to name a few.
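As a rough illustration of how a client might drive such a RESTful service, the sketch below builds requests to register an edge resource and deploy a dataflow on a runtime engine. The endpoint paths, payload fields and identifiers here are hypothetical placeholders for illustration, not ECHO's actual API.

```python
import json

# NOTE: endpoint paths and payload fields below are hypothetical,
# not ECHO's actual REST interface.
def register_resource(resource_id, host, tags):
    """Build a request tuple to register an edge/Fog resource with the platform."""
    body = {"id": resource_id, "host": host, "tags": tags}
    return ("POST", "/resources", json.dumps(body))

def deploy_dataflow(name, engine, resource_ids):
    """Build a request tuple to deploy a dataflow on a chosen runtime engine."""
    body = {"name": name, "engine": engine, "resources": resource_ids}
    return ("POST", "/dataflows", json.dumps(body))

# Usage: the resulting (method, path, body) tuples can be sent with any HTTP client.
method, path, body = register_resource("pi3-01", "10.0.0.5", ["camera"])
```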
Motivated by the high network bandwidth required to transfer video streams for centralized analytics, we are investigating opportunities for distributed execution of Deep Neural Networks across Edge, Fog and Cloud. The current interest in Deep Neural Networks has led to the development of a wide variety of architectures for the same task, each exhibiting a different performance-versus-accuracy trade-off. We are characterizing the performance of these networks on constrained devices like the Raspberry Pi 3 and accelerated devices like the NVIDIA Jetson TX1, which will allow for smarter deployment of these neural networks on such devices.
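A minimal harness for this kind of latency characterization might look as follows. The `benchmark_inference` helper and the stand-in workload are illustrative; an actual study would pass in a real model's forward pass and representative inputs.

```python
import time
import statistics

def benchmark_inference(infer, sample, warmup=3, runs=20):
    """Time repeated calls to infer(sample) and report latency statistics."""
    for _ in range(warmup):
        infer(sample)  # warm-up runs exclude one-time initialization costs
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        infer(sample)
        latencies.append((time.perf_counter() - start) * 1e3)  # milliseconds
    return {
        "mean_ms": statistics.mean(latencies),
        "stdev_ms": statistics.stdev(latencies),
        "max_ms": max(latencies),
    }

# Stand-in for a model's forward pass, purely for illustration:
stats = benchmark_inference(lambda x: sum(v * v for v in x), list(range(1000)))
```

On a real device, comparing these statistics across network architectures (and across a Pi 3 versus a Jetson TX1) is what informs the placement decision.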
The source code for ECHO can be found here: https://github.com/dream-lab/echo
- Pushkara Ravindra, Aakash Khochare, Siva Prakash Reddy, Sarthak Sharma, Prateeksha Varshney and Yogesh Simmhan, ECHO: An Adaptive Orchestration Platform for Hybrid Dataflows across Edge and Cloud, International Conference on Service-Oriented Computing (ICSOC), 2017 (To Appear). https://arxiv.org/abs/1707.00889
- P. Varshney and Y. Simmhan, “Demystifying Fog Computing: Characterizing Architectures, Applications and Abstractions,” in IEEE International Conference on Fog and Edge Computing (ICFEC), 2017. https://arxiv.org/abs/1702.06331
- Y. Simmhan, “IoT Analytics Across Edge and Cloud Platforms,” IEEE Internet of Things Newsletter, 2017. [Download PDF]
Scheduling across Edge, Fog and Cloud
Rajrup Ghosh, Siva Prakash Reddy, Prateeksha Varshney
Scheduling single queries/tasks and dataflows of tasks on Edge, Fog and Cloud is challenging due to the diversity in the compute capacities of the various resources, the transient and unreliable nature of edge and Fog resources, the mobility of edge devices, the dynamism of the network, and the variability of the data sources and IoT applications. There have been limited efforts in characterizing these resources, the types of applications that will use them, their QoS requirements, and the scheduling strategies to be used. As Edge and Fog computing become pervasive, this is a key gap that must be addressed to support novel IoT applications and make the best use of such emerging distributed resources.
We have examined distributed scheduling strategies for Complex Event Processing (CEP) queries that are composed as a dataflow for execution over event streams from IoT sensors. These query dataflows can operate on edge devices and Cloud VMs, and we need to plan the mapping of each query onto specific devices while ensuring that the end-to-end latency of the dataflow is minimized. At the same time, these compute resources have bounded compute capacity in terms of the number of queries that can run on each, and the edge devices additionally have limited energy capacity due to the recharge cycle of their solar-powered batteries. We define this distributed scheduling problem as an optimization problem, which is NP-complete; we solve it optimally using a brute-force approach, as well as approximately using a Genetic Algorithm metaheuristic that is much faster and whose solutions approach the optimal. Another variant of this problem that we tackle is when these dataflows arrive and depart dynamically, with heuristics proposed to manage their placement on the available resources.
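A minimal sketch of such a Genetic Algorithm is given below, assuming a per-query latency matrix and per-resource capacity limits, with capacity violations penalized in the fitness function. The encoding, operators and parameters are simplified for illustration and do not reproduce the formulation in the paper (which also models energy constraints).

```python
import random

def fitness(placement, latency, capacity):
    """Total latency of a query-to-resource placement, with a large
    penalty added for each unit of capacity overload."""
    load = [0] * len(capacity)
    for res in placement:
        load[res] += 1
    penalty = 1000 * sum(max(0, load[r] - capacity[r]) for r in range(len(capacity)))
    return sum(latency[q][placement[q]] for q in range(len(placement))) + penalty

def ga_place(latency, capacity, pop_size=30, gens=50, mut_rate=0.2, seed=42):
    """Evolve a placement (one resource index per query) minimizing fitness."""
    rng = random.Random(seed)
    n_q, n_r = len(latency), len(capacity)
    pop = [[rng.randrange(n_r) for _ in range(n_q)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, latency, capacity))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_q)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:           # random-reset mutation
                child[rng.randrange(n_q)] = rng.randrange(n_r)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: fitness(p, latency, capacity))

# Toy instance: two queries prefer resource 0, two prefer resource 1,
# and each resource can host at most 2 queries.
latency = [[1, 5], [1, 5], [5, 1], [5, 1]]
best = ga_place(latency, capacity=[2, 2])
```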
We are also generalizing this problem to one of generic dataflow execution across edge, Fog and Cloud devices where the application behavior is dynamic and the resources themselves may have mobility and transience. Here, issues of moving compute to data, data replication, distributed monitoring, opportunistic scheduling, etc. will come into play.
- https://github.com/dream-lab/ec-sim, Simulator for CEP dataflow execution across edge and Cloud resources
- R. Ghosh and Y. Simmhan, “Distributed Scheduling of Event Analytics across Edge and Cloud,” arXiv, arXiv:1608.01537, 2016. [Download PDF]
Scheduling on the Cloud
This research focuses on scheduling jobs on hybrid Clouds. Cloud computing has emerged in the last decade as a popular distributed computing service offered by commercial providers, and scheduling applications on Clouds is an active research area. Public Clouds offer pay-as-you-go access to elastic resources that can be acquired and released on demand. Major public Cloud providers include Amazon AWS, Microsoft Azure, Google and IBM Bluemix. Among IaaS offerings, on-demand Virtual Machines (VMs) give access to compute resources and are one of the most frequently used services. These fixed-price on-demand VMs, offered universally by IaaS providers, charge a fixed rate per hour (Amazon AWS EC2) or per minute (Google Compute, Azure VM) for each VM type. An alternative pricing scheme, called spot instances by AWS and preemptible VMs by Google, offers deep discounts relative to the fixed price for a given VM size, but guarantees neither availability nor reliability. While fixed-price VMs are cost-effective, spot-priced VMs offer much higher discounts at the expense of reliability. For users running large workloads with many tasks on the Cloud, such deep discounts are valuable and should motivate them to incorporate spot VMs. However, users who want the benefits of both reliability and cost reduction require scheduling strategies for managing their workloads when running on spot VMs.
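The effect of billing granularity alone can be seen with a small back-of-the-envelope calculation; the rate and job length below are illustrative, not actual provider prices.

```python
import math

def billed_cost(run_minutes, rate_per_hour, granularity_minutes):
    """Round usage up to the provider's billing granularity, then charge pro-rata."""
    units = math.ceil(run_minutes / granularity_minutes)
    return units * granularity_minutes / 60 * rate_per_hour

# A 61-minute job at an illustrative $0.10/hour rate:
hourly = billed_cost(61, 0.10, 60)      # per-hour billing: charged as 2 full hours
per_minute = billed_cost(61, 0.10, 1)   # per-minute billing: charged as 61 minutes
```

Under per-hour billing the one extra minute costs a whole additional hour, which is why billing granularity matters when packing many short tasks onto VMs.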
Currently, we are investigating the scheduling of jobs on Amazon Web Services (AWS) spot and on-demand Clouds. AWS's spot VM instances and Google's preemptible VMs are often cheaper, but carry the added risk of being revoked without warning. The challenge is to exploit these spot Clouds to reduce cost. We are investigating automated scheduling algorithms for a Bag of Tasks (BoT) that forms a workload. Our aim is to manage decisions on spot and fixed-price VM acquisition and release, and on task placement, checkpointing and migration, within a guaranteed completion time specified by the user, while minimizing the cost they pay to the Cloud provider.
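As a toy model of this trade-off, the sketch below compares the expected cost of running a workload on a spot VM, assuming each revocation loses on average half a checkpoint interval of progress, against a reliable on-demand VM. The parameter names and the half-interval rework assumption are illustrative, not the strategy from our papers.

```python
def expected_spot_cost(work_hours, spot_price, revocations_per_hour, ckpt_hours):
    """Expected spot cost, assuming each revocation loses on average
    half a checkpoint interval of progress that must be redone."""
    rework = revocations_per_hour * work_hours * ckpt_hours / 2
    return (work_hours + rework) * spot_price

def pick_vm(work_hours, spot_price, ondemand_price, revocations_per_hour, ckpt_hours):
    """Choose the option that is cheaper in expectation."""
    spot = expected_spot_cost(work_hours, spot_price, revocations_per_hour, ckpt_hours)
    ondemand = work_hours * ondemand_price
    return ("spot", spot) if spot < ondemand else ("on-demand", ondemand)

# 10 hours of work: spot at $0.03/hr with ~0.1 revocations/hr and hourly
# checkpoints, versus on-demand at $0.10/hr (all figures illustrative).
choice, cost = pick_vm(10, 0.03, 0.10, 0.1, 1.0)
```

Frequent checkpointing shrinks the expected rework but adds its own overhead, which is one of the tensions a real scheduler must balance.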
Our analysis of Amazon's Asia Pacific (Singapore) and US East (Virginia) data centers shows effective cost savings of over 90% when using spot VMs, with more than 95% savings for Small, Medium, Large and Extra-Large general-purpose VMs in 2014. This is highly favorable for cost-conscious enterprises in emerging markets.
We have also worked on job scheduling on local Clouds. There, we have investigated different types of jobs, such as I/O-intensive, compute-intensive and massively parallel jobs. The optimization parameters include utilization, cost, and job completion time. Previously, we worked with OpenStack and tried different scheduling strategies for mapping VM instances to hosts (the physical infrastructure).
- V. Kushwaha and Y. Simmhan, “Cloudy with a Spot of Opportunity: An Analysis of Spot-Priced Clouds for Practical Job Scheduling,” in Cloud Computing for Emerging Markets (CCEM), 2014. doi:10.1109/ccem.2014.7015488
- H. Chu and Y. Simmhan, “Cost-efficient and Resilient Job Life-cycle Management on Hybrid Clouds,” in IEEE International Parallel & Distributed Processing Symposium (IPDPS), 2014. doi:10.1109/IPDPS.2014.43