This page provides a listing, description and references for edge-native applications developed by and with the Living Edge Lab. See this paper for more details on edge-native applications.
OpenRTiST utilizes Gabriel, a platform for wearable cognitive assistance applications, to transform the live video from a mobile client into the styles of various artworks. The frames are streamed to a server where the chosen style is applied and the transformed images are returned to the client.
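The server-side loop can be pictured as a simple transform over a stream of frames. The sketch below is illustrative only; OpenRTiST's real pipeline runs a neural style-transfer model served through Gabriel, and `apply_style` here is a hypothetical placeholder for that model.

```python
def stylize_stream(frames, apply_style):
    """Server-side loop sketch: apply the client's chosen style to each
    incoming frame and return the transformed frame.
    (Illustrative only; not OpenRTiST's actual implementation.)"""
    for frame in frames:
        yield apply_style(frame)

# Toy "style": invert 8-bit pixel values of a one-row frame.
styled = list(stylize_stream([[0, 128, 255]], lambda f: [255 - p for p in f]))
assert styled == [[255, 127, 0]]
```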
OpenScout is an edge-native application for automated situational awareness. It provides a pipeline that supports automated object detection and facial recognition. This kind of situational awareness is crucial in domains such as disaster recovery and military operations, where the WAN connection to the cloud may be degraded or disrupted.
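Conceptually, such a pipeline passes each frame through a chain of cognitive engines and merges their detections. The engine functions below are placeholders for illustration, not OpenScout's actual interfaces.

```python
def run_pipeline(frame, engines):
    """OpenScout-style pipeline sketch: run each frame through a chain
    of named cognitive engines (e.g. object detection, then face
    recognition) and collect their results.
    (Hypothetical structure, for illustration only.)"""
    results = {}
    for name, engine in engines:
        results[name] = engine(frame)
    return results

detections = run_pipeline(
    "frame-0",
    [("objects", lambda f: ["person", "truck"]),
     ("faces", lambda f: ["unknown"])],
)
assert detections == {"objects": ["person", "truck"], "faces": ["unknown"]}
```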
Dronesearch is a Python package for running live video analytics on drone video feeds using edge servers. It also contains the experiment code for our SEC'18 paper, Bandwidth-Efficient Live Video Analytics for Drones via Edge Computing.
Real-time traffic monitoring has had widespread success via crowd-sourced GPS data. While drivers benefit from this low-level, low-latency road information, high-level traffic data such as road closures and accidents currently has very high latency because such systems rely solely on human reporting. Increasing the detail and decreasing the latency of this information can have significant value. LiveMap explores this idea by using a camera along with an in-vehicle computer to run computer vision algorithms that continuously observe road conditions in high detail. Abnormalities are automatically reported via 4G LTE to a local server on the edge, which collects and stores the data and relays updates to other vehicles inside its zone. In the LiveMap paper, we develop and test such a system, demonstrate its accuracy in detecting hazards, and characterize the system latency achieved.
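The collect-and-relay role of the edge server can be sketched as a small publish/subscribe hub per zone. This is a hypothetical API for illustration only, not LiveMap's actual implementation or wire protocol.

```python
class EdgeZoneServer:
    """Sketch of LiveMap's edge relay idea: collect hazard reports and
    fan them out to the other vehicles registered in the same zone.
    (Hypothetical API, for illustration only.)"""
    def __init__(self):
        self.hazards = []      # stored reports for this zone
        self.subscribers = []  # inboxes of vehicles in the zone

    def subscribe(self, vehicle_inbox):
        self.subscribers.append(vehicle_inbox)

    def report(self, hazard):
        self.hazards.append(hazard)
        for inbox in self.subscribers:  # relay to vehicles in the zone
            inbox.append(hazard)

server = EdgeZoneServer()
car_a, car_b = [], []
server.subscribe(car_a)
server.subscribe(car_b)
server.report({"type": "pothole", "lat": 40.44, "lon": -79.94})
assert car_a == car_b == server.hazards
```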
Wearable Cognitive Assistance
Wearable Cognitive Assistance is a class of applications that provide automated, real-time hints and guidance to users by analyzing their actions through an AR-HMD while they complete a task. The task can require very rapid responses, like playing table tennis; fine-grained motor control, like playing billiards; or guiding a user through a process, like assembling a lamp.
- Towards Wearable Cognitive Assistance
- Lego
- When the Trainer Can’t Be There
- Ikea
- Putting It All Together With Cognitive Assistance
The primary purpose of the Gabriel libraries is to transmit data from mobile devices to cloudlets. Wearable Cognitive Assistance applications require responses shortly after a user completes a step, so we always want to process the newest frame possible. We never want to build up a queue of stale data to process. The library accomplishes this using a flow control mechanism similar to the one proposed in our 2014 paper. Our implementation allows multiple clients to share one cloudlet.
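The "newest frame wins" idea can be sketched with a drop-stale buffer: when frames arrive faster than they can be processed, older unprocessed frames are simply discarded. This is an illustrative sketch of the concept, not Gabriel's actual token-based implementation.

```python
from collections import deque

class LatestFrameQueue:
    """Drop-stale buffer: holds only the newest unprocessed frame, so
    the cloudlet never wastes work on outdated input.
    (Illustrative sketch -- not the actual Gabriel implementation.)"""
    def __init__(self):
        self._buf = deque(maxlen=1)  # a new frame silently evicts the old one

    def put(self, frame):
        self._buf.append(frame)

    def get_latest(self):
        return self._buf.pop() if self._buf else None

q = LatestFrameQueue()
for frame_id in range(5):   # frames arrive faster than processing
    q.put(frame_id)
assert q.get_latest() == 4  # only the newest frame gets processed
assert q.get_latest() is None
```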
nephele is a CLI utility that provides capabilities to create, manage, and migrate KVM virtual machines over the wide-area network. Built upon the principles devised and implemented by Kiryong Ha, nephele utilizes deduplication, compression, and bandwidth adaptation to migrate VMs efficiently in WAN environments where the bandwidth and latency characteristics are far below those found in datacenters where traditional VM migration happens. More information, including related publications, can be found on our website.
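The deduplication step can be illustrated with hash-based chunk filtering: the sender transfers only chunks whose hashes the destination does not already hold. This is a sketch of the general technique, not nephele's actual wire protocol or chunking scheme.

```python
import hashlib

def dedup_transfer(chunks, dest_cache):
    """Sketch of hash-based deduplication as used in WAN VM migration:
    send only chunks whose hashes the destination does not already
    have. (Illustrative only; not nephele's implementation.)"""
    sent = []
    for chunk in chunks:
        h = hashlib.sha256(chunk).hexdigest()
        if h not in dest_cache:
            dest_cache.add(h)
            sent.append(chunk)
    return sent

cache = set()
first = dedup_transfer([b"page1", b"page2", b"page1"], cache)
assert first == [b"page1", b"page2"]            # duplicate page skipped
assert dedup_transfer([b"page2"], cache) == []  # already at destination
```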
OpenTPOD (Tool for Painless Object Detection) accelerates the Wearable Cognitive Assistance object detector development process by automating many of the human steps and integrating training platforms directly into the tool.
OpenWorkFlow allows users to define the state machine for Wearable Cognitive Assistance applications. OpenWorkFlow has both a GUI and a Python API. The output from OpenWorkFlow and OpenTPOD can be used directly by the Gabriel framework to execute the application.
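A task state machine of the kind OpenWorkFlow defines can be sketched as a mapping from states to the object each state expects, the hint to show, and the next state. The lamp-assembly states and `step` function below are hypothetical, for illustration only.

```python
# Sketch of a WCA task state machine like those OpenWorkFlow defines:
# each state gives guidance and advances when the expected object is
# detected in the video feed. (Hypothetical structure, for illustration.)
LAMP_TASK = {
    "start":     {"expects": "base", "hint": "Place the lamp base.", "next": "base_done"},
    "base_done": {"expects": "pole", "hint": "Attach the pole.",     "next": "finished"},
}

def step(state, detected):
    """Advance only when the detected object matches the current state."""
    node = LAMP_TASK[state]
    return node["next"] if detected == node["expects"] else state

s = "start"
for obj in ["pole", "base", "pole"]:  # wrong part first, then the correct steps
    s = step(s, obj)
assert s == "finished"
```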
OpenFace is a Python and Torch implementation of face recognition with deep neural networks and is based on the CVPR 2015 paper FaceNet: A Unified Embedding for Face Recognition and Clustering by Florian Schroff, Dmitry Kalenichenko, and James Philbin at Google. Torch allows the network to be executed on a CPU or with CUDA.
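FaceNet-style recognition compares faces by the Euclidean distance between their embedding vectors: small distances mean the same person. The threshold and toy embeddings below are illustrative, not OpenFace's tuned values or real 128-dimensional outputs.

```python
def same_person(emb_a, emb_b, threshold=0.99):
    """Compare two face embeddings by squared Euclidean distance.
    FaceNet-style pipelines treat small distances as a match; the
    threshold and vectors here are illustrative only."""
    d = sum((a - b) ** 2 for a, b in zip(emb_a, emb_b))
    return d < threshold

alice  = [0.10, 0.30, -0.20]
alice2 = [0.12, 0.29, -0.21]   # second photo of the same person
bob    = [-0.80, 0.50, 0.40]
assert same_person(alice, alice2)
assert not same_person(alice, bob)
```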