Alef Stream Processing & Smart Surveillance

Learn how Alef Stream Processing & Smart Surveillance can impact your video uploads.

Introduction

The Smart Surveillance and Stream Processing industry is growing rapidly. The ability to process disparate video streams from IoT devices, video surveillance cameras, cameras mounted on machinery, and similar sources into actionable insights in real time has opened new possibilities for the market. The devices that generate these streams are typically ordinary IP cameras, but they produce video continuously.

With Alef’s Stream Processing solution running at the Edge on Alef’s Software Defined Mobile Edge (SD-ME) stack, industries no longer need to stream their video to cloud servers. Video streams are instead brought to the Edge and processed directly on Alef’s local Edge server, which runs the entire software stack. Object detection algorithms are applied to the video streams to generate key insights, and all insights are sent as metadata to a central dashboard server for customers to leverage.
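To make the metadata flow concrete, here is a minimal sketch in Python of how raw object detection results might be turned into the insight metadata sent to the central dashboard. The `Detection` structure, the field names, and the confidence threshold are illustrative assumptions, not Alef's actual schema.

```python
import json
import time
from dataclasses import dataclass

# Hypothetical shape of one raw detection produced by the detection server.
@dataclass
class Detection:
    label: str      # detected object class, e.g. "person"
    score: float    # model confidence in [0, 1]
    box: tuple      # (ymin, xmin, ymax, xmax), normalized coordinates

def detections_to_metadata(stream_id, detections, min_score=0.5):
    """Keep confident detections and package them as dashboard metadata.

    The payload layout below is an assumed example schema, not the
    actual format used by Alef's dashboard server.
    """
    insights = [
        {"label": d.label, "score": round(d.score, 3), "box": list(d.box)}
        for d in detections
        if d.score >= min_score
    ]
    return {
        "stream_id": stream_id,
        "timestamp": int(time.time()),
        "insights": insights,
    }

raw = [
    Detection("person", 0.92, (0.1, 0.2, 0.5, 0.4)),
    Detection("car", 0.31, (0.0, 0.0, 0.3, 0.3)),  # below threshold, dropped
]
payload = detections_to_metadata("camera-01", raw)
print(json.dumps(payload))  # only the "person" detection survives the filter
```

Filtering by a confidence threshold before shipping metadata keeps low-quality detections from reaching the dashboard and reduces the volume of data sent off the Edge.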

Brief Highlights

Enterprises can now use their cameras to send video streams of any size to the Edge compute node at the closest Micro Edge Data Center, and use the AI algorithms running on Alef’s Stream Processing platform to generate metadata that leads to actionable insights. The video streams are processed and analyzed at the Edge of the Enterprise network, in a cloud-native environment, at the Metro Edge location closest to the Enterprise. The Alef Smart Surveillance and Stream Processing solution comprises APIs that let Enterprises plug in an AI engine of their choice and obtain insights and analytics through dashboard APIs. The security protocols follow Alef’s threat model and form an all-encompassing Edge security framework. The components of the solution are listed below.

Cloud Dashboard Components – These components are deployed on an EC2 instance in AWS.

  • Upload API – The Upload API is a Node.js server that exposes APIs to store stream insight data and to retrieve it.
  • Dashboard API – This component serves the insights view, directly through a web browser.
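As an illustration, a client interacting with these components might store and fetch insights over HTTP as sketched below. The host, endpoint paths, and payload fields are assumptions for illustration only; the actual Upload and Dashboard API contracts may differ. The sketch only constructs the requests and does not send them.

```python
import json
import urllib.request

DASHBOARD_HOST = "https://dashboard.example.com"  # hypothetical host

def build_store_request(insight):
    """Build a POST to the (assumed) Upload API endpoint that stores insights."""
    data = json.dumps(insight).encode("utf-8")
    return urllib.request.Request(
        f"{DASHBOARD_HOST}/api/insights",  # path is an assumption
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def build_fetch_request(stream_id):
    """Build a GET against the (assumed) Dashboard API to retrieve insights."""
    return urllib.request.Request(
        f"{DASHBOARD_HOST}/api/insights?stream_id={stream_id}",
        method="GET",
    )

post = build_store_request({"stream_id": "camera-01", "count": 3})
get = build_fetch_request("camera-01")
# To send: urllib.request.urlopen(post) / urllib.request.urlopen(get)
```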

SD-ME Components – All the services listed below are part of the SD-ME framework at an Edge location, and are containerized.

  • Node Media Server – The Node Media Server accepts incoming video streams and republishes them as live streams.
  • Detection Server – We currently use the TensorFlow Object Detection API for real-time object detection. It is an open-source framework built on top of TensorFlow that makes it easy to construct, train, and deploy object detection models, and it provides a collection of detection models pre-trained on different datasets.
  • Audit Server – The audit server ensures the accuracy of the generated statistics. It uses FFmpeg to store streaming data in 60-second MP4 files: capture begins as soon as a live stream starts and continues, one one-minute file at a time, until the stream stops. Audit data is currently retained only until the next live stream begins; when a new stream starts, its audit data overwrites that of the previous stream.
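The audit server's 60-second capture pattern can be reproduced with FFmpeg's segment muxer. The sketch below only builds the command line and does not launch FFmpeg; the stream URL and output directory are placeholder assumptions.

```python
import subprocess  # would be used to launch the command; not invoked here

def build_audit_command(stream_url, out_dir, segment_seconds=60):
    """Build an FFmpeg command that copies a live stream into fixed-length MP4 segments."""
    return [
        "ffmpeg",
        "-i", stream_url,                  # incoming live stream (e.g. an RTMP URL)
        "-c", "copy",                      # no re-encoding, just remux
        "-f", "segment",                   # FFmpeg's segment muxer
        "-segment_time", str(segment_seconds),
        "-reset_timestamps", "1",          # each file's timestamps start at zero
        f"{out_dir}/audit_%03d.mp4",       # numbered one-minute files
    ]

cmd = build_audit_command("rtmp://edge.example/live/camera-01", "/var/audit")
# To actually capture: subprocess.run(cmd, check=True)
```

Using `-c copy` avoids re-encoding on the Edge node, so segmenting the audit files adds almost no CPU load on top of the detection workload.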

