Amazon Rekognition by Corexpert – AWS Summit Paris 2017

I’m going to present a project that started as a PoC built in just a few hours of development and became a real long-term project at Corexpert, thanks to Cédric, whom we can thank for this demo. It is an artificial intelligence and deep learning implementation built on serverless AWS stacks.

I’m going to ask you to turn the lights up in the room, please. Are we ready? We have a bit of latency, it’s the demo effect. What you see here is a draft of a CCTV setup. CCTV is a kind of webcam surveillance for a commercial building or similar. We have here a small 4K webcam, the only one available on the market today, to try to capture an image with the best resolution possible, and we’re going to analyse in a little more detail what we have behind this camera.

Can we switch back to the slides, please? Thank you. I’m going to present the architecture while we finish setting up the demo.

The architecture in place: the basic idea is that we have cameras, real IP cameras, smartphones, webcams, anything positioned and connected to the internet or a local network, and we capture the video stream on a local machine, which sends frames at a regular interval. These frames are sent to a service you have heard about multiple times today, the Lambda service, to a first Lambda function in charge of splitting the image with an appropriate strategy to facilitate the analysis through an AWS API. That API is Amazon Rekognition, which you can see at the bottom of the screen. Amazon Rekognition is a deep learning service that qualifies an image you send it: how many people are in it, what mood they are in, and more generally whatever the deep learning engine recognizes in the image. So here we have multiple Lambdas running on multiple images simultaneously to analyse the frames. Next, we persist the data in another serverless service, Amazon DynamoDB.
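
To give a concrete idea of the per-frame step described above, here is a minimal sketch of such a Lambda function in Python with boto3. It assumes the frames land in an S3 bucket and uses a hypothetical DynamoDB table name; it illustrates the pattern, not the speakers’ actual code:

```python
import os
from collections import Counter
from datetime import datetime, timezone

import boto3

rekognition = boto3.client("rekognition")
dynamodb = boto3.resource("dynamodb")
# "camera-frames" is a hypothetical table name, not from the talk.
table = dynamodb.Table(os.environ.get("RESULTS_TABLE", "camera-frames"))


def lambda_handler(event, context):
    """Analyse one uploaded frame with Amazon Rekognition and persist a summary."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # DetectFaces with Attributes=['ALL'] returns emotions, gender, smile, glasses, etc.
    response = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        Attributes=["ALL"],
    )
    faces = response["FaceDetails"]

    # Keep only each face's dominant emotion for the aggregate counts.
    emotions = Counter(
        max(face["Emotions"], key=lambda e: e["Confidence"])["Type"] for face in faces
    )

    # One item per analysed frame; the schema is illustrative only.
    table.put_item(
        Item={
            "frame_id": key,
            "captured_at": datetime.now(timezone.utc).isoformat(),
            "people": len(faces),
            "men": sum(1 for f in faces if f["Gender"]["Value"] == "Male"),
            "women": sum(1 for f in faces if f["Gender"]["Value"] == "Female"),
            "smiling": sum(1 for f in faces if f["Smile"]["Value"]),
            "glasses": sum(1 for f in faces if f["Eyeglasses"]["Value"]),
            "emotions": dict(emotions),
        }
    )
    return {"people": len(faces)}
```
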
On the other side, we query that data, either with a web app or, for example, with Amazon Echo and Alexa, whom you heard talking this morning during the keynote. You heard her speaking French, and we are going to try to make her speak a little French too. To make her speak French, we used the Amazon Polly service, with which you can stream a sentence in any language.

So we’re going to go back to the demo screen, thank you. Here you have a live video stream with a small processing latency because of the network, and you’re going to see what Rekognition is capable of seeing. So I’m going to ask you: go ahead, smile, and we should see the mood change here. If I check and uncheck the other mood streams, we can see that about 60 people are detected, so the first few rows of course; we can’t detect the whole room, but we have run a few tests on recorded video and we have demos we will upload soon.
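
The Alexa exchange that follows reads those aggregated counts back out of DynamoDB and, since Alexa did not speak French natively at the time, runs the sentence through Amazon Polly. A minimal sketch of that query-and-speak path, again with a hypothetical table, item schema, and phrasing rather than the demo’s real code, might look like this:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("camera-frames")  # hypothetical, same table as the ingest sketch
polly = boto3.client("polly")


def describe_frame(frame_id):
    """Read the aggregated counts for one frame and turn them into French speech."""
    item = table.get_item(Key={"frame_id": frame_id}).get("Item")
    if not item:
        return None

    sentence = (
        f"Je vois {item['people']} personnes dans la salle, "
        f"dont {item['smiling']} qui sourient."
    )

    # Polly streams back synthesized audio; 'Celine' is one of its French voices.
    speech = polly.synthesize_speech(
        Text=sentence,
        OutputFormat="mp3",
        VoiceId="Celine",
    )
    return speech["AudioStream"].read()
```
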
You will see them if you follow our Twitter feed. We have over 60 people detected, and I can show you another screen, which analyses what we’re seeing in the room.

We can see that gender parity is not the best at the Summit: we have 4 women, now only 1 detected, so there seems to be some doubt for a few of them, probably the ones with more hair. Out of the sixty people detected, 55 are men. About 76% of people are happy, which is good for the Summit, congratulations. We have 30 people with their eyes open, so do we have 70 people sleeping right now? I’m just asking, I don’t know. We also see about thirty people wearing glasses; wearing glasses is common in the IT world, and we can confirm that here. 19 people are kind of hipsters, bearded. I’m checking whether it’s true; it looks consistent. And here we have a dynamic age pyramid. It looks like we have detected someone over 75 years old. We won’t tell you who; he could be disappointed if it’s not the case.

We will now move on to a small test with Alexa.

“Alexa” “Open camera” “Alexa, open camera” “Alexa, open camera”
(Alexa) “Hello, I see all, what would you like to know?”
“How many people are you seeing?”
(Alexa) “There are 56 persons in the room”
“Do we look happy?”
(Alexa) “Yes, I see 73% of you smiling”
“Tell me all the information you have”
Translating takes some time; you can come talk to us afterwards and we will explain how the streaming is done.
(Alexa) “I see 57 persons” “43 are happy” “0 are disgusted” “There are 55 men” “There are 2 women” “6 are confused” “2 are sad” “1 is surprised” “I see 41 smiles” “2 are calm” “And finally 3 are angry”
Thank you Alexa. “Speak English” Thank you, and thanks to Cédric.
(Alexa) “Okay, I will speak in English to you”
OK, I asked her to change language. “Do we look happy?” “Alexa, open camera” “…” OK, we will let her be. So we can change the language.

And I’m going to show one more thing. Here we have a “Where is Waldo” feature, and we’re going to look for someone in the room, someone who talked to you earlier: Rudy Krol, Solutions Architect at AWS. Thank you Rudy, that’s awesome. We’re going to try anyway, it may work thanks to the delay.
(Rudy) “Alexis, I’m interrupting, we are running late. We’re going to stop here, thanks a lot.”
Thank you everyone and goodbye.
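
The “Where is Waldo” lookup the speaker did not get to finish is the kind of search Rekognition face collections are designed for. As a purely illustrative sketch (the collection ID is made up, and this is not necessarily how the Corexpert demo implemented it), finding a previously indexed face could look like this:

```python
import boto3

rekognition = boto3.client("rekognition")
COLLECTION_ID = "summit-known-faces"  # hypothetical collection, populated beforehand


def find_person(bucket, frame_key):
    """Search a frame for a face previously indexed into the collection."""
    # Note: SearchFacesByImage uses the largest face in the supplied image as the
    # query, so on a crowd shot you would crop individual faces first.
    response = rekognition.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"S3Object": {"Bucket": bucket, "Name": frame_key}},
        FaceMatchThreshold=90,
        MaxFaces=1,
    )
    matches = response["FaceMatches"]
    if not matches:
        return None
    best = matches[0]
    # ExternalImageId is whatever label was attached when the face was indexed.
    return best["Face"].get("ExternalImageId"), best["Similarity"]
```

A reference photo of the person to find would have to be added to the collection beforehand with IndexFaces.
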
