
Transcriptions

00:00:00
I will take a bit of a different approach compared to my colleagues, who were presenting on how to make research reproducible: I will actually take you through my journey when I tried to reproduce someone else's research.
00:00:21
It concerns remote photoplethysmography, which basically consists in measuring your heart rate. The way it's done, not remotely, is the way it's measured in a hospital: you have this little clip on your finger,
00:00:38
light is projected through it, and that measures the amount of blood flow in your finger, because when your heart is pumping, the volume of blood in your finger is changing. Then, based on the light that is either transmitted or reflected, you can get a waveform like the one you see here, and that's actually your heart rate.
00:01:07
Now, remote photoplethysmography, which I will call rPPG from now on for simplicity, aims to do the same but using a regular camera and from a distance.
00:01:22
There was one study showing that there are actually small changes in the color of your skin when the same process is happening: basically, when the heart is pumping, blood goes through your face, and there is some variation in color.
00:01:44
So various algorithms were proposed to try to recover the heart rate from these skin color variations; the very first one is the one I show here. Well, you cannot really see it, but basically they showed that in the green channel you can see variations corresponding to the heart rate.
00:02:05
And right now you have a lot of different applications for smartphones where you just look at your phone and it says: okay, here is your heart rate.
00:02:15
Why are we interested in this? We are doing biometrics, and attacks on recognition systems were discussed earlier; what we are trying to address here is face presentation attack detection.
00:02:33
As you can see in the picture, if you want to trick a face recognition system you can present a photo or wear a mask, and what we hope, actually, is that when you try to get the heart rate from a photograph or from a mask, you will not get one, or at least not one that makes sense.
00:02:52
The main problem, well, the two problems we have: the first is that all the algorithms computing the heart rate are, at the end of the process, filtering the signal to make sure the heart rate is in a plausible range, basically from forty bpm to two hundred and something. So whatever input you use, you get a result that seems to make sense.
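Such a plausibility filter is typically just a band-pass; here is a minimal sketch, assuming a scipy-based pipeline (the function name and cut-offs are illustrative, not from the talk):

    from scipy.signal import butter, filtfilt

    def bandpass_pulse(signal, fps, lo_bpm=40.0, hi_bpm=220.0, order=4):
        """Keep only frequencies in a plausible heart-rate band (bpm)."""
        nyquist = fps / 2.0
        low = lo_bpm / 60.0 / nyquist    # bpm -> Hz -> normalized frequency
        high = hi_bpm / 60.0 / nyquist
        b, a = butter(order, [low, high], btype='band')
        return filtfilt(b, a, signal)

By construction, anything coming out of such a filter lies in a believable range, so a plausible-looking number alone is no evidence that the algorithm works.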
00:03:16
And the other problem is that with face recognition databases, you don't have the ground-truth heart rate that goes with the video sequence.
00:03:29
So we did a bit of a survey on what was going on in rPPG, and this is really a trending topic: the first paper appeared in 2008, and since then it has been really increasing; I think in 2015 alone more than fifty papers were published on the subject.
00:03:52
But the main problem is that results are usually published on proprietary databases, which are rather small, you know, like, I don't know, ten subjects, not much variability. And people are just saying: okay, we have, say, twenty video sequences, let's apply our algorithm to the twenty video sequences; there are no strict protocols for evaluation.
00:04:24
Actually, there is only one publicly available database that has both video sequences of the face and the synchronized heart rate, and in the literature we found only one algorithm, well, at the time there was only one algorithm, that was reporting results on this database.
00:04:48
So the first step we took... well, okay, to summarize the problems we had: this research area is not really reproducible at the moment because, as I said, there is no standard protocol for evaluation, I mean people are just using the video sequences, reporting either the average heart rate or the instantaneous heart rate, and the main problem is that there is no publicly available data.
00:05:22
So the solution we tried to implement was to first use Bob to develop all the algorithms that we tried. We also collected a database that captures video sequences of the face together with the heart rate, and we also devised some experimental protocols.
00:05:46
So that's the first effort we made: we recorded this database with forty subjects, which to my knowledge makes it the largest available in terms of number of subjects, and it's public. Each individual was recorded for four sequences: two with the lights turned on, and two relying on the light coming from the window, so you have this side illumination. Each sequence lasts around one minute.
00:06:23
Then we selected this algorithm as a baseline because, as I told you before, it's the only one that was presenting results on the publicly available database, which we also downloaded.
00:06:40
So basically, the way it works: you track this region of the face and you compute the mean color on this area. Then you also extract the background and use it to rectify the color here, basically to remove the influence of the global illumination, so as to keep the variations that are just due to the blood flow. Then you also compensate for the motion that can occur when people are speaking, that kind of stuff. Then you do a bunch of different filtering steps, and you get a signal like this, which you Fourier-transform; you then detect the highest frequency peak, which is supposed to correspond to the heart rate. So that's basically how the algorithm works.
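A minimal sketch of such a pipeline is shown below. This is not the authors' code (which was only partially shared): the background rectification is reduced to a simple subtraction, motion compensation is omitted, and the inputs are assumed to be precomputed per-frame mean colors.

    import numpy as np
    from scipy.signal import detrend, welch

    def estimate_heart_rate(face_means, bg_means, fps):
        """face_means / bg_means: per-frame mean (green) color of the
        tracked face region and of a background region (hypothetical)."""
        # Rectify with the background to suppress global illumination
        # changes, keeping variations due to blood flow (simplified).
        pulse = detrend(np.asarray(face_means) - np.asarray(bg_means))
        # Spectrum of the signal; the highest peak in the plausible
        # band is taken as the heart rate.
        freqs, power = welch(pulse, fs=fps, nperseg=min(len(pulse), 256))
        band = (freqs >= 40.0 / 60.0) & (freqs <= 220.0 / 60.0)
        return 60.0 * freqs[band][np.argmax(power[band])]   # in bpm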
00:07:38
We reached out to the authors, and they were kind enough to share some parts of the code, basically these three blocks, but they weren't able to give us the code for the tracking and the background extraction, because it was written by a former colleague who had left, that kind of stuff. They also provided us with some data: the mean color of the face and of the background regions along some of the video sequences.
00:08:09
So, as I said, the tracking and background extraction part was missing, and we implemented it ourselves based on what was described in the paper. The code was in Matlab, which wasn't ideal because it's proprietary software.
00:08:26
So basically, what we did is: we reimplemented the code that they gave us in Python, and with the code and the data that we had, we could verify that what we translated from Matlab to Python was doing exactly the same thing. So that was a good thing.
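The check itself can be as simple as comparing saved traces; a sketch, with hypothetical file names standing in for the intermediate data the authors shared:

    import numpy as np

    # Hypothetical files: a trace saved from the authors' Matlab code,
    # and the same trace produced by the Python port on the same input.
    matlab_ref = np.loadtxt('matlab_face_means.txt')
    python_out = np.loadtxt('python_face_means.txt')

    # Agreement up to numerical precision shows the translation
    # behaves exactly like the original.
    assert matlab_ref.shape == python_out.shape
    assert np.allclose(matlab_ref, python_out, atol=1e-6)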
00:08:53
Now, when we actually ran the code we wrote on the data, you can see that there is a difference in performance. Here is the root mean square error between the true heart rate and the detected heart rate, and here is the correlation, sorry, over the whole database, between what was detected by the algorithm and the ground truth. The published results are a bit better than the ones we got, so we can say fairly accurately that the difference depends only on the tracking and the background extraction procedure, because it's the only difference we had in the code.
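Both figures of merit are standard and straightforward to compute; a sketch, over hypothetical arrays of per-sequence heart rates:

    import numpy as np
    from scipy.stats import pearsonr

    def evaluate(detected, ground_truth):
        """RMSE and Pearson correlation between detected and
        ground-truth heart rates (both arrays in bpm)."""
        detected = np.asarray(detected, dtype=float)
        ground_truth = np.asarray(ground_truth, dtype=float)
        rmse = float(np.sqrt(np.mean((detected - ground_truth) ** 2)))
        corr, _ = pearsonr(detected, ground_truth)
        return rmse, corr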
00:09:42
But an important question to ask ourselves is: okay, how good are all those results really? Because testing was done on only one database, and since the source code was not fully provided, it could be that ours, and especially the tracking part for instance, is wrong.
00:10:03
So what we did is other experiments with other algorithms that we also implemented, basically to have a comparison; we also tested on another database, the one we recorded.
00:10:17
And we devised some strict experimental protocols: basically, we divided every database into a training set and a test set, so that you can tune parameters on your training set and then assess the performance on your test set. Because the way it was done before was just like: okay, I have a bunch of video sequences, so I run my algorithm, I tweak the parameters, and when I reach the best results, then I'm done and I'm happy.
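A minimal sketch of such a protocol, assuming the algorithm's score (e.g. the correlation) is exposed as a function of its tunable parameters (the function and grid names are hypothetical):

    import itertools

    def tune_and_test(score_on, train_set, test_set, grid):
        """Pick parameters by score on the training set only, then
        report the held-out test-set score exactly once."""
        best_params, best_score = None, float('-inf')
        for values in itertools.product(*grid.values()):
            params = dict(zip(grid.keys(), values))
            score = score_on(train_set, **params)  # e.g. Pearson correlation
            if score > best_score:
                best_params, best_score = params, score
        return best_params, score_on(test_set, **best_params)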
00:10:47
Now you can see the results; I just put the correlation here, but the results are really different. In particular, if you go from the database the results were published on, where they are quite okay, to our database, you can see that it doesn't mean anything: basically, this number means that there is no correlation whatsoever between the detected heart rate and the ground truth. And on the other hand, you take another algorithm and you have completely different results: on one database they are quite comparable to the first algorithm, but on the other database they are not. So the basic question is: which algorithm would you use? Well, you still don't know.
00:11:33
So, to conclude, I would just make two points. First, reproducing published material is really not trivial, even when you have the source code, actually, and even when you can discuss with the authors; we had some really nice exchanges, and being able to reproduce exactly what they did was still quite difficult.
00:11:56
There are a lot of details to figure out: in the paper you cannot find all the hidden parameters and implementation details, and, well, some functions even have different default modes of operation depending on whether it's Matlab or Python or whatever.
00:12:16
And most importantly, the conclusions that you see in papers should not be blindly trusted, because when you read the paper you think: oh well, the results sound good; and then, when you try to reproduce them, the best that you can and in the most honest way that you can, well, you don't necessarily find the same results. And that's it.
