
Transcriptions

Note: this content has been automatically generated.
00:00:00
Oh, I just wondered — how long did it take you to
00:00:03
get into machine learning and all that world?
00:00:08
Um, so I think I was always drawn to it. I really liked maths, and that was okay,
00:00:16
and then, when I decided to go to university, I went with computer science, because I wanted to do something
00:00:21
more applicable. Even though I really love maths, I didn't like that you prove something and then just leave it there —
00:00:28
that was kind of unsatisfactory. So I said, okay, I'm just going to do computer science. And in second
00:00:33
year I had my first machine learning course, and I was hooked.
00:00:36
That was it, yeah. And since then — I don't mind mentioning it — I worked
00:00:41
at Google, doing some NLP in Zurich, and now I've moved on to DeepMind.
00:00:46
So it was, I guess, love at first sight, if you believe in that.
00:00:56
Are there some other questions?
00:01:01
A question — you first.
00:01:11
So, thank you for the presentation. I would like to ask: could you mention the places
00:01:16
where we could follow what DeepMind is currently working on?
00:01:22
If I would say — just look at our publications. One of the really nice things about DeepMind is that
00:01:27
we publish a lot of the work that we do. There's also NIPS coming along next week,
00:01:33
and there are going to be a lot of our publications out there,
00:01:37
so you can follow that. And there's also the really nice DeepMind blog,
00:01:41
where they try to make things very visual.
00:01:46
For example, I don't know if you know WaveNet — there was this
00:01:49
new text-to-speech system developed by DeepMind, and it is
00:01:54
absolutely mind-blowing. There are samples
00:01:58
on the website where you can see how it works,
00:02:01
and a lot of other information. So I think the DeepMind blog is also a pretty good
00:02:05
place to look, and all the publications are on the DeepMind pages.
00:02:10
But I was thinking with this to take a bit more questions at the end. So this is
00:02:14
done training — okay, I've created the classifier, fit it, and now let's see how well it does.
00:02:23
So it's only wrong less than two percent of the time. This is not state of the art — this
00:02:29
is far from it. On this problem we actually way exceed human accuracy:
00:02:34
we are better than humans at recognising digits on this data set, and I
00:02:38
think the error is something ridiculously small, like zero point zero
00:02:43
something percent. And I think two percent is
00:02:47
pretty good for something that just ran on my computer right now.
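The demo described here — fit a classifier, then measure how often it's wrong — can be sketched in a few lines of NumPy. This is a toy stand-in (logistic regression on synthetic 2-D data), not the actual TensorFlow demo code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the digits demo: two Gaussian "classes" in 2-D.
X0 = rng.normal(loc=-2.0, scale=1.0, size=(200, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# Logistic regression trained by plain gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(y=1)
    w -= 0.5 * (X.T @ (p - y) / len(y))      # gradient of the log loss
    b -= 0.5 * np.mean(p - y)

# "See how well it does": fraction of examples it gets wrong.
pred = (X @ w + b) > 0
error_rate = np.mean(pred != y)
print(error_rate)
```

On well-separated data like this the error rate comes out near zero; on real digits, "wrong less than two percent of the time" is the same kind of measurement.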
00:02:51
Um, while answering a couple of questions — but let's see what
00:02:56
it actually does. I have here some code that first shows a random example. So,
00:03:02
on average, it will probably get it right, since it's not often wrong. So here is
00:03:07
a one: it's saying, okay, this is what the model says, and this is the
00:03:11
image, and it says, okay, this is a one — I
00:03:15
think it's right here. Then there's another one: okay, this is a two.
00:03:20
But now let's also look at an example where it gets it wrong:
00:03:24
here the model says something, but it's actually something else.
00:03:30
So here is the image, and the model says it's a one, but it's
00:03:37
actually a two — but, I mean, come on. So yes, it
00:03:44
does get things wrong, but the data is not perfect either — for some of these images I would not have known what the digit is myself.
00:03:57
So that was just a quick look.
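The "look at one example where it gets it wrong" step is just a filter over predictions versus labels. A minimal sketch with fabricated predictions (the forced mistake mirrors the "says one, actually a two" case from the demo):

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend model outputs and true labels for ten examples.
labels = rng.integers(0, 10, size=10)
predictions = labels.copy()
predictions[3] = (labels[3] + 1) % 10   # force one mistake, like the "1 vs 2" case

# Pick out the examples the model gets wrong, as in the demo.
wrong = np.flatnonzero(predictions != labels)
for i in wrong:
    print(f"example {i}: model says {predictions[i]}, label is {labels[i]}")
```

In the real demo you would then display the corresponding image to judge whether the label itself is even trustworthy.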
00:04:00
And another way to use neural networks with TensorFlow — the last one, which
00:04:05
I will not show off and will just briefly mention — is that
00:04:09
if you want to define your model at the matrix-multiplication level,
00:04:15
kind of how I did it for the linear regression, you
00:04:17
can, and there's really good support for a lot of the things that people use in machine learning. This is really ideal for researchers.
00:04:24
Obviously it will not be a couple of lines — it will take you way more time to do it — but if you
00:04:29
want to do machine learning research, if you really want to play with things, then you can go there,
00:04:33
and there are a lot of people who do that. But it's up to you —
00:04:37
there's no right and wrong — where you are on the spectrum.
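"Defining the model at the matrix-multiplication level" means writing the forward pass yourself, owning every weight matrix, instead of using a canned estimator. A minimal NumPy sketch of that style for a one-hidden-layer digit classifier (names and sizes are illustrative, not TensorFlow API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Explicit parameters: you own every matrix, nothing is hidden in a library.
W1 = rng.normal(scale=0.1, size=(784, 128))   # input -> hidden
b1 = np.zeros(128)
W2 = rng.normal(scale=0.1, size=(128, 10))    # hidden -> 10 digit classes
b2 = np.zeros(10)

def forward(x):
    """Forward pass written as raw matrix multiplications."""
    h = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer
    logits = h @ W2 + b2                      # one score per class
    return logits

x = rng.normal(size=(32, 784))                # a fake batch of flattened 28x28 images
print(forward(x).shape)                       # (32, 10): ten scores per image
```

In TensorFlow the same structure appears as variables and matmul ops in a graph; the point is the same — full control of every operation, at the cost of more code.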
00:04:41
And just to conclude: I really think that machine learning is changing the
00:04:46
world. A lot of problems that we thought were not solvable
00:04:50
five years ago — just think about AlphaGo, right? It's
00:04:53
just so impressive, because it's not only that it beat someone that
00:04:58
we thought machines would not be able to beat — it did that
00:05:02
without handcrafted features. This is the big difference between Deep Blue, which beat Garry Kasparov at chess,
00:05:09
and AlphaGo. Deep Blue knew about chess: it had a lot of
00:05:13
human chess experts kind of fine-tuning it, and so on. AlphaGo,
00:05:17
kind of like we learn, just saw a lot of examples,
00:05:20
played a lot of Go, and then got really good at it. And I think it's just really mind-blowing to see that we are
00:05:27
at a time where all these problems that we said were not going to happen are actually happening.
00:05:32
So when you try to solve a problem, just think: could I be solving this
00:05:36
with machine learning? And TensorFlow is a really good tool for that, because —
00:05:42
for me — there's a great community out there: if you have
00:05:46
a problem, you can just report it and people will fix it for you.
00:05:49
It's really a great ecosystem to be part of, and a great community, and you can
00:05:56
just do so much with it, and with machine learning in general. And that's it for me — thank you very much.
00:06:14
So, thank you. Are there a couple of questions?
00:06:22
Hello, thank you — it was very impressive. I just wanted to
00:06:27
ask about errors. We humans learn from errors, so
00:06:33
it was quite a surprise, for example — is there
00:06:37
a grammar behind it, so that when you read something into the detector,
00:06:41
it tells you the titles and everything is
00:06:44
specified in that sense? Or do you try to build
00:06:49
this more into the ML, so that the errors become the basis for tuning, so to say?
00:06:56
Can you repeat that? I'm not sure I got it. — So, um,
00:07:03
we have examples from the classification models, uh-huh, and we have examples from, um —
00:07:10
what I want to ask is, in terms of
00:07:13
grammar — yes, internal grammar, to detect errors — would
00:07:17
you try to model that? — Well, okay, so the question is about grammar — so, human language, right?
00:07:23
So I think this has changed a lot in the last couple of years.
00:07:26
Actually, if you look at language models and what they generate:
00:07:30
you can train a neural network — a special kind of
00:07:34
network that deals with time — you can train it, say, okay,
00:07:39
to kind of learn the distribution of words, and you can ask it to predict.
00:07:45
And if you look at what it actually learned and what it predicts, it actually doesn't make that many grammatical mistakes at all.
00:07:51
There are some really nice blog posts out there where people tried to
00:07:55
train such a neural network on the Linux source code, and it
00:07:59
actually learned not to forget to close the bracket, and all this
00:08:03
kind of thing — it definitely learned this by itself. It's the same
00:08:07
thing with language: yes, they make some mistakes, but they're getting
00:08:11
incredibly good at it. So it's getting a bit far from, again, these rule-based things that people had before,
00:08:17
where, okay, this is the noun, and this thing in German you have to put in this case, and so on.
00:08:22
Yeah — you put some data in, and there are a lot of really good language models right now that will learn this for you.
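The experiments mentioned here use recurrent neural networks, but the underlying idea — predict the next character from what came before, and structure like bracket-closing falls out of the statistics — can be sketched with a much cruder character bigram model. This is a toy, not the neural version:

```python
from collections import Counter, defaultdict

# Tiny "corpus": every '(' is eventually followed by a matching ')'.
text = "f(x) g(y) h(z) "

# Count, for each character, which character follows it.
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

def most_likely_next(ch):
    """Predict the most frequent successor of ch in the training text."""
    return follows[ch].most_common(1)[0][0]

# The model has learned that a letter follows '(' and that ')' follows 'x' —
# purely from counting, no grammar rules written in by hand.
print(most_likely_next("("))
print(most_likely_next("x"))
```

A recurrent network does the same job with far more context than one character, which is why it can learn to close a bracket it opened many lines earlier.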
00:08:29
[inaudible]
00:08:39
Thanks a lot for your presentation. My question goes towards second-order optimisation:
00:08:45
does TensorFlow have something like second-order optimisers? — Yes —
00:08:49
yes, I've never used it, but I'm pretty sure, yes.
00:08:54
So I haven't used it myself, but I would be surprised if it isn't there.
00:08:58
It's on GitHub and everything — worst case, file a bug, and probably someone will look at the issue, or
00:09:06
a pull request or something. Yeah, so please do write — this is
00:09:09
one of the reasons why it's such a good tool: if something's not there,
00:09:14
file a feature request, and if enough people want it, it's going to be there, right?
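For context on what "second-order optimisation" means: instead of stepping along the gradient alone, you also use curvature (the second derivative, or Hessian) to size the step. A one-variable Newton's-method sketch (not any particular TensorFlow optimizer):

```python
# Newton's method on f(x) = (x - 3)^2: step = gradient / second derivative.
def grad(x):
    return 2.0 * (x - 3.0)   # f'(x)

def hess(x):
    return 2.0               # f''(x), constant for a quadratic

x = 0.0
x -= grad(x) / hess(x)        # one Newton step
print(x)                      # lands exactly on the minimum of a quadratic
```

For a quadratic, one Newton step reaches the minimum exactly; first-order methods like plain gradient descent would need many small steps.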
00:09:22
Are there more questions?
00:09:30
Um, let's assume I'm a complete beginner and I don't know anything about machine learning.
00:09:38
What would you recommend — where is a good
00:09:44
point to start? That's the hardest part for most people, isn't it?
00:09:53
Yeah, so the question is about how to start with machine learning. Well, I think it
00:09:56
also depends a lot on your background — I would recommend different things for people that,
00:10:00
let's say, know a lot of maths versus people that don't know that much. So I think it depends.
00:10:04
I think there are a lot of courses out there — online courses that are becoming pretty good. There are also some
00:10:11
very introductory blog posts. But just something that I
00:10:16
really want to stress: there is nothing like you playing with it.
00:10:20
Maybe just try one of these models, see that it doesn't work, and then analyse why not.
00:10:26
And, for example, if you really want to learn, you can start with something like the linear regression,
00:10:30
like I showed here — but if you implement it yourself, it will go wrong
00:10:36
the first time, and just try to understand why it goes wrong.
00:10:39
There are some subtleties, and sometimes, for example, something
00:10:43
goes wrong because of numerical stability. I remember
00:10:47
the first time my neural network didn't do anything,
00:10:50
and I thought, oh god, I chose this career and it's not going to work out for me. And it
00:10:54
turns out that there's actually a known issue — that now I know, and a
00:10:57
lot of people know about — a numerical stability issue. And I know
00:11:00
it very well, because I learned it the hard way.
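The transcript doesn't say which stability issue bit her, but the classic one in classifiers is overflow in a naively computed softmax — the network "doesn't do anything" because every probability comes out as nan. A minimal illustration of the problem and the standard fix:

```python
import numpy as np

logits = np.array([1000.0, 1001.0, 1002.0])

# Naive softmax: np.exp(1000) overflows to inf, and inf/inf gives nan.
with np.errstate(over="ignore", invalid="ignore"):
    naive = np.exp(logits) / np.sum(np.exp(logits))

# Stable version: subtract the max first; the result is mathematically identical
# because the shift cancels in the ratio.
shifted = logits - np.max(logits)
stable = np.exp(shifted) / np.sum(np.exp(shifted))

print("naive :", naive)    # all nan
print("stable:", stable)   # valid probabilities summing to 1
```

Library loss functions (e.g. softmax-with-cross-entropy ops) apply this shift internally, which is exactly why hand-rolled versions fail first.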
00:11:04
And I'm a really big fan of: okay, try to look at all
00:11:08
these online resources that are out there — there are some really amazing things, especially
00:11:11
some of the online courses. Coursera is a great example, and if we go back in time,
00:11:16
one of the people founding it, Andrew Ng, had a machine learning course there —
00:11:20
an online course — which is also quite good. So it depends, again, where you are
00:11:25
on the spectrum, but there's really nothing like getting your hands dirty and seeing, okay, I'm trying it
00:11:32
and it doesn't work. And there are some data sets out there — MNIST is available, you can play with that.
00:11:37
There are some language data sets. One of the nice things that you can do is
00:11:42
some of the movie sentiment analysis data sets: you get some movie reviews,
00:11:48
and you try to predict if they're positive or negative — did the person like the
00:11:52
movie or not. You can just play with that after you learn a
00:11:55
little bit. But there's nothing — I personally really believe this —
00:12:00
there's nothing like getting it wrong and learning from that.
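The sentiment-analysis exercise mentioned here — predict positive or negative from review text — can be tried in its very simplest form with word counting, before touching any neural network. The word lists below are illustrative, not from any real data set:

```python
# A toy sentiment scorer: count positive vs negative words.
POSITIVE = {"great", "loved", "amazing", "good"}
NEGATIVE = {"boring", "bad", "awful", "hated"}

def predict_sentiment(review):
    """Classify a review by which word list it hits more often."""
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

print(predict_sentiment("I loved this movie, the acting was great"))
print(predict_sentiment("Boring plot and bad acting"))
```

Seeing where a crude baseline like this fails (negation, sarcasm, unseen words) is exactly the "getting it wrong and learning from that" she recommends.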
00:12:07
Oh — okay.
00:12:14
Um, where do you see the limit with all this — is there a limit?
00:12:19
I don't know. Um, I think it's very hard to say, because we're
00:12:25
moving very, very fast, and some of the things that I would not have thought possible
00:12:30
a couple of years ago are possible now. So recently there were these papers that
00:12:35
showed how to do what we call zero-shot learning: basically,
00:12:39
you learn how to translate between Korean and Japanese even though you've never
00:12:43
done that — you only know how to translate Korean to English and
00:12:48
English to Japanese, or some such strange combination — and it's actually working
00:12:51
remarkably well. And two years ago I don't think I would have
00:12:55
thought that's possible. So I think it's very, very hard to say. I
00:12:59
think we're still far from what we can do — very far.
00:13:02
And especially, I think, using things like reinforcement learning —
00:13:08
I think reinforcement learning will grow a lot, because the only thing most models do now is just:
00:13:14
okay, I give you this set of examples, learn from that. But that's not what we
00:13:19
do, right? I go through life, I get rewards when I do something well, and
00:13:24
I get punished when I don't do something well. And I think this is somewhere —
00:13:28
my personal belief — where we will see a lot of changes.
00:13:34
That was a good question.
00:13:37
Um, thanks — is it possible to harness
00:13:43
these models in the cloud, because locally they are
00:13:48
complicated models? — Yeah, so there's Cloud, and there is a machine learning API out
00:13:54
there — it's powered, I suppose, by TensorFlow under the hood? — Yeah, so the answer is yes.
00:14:03
Who was next with a question? Oh, there we go.
00:14:17
What about doing different things with the same AI — image recognition and,
00:14:25
I don't know, text-to-speech? — Yeah, so the question is about doing different things with the same AI. So,
00:14:31
definitely that needs to happen, and it is happening — slowly.
00:14:37
But I definitely agree that we are not there. So there are some things — for example, people playing
00:14:44
different games with the same, um — with the same model, completely different games —
00:14:50
but it's still far from having — so the way I see it, once we have a model
00:14:55
that can do multiple things — because I'm talking to you at the same time, but
00:14:58
my brain is pumping things and just doing what our bodies are doing — but
00:15:02
our models are still: okay, this is a cat; I know this is a dog.
00:15:07
We're getting there, but definitely, I think these are the two
00:15:12
things to stress: reinforcement learning, and having models that do multiple things.
00:15:18
Yeah — and it also depends what you mean by model, because you can have
00:15:26
a very simple model that says: if the problem is X,
00:15:30
delegate to a model that knows how to do X very well;
00:15:33
if the problem is Y, then again you have a model that does Y. So, you know —
00:15:42
I agree with you that you should have one model that kind of figures these things out, but we're —
00:15:48
we're getting there. And also, one of the other things is, for example, translation, right?
00:15:53
I think at some point we're going to have one model that knows how to
00:15:56
translate multiple languages — just like I'm sure you know how to translate multiple languages.
00:16:00
It's not two different tasks, but it's still getting there — to the point where we have a thing that knows how to do
00:16:06
more things than we initially envisioned. Okay, let's take one or two follow-up questions.
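The "simple model that delegates to a specialist" idea she describes can be sketched as a dispatcher with a trivial gating rule. All the names here are illustrative stubs, not real models:

```python
# Specialist "models" (stubs) for two problem types.
def digit_model(x):
    return f"digit model handles {x}"

def text_model(x):
    return f"text model handles {x}"

def dispatch(x):
    """Gate: if the input looks numeric, delegate to the digit specialist,
    otherwise to the text specialist."""
    return digit_model(x) if str(x).isdigit() else text_model(x)

print(dispatch("42"))
print(dispatch("hello"))
```

Her point is that this hand-written gate is the easy version; the open problem is one model that learns both the routing and the tasks themselves.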
00:16:17
Combining different models into one — do you think we can get to the
00:16:21
point where we can replicate human behaviour,
00:16:25
where computers act like real humans? — That's a hard question: can
00:16:31
we replicate — also, human behaviour is a bit undefined,
00:16:36
so it's very hard to say what you mean by human behaviour, right? I think —
00:16:44
I think we can get very far, but I don't know exactly —
00:16:48
yeah, what you mean by human behaviour. And, um, I think we should strive for that.
00:16:56
We should strive for generic learning, and I think that would help us with a lot of things. But, um —
00:17:02
I'm reasonably confident that it will happen, but it's very hard to say, right? I guess we're
00:17:09
a fast-moving field, and that's what gives me confidence that I think it will happen.
00:17:13
But I can't tell — no one can tell you for sure. It's going fast, but we're still —

An introduction to TensorFlow
Mihaela Rosca, Google / London, England
Nov. 26, 2016 · 2:01 p.m.

Q&A - An introduction to TensorFlow
Mihaela Rosca, Google
Nov. 26, 2016 · 2:35 p.m.