Dataset columns (each preview row below lists one value per column, in this order):

    prev1        string, 1 to 7.9k characters
    prev2        string, 1 to 7.9k characters
    next1        string, 1 to 7.9k characters
    next2        string, 1 to 7.9k characters
    is_boundary  bool, 2 classes
    session_id   string, 7 to 14 characters
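The preview rows that follow are consecutive sliding windows over a single meeting transcript: each row pairs the two utterances before a candidate segment boundary (prev1, prev2) with the two utterances after it (next1, next2), plus the boundary label and session id. As a minimal sketch of that layout, the snippet below builds rows of the same shape from an ordered list of utterances; the helper name, the stand-in utterance strings, and the exact prev/next indexing convention are assumptions inferred from the preview, not something documented in the source.

```python
# Minimal sketch, assuming the six columns listed above. The convention that
# prev1/prev2 are the two utterances before a candidate boundary and
# next1/next2 the two after it is inferred from the preview rows.

def make_rows(utterances, boundary_indices, session_id):
    """boundary_indices: indices i such that a topic boundary falls between
    utterances[i-1] and utterances[i]."""
    rows = []
    for i in range(2, len(utterances) - 1):
        rows.append({
            "prev1": utterances[i - 2],
            "prev2": utterances[i - 1],
            "next1": utterances[i],
            "next2": utterances[i + 1],
            "is_boundary": i in boundary_indices,
            "session_id": session_id,
        })
    return rows

# Toy usage with stand-in utterances (hypothetical strings, not taken from the data).
utts = ["u0", "u1", "u2", "u3", "u4"]
for row in make_rows(utts, boundary_indices={3}, session_id="QMSum_120"):
    print(row["prev2"], "->", row["next1"], row["is_boundary"])
```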
Oh , oh. I see.
Mm - hmm.
if it gets an unambiguous result then you 're definitely in a in a in a voice in a , uh , s region with speech. Uh.
So there 's this assumption that the v the voice activity detector can only use the MFCC ?
false
QMSum_120
Mm - hmm.
if it gets an unambiguous result then you 're definitely in a in a in a voice in a , uh , s region with speech. Uh.
So there 's this assumption that the v the voice activity detector can only use the MFCC ?
That 's not clear , but this e
false
QMSum_120
if it gets an unambiguous result then you 're definitely in a in a in a voice in a , uh , s region with speech. Uh.
So there 's this assumption that the v the voice activity detector can only use the MFCC ?
That 's not clear , but this e
Well , for the baseline.
false
QMSum_120
So there 's this assumption that the v the voice activity detector can only use the MFCC ?
That 's not clear , but this e
Well , for the baseline.
Yeah.
false
QMSum_120
That 's not clear , but this e
Well , for the baseline.
Yeah.
So so if you use other features then y But it 's just a question of what is your baseline. Right ? What is it that you 're supposed to do better than ?
false
QMSum_120
Well , for the baseline.
Yeah.
So so if you use other features then y But it 's just a question of what is your baseline. Right ? What is it that you 're supposed to do better than ?
I g Yeah.
false
QMSum_120
Yeah.
So so if you use other features then y But it 's just a question of what is your baseline. Right ? What is it that you 're supposed to do better than ?
I g Yeah.
And so having the baseline be the MFCC 's means that people could choose to pour their ener their effort into trying to do a really good VAD
false
QMSum_120
So so if you use other features then y But it 's just a question of what is your baseline. Right ? What is it that you 're supposed to do better than ?
I g Yeah.
And so having the baseline be the MFCC 's means that people could choose to pour their ener their effort into trying to do a really good VAD
I don't s But they seem like two separate issues.
false
QMSum_120
I g Yeah.
And so having the baseline be the MFCC 's means that people could choose to pour their ener their effort into trying to do a really good VAD
I don't s But they seem like two separate issues.
or tryi They 're sort of separate.
false
QMSum_120
And so having the baseline be the MFCC 's means that people could choose to pour their ener their effort into trying to do a really good VAD
I don't s But they seem like two separate issues.
or tryi They 're sort of separate.
Right ? I mean
false
QMSum_120
I don't s But they seem like two separate issues.
or tryi They 're sort of separate.
Right ? I mean
Unfortunately there 's coupling between them , which is part of what I think Stephane is getting to , is that you can choose your features in such a way as to improve the VAD.
false
QMSum_120
or tryi They 're sort of separate.
Right ? I mean
Unfortunately there 's coupling between them , which is part of what I think Stephane is getting to , is that you can choose your features in such a way as to improve the VAD.
Yeah.
false
QMSum_120
Right ? I mean
Unfortunately there 's coupling between them , which is part of what I think Stephane is getting to , is that you can choose your features in such a way as to improve the VAD.
Yeah.
And you also can choose your features in such a way as to prove improve recognition. They may not be the same thing.
false
QMSum_120
Unfortunately there 's coupling between them , which is part of what I think Stephane is getting to , is that you can choose your features in such a way as to improve the VAD.
Yeah.
And you also can choose your features in such a way as to prove improve recognition. They may not be the same thing.
But it seems like you should do both.
false
QMSum_120
Yeah.
And you also can choose your features in such a way as to prove improve recognition. They may not be the same thing.
But it seems like you should do both.
You should do both
false
QMSum_120
And you also can choose your features in such a way as to prove improve recognition. They may not be the same thing.
But it seems like you should do both.
You should do both
Right ?
false
QMSum_120
But it seems like you should do both.
You should do both
Right ?
and and I I think that this still makes I still think this makes sense as a baseline. It 's just saying , as a baseline , we know
false
QMSum_120
You should do both
Right ?
and and I I think that this still makes I still think this makes sense as a baseline. It 's just saying , as a baseline , we know
Mmm.
false
QMSum_120
Right ?
and and I I think that this still makes I still think this makes sense as a baseline. It 's just saying , as a baseline , we know
Mmm.
you know , we had the MFCC 's before , lots of people have done voice activity detectors ,
false
QMSum_120
and and I I think that this still makes I still think this makes sense as a baseline. It 's just saying , as a baseline , we know
Mmm.
you know , we had the MFCC 's before , lots of people have done voice activity detectors ,
Mm - hmm.
false
QMSum_120
Mmm.
you know , we had the MFCC 's before , lots of people have done voice activity detectors ,
Mm - hmm.
you might as well pick some voice activity detector and make that the baseline , just like you picked some version of HTK and made that the baseline.
false
QMSum_120
you know , we had the MFCC 's before , lots of people have done voice activity detectors ,
Mm - hmm.
you might as well pick some voice activity detector and make that the baseline , just like you picked some version of HTK and made that the baseline.
Yeah. Right.
false
QMSum_120
Mm - hmm.
you might as well pick some voice activity detector and make that the baseline , just like you picked some version of HTK and made that the baseline.
Yeah. Right.
And then let 's try and make everything better. Um , and if one of the ways you make it better is by having your features be better features for the VAD then that 's so be it.
false
QMSum_120
you might as well pick some voice activity detector and make that the baseline , just like you picked some version of HTK and made that the baseline.
Yeah. Right.
And then let 's try and make everything better. Um , and if one of the ways you make it better is by having your features be better features for the VAD then that 's so be it.
Mm - hmm.
false
QMSum_120
Yeah. Right.
And then let 's try and make everything better. Um , and if one of the ways you make it better is by having your features be better features for the VAD then that 's so be it.
Mm - hmm.
But , uh , uh , uh , at least you have a starting point that 's um , cuz i i some of the some of the people didn't have a VAD at all , I guess. Right ? And and
false
QMSum_120
And then let 's try and make everything better. Um , and if one of the ways you make it better is by having your features be better features for the VAD then that 's so be it.
Mm - hmm.
But , uh , uh , uh , at least you have a starting point that 's um , cuz i i some of the some of the people didn't have a VAD at all , I guess. Right ? And and
Yeah.
false
QMSum_120
Mm - hmm.
But , uh , uh , uh , at least you have a starting point that 's um , cuz i i some of the some of the people didn't have a VAD at all , I guess. Right ? And and
Yeah.
then they they looked pretty bad and and in fact what they were doing wasn't so bad at all.
false
QMSum_120
But , uh , uh , uh , at least you have a starting point that 's um , cuz i i some of the some of the people didn't have a VAD at all , I guess. Right ? And and
Yeah.
then they they looked pretty bad and and in fact what they were doing wasn't so bad at all.
Mm - hmm. Mm - hmm.
false
QMSum_120
Yeah.
then they they looked pretty bad and and in fact what they were doing wasn't so bad at all.
Mm - hmm. Mm - hmm.
But , um.
false
QMSum_120
then they they looked pretty bad and and in fact what they were doing wasn't so bad at all.
Mm - hmm. Mm - hmm.
But , um.
Yeah. It seems like you should try to make your baseline as good as possible. And if it turns out that you can't improve on that , well , I mean , then , you know , nobody wins and you just use MFCC. Right ?
false
QMSum_120
Mm - hmm. Mm - hmm.
But , um.
Yeah. It seems like you should try to make your baseline as good as possible. And if it turns out that you can't improve on that , well , I mean , then , you know , nobody wins and you just use MFCC. Right ?
Yeah. I mean , it seems like , uh , it should include sort of the current state of the art that you want are trying to improve , and MFCC 's , you know , or PLP or something it seems like reasonable baseline for the features , and anybody doing this task , uh , is gonna have some sort of voice activity detection at some level , in some way. They might use the whole recognizer to do it but rather than a separate thing , but but they 'll have it on some level. So , um.
false
QMSum_120
But , um.
Yeah. It seems like you should try to make your baseline as good as possible. And if it turns out that you can't improve on that , well , I mean , then , you know , nobody wins and you just use MFCC. Right ?
Yeah. I mean , it seems like , uh , it should include sort of the current state of the art that you want are trying to improve , and MFCC 's , you know , or PLP or something it seems like reasonable baseline for the features , and anybody doing this task , uh , is gonna have some sort of voice activity detection at some level , in some way. They might use the whole recognizer to do it but rather than a separate thing , but but they 'll have it on some level. So , um.
It seems like whatever they choose they shouldn't , you know , purposefully brain - damage a part of the system to make a worse baseline , or
false
QMSum_120
Yeah. It seems like you should try to make your baseline as good as possible. And if it turns out that you can't improve on that , well , I mean , then , you know , nobody wins and you just use MFCC. Right ?
Yeah. I mean , it seems like , uh , it should include sort of the current state of the art that you want are trying to improve , and MFCC 's , you know , or PLP or something it seems like reasonable baseline for the features , and anybody doing this task , uh , is gonna have some sort of voice activity detection at some level , in some way. They might use the whole recognizer to do it but rather than a separate thing , but but they 'll have it on some level. So , um.
It seems like whatever they choose they shouldn't , you know , purposefully brain - damage a part of the system to make a worse baseline , or
Well , I think people just had
false
QMSum_120
Yeah. I mean , it seems like , uh , it should include sort of the current state of the art that you want are trying to improve , and MFCC 's , you know , or PLP or something it seems like reasonable baseline for the features , and anybody doing this task , uh , is gonna have some sort of voice activity detection at some level , in some way. They might use the whole recognizer to do it but rather than a separate thing , but but they 'll have it on some level. So , um.
It seems like whatever they choose they shouldn't , you know , purposefully brain - damage a part of the system to make a worse baseline , or
Well , I think people just had
You know ?
false
QMSum_120
It seems like whatever they choose they shouldn't , you know , purposefully brain - damage a part of the system to make a worse baseline , or
Well , I think people just had
You know ?
it wasn't that they purposely brain - damaged it. I think people hadn't really thought through about the , uh the VAD issue.
false
QMSum_120
Well , I think people just had
You know ?
it wasn't that they purposely brain - damaged it. I think people hadn't really thought through about the , uh the VAD issue.
Mmm.
false
QMSum_120
You know ?
it wasn't that they purposely brain - damaged it. I think people hadn't really thought through about the , uh the VAD issue.
Mmm.
Mm - hmm.
false
QMSum_120
it wasn't that they purposely brain - damaged it. I think people hadn't really thought through about the , uh the VAD issue.
Mmm.
Mm - hmm.
And and then when the the the proposals actually came in and half of them had V A Ds and half of them didn't , and the half that did did well and the half that didn't did poorly.
false
QMSum_120
Mmm.
Mm - hmm.
And and then when the the the proposals actually came in and half of them had V A Ds and half of them didn't , and the half that did did well and the half that didn't did poorly.
Mm - hmm.
false
QMSum_120
Mm - hmm.
And and then when the the the proposals actually came in and half of them had V A Ds and half of them didn't , and the half that did did well and the half that didn't did poorly.
Mm - hmm.
So it 's
false
QMSum_120
And and then when the the the proposals actually came in and half of them had V A Ds and half of them didn't , and the half that did did well and the half that didn't did poorly.
Mm - hmm.
So it 's
Mm - hmm. Um.
false
QMSum_120
Mm - hmm.
So it 's
Mm - hmm. Um.
Uh.
false
QMSum_120
So it 's
Mm - hmm. Um.
Uh.
Yeah. So we 'll see what happen with this. And Yeah. So what happened since , um , last week is well , from OGI , these experiments on putting VAD on the baseline. And these experiments also are using , uh , some kind of noise compensation , so spectral subtraction , and putting on - line normalization , um , just after this. So I think spectral subtraction , LDA filtering , and on - line normalization , so which is similar to the pro proposal - one , but with spectral subtraction in addition , and it seems that on - line normalization doesn't help further when you have spectral subtraction.
false
QMSum_120
Mm - hmm. Um.
Uh.
Yeah. So we 'll see what happen with this. And Yeah. So what happened since , um , last week is well , from OGI , these experiments on putting VAD on the baseline. And these experiments also are using , uh , some kind of noise compensation , so spectral subtraction , and putting on - line normalization , um , just after this. So I think spectral subtraction , LDA filtering , and on - line normalization , so which is similar to the pro proposal - one , but with spectral subtraction in addition , and it seems that on - line normalization doesn't help further when you have spectral subtraction.
Is this related to the issue that you brought up a couple of meetings ago with the the musical tones
false
QMSum_120
Uh.
Yeah. So we 'll see what happen with this. And Yeah. So what happened since , um , last week is well , from OGI , these experiments on putting VAD on the baseline. And these experiments also are using , uh , some kind of noise compensation , so spectral subtraction , and putting on - line normalization , um , just after this. So I think spectral subtraction , LDA filtering , and on - line normalization , so which is similar to the pro proposal - one , but with spectral subtraction in addition , and it seems that on - line normalization doesn't help further when you have spectral subtraction.
Is this related to the issue that you brought up a couple of meetings ago with the the musical tones
and ?
false
QMSum_120
Yeah. So we 'll see what happen with this. And Yeah. So what happened since , um , last week is well , from OGI , these experiments on putting VAD on the baseline. And these experiments also are using , uh , some kind of noise compensation , so spectral subtraction , and putting on - line normalization , um , just after this. So I think spectral subtraction , LDA filtering , and on - line normalization , so which is similar to the pro proposal - one , but with spectral subtraction in addition , and it seems that on - line normalization doesn't help further when you have spectral subtraction.
Is this related to the issue that you brought up a couple of meetings ago with the the musical tones
and ?
I have no idea , because the issue I brought up was with a very simple spectral subtraction approach ,
false
QMSum_120
Is this related to the issue that you brought up a couple of meetings ago with the the musical tones
and ?
I have no idea , because the issue I brought up was with a very simple spectral subtraction approach ,
Mmm.
false
QMSum_120
and ?
I have no idea , because the issue I brought up was with a very simple spectral subtraction approach ,
Mmm.
and the one that they use at OGI is one from from the proposed the the the Aurora prop uh , proposals , which might be much better. So , yeah. I asked Sunil for more information about that , but , uh , I don't know yet. Um. And what 's happened here is that we so we have this kind of new , um , reference system which use a nice a a clean downsampling - upsampling , which use a new filter that 's much shorter and which also cuts the frequency below sixty - four hertz ,
false
QMSum_120
I have no idea , because the issue I brought up was with a very simple spectral subtraction approach ,
Mmm.
and the one that they use at OGI is one from from the proposed the the the Aurora prop uh , proposals , which might be much better. So , yeah. I asked Sunil for more information about that , but , uh , I don't know yet. Um. And what 's happened here is that we so we have this kind of new , um , reference system which use a nice a a clean downsampling - upsampling , which use a new filter that 's much shorter and which also cuts the frequency below sixty - four hertz ,
Right.
false
QMSum_120
Mmm.
and the one that they use at OGI is one from from the proposed the the the Aurora prop uh , proposals , which might be much better. So , yeah. I asked Sunil for more information about that , but , uh , I don't know yet. Um. And what 's happened here is that we so we have this kind of new , um , reference system which use a nice a a clean downsampling - upsampling , which use a new filter that 's much shorter and which also cuts the frequency below sixty - four hertz ,
Right.
which was not done on our first proposal.
false
QMSum_120
and the one that they use at OGI is one from from the proposed the the the Aurora prop uh , proposals , which might be much better. So , yeah. I asked Sunil for more information about that , but , uh , I don't know yet. Um. And what 's happened here is that we so we have this kind of new , um , reference system which use a nice a a clean downsampling - upsampling , which use a new filter that 's much shorter and which also cuts the frequency below sixty - four hertz ,
Right.
which was not done on our first proposal.
When you say " we have that " , does Sunil have it now , too ,
false
QMSum_120
Right.
which was not done on our first proposal.
When you say " we have that " , does Sunil have it now , too ,
I No.
false
QMSum_120
which was not done on our first proposal.
When you say " we have that " , does Sunil have it now , too ,
I No.
or ?
false
QMSum_120
When you say " we have that " , does Sunil have it now , too ,
I No.
or ?
No.
false
QMSum_120
I No.
or ?
No.
OK.
false
QMSum_120
or ?
No.
OK.
Because we 're still testing. So we have the result for , uh , just the features
false
QMSum_120
No.
OK.
Because we 're still testing. So we have the result for , uh , just the features
OK.
false
QMSum_120
OK.
Because we 're still testing. So we have the result for , uh , just the features
OK.
and we are currently testing with putting the neural network in the KLT. Um , it seems to improve on the well - matched case , um , but it 's a little bit worse on the mismatch and highly - mismatched I mean when we put the neural network. And with the current weighting I think it 's sh it will be better because the well - matched case is better. Mmm.
false
QMSum_120
Because we 're still testing. So we have the result for , uh , just the features
OK.
and we are currently testing with putting the neural network in the KLT. Um , it seems to improve on the well - matched case , um , but it 's a little bit worse on the mismatch and highly - mismatched I mean when we put the neural network. And with the current weighting I think it 's sh it will be better because the well - matched case is better. Mmm.
But how much worse since the weighting might change how how much worse is it on the other conditions , when you say it 's a little worse ?
false
QMSum_120
OK.
and we are currently testing with putting the neural network in the KLT. Um , it seems to improve on the well - matched case , um , but it 's a little bit worse on the mismatch and highly - mismatched I mean when we put the neural network. And with the current weighting I think it 's sh it will be better because the well - matched case is better. Mmm.
But how much worse since the weighting might change how how much worse is it on the other conditions , when you say it 's a little worse ?
It 's like , uh , fff , fff um , ten percent relative. Yeah.
false
QMSum_120
and we are currently testing with putting the neural network in the KLT. Um , it seems to improve on the well - matched case , um , but it 's a little bit worse on the mismatch and highly - mismatched I mean when we put the neural network. And with the current weighting I think it 's sh it will be better because the well - matched case is better. Mmm.
But how much worse since the weighting might change how how much worse is it on the other conditions , when you say it 's a little worse ?
It 's like , uh , fff , fff um , ten percent relative. Yeah.
OK. Um.
false
QMSum_120
But how much worse since the weighting might change how how much worse is it on the other conditions , when you say it 's a little worse ?
It 's like , uh , fff , fff um , ten percent relative. Yeah.
OK. Um.
Mm - hmm.
false
QMSum_120
It 's like , uh , fff , fff um , ten percent relative. Yeah.
OK. Um.
Mm - hmm.
But it has the , uh the latencies are much shorter. That 's
false
QMSum_120
OK. Um.
Mm - hmm.
But it has the , uh the latencies are much shorter. That 's
Uh - y w when I say it 's worse , it 's not it 's when I I uh , compare proposal - two to proposal - one , so , r uh , y putting neural network compared to n not having any neural network. I mean , this new system is is is better ,
false
QMSum_120
Mm - hmm.
But it has the , uh the latencies are much shorter. That 's
Uh - y w when I say it 's worse , it 's not it 's when I I uh , compare proposal - two to proposal - one , so , r uh , y putting neural network compared to n not having any neural network. I mean , this new system is is is better ,
Uh - huh.
false
QMSum_120
But it has the , uh the latencies are much shorter. That 's
Uh - y w when I say it 's worse , it 's not it 's when I I uh , compare proposal - two to proposal - one , so , r uh , y putting neural network compared to n not having any neural network. I mean , this new system is is is better ,
Uh - huh.
because it has um , this sixty - four hertz cut - off , uh , clean downsampling , and , um what else ? Uh , yeah , a good VAD. We put the good VAD. So. Yeah , I don't know. I I j uh , uh pr
false
QMSum_120
Uh - y w when I say it 's worse , it 's not it 's when I I uh , compare proposal - two to proposal - one , so , r uh , y putting neural network compared to n not having any neural network. I mean , this new system is is is better ,
Uh - huh.
because it has um , this sixty - four hertz cut - off , uh , clean downsampling , and , um what else ? Uh , yeah , a good VAD. We put the good VAD. So. Yeah , I don't know. I I j uh , uh pr
But the latencies but you 've got the latency shorter now.
false
QMSum_120
Uh - huh.
because it has um , this sixty - four hertz cut - off , uh , clean downsampling , and , um what else ? Uh , yeah , a good VAD. We put the good VAD. So. Yeah , I don't know. I I j uh , uh pr
But the latencies but you 've got the latency shorter now.
Latency is short is Yeah.
false
QMSum_120
because it has um , this sixty - four hertz cut - off , uh , clean downsampling , and , um what else ? Uh , yeah , a good VAD. We put the good VAD. So. Yeah , I don't know. I I j uh , uh pr
But the latencies but you 've got the latency shorter now.
Latency is short is Yeah.
Yeah.
false
QMSum_120
But the latencies but you 've got the latency shorter now.
Latency is short is Yeah.
Yeah.
Isn't it
false
QMSum_120
Latency is short is Yeah.
Yeah.
Isn't it
And so
false
QMSum_120
Yeah.
Isn't it
And so
So it 's better than the system that we had before.
false
QMSum_120
Isn't it
And so
So it 's better than the system that we had before.
Yeah. Mainly because of the sixty - four hertz and the good VAD.
false
QMSum_120
And so
So it 's better than the system that we had before.
Yeah. Mainly because of the sixty - four hertz and the good VAD.
OK.
false
QMSum_120
So it 's better than the system that we had before.
Yeah. Mainly because of the sixty - four hertz and the good VAD.
OK.
And then I took this system and , mmm , w uh , I p we put the old filters also. So we have this good system , with good VAD , with the short filter and with the long filter , and , um , with the short filter it 's not worse. So well , is it
false
QMSum_120
Yeah. Mainly because of the sixty - four hertz and the good VAD.
OK.
And then I took this system and , mmm , w uh , I p we put the old filters also. So we have this good system , with good VAD , with the short filter and with the long filter , and , um , with the short filter it 's not worse. So well , is it
OK.
false
QMSum_120
OK.
And then I took this system and , mmm , w uh , I p we put the old filters also. So we have this good system , with good VAD , with the short filter and with the long filter , and , um , with the short filter it 's not worse. So well , is it
OK.
it 's in
false
QMSum_120
And then I took this system and , mmm , w uh , I p we put the old filters also. So we have this good system , with good VAD , with the short filter and with the long filter , and , um , with the short filter it 's not worse. So well , is it
OK.
it 's in
So that 's that 's all fine.
false
QMSum_120
OK.
it 's in
So that 's that 's all fine.
Yes. Uh
false
QMSum_120
it 's in
So that 's that 's all fine.
Yes. Uh
But what you 're saying is that when you do these So let me try to understand. When when you do these same improvements to proposal - one ,
false
QMSum_120
So that 's that 's all fine.
Yes. Uh
But what you 're saying is that when you do these So let me try to understand. When when you do these same improvements to proposal - one ,
Mm - hmm.
false
QMSum_120
Yes. Uh
But what you 're saying is that when you do these So let me try to understand. When when you do these same improvements to proposal - one ,
Mm - hmm.
that , uh , on the i things are somewhat better , uh , in proposal - two for the well - matched case and somewhat worse for the other two cases.
false
QMSum_120
But what you 're saying is that when you do these So let me try to understand. When when you do these same improvements to proposal - one ,
Mm - hmm.
that , uh , on the i things are somewhat better , uh , in proposal - two for the well - matched case and somewhat worse for the other two cases.
Yeah.
false
QMSum_120
Mm - hmm.
that , uh , on the i things are somewhat better , uh , in proposal - two for the well - matched case and somewhat worse for the other two cases.
Yeah.
So does , uh when you say , uh So The th now that these other things are in there , is it the case maybe that the additions of proposal - two over proposal - one are less im important ?
false
QMSum_120
that , uh , on the i things are somewhat better , uh , in proposal - two for the well - matched case and somewhat worse for the other two cases.
Yeah.
So does , uh when you say , uh So The th now that these other things are in there , is it the case maybe that the additions of proposal - two over proposal - one are less im important ?
Yeah. Probably , yeah.
false
QMSum_120
Yeah.
So does , uh when you say , uh So The th now that these other things are in there , is it the case maybe that the additions of proposal - two over proposal - one are less im important ?
Yeah. Probably , yeah.
I get it.
false
QMSum_120
So does , uh when you say , uh So The th now that these other things are in there , is it the case maybe that the additions of proposal - two over proposal - one are less im important ?
Yeah. Probably , yeah.
I get it.
Um So , yeah. Uh. Yeah , but it 's a good thing anyway to have shorter delay. Then we tried , um , to do something like proposal - two but having , um , e using also MSG features. So there is this KLT part , which use just the standard features ,
false
QMSum_120
Yeah. Probably , yeah.
I get it.
Um So , yeah. Uh. Yeah , but it 's a good thing anyway to have shorter delay. Then we tried , um , to do something like proposal - two but having , um , e using also MSG features. So there is this KLT part , which use just the standard features ,
Mm - hmm. Right.
false
QMSum_120
I get it.
Um So , yeah. Uh. Yeah , but it 's a good thing anyway to have shorter delay. Then we tried , um , to do something like proposal - two but having , um , e using also MSG features. So there is this KLT part , which use just the standard features ,
Mm - hmm. Right.
and then two neura two neural networks.
false
QMSum_120
Um So , yeah. Uh. Yeah , but it 's a good thing anyway to have shorter delay. Then we tried , um , to do something like proposal - two but having , um , e using also MSG features. So there is this KLT part , which use just the standard features ,
Mm - hmm. Right.
and then two neura two neural networks.
Mm - hmm.
false
QMSum_120
Mm - hmm. Right.
and then two neura two neural networks.
Mm - hmm.
Mmm , and it doesn't seem to help. Um , however , we just have one result , which is the Italian mismatch , so. Uh. We have to wait for that to fill the whole table , but
false
QMSum_120
and then two neura two neural networks.
Mm - hmm.
Mmm , and it doesn't seem to help. Um , however , we just have one result , which is the Italian mismatch , so. Uh. We have to wait for that to fill the whole table , but
OK. There was a start of some effort on something related to voicing or something. Is that ?
false
QMSum_120
Mm - hmm.
Mmm , and it doesn't seem to help. Um , however , we just have one result , which is the Italian mismatch , so. Uh. We have to wait for that to fill the whole table , but
OK. There was a start of some effort on something related to voicing or something. Is that ?
Yeah. Um , yeah. So basically we try to , uh , find good features that could be used for voicing detection , uh , but it 's still , uh on the , um t
true
QMSum_120
Mmm , and it doesn't seem to help. Um , however , we just have one result , which is the Italian mismatch , so. Uh. We have to wait for that to fill the whole table , but
OK. There was a start of some effort on something related to voicing or something. Is that ?
Yeah. Um , yeah. So basically we try to , uh , find good features that could be used for voicing detection , uh , but it 's still , uh on the , um t
Oh , well , I have the picture.
false
QMSum_120
OK. There was a start of some effort on something related to voicing or something. Is that ?
Yeah. Um , yeah. So basically we try to , uh , find good features that could be used for voicing detection , uh , but it 's still , uh on the , um t
Oh , well , I have the picture.
we w basically we are still playing with Matlab to to look at at what happened ,
false
QMSum_120
Yeah. Um , yeah. So basically we try to , uh , find good features that could be used for voicing detection , uh , but it 's still , uh on the , um t
Oh , well , I have the picture.
we w basically we are still playing with Matlab to to look at at what happened ,
What sorts of
false
QMSum_120
Oh , well , I have the picture.
we w basically we are still playing with Matlab to to look at at what happened ,
What sorts of
Yeah.
false
QMSum_120
we w basically we are still playing with Matlab to to look at at what happened ,
What sorts of
Yeah.
and
false
QMSum_120
What sorts of
Yeah.
and
what sorts of features are you looking at ?
false
QMSum_120
Yeah.
and
what sorts of features are you looking at ?
We have some
false
QMSum_120
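For completeness, a small sketch of how rows of this shape might be inspected once loaded into a DataFrame. The two toy records and the reading of is_boundary as marking a topic change between prev2 and next1 are assumptions based on the single true row visible in the preview; only the session id QMSum_120 appears verbatim in the rows above.

```python
# Minimal sketch, assuming the rows are available as Python dicts with the
# columns listed at the top of this preview; values here are stand-ins.
import pandas as pd

df = pd.DataFrame([
    {"prev1": "u0", "prev2": "u1", "next1": "u2", "next2": "u3",
     "is_boundary": False, "session_id": "QMSum_120"},
    {"prev1": "u1", "prev2": "u2", "next1": "u3", "next2": "u4",
     "is_boundary": True, "session_id": "QMSum_120"},
])

print(df["is_boundary"].value_counts())  # class balance; the preview above is mostly False

# List the candidate boundaries per session (assumed to fall between prev2 and next1).
for _, r in df[df["is_boundary"]].iterrows():
    print(r["session_id"], "|", r["prev2"], "->", r["next1"])
```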